Exploding cows in Minecraft…

Last weekend I was at Womad festival, helping kids fire exploding cows from catapults in Minecraft. Not my usual line of work as CTO, or typical festival experience for that matter!

I was volunteering with Devoxx4Kids, who organise events worldwide where children can develop computer games, program robots and get an introduction to electronics. CERN had invited Devoxx4Kids to take part in the workshops happening at the Physics Pavilion.

We ran three packed-out workshops across the weekend, with children aged from about 6 to 13. While there was a whole range of knowledge levels, almost everyone was familiar with Scratch — and they most definitely knew far more about Minecraft than me!

Warm up before a session!

The workshops involved writing some Java using Minecraft Forge and Eclipse in order to introduce a catapult into the Minecraft world, understand the impact of angles on how far the catapult could fire, and ultimately throw some surprisingly explosive animals!

As volunteers, we were split roughly 50:50 between those with a technical background and those without — it wasn’t about showing off our own technical knowledge, but about asking questions and helping the children stay on track with the activity. A particular shout out to Cesar and Dan, whose hard work meant the rest of us could just turn up on the day!

It was humbling to see how well our attendees tackled the challenge — their thoughtfulness in choosing variable names for their animal of choice and, somewhat more destructively, their delight at deciding how big an explosion to create when it landed!

While it was only a small taster, hopefully it reinforced the realisation (for both parents and children!) that by coding they can actively change the world they experience in these games, and perhaps continue to grow an interest in technology.


Starting a remote working journey

Today I head to Gran Canaria for a month. Not for holiday, but to work. I’ll be leaving my friends and work colleagues back in London, whilst trying to convince them that this isn’t all about sitting on a beach and surfing all day long.

I’ve always read with admiration and a fair dose of jealousy the stories from various digital nomads around the web. Free to go where they will, work as they please. And yet I’ve never felt able to take the plunge.

While I’m only dipping a toe in to start, this is as much a company challenge as a personal one. At FundApps, we’ve grown to a team of 8 so far, all based in London. We want to foster a great place to work, and realise creating a remote-working friendly environment is a big pull for many people (as it is for ourselves). We’re also planning to expand into the US, and so we know we’ll *have* to soon deal with the practicalities of asynchronous working with a remote team in a different timezone.

We’ve grown as a clustered, centralized unit based in London without having to address these kinds of questions up front — so we’re now having to retrofit a remote-friendly culture. Working from home a day or two a week is pretty easy, when there’s still enough in person interaction to cover up any cracks in your approach to remote working. But when you take away that regular in-person contact, all that effortless information you pick up in the office fades away.

How do you make sure everyone knows what’s going on? Feels included? Feels part of a coherent company culture? How do you keep learning and sharing knowledge? How do you hire and interview? On a personal level, how does the reality stack up? How do you replace the personal contact that you’d normally have in the office with colleagues?

I know that doing this well will be hard, especially with the rest of the team still being a coherent core unit back in London. But I’m hoping this will be an opportunity to learn a lot — and to share the experience, both from a personal standpoint and as a startup founder.

Onwards! I have a flight to catch.

PS I would love to hear your own thoughts. Do you care about remote working? What have you tried? What’s worked or not? Or what’s putting you off, or holding you back from trying it — either personally or at your company?

Integrating NDepend metrics into your Build using F# Make & TeamCity

NDepend is an analysis tool that gives you all kinds of code quality metrics, along with tools to drill down into dependencies, and to query and enforce rules over your code base.

There’s a version that integrates with Visual Studio, but there’s also a version that runs on the console to generate static reports, and enforce any code rules you might have written.
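As a rough illustration, the console runner just takes an existing NDepend project file as its main argument (the path below is only a placeholder); later in this post an F# Make wrapper drives it for us:

NDepend.Console.exe "C:\Projects\MyApp\MyApp.ndproj"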

I wanted to see how easy it would be to combine all of this and use NDepend to generate actionable reports and metrics on our code base – not just now, but how it shifts over time.

To do this, you need to

  1. Run your unit tests via a code coverage tool, such as dotCover. This has a command line version bundled with TeamCity which you are free to use directly.
  2. Run NDepend with your code coverage files and NDepend project file
  3. Store the NDepend metrics from build to build so it can track trends over time

I’ve covered step 1 in my previous post on generating coverage reports using dotCover. I recommend you read that first!

We can then extend this code to feed the code coverage output into NDepend.

Downloading your dependencies

I’ve already covered this in the previous post I mentioned, so using the same helper method, we can also download our NDepend executables from an HTTP endpoint, and ensure we have the appropriate license key.

<pre>Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    ensureToolIsDownloaded "nDepend" "https://YourLocalDotCoverDownloadUrl/NDepend_5.2.1.8320.zip"
    CopyFile (toolsDir @@ "ndepend" @@ "NDependProLicense.xml") (toolsDir @@ "NDependProLicense.xml")
)

Generating the NDepend coverage file

Before running NDepend itself, we also need to generate a coverage file in the correct format. I’ve simply added another step to the “TestCoverage” target I defined previously:

DotCoverReport (fun p -> { p with
                              Source = artifactsDir @@ "DotCover.snapshot"
                              Output = artifactsDir @@ "DotCover.xml"
                              ReportType = DotCoverReportType.NDependXml })

Generating the NDepend report

Now we can go and run NDepend itself. You’ll need to have already generated a sensible NDepend project file, which means the command line arguments are pretty straightforward:

Target "NDepend" (fun _ ->

NDepend (fun p -> { p with
ProjectFile = currentDirectory @@ "Rapptr.ndproj"
CoverageFiles = [artifactsDir @@ "DotCover.xml" ]
})
)

I’m using an extension I haven’t yet submitted to F# Make, but you can find the code here, and just reference it at the top of your F# make script using

#load @"ndepend.fsx"
open Fake.NDepend

After adding a dependency between the NDepend and EnsureDependencies targets, we’re all good to go!
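For instance, the dependency chain could look like this (a sketch; adjust the target names to match your own script):

"EnsureDependencies"
==> "TestCoverage"
==> "NDepend"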

Recording NDepend trends using TeamCity

To take this one step further, and store historical trends with NDepend, we need to persist a metrics folder across analysis runs. This could be a shared network drive, but in our case we actually just “cheat” and use TeamCity’s artifacts mechanism.

Each time our build runs, we store the NDepend output as an artifact – and restore the artifacts from the previous successful build the next time we run. Previously this was a bit of a pain, but as of TeamCity 8.1 you can now reference your own artifacts to allow for incremental-style builds.

In our NDepend configuration in TeamCity, ensure the artifacts path (under general settings for that build configuration) includes the NDepend output. For instance

artifacts/NDepend/** => NDepend.zip

Next, go to Dependencies and add a new artifact dependency. Select the same configuration in the drop-down (so it’s self-referencing), and select “Last Finished Build”. Then add a rule to extract the artifacts and place them in the same location that NDepend will use during the build, for instance

NDepend.zip!** => artifacts/NDepend

TeamCity Report tabs

Finally, you can configure TeamCity to display an NDepend report tab for the build. Just go to “Report tabs” in the project (not build configuration) settings, and add NDepend using the start page “ndepend.zip!NDependReport.html” (for instance).

Hope that helps someone!

Code coverage using dotCover and F# make

I’ve previously depended a little too much on TeamCity to construct our build process, but have been increasingly shifting everything to our build scripts (and therefore source control).

We’ve been using F# make – an awesome cross-platform build automation tool like make & rake.

As an aside (before you ask): The dotCover support in TeamCity is already excellent – as you’d expect – but if you want to use these coverage files elsewhere (NDepend, say), then you can’t use the out-of-the-box options very easily.

Downloading your dependencies

We’re using NUnit and MSpec to run our tests, and so in order to run said tests, we need to ensure we have the test runners available. Rather than committing them to source control, we can use F# make’s support for restoring NuGet packages.

RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"

DotCover is a little trickier, as there’s no NuGet package available (the command line exe is bundled with TeamCity). So, we use the following helper and create an F# make target called “EnsureDependencies” to download our dotCover and NDepend executables from an HTTP endpoint:

let ensureToolIsDownloaded toolName downloadUrl =
    if not (TestDir (toolsDir @@ toolName)) then
        let downloadFileName = Path.GetFileName(downloadUrl)
        trace ("Downloading " + downloadFileName + " from " + downloadUrl)
        let webClient = new System.Net.WebClient()
        webClient.DownloadFile(downloadUrl, toolsDir @@ downloadFileName)
        Unzip (toolsDir @@ toolName) (toolsDir @@ downloadFileName)

Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    <code>RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"

Generating the coverage reports

Next up is creating a target to actually run our tests and generate the coverage reports. We’re using the DotCover extensions in F# Make that I contributed a little while back. As mentioned, we’re using NUnit and MSpec which adds a little more complexity – as we must generate each coverage file separately, and then combine them.

Target "TestCoverage" (fun _ ->

  let filters = "-:*.Tests;" # exclude test assemblies from coverage stats
  # run NUnit tests via dotCover
  !! testAssemblies
      |> DotCoverNUnit (fun p -> { p with
                                      Output = artifactsDir @@ "NUnitDotCover.snapshot"
                                      Filters = filters }) nunitOptions
  # run the MSpec tests via dotCover
  !! testAssemblies
      |> DotCoverMSpec (fun p -> { p with
                                      Output = artifactsDir @@ "MSpecDotCover.snapshot"
                                      Filters = filters }) mSpecOptions
  # merge the code coverage files
  DotCoverMerge (fun p -> { p with
                                Source = [artifactsDir @@ "NUnitDotCover.snapshot";artifactsDir @@ "MSpecDotCover.snapshot"]
                                Output = artifactsDir @@ "DotCover.snapshot" })
  # generate a HTML report
  # you could also generate other report types here (such as NDepend)
  DotCoverReport (fun p -> { p with
                                Source = artifactsDir @@ "DotCover.snapshot"
                                Output = artifactsDir @@ "DotCover.htm"
                                ReportType = DotCoverReportType.Html })
)

All that’s left is to define the dependency hierarchy in F# make:

"EnsureDependencies"
==> "TestCoverage"

And off you go – calling your build script with the “TestCoverage” target should run all your tests and generate the coverage reports.

SSL Termination and Secure Cookies/requireSSL with ASP.NET Forms Authentication

If you’re running a HTTPS-only web application, then you probably have requireSSL set to true in your web.config like so:

<httpCookies requireSSL="true" httpOnlyCookies="true" />

With requireSSL set, any cookies ASP.NET sends with the HTTP response – in particular, the forms authentication cookies – will have the “secure” flag set. This ensures that they will only be sent to your website when being accessed over HTTPS.

What happens if you put your web application behind a load balancer with SSL termination? In this case, ASP.NET will see the request coming in as non-HTTPS (Request.IsSecureConnection always returns false) and refuse to set your cookies:

“The application is configured to issue secure cookies. These cookies require the browser to issue the request over SSL (https protocol). However, the current request is not over SSL.”

Fortunately, we have a few tricks up our sleeve:

  1. If the HTTPS server variable is set to ‘on’, ASP.NET will think we are over HTTPS
  2. The HTTP_X_FORWARDED_PROTO header will contain the original protocol running at the load balancer (so we can check that the end connection is in fact HTTPS)

With this knowledge, and the URL Rewrite module available in IIS 7 onwards, we can set up the following:

    <rewrite>
        <rules>
            <rule name="HTTPS_AlwaysOn" patternSyntax="Wildcard">
                <match url="*" />
                <serverVariables>
                    <set name="HTTPS" value="on" />
                </serverVariables>
                <action type="None" />
                <conditions>
                    <add input="{HTTP_X_FORWARDED_PROTO}" pattern="https" />
                </conditions>
            </rule>
        </rules>
    </rewrite>

You’ll also need to add HTTPS to the list of allowedServerVariables in the applicationHost.config (or through the URL Rewrite config):

        <rewrite>
            <allowedServerVariables>
                <add name="HTTPS" />
            </allowedServerVariables>
        </rewrite>

With thanks to Levi Broderick on the ASP.NET team who sent me in the right direction to this solution!

AppData location when running under System user account

As it took far too much Googling to find this, if you need to access the AppData folder for the System account, go here:

C:\Windows\System32\config\systemprofile\AppData\Local
C:\Windows\SysWOW64\config\systemprofile\AppData\Local

I hit this because we needed to clear the NuGet package cache for a TeamCity build agent which was running as a service under the System account.
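Clearing it was then just a matter of deleting the cache folder under that profile. Something like the following (a sketch, assuming the older NuGet cache location of AppData\Local\NuGet\Cache) does the job from an elevated PowerShell prompt:

# delete the NuGet package cache for the System profile (the cache path is an assumption - check it exists first)
Remove-Item -Recurse -Force 'C:\Windows\System32\config\systemprofile\AppData\Local\NuGet\Cache'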

Get ASP.NET auth cookie using PowerShell (when using AntiForgeryToken)

At FundApps we run a regular SkipFish scan against our application as one of our tools for monitoring for security vulnerabilities. In order for it to test beyond our login page, we need to provide a valid .ASPXAUTH cookie (you’ve renamed it, right?) to the tool.

Because we want to prevent cross-site request forgery against our login pages, we’re using the AntiForgeryToken support in MVC. This means we can’t just post our credentials to the login URL and fetch the cookie that is returned, because the login form also expects a valid anti-forgery token. So we use a small PowerShell script to fetch a valid authentication cookie before we call SkipFish with its command line arguments.
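In outline (a minimal sketch rather than our exact script; the URL, credentials, form field names and cookie name below are placeholders), it requests the login page to pick up the anti-forgery token, posts the credentials together with that token in the same web session, and then reads the authentication cookie back out of that session:

# 1. GET the login page so we receive the anti-forgery cookie and the hidden form field
$loginUrl = "https://your-app.example.com/Account/LogOn"   # placeholder URL
$response = Invoke-WebRequest -Uri $loginUrl -SessionVariable session -UseBasicParsing
$token = [regex]::Match($response.Content, 'name="__RequestVerificationToken"[^>]*value="([^"]+)"').Groups[1].Value

# 2. POST the credentials along with the anti-forgery token, reusing the same session
$form = @{
    "__RequestVerificationToken" = $token
    "UserName" = "scan-user"              # placeholder credentials
    "Password" = "not-a-real-password"
}
Invoke-WebRequest -Uri $loginUrl -Method Post -Body $form -WebSession $session -UseBasicParsing | Out-Null

# 3. The forms authentication cookie now lives in the session's cookie container
$cookie = $session.Cookies.GetCookies($loginUrl)[".ASPXAUTH"]   # use your renamed cookie name here
Write-Output ("{0}={1}" -f $cookie.Name, $cookie.Value)

The resulting name=value pair is then what we hand to SkipFish on its command line.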

Using Gulp – packaging files by folder

GulpJS is a great Node-based build system following in the footsteps of Grunt, but with (in my opinion) a much simpler and more intuitive syntax. Gulp takes advantage of Node’s streams, which is incredibly powerful, but it does mean that to get the most out of Gulp you need some understanding of what is going on under the covers.

As I was getting started with Gulp, I had a set of folders, and wanted to minify some JS files grouped by folder. For instance:

/scripts
/scripts/jquery/*.js
/scripts/angularjs/*.js

and wanted to end up with

/scripts
/scripts/jquery.min.js
/scripts/angularjs.min.js

and so on. This wasn’t immediately obvious at the time (I’ve now contributed this example back to the recipes), as it requires some knowledge of working with underlying streams.

To start with, I had something like this:

var path = require('path');
var gulp = require('gulp');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

gulp.task('scripts', function() {
    return gulp.src(path.join(scriptsPath, 'jquery', '*.js'))
      .pipe(concat('jquery.all.js'))
      .pipe(gulp.dest(scriptsPath))
      .pipe(uglify())
      .pipe(rename('jquery.min.js'))
      .pipe(gulp.dest(scriptsPath));
});

Which gets all the JS files in the /scripts/jquery/ folder, concatenates them, saves them to a /scripts/jquery.all.js file, then minifies them, and saves it to a /scripts/jquery.min.js file.

Simple, but how can we do this for multiple folders without manually modifying our gulpfile.js each time? Firstly, we need a function to get the folders in a directory. Not pretty, but easy enough:

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

This is JavaScript after all, so we can use the map function to iterate over these.


   var tasks = folders.map(function(folder) {

The final part of the equation is creating the same streams as before. Gulp expects us to return the stream/promise from the task, so if we’re going to do this for each folder, we need a way to combine them. The concat function in the event-stream package will combine streams for us, and will end only once all of its combined streams have completed:

var es = require('event-stream');
...
return es.concat(stream1, stream2, stream3);

The catch is that it expects the streams to be listed explicitly in its arguments list. If we’re using map then we’ll end up with an array, so we can use the JavaScript apply function:

return es.concat.apply(null, myStreamsInAnArray);

Putting this all together, we get the following:

var fs = require('fs');
var path = require('path');
var es = require('event-stream');
var gulp = require('gulp');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

gulp.task('scripts', function() {
   var folders = getFolders(scriptsPath);

   var tasks = folders.map(function(folder) {
      return gulp.src(path.join(scriptsPath, folder, '/*.js'))
        .pipe(concat(folder + '.js'))
        .pipe(gulp.dest(scriptsPath))
        .pipe(uglify())
        .pipe(rename(folder + '.min.js'))
        .pipe(gulp.dest(scriptsPath));
   });

   return es.concat.apply(null, tasks);
});

Hope this helps someone!

Forms Authentication loginUrl ignored

I hit this issue a while back, and someone else just tripped up on it, so I thought it was worth posting here. If you’ve got loginUrl set in your Forms Authentication configuration in web.config, but your ASP.NET Web Forms or MVC app has suddenly started redirecting to ~/Account/Login for no apparent reason, then the new SimpleMembership(ish) provider is getting in the way. At the moment this seems to happen after updating the MVC version, or after installing .NET 4.5.1.

Try adding the following to your appSettings in the web.config file:

<add key="enableSimpleMembership" value="false"/>

which resolved the issue for me. Still trying to figure out with Microsoft why this is an issue.

Achieving an A+ grading at Qualys SSL Labs (Forward Secrecy in IIS)

At FundApps we love the SSL Labs tool from Qualys for checking best practice on our SSL implementations. They recently announced a bunch of changes introducing stricter security requirements for 2014, and a new A+ grade – so I was curious what it would take to achieve the new A+ grading. There are a few things now required to achieve an A grading and beyond:

  • TLS 1.2 required
  • Keys must be 2048 bits and above
  • Secure renegotiation
  • No RC4 on TLS 1.1 and 1.2 (RC4 has stuck around longer than anyone would like in order to mitigate the BEAST attack)
  • Forward secrecy for all browsers that support it
  • HTTP Strict Transport Security with a long max age (Qualys haven’t defined exactly what this means, but we use a one-year value)

We’re using IIS so the focus of this entry is how to achieve an A+ grading in IIS 7/8.

Forward Secrecy & Best Practice Ciphers

Attention to Forward Secrecy has been increasing in recent times – the key benefit being that if, say, the NSA obtains your keys in the future, this will not compromise previous communications that were encrypted using session keys derived from your long-term key.

To set up support for Forward Secrecy, the easiest approach (in a Windows/IIS world) is to download the latest version of the IIS Crypto tool. This makes it really easy to get your SSL Ciphers in the right order and the correct ones enabled rather than messing directly with the registry.

Once downloaded, if you click the ‘Best Practice’ option, this will enable the ECDHE cipher suites as preferred (required for forward secrecy). The tool does also keep SSL 3.0, RC4 and 3DES enabled in order to support IE 6 on Windows XP. If you don’t require this, you can safely disable SSL 3.0, TLS_RSA_WITH_RC4_128_SHA and TLS_RSA_WITH_3DES_EDE_CBC_SHA in the cipher list. We also disable MD5.

HTTP Strict Transport Security

The other part of the equation is adding an HTTP Strict Transport Security header. The idea here is to stop man-in-the-middle attacks in which an attacker transparently converts a secure HTTPS connection into a plain HTTP one. Visitors can see that the connection is insecure, but there is no way of knowing that the connection *should* have been secure. By adding an HTTP Strict Transport Security header (which is remembered by the browser and stored for a specified period), then provided the first communication with the server is not tampered with (by stripping out the header), the browser will prevent non-secure communication from then on.

Doing this is simple, but you need to ensure that you only return a Strict-Transport-Security header on an HTTPS connection. Any requests over plain HTTP should *not* have this header, and should instead be 301 redirected to the HTTPS version. That’s easiest if your website only responds to HTTPS requests in the first place, with the redirection handled elsewhere.
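Whether it lives in the same site or in a separate redirect-only site, the redirect itself can be handled with a URL Rewrite rule along these lines (a sketch; the rule name is arbitrary), which 301s any plain-HTTP request to its HTTPS equivalent:

    <rewrite>
        <rules>
            <rule name="RedirectToHttps" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                    <add input="{HTTPS}" pattern="off" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
        </rules>
    </rewrite>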

In our case, we have a separate website already responsible for the non-HTTPS redirection, so it was simply a case of adding the following to the system.webServer section of the web.config:

<system.webServer>
  <httpProtocol>
    <customHeaders>
       <add name="Strict-Transport-Security" value="max-age=31536000" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

If you have to deal with both HTTPS and non-HTTPS in the same site, the implementation section on Wikipedia gives an example of how.

The end result? An A+ grading from the SSL Labs tool.