Category Archives: Coding

Get ASP.NET auth cookie using PowerShell (when using AntiForgeryToken)

At FundApps we run a regular SkipFish scan against our application as one of our tools for monitoring for security vulnerabilities. In order for it to test beyond our login page, we need to provide a valid .ASPXAUTH cookie (you’ve renamed it, right?) to the tool.

Because we want to prevent cross-site request forgeries to our login pages, we’re using the AntiForgeryToken support in MVC. This means we can’t just post our credentials to the login URL and fetch the cookie that is returned. So here’s the kind of script we use to fetch a valid authentication cookie before we call SkipFish with its command line arguments.
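In outline, the script needs to fetch the login page, scrape the anti-forgery token out of the form, post the token back along with the credentials, and then pull the auth cookie from the resulting session. A minimal sketch (assuming PowerShell 3+ for Invoke-WebRequest, the default __RequestVerificationToken field name and a standard /Account/Login form – the URL, form field names and credentials are placeholders to adjust for your own app):

$loginUrl = "https://example.com/Account/Login"

# GET the login page, keeping the session (and its anti-forgery cookie)
$response = Invoke-WebRequest -Uri $loginUrl -SessionVariable session

# Scrape the anti-forgery token out of the login form
$token = [regex]::Match($response.Content,
    'name="__RequestVerificationToken"[^>]*value="([^"]+)"').Groups[1].Value

# POST the credentials along with the token, re-using the same session
$body = @{
    "__RequestVerificationToken" = $token
    "UserName"                   = "skipfish-scanner"     # match your form's field names
    "Password"                   = "not-a-real-password"
}
Invoke-WebRequest -Uri $loginUrl -Method Post -Body $body -WebSession $session | Out-Null

# The (renamed!) .ASPXAUTH cookie is now in the session's cookie jar
$session.Cookies.GetCookies($loginUrl) |
    Where-Object { $_.Name -eq ".ASPXAUTH" } |
    ForEach-Object { "$($_.Name)=$($_.Value)" }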

Using Gulp – packaging files by folder

GulpJS is a great Node-based build system following in the footsteps of Grunt, but with (in my opinion) a much simpler and more intuitive syntax. Gulp takes advantage of Node’s streams, which are incredibly powerful – but it does mean that to get the most out of Gulp, you need some understanding of what is going on underneath the covers.

As I was getting started with Gulp, I had a set of folders, and wanted to minify some JS files grouped by folder. For instance:

/scripts
/scripts/jquery/*.js
/scripts/angularjs/*.js

and want to end up with

/scripts
/scripts/jquery.min.js
/scripts/angularjs.min.js

and so on. This wasn’t immediately obvious at the time (I’ve now contributed this example back to the recipes), as it requires some knowledge of working with underlying streams.

To start with, I had something like this:

var gulp = require('gulp');
var path = require('path');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

gulp.task('scripts', function() {
    return gulp.src(path.join(scriptsPath, 'jquery', '*.js'))
      .pipe(concat('jquery.all.js'))
      .pipe(gulp.dest(scriptsPath))
      .pipe(uglify())
      .pipe(rename('jquery.min.js'))
      .pipe(gulp.dest(scriptsPath));
});

This gets all the JS files in the /scripts/jquery/ folder, concatenates them, saves the result as /scripts/jquery.all.js, then minifies that and saves it as /scripts/jquery.min.js.

Simple, but how can we do this for multiple folders without manually modifying our gulpfile.js each time? Firstly, we need a function to get the folders in a directory. Not pretty, but easy enough:

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

This is JavaScript after all, so we can use the map function to iterate over these:

var tasks = folders.map(function(folder) {

The final part of the equation is creating the same streams as before. Gulp expects us to return the stream/promise from the task, so if we’re going to do this for each folder, we need a way to combine them. The concat function in the event-stream package will combine streams for us, and end only once all of its combined streams have completed:

var es = require('event-stream');
...
return es.concat(stream1, stream2, stream3);

The catch is that it expects the streams to be listed explicitly in its arguments list. Since we’re using map, we’ll end up with an array instead, so we can use the JavaScript apply function:

return es.concat.apply(null, myStreamsInAnArray);

Putting this all together, we get the following:

var fs = require('fs');
var path = require('path');
var es = require('event-stream');
var gulp = require('gulp');
var concat = require('gulp-concat');
var rename = require('gulp-rename');
var uglify = require('gulp-uglify');

var scriptsPath = './src/scripts/';

function getFolders(dir){
    return fs.readdirSync(dir)
      .filter(function(file){
        return fs.statSync(path.join(dir, file)).isDirectory();
      });
}

gulp.task('scripts', function() {
   var folders = getFolders(scriptsPath);

   var tasks = folders.map(function(folder) {
      return gulp.src(path.join(scriptsPath, folder, '/*.js'))
        .pipe(concat(folder + '.js'))
        .pipe(gulp.dest(scriptsPath))
        .pipe(uglify())
        .pipe(rename(folder + '.min.js'))
        .pipe(gulp.dest(scriptsPath));
   });

   return es.concat.apply(null, tasks);
});

Hope this helps someone!

Forms Authentication loginUrl ignored

I hit this issue a while back, and someone else just tripped up on it, so I thought it was worth posting here. If you’ve got loginUrl set in your Forms Authentication configuration in web.config, but your ASP.NET Web Forms or MVC app has suddenly started redirecting to ~/Account/Login for no apparent reason, then the new SimpleMembership(ish) provider is getting in the way. At the moment this seems to happen after updating the MVC version, or after installing .NET 4.5.1.
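In other words, a setting along these lines (the loginUrl here is just an example) suddenly has no effect:

<authentication mode="Forms">
  <forms loginUrl="~/Auth/SignIn" timeout="2880" />
</authentication>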

Try adding the following to your appSettings in the web.config file:

<add key="enableSimpleMembership" value="false"/>

which resolved the issue for me. Still trying to figure out with Microsoft why this is an issue.

Achieving an A+ grading at Qualys SSL Labs (Forward Secrecy in IIS)

At FundApps we love the SSL Labs tool from Qualys for checking best practice on our SSL implementations. They recently announced a bunch of changes introducing stricter security requirements for 2014, and a new A+ grade – so I was curious what it would take to achieve the new A+ grading. There are a few things now required to achieve an A grading and beyond:

  • TLS 1.2 required
  • Keys must be 2048 bits and above
  • Secure renegotiation
  • No RC4 on TLS 1.1 and 1.2 (RC4 has stuck around longer than anyone would like, in order to mitigate the BEAST attack)
  • Forward secrecy for all browsers that support it
  • HTTP Strict Transport Security with a long max age (Qualys haven’t defined exactly what this is, but we use a 1 year value).

We’re using IIS so the focus of this entry is how to achieve an A+ grading in IIS 7/8.

Forward Secrecy & Best Practice Ciphers

Attention to Forward Secrecy has been increasing in recent times – the key benefit being that if, say, the NSA obtains your private key in the future, it can’t be used to decrypt previously recorded communications, because the session keys that encrypted them were never derivable from your long-term key.

To set up support for Forward Secrecy, the easiest approach (in a Windows/IIS world) is to download the latest version of the IIS Crypto tool. This makes it really easy to get your SSL ciphers in the right order with the correct ones enabled, rather than messing directly with the registry.

Once downloaded, if you click the ‘Best Practice’ option, this will enable ECDHE as the preferred cipher suite (required for forward secrecy). The tool does also keep SSL 3.0, RC4 and 3DES enabled in order to support IE 6 on Windows XP. If you don’t require this, you can safely disable SSL 3.0, TLS_RSA_WITH_RC4_128_SHA and TLS_RSA_WITH_3DES_EDE_CBC_SHA in the cipher list. We also disable MD5.
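Under the hood, all the tool is doing is flipping SChannel registry keys for you. For example, disabling SSL 3.0 server-side manually looks something like this sketch (run as administrator; SChannel changes need a reboot to take effect):

$key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server"
New-Item -Path $key -Force | Out-Null
# Enabled = 0 switches the protocol off server-side
New-ItemProperty -Path $key -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null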

HTTP Strict Transport Security

The other part of the equation is the HTTP Strict Transport Security header. The idea here is to stop man-in-the-middle attacks in which an attacker transparently downgrades a secure HTTPS connection to plain HTTP. Visitors can see the connection is insecure, but have no way of knowing that the connection *should* have been secure. By adding a Strict-Transport-Security header (which is remembered by the browser and stored for a specified period), then provided the first communication with the server is not tampered with (by stripping out the header), the browser will prevent non-secure communication from then on.

Doing this is simple – but you need to ensure you only return a Strict-Transport-Security header on HTTPS connections. Any requests over plain HTTP should *not* have this header, and should instead be 301 redirected to the HTTPS version. This is easiest if your website only responds to HTTPS requests in the first place, with a separate website handling the redirect from non-HTTPS requests, as in the sketch below.
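If you’re doing that redirect with the IIS URL Rewrite module, a blanket rule along these lines does the job (a sketch – the rule name is arbitrary):

<rule name="Redirect to HTTPS" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="off" />
  </conditions>
  <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
</rule>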

In our case, a separate website is already responsible for the non-HTTPS redirection, so it was simply a case of adding the following to the system.webServer section of the web.config:

<system.webServer>
  <httpProtocol>
    <customHeaders>
       <add name="Strict-Transport-Security" value="max-age=31536000" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

If you have to deal with both HTTPS and non-HTTPS on the same site, the implementation section on Wikipedia gives an example of how to handle it.
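The usual approach there is an IIS URL Rewrite outbound rule that only writes the header when the connection is actually secure – something like this sketch:

<rewrite>
  <outboundRules>
    <rule name="Add Strict-Transport-Security when HTTPS" enabled="true">
      <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" />
      <conditions>
        <add input="{HTTPS}" pattern="on" ignoreCase="true" />
      </conditions>
      <action type="Rewrite" value="max-age=31536000" />
    </rule>
  </outboundRules>
</rewrite>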

The end result? An A+ grading from the SSL Labs tool.

Migrating old websites & Rewrite maps in IIS 7

If you’re migrating to a new website and need to map old IDs to new IDs, I’ve just discovered that the URL Rewrite module in IIS has a great feature I hadn’t come across before called rewriteMaps. This means that instead of writing a whole bunch of identical-looking rewrite rules, you can write one – and then simply list the ID mappings.

The syntax of the RegEx takes a bit of getting used to, but in our case we needed to map

/(various|folder|names|here)/display.asp?id=[ID]

to a new website url that looked like this:

/show/[NewId]

You can define a rewriteMap very simply – most examples I saw included full URLs here, but we just used the ID maps directly:

<rewriteMaps>
  <rewriteMap name="Articles">
    <add key="389" value="84288" />
    <add key="525" value="114571" />
    <add key="526" value="114572" />
  </rewriteMap>
</rewriteMaps>

You can reference a rewriteMap using {MapName:{SomeCapturedValue}}, so if SomeCapturedValue equalled 525 then you’d get back 114571 in the list above.

Because we’re looking to match a query string-based ID, and you can’t match query string parameters in the primary match clause, we needed to add a condition, and then match on the value that condition captures ({C:1}) instead, using an expression like this:

http://www.newdomain.com/show/{Articles:{C:1}}/

The final rule XML follows:

<rule name="Redirect rule for Articles" stopProcessing="true">
  <match url="(articles|java|dotnet|xml|databases|training|news)/display\.asp" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="id=([0-9]+)" />
  </conditions>
  <action type="Redirect" url="http://www.developerfusion.com/show/{Articles:{C:1}}/" appendQueryString="false" />
</rule>

Saving thumbnails in the original file format with C#

I tripped up on a strange quirk working with the Image and ImageFormat classes recently. The intention was simple – load an Image object from an existing graphic, generate a thumbnail, and save it out in the original format. The Image class in .NET includes a handy “RawFormat” property indicating the correct format to save out in. So far, so easy. Except the object that RawFormat was returning didn’t seem to match any supported ImageFormat, and the Guid was one character out. For example, when loading a JPEG, you got:

b96b3caa-0728-11d3-9d7b-0000f81ef32e

when the Guid for ImageFormat.Jpeg.Guid was in fact

b96b3cae-0728-11d3-9d7b-0000f81ef32e

It turns out that the “RawFormat” seems to change to an internal format the moment you start modifying the original image. So the simple trick is to save the value of the RawFormat property first, do your modifications, and then save out the image using the original RawFormat value.
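In code, the workaround looks something like this sketch (GetThumbnailImage stands in for whatever modification you’re making, and the file names are placeholders):

using System;
using System.Drawing;
using System.Drawing.Imaging;

using (var image = Image.FromFile("photo.jpg"))
{
    // Grab the format *before* touching the image - RawFormat switches
    // to an internal format once the image has been modified
    ImageFormat originalFormat = image.RawFormat;

    using (var thumbnail = image.GetThumbnailImage(120, 90, null, IntPtr.Zero))
    {
        thumbnail.Save("photo-thumb.jpg", originalFormat);
    }
}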

Automatically tracking outbound links in Google Analytics

Google Analytics supports a nifty feature called “Events”, which is designed to allow you to track non-pageview type events. This is particularly helpful if you have an AJAX-type interface on which you want to gather statistics, but another use I’ve found handy is to track clicks on external links to other sites. If you’re using the asynchronous version of the tags (if not, why not?), then you should have some code that uses the window._gaq variable. In order to track events aside from the initial page view, you simply need to call the following each time you want to record an event:

window._gaq.push(['_trackEvent', 'Event Category', 'Event Action']);

Using your favourite JavaScript library (mine are jQuery and MooTools), it’s easy to hook this up to automatically fire Google event tracking for any external hyperlinks on the site. We simply look for all a tags with href attributes that begin with “http://”. Then, if the href doesn’t contain our current hostname, we assume it’s an external link. Obviously the logic could be adjusted for your particular needs!

Here’s the MooTools version:

document.getElements('a[href^="http://"]').addEvent('click',function(link) {
  var href = link.target.href;
  if(href.indexOf(window.location.host) < 0) {
    window._gaq.push(['_trackEvent','Outbound Links', href]);
  }
});

Or for the jQuery fans amongst you, just swap the first line for:

$('a[href^="http://"]').bind('click',function(link) {

Now, after 24 hours or so, if you check out the "Event Tracking" page under "Content" in Google Analytics, you'll see an outbound link category listing all the external links clicked on the site (assuming Javascript is turned on, of course).

Side note: previous versions recommended by Google included a window.setTimeout before allowing the redirect to take place - in order to ensure the request recording the click goes out to Google first - but as far as I can establish, this is no longer necessary.

Side note 2: By tracking outbound clicks this will affect your bounce rate figures. Essentially an event counts as another activity, so if a visitor lands on one page, and then clicks an external link, that will not count as a "bounce". Whether this is a fair reflection of bounces or not depends on your viewpoint - but something to bear in mind. Unfortunately there's currently no way to log an event that doesn't affect the bounce rate.

Detecting 404 errors after a new site design

We recently re-designed Developer Fusion and as part of that we needed to ensure that any external links were not broken in the process. In order to monitor this, we used the awesome LogParser tool. All you need to do is open up a command prompt, navigate to the directory containing your web site’s log files, and run a query like this:

"c:\program files (x86)\log parser 2.2\logparser" "SELECT top 500 cs-uri-stem,COUNT(*) as Computed FROM u_ex*.log WHERE sc-status=404 GROUP BY cs-uri-stem order by COUNT(*) as Computed desc" -rtp:-1 > topMissingUrls.txt

And you’ve got a text file with the top 500 requested URLs that are returning 404. Simple!

Posting to Facebook Page using C# SDK from offline app

If you want to post to a Facebook page using the Facebook Graph API and the Facebook C# SDK from an “offline” app, there are a few steps you should be aware of.

First, you need to get an access token that your Windows service or app can use permanently. You can get this by visiting the following URL (all on one line), replacing [ApiKey] with your application’s Facebook API key:


http://www.facebook.com/login.php?api_key=[ApiKey]&connect_display=popup&v=1.0
&next=http://www.facebook.com/connect/login_success.html&cancel_url=http://www.facebook.com/connect/login_failure.html
&fbconnect=true&return_session=true&req_perms=publish_stream,offline_access,manage_pages&return_session=1
&sdk=joey&session_version=3

The URL you get redirected to will include an access token in its parameters. Note, however, that this token only lets you post to your own profile page. Next, you need to get a separate access token to post to the specific page you want to access. To do this, go to:


https://graph.facebook.com/[YourUserId]/accounts?access_token=[AccessTokenFromAbove]

You can find your user ID in the URL when you click on your profile image.
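The response lists the ID of each Facebook page you manage, along with a page-specific access token for each – something along these lines (the ID and token here are made up):

{
  "data": [
    {
      "name": "My Product Page",
      "id": "1234567890",
      "access_token": "abcdef"
    }
  ]
}

Using the appropriate page ID and access token pair, you can then use code like this: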

var app = new Facebook.FacebookApp(_accessToken);
var parameters = new Dictionary<string, object>
{
    { "message",  promotionInfo.TagLine },
    { "name" ,  promotionInfo.Title },
    { "description" ,  promotionInfo.Description },
    { "picture", promotionInfo.ImageUrl.ToString() },
    { "caption" ,  promotionInfo.TargetUrl.Host },
    { "link" ,  promotionInfo.TargetUrl.ToString() },
    { "type" , "link" },
};
app.Post(_targetId + "/feed", parameters);

And you’re done!

Applying app.config transformations (in the same way as web.config)

Visual Studio 2010 doesn’t support config transformations for app.config files in the way that web projects do for web.config – a real shame, as varying connection strings and other configuration settings for different release modes is just as useful outside web projects. You can vote on the issue here. In the meantime though, the ASP.NET team have a fix, detailed here.

All you need to do is save their custom targets file, then add an Import tag immediately before the closing </Project> tag:

  ...
  <Import Project="$(MSBuildExtensionsPath)\Custom\TransformFiles.targets" />
</Project>

And add a TransformOnBuild metadata property to each config file you want transformed. So

<None Include="app.config" />

becomes

<None Include="app.config">
  <TransformOnBuild>true</TransformOnBuild>
</None>

(note you don’t need to do this for the configuration-specific config files such as app.Release.config). Then you can write your app.Release.config and similar files in the same way you do for web.config files. Sweet!
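For example, a hypothetical app.Release.config that swaps a connection string over uses exactly the same XDT transform syntax as a web.Release.config would (the names and connection string here are made up):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="Default"
         connectionString="Server=prod-sql;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>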