
Integrating NDepend metrics into your Build using F# Make & TeamCity

NDepend is an analysis tool giving you all kinds of code quality metrics, but also tools to drill down into dependencies, and query and enforce rules over your code base.

There’s a version that integrates with Visual Studio, but there’s also a version that runs on the console to generate static reports, and enforce any code rules you might have written.

I wanted to see how easy it would be to combine all of this and use NDepend to generate actionable reports and metrics on our code base – not just now, but how they shift over time.

To do this, you need to

  1. Run your unit tests via a code coverage tool, such as dotCover. This has a command line version bundled with TeamCity which you are free to use directly.
  2. Run NDepend with your code coverage files and NDepend project file
  3. Store NDepend metrics from build-to-build so it can track trends over time

I’ve covered step 1 in my previous post on generating coverage reports using dotCover. I recommend you read that first!

We can then extend this code to feed the code coverage output into NDepend.

Downloading your dependencies

I’ve already covered this in the previous post I mentioned, so using the same helper method, we can also download our NDepend executables from an HTTP endpoint, and ensure we have the appropriate license key.

Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    ensureToolIsDownloaded "nDepend" "https://YourLocalDotCoverDownloadUrl/NDepend_5.2.1.8320.zip"
    CopyFile (toolsDir @@ "ndepend" @@ "NDependProLicense.xml") (toolsDir @@ "NDependProLicense.xml")
)

Generating the NDepend coverage file

Before running NDepend itself, we also need to generate a coverage file in the correct format. I’ve simply added another step to the “TestCoverage” target I defined previously:

DotCoverReport (fun p -> { p with
Source = artifactsDir @@ "DotCover.snapshot"
Output = artifactsDir @@ "DotCover.xml"
ReportType = DotCoverReportType.NDependXml })

Generating the NDepend report

Now we can go and run NDepend itself. You’ll need to have already generated a sensible NDepend project file, which means the command line arguments are pretty straightforward:

Target "NDepend" (fun _ ->

NDepend (fun p -> { p with
ProjectFile = currentDirectory @@ "Rapptr.ndproj"
CoverageFiles = [artifactsDir @@ "DotCover.xml" ]
})
)

I’m using an extension I haven’t yet submitted to F# Make, but you can find the code here; just reference it at the top of your F# Make script using

#load @"ndepend.fsx"
open Fake.NDepend

After adding a dependency between the NDepend and EnsureDependencies targets, we’re all good to go!

Recording NDepend trends using TeamCity

To take this one step further, and store historical trends with NDepend, we need to persist a metrics folder across analysis runs. This could be a shared network drive, but in our case we actually just “cheat” and use TeamCity’s artifacts mechanism.

Each time our build runs, we store the NDepend output as an artifact – and restore the artifacts from the previous successful build the next time we run. This used to be a bit of a pain, but as of TeamCity 8.1 you can now reference your own artifacts to allow for incremental-style builds.

In our NDepend configuration in TeamCity, ensure the artifacts path (under general settings for that build configuration) includes the NDepend output. For instance

artifacts/NDepend/** => NDepend.zip

Then go to Dependencies, and add a new artifact dependency. Select the same configuration in the drop-down (so it’s self-referencing), and select “Last Finished Build”. Then add a rule to extract the artifacts and place them in the same location that NDepend will run in during the build, for instance

NDepend.zip!** => artifacts/NDepend

TeamCity Report tabs

Finally, you can configure TeamCity to display an NDepend report tab for the build. Just go to “Report tabs” in the project (not build configuration) settings, and add NDepend using the start page “ndepend.zip!NDependReport.html” (for instance).

Hope that helps someone!

Code coverage using dotCover and F# make

I’ve previously depended a little too much on TeamCity to construct our build process, but have been increasingly shifting everything to our build scripts (and therefore source control).

We’ve been using F# make – an awesome cross platform build automation tool like make & rake.

As an aside (before you ask): The dotCover support in TeamCity is already excellent – as you’d expect – but if you want to use these coverage files elsewhere (NDepend, say), then you can’t use the out-of-the-box options very easily.

Downloading your dependencies

We’re using NUnit and MSpec to run our tests, and so in order to run said tests, we need to ensure we have the test runners available. Rather than committing them to source control, we can use F# make’s support for restoring NuGet packages.

RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"

DotCover is a little trickier, as there’s no NuGet package available (the command line exe is bundled with TeamCity). So, we use the following helper and create an F# Make target called “EnsureDependencies” to download our dotCover and NDepend executables from an HTTP endpoint:

let ensureToolIsDownloaded toolName downloadUrl =
    if not (TestDir (toolsDir @@ toolName)) then
        let downloadFileName = Path.GetFileName(downloadUrl)
        trace ("Downloading " + downloadFileName + " from " + downloadUrl)
        let webClient = new System.Net.WebClient()
        webClient.DownloadFile(downloadUrl, toolsDir @@ downloadFileName)
        Unzip (toolsDir @@ toolName) (toolsDir @@ downloadFileName)

Target "EnsureDependencies" (fun _ ->
    ensureToolIsDownloaded "dotCover" "https://YourLocalDotCoverDownloadUrl/dotCoverConsoleRunner.2.6.608.466.zip"
    RestorePackageId (fun p -> { p with OutputPath = "tools"; ExcludeVersion = true; Version = Some (new Version("2.6.3")) }) "NUnit.Runners"
)

Generating the coverage reports

Next up is creating a target to actually run our tests and generate the coverage reports. We’re using the DotCover extensions in F# Make that I contributed a little while back. As mentioned, we’re using NUnit and MSpec which adds a little more complexity – as we must generate each coverage file separately, and then combine them.

Target "TestCoverage" (fun _ ->

  let filters = "-:*.Tests;" // exclude test assemblies from coverage stats
  // run the NUnit tests via dotCover
  !! testAssemblies
      |> DotCoverNUnit (fun p -> { p with
                                      Output = artifactsDir @@ "NUnitDotCover.snapshot"
                                      Filters = filters }) nunitOptions
  // run the MSpec tests via dotCover
  !! testAssemblies
      |> DotCoverMSpec (fun p -> { p with
                                      Output = artifactsDir @@ "MSpecDotCover.snapshot"
                                      Filters = filters }) mSpecOptions
  // merge the code coverage files
  DotCoverMerge (fun p -> { p with
                                Source = [artifactsDir @@ "NUnitDotCover.snapshot"; artifactsDir @@ "MSpecDotCover.snapshot"]
                                Output = artifactsDir @@ "DotCover.snapshot" })
  // generate an HTML report
  // you could also generate other report types here (such as NDepend)
  DotCoverReport (fun p -> { p with
                                Source = artifactsDir @@ "DotCover.snapshot"
                                Output = artifactsDir @@ "DotCover.htm"
                                ReportType = DotCoverReportType.Html })
)

All that’s left is to define the dependency hierarchy in F# make:

"EnsureDependencies"
==> "TestCoverage"

And off you go – calling your build script with the “TestCoverage” target should run all your tests and generate the coverage reports.

Cisco VPN Client for Windows 8

There isn’t currently a version of Cisco’s VPN client that supports Windows 8, and after installation I received an error message complaining that the “VPN Client failed to enable virtual adapter.”

Fortunately, there is a way to get this “legacy” VPN client to work, with a small registry change:

  • Open up the registry editor by typing regedit in the Run prompt
  • Browse to the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA
  • Edit the DisplayName entry and remove the leading characters from the value data up to and including “%;”, i.e.
    • For x86, change the value data from something like “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter” to “Cisco Systems VPN Adapter”
    • For x64, change the value data from something like “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter for 64-bit Windows” to “Cisco Systems VPN Adapter for 64-bit Windows”
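The edit in those last two bullets is just a string operation – stripping everything up to and including the first semicolon. In Python terms (purely illustrative; `fix_display_name` is my own name for it):

```python
def fix_display_name(value):
    """Strip the '@oemN.inf,%CVirtA_Desc%;' prefix, keeping the text after the first ';'."""
    head, sep, tail = value.partition(";")
    # If there's no semicolon, the value is already clean - leave it alone
    return tail if sep else value

print(fix_display_name("@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter"))
# Cisco Systems VPN Adapter
```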

Then you can try connecting again – this did the trick for me.

Disabling Chrome’s Metro app in Windows 8

At time of writing, if you replace IE with Chrome on Windows 8 then Chrome installs both a desktop and a Metro version of itself. Personally, as most of my time is spent in the desktop, I’d rather Chrome just always opened there.

There’s currently an open issue on the chromium website, but in the meantime there’s a relatively simple workaround. You just need to open up regedit, navigate to

HKEY_CLASSES_ROOT\ChromeHTML\shell\open\command

and then rename/remove the DelegateExecute entry. Then Chrome will always open in desktop mode – problem solved!

MSDTC gotchas with Virtual Machines

Setting up some new infrastructure with a web and a separate DB tier, I was hit with the usual MSDTC woes.

Error messages progressed bit by bit as I opened things up:

Attempt #1: The partner transaction manager has disabled its support for remote/network transactions.

Attempt #2: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.

Attempt #3: The MSDTC transaction manager was unable to push the transaction to the destination transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn’t have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers.

I couldn’t get past the final error though. DTCPing is a very useful tool if you’re struggling with this, along with this TechNet article on what settings should be in place. One warning popped up that sent me in the right direction:

WARNING:the CID values for both test machines are the same while this problem won’t stop DTCping test, MSDTC will fail for this

As it happens, both machines were from an identical VM clone, and therefore had identical “CID” values. You can check this by going to HKEY_CLASSES_ROOT\CID. Look for the key that has a description of “MSDTC”.

Then I found Brian’s article, where he’d already done the hard work; this set me on my way – essentially you just need to uninstall and reinstall MSDTC on both of the machines. The following worked for me:

  1. Run “msdtc -uninstall” (from an admin prompt)
  2. Run “msdtc -install”
  3. Reconfigure MSDTC again from Component Services\My Computer\Distributed Transaction Coordinator\Local DTC (right click, properties)

And off you go… (don’t forget to enable the predefined DTC rules for local hosts in advanced firewall settings too)

Migrating old websites & Rewrite maps in IIS 7

If you’re migrating to a new website and need to map old IDs to new IDs, I’ve just discovered that the UrlRewrite plugin in IIS has a great feature I hadn’t come across before called rewriteMaps. This means instead of writing a whole bunch of identical-looking rewrite rules, you can write one – and then simply list the ID mappings.

The syntax of the RegEx takes a bit of getting used to, but in our case we needed to map

/(various|folder|names|here)/display.asp?id=[ID]

to a new website url that looked like this:

/show/[NewId]

You can define a rewriteMap very simply – most examples I saw included full URLs here, but we just used the ID maps directly:

<rewriteMaps>
  <rewriteMap name="Articles">
    <add key="389" value="84288" />
    <add key="525" value="114571" />
    <add key="526" value="114572" />
  </rewriteMap>
</rewriteMaps>

You can reference a rewriteMap using {MapName:{SomeCapturedValue}}, so if SomeCapturedValue equalled 525 then you’d get back 114571 in the list above.

Because we’re looking to match a query-string-based ID, and you can’t match query string parameters in the primary match clause, we needed to add a condition and then match on that captured condition value instead, using an expression like this:

http://www.newdomain.com/show/{Articles:{C:1}}/

The final rule XML follows:

<rule name="Redirect rule for Articles" stopProcessing="true">
  <match url="(articles|java|dotnet|xml|databases|training|news)/display\.asp" />
  <conditions>
    <add input="{QUERY_STRING}" pattern="id=([0-9]+)" />
  </conditions>
  <action type="Redirect" url="http://www.developerfusion.com/show/{Articles:{C:1}}/" appendQueryString="false" />
</rule>
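If you want to sanity-check a rule like this before deploying it, the matching and lookup logic is easy to model: match the path, capture the ID from the query string, then substitute the mapped value. A rough sketch (Python purely for illustration; `redirect_for` is a hypothetical helper mirroring the rule above – note that the real URL Rewrite module substitutes an empty string for a missing map key, whereas this sketch just returns None):

```python
import re

# The rewriteMap from above, as a plain dictionary
articles = {"389": "84288", "525": "114571", "526": "114572"}

def redirect_for(path, query):
    # <match url="..."> runs against the path only (no query string)
    if not re.match(r"(articles|java|dotnet|xml|databases|training|news)/display\.asp", path):
        return None
    # the <conditions> entry captures the id from {QUERY_STRING} as {C:1}
    m = re.search(r"id=([0-9]+)", query)
    if not m or m.group(1) not in articles:
        return None  # no condition match, or id missing from the map
    # {Articles:{C:1}} substitutes the mapped value into the redirect URL
    return "http://www.developerfusion.com/show/%s/" % articles[m.group(1)]

print(redirect_for("articles/display.asp", "id=525"))
# http://www.developerfusion.com/show/114571/
```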

Determining if an assembly is x64 or x86

After encountering a strange deployment issue today, I eventually tracked it down to an x86 assembly being deployed to an x64 process. There’s a tool included with Visual Studio called corflags that was helpful here. Open up a Visual Studio command prompt, type corflags.exe assemblyname.dll and you’ll see something like this:

Version : v4.0.20926
CLR Header: 2.5
PE : PE32
CorFlags : 11
ILONLY : 1
32BIT : 1
Signed : 1

for a 32 bit assembly, and

Version : v4.0.20926
CLR Header: 2.5
PE : PE32
CorFlags : 9
ILONLY : 1
32BIT : 0
Signed : 1

for a “Any CPU” assembly. There’s more details on everything these fields mean in Brian Peek’s excellent blog post on the topic.
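The CorFlags value itself is just a bit field (per ECMA-335: 0x1 = ILONLY, 0x2 = 32BITREQUIRED, 0x8 = STRONGNAMESIGNED), so the two dumps above decode directly. A small illustrative decoder (`describe_corflags` is my own helper, not part of the corflags tool):

```python
# Flag values from ECMA-335 (the CLI header's Flags field, as shown by corflags)
ILONLY = 0x1
BIT32_REQUIRED = 0x2
STRONGNAME_SIGNED = 0x8

def describe_corflags(corflags, pe="PE32"):
    """Classify an assembly the way the two corflags dumps above imply."""
    if pe == "PE32+":
        return "x64"            # built explicitly for 64-bit
    if corflags & BIT32_REQUIRED:
        return "x86"            # PE32 with 32BIT:1
    return "Any CPU"            # PE32 with 32BIT:0

print(describe_corflags(11))  # x86 (11 = ILONLY | 32BITREQUIRED | signed)
print(describe_corflags(9))   # Any CPU (9 = ILONLY | signed)
```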

NServiceBus audit queues

Being new to the world of NServiceBus, I just thought I’d share a few gotchas as I experience them.

When everything’s up and running there’s no easy way to see what’s going on as messages appear and disappear from the normal message queue very quickly. You can use an audit queue to log all messages appearing on a queue. To do this, in your app config you simply need to use the ForwardReceivedMessagesTo attribute, like so:

<UnicastBusConfig ForwardReceivedMessagesTo="MyAuditQueue@MachineName">
....
</UnicastBusConfig>

NServiceBus won’t automatically create an audit queue, so you’ll need to create it yourself.

You can do this in code using:

NServiceBus.Utils.MsmqUtilities.CreateQueueIfNecessary(QueueName)

Alternatively, you can create it using the admin interface, but you need to ensure it has the same settings and permissions as the NServiceBus queues. Notably, that SYSTEM has permissions on the queue, and that it is transactional (if your queue is) – otherwise your audit queue will remain empty!

Deploying windows services using MsDeploy

Running MsDeploy is awesome for automated deployments of websites, but it’s also possible to use it to deploy other applications to the file system – such as associated windows services. You just need to jump through a few more hoops to get things up and running.

I’m using TeamCity for our integration server, but the basic steps will work regardless of the system you are using. I tend to set up TeamCity to have a general “Build entire solution” configuration. This builds the entire project in release mode, and performs any config transformations you need (check out my post here if you need to transform app.config files for your service).

Next, for each component and configuration we want to deploy (i.e. website to staging, website to production, services to staging, services to production), I create a new build configuration, with a dependency on the “Build entire solution” configuration. This means we can assume that the build has completed successfully.

After the build, there’s a few steps that need to complete:

  • Stop the existing service and uninstall it
  • Copy over the output from the build to the target deployment server
  • Install the new service and start it

Stopping and starting the services

For the first and last steps, we can define two simple batch files for each, with a hard coded path of where we’ll install the service on the target server.

MyServiceName.PreSync.cmd

net stop MyServiceName
C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /u /name=MyServiceName "C:\Program Files\PathTo\MyServiceName.exe"
sleep 20

MyServiceName.PostSync.cmd

C:\Windows\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe /name=MyServiceName "C:\Program Files\PathTo\MyServiceName.exe"
net start MyServiceName

These should be saved in source control as part of your project resources (I put them in a Deploy folder), and so accessible from the build server. These are very basic at the moment – they could equally be PowerShell scripts doing far more complicated things or accepting configurable parameters – but this will do us for our example scenario!

We will use MsDeploy’s preSync and postSync commands to execute these batch files before and after it performs the synchronization on the file system.

MsDeploy command

Let’s now take a look at the MsDeploy command needed:

"tools/deploy/msdeploy.exe" -verb:sync -preSync:runCommand="%system.teamcity.build.checkoutDir%\tools\deploy\MyServiceName.PreSync.cmd",waitInterval=30000 -source:dirPath="%system.teamcity.build.checkoutDir%\src\MyServiceName\bin\%env.Configuration%" -dest:computerName=https://stagingserver:8172/msdeploy.axd?site=DummyWebSiteName,userName=%env.UserName%,password=%env.Password%,authType=basic,dirPath="C:\Program Files\MyWindowsService\" -allowUntrusted -postSync:runCommand="%system.teamcity.build.checkoutDir%\tools\deploy\MyServiceName.PostSync.cmd",waitInterval=30000

Let’s just break this down:

  • verb:sync – we are syncing!
  • preSync:runCommand – before we perform the deployment, we can pass the path to a batch file that will be streamed to the deployment server and executed. By default, this will be run under a restricted local service account (“The WMSvc uses a Local Service SID account that has fewer privileges than the Local Service account itself.” – from MSDN).
  • source:dirPath – this sets the path we want to copy files from. We’re using a parametrized build template in TeamCity to pass in the full path to the source directory and the current configuration.
  • dest:computerName – this is actually several parameters combined. I tried various permutations, and this is what worked best for me. I’m not using NTLM authentication here (so authType=basic) because my staging and production servers are on an external network. The username and password are for an IIS Management Service user that we’ll set up in a minute (and are also parametrized by TeamCity – but you could hard code them here).
  • allowUntrusted – allows MsDeploy to accept the unsigned certificate from our target server. You don’t need this if you’re using an SSL certificate from a trusted authority.
  • postSync:runCommand – the command we run after a successful deployment.

There’s one gotcha with the preSync and postSync operations at the moment – if preSync or postSync returns an error code (such as being unable to install the service or start it), the whole MsDeploy action still returns success. I haven’t found a nice way round this yet – you’d have to write some PowerShell script to parse the output and detect errors. Microsoft know about the issue, so hopefully it will be fixed in the next release.
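Until that’s fixed, one crude workaround is to capture MsDeploy’s console output and fail the build yourself when it contains the tell-tale failure lines. A rough sketch (Python purely for illustration; `deploy_succeeded` and the patterns are my own, based on the error messages later in this post – adjust them for your own logs):

```python
import re

def deploy_succeeded(output):
    """Return False if MsDeploy's console output shows a pre/postSync failure."""
    # MsDeploy exits 0 even when a runCommand fails, so look for tell-tale lines
    failure_patterns = [
        r"exited with code '?0x[1-9a-fA-F]",   # non-zero exit code from a runCommand
        r"Error during '-(pre|post)Sync'",
    ]
    return not any(re.search(p, output) for p in failure_patterns)

print(deploy_succeeded("Total changes: 3 (3 added, 0 deleted)"))         # True
print(deploy_succeeded("Warning: The process exited with code '0x1'."))  # False
```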

Configuring MsDeploy

Before we try and run this command, we need to set up a few things on the target server we are deploying to. I’m assuming you’re already using MsDeploy to deploy websites, and so you can already see IIS Management Service, IIS Manager Permissions, IIS Manager Users, and Management Service Delegation appearing as options under “Management” in your main IIS server configuration screen.

  • Create a new IIS user from the IIS Manager Users screen. Alternatively, you can create a Windows user and use that instead.
  • Even though we’re installing a service, we still need a target IIS website to associate our credentials with. This could be a dedicated empty website (it doesn’t need to be running) or an existing one. Make sure you replace “DummyWebSiteName” in the command above with the name of the actual website you choose. The underlying path doesn’t matter, as we override the target path as part of our MsDeploy command.
  • Go into “IIS Manager Permissions” for the dummy website you are using, click “Allow user” and select either the IIS or Windows user you created above.
  • Next, go into “Management Service Delegation”. We need to create two permissions – one so we can deploy the files to the file system, and another so we can run the pre/post sync commands. For the first, click “Add Rule”, select “Blank Rule”, type “contentPath” in the providers field and “*” in actions, and set the Path to the folder you are going to deploy the service to. Save that, and add another blank rule.
  • For this second rule, type “runCommand” in the providers field, “*” in actions, and choose “SpecificUser” under the Run As… Identity Type field. We need to run under elevated permissions in order to stop/start services and install them. Choose a user account that has these credentials.

File and user account permissions

In order for everything to work, we need to ensure that MsDeploy can access the folder we’re deploying to. We also need to extend the Local Service account so that it can impersonate a more elevated user in order to run the console commands necessary to stop/start and install services (note there are security implications to this – see MSDN for more details).

  • Add read/write access to Local Service account to the target deployment folder
  • Run the following command on the console

sc privs wmsvc SeChangeNotifyPrivilege/SeImpersonatePrivilege/SeAssignPrimaryTokenPrivilege/SeIncreaseQuotaPrivilege

  • Finally, you need to restart the Web Management Service for this to take effect.

If all has been set up correctly, you should now be all good to go – services will automatically deploy and get started!

Ignoring/preserving files

In a similar fashion to when deploying websites, you may find you wish to preserve logging folders and similar during deployment. You can do this by adding some additional parameters to the MsDeploy command. For instance:

-skip:objectName=filePath,skipAction=Delete,absolutePath=\\Logs\\.*$

will preserve any files in the Logs directory.
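Since absolutePath is a regular expression matched against each file’s path, it’s worth testing a candidate pattern before wiring it into the command. For instance (Python’s re used purely to illustrate the match; the sample paths are made up):

```python
import re

# The absolutePath value from the -skip argument above
skip = re.compile(r"\\Logs\\.*$")

print(bool(skip.search(r"C:\inetpub\MyService\Logs\2014-05-01.log")))  # True - preserved
print(bool(skip.search(r"C:\inetpub\MyService\bin\MyService.exe")))    # False - deployed
```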

Common error messages & troubleshooting

When starting out with MsDeploy it’s likely you’ll hit a fair number of permission denied errors – without too much more information. Logging is your friend.

Request logging – enabled through the Management Service configuration window in IIS; you will find requests logged to %SystemDrive%\Inetpub\logs\WMSvc

Failed request tracing – enabled through the Management Service Delegation configuration window; click “Edit Feature Settings” and “Enable failed request tracing logs”. You will find these at C:\inetpub\logs\wmsvc\TracingLogFiles\W3SVC1

Web Management Service tracing – enabled through a registry key, described on MSDN.

Below I’ve included some common error messages and some possible causes.

“Connected to the destination computer (“xyz”) using the Web Management Service, but could not authorize. Make sure that you are using the correct user name and password, that the site you are connecting to exists, and that the credentials represent a user who has permissions to access the site.”

Probably because the username and password you are using are invalid (they haven’t been set up) or do not have permissions set for the particular “dummy” website you are targeting.

“Could not complete an operation with the specified provider (“runCommand”) when connecting using the Web Management Service. This can occur if the server administrator has not authorized the user for this operation.”

Most likely you have not set up the correct delegated services through the Management Service Delegation window – either no runCommand permissions have been set, or the delegated user doesn’t have permissions to run the command.

“Could not complete an operation with the specified provider (“dirPath”) when connecting using the Web Management Service. This can occur if the server administrator has not authorized the user for this operation.”

Either you haven’t set the dirPath permissions via the Management Service Delegation window, or the Local Service account does not have read/write access to the specified directory.

“Error during ‘-preSync’. An error occurred when the request was processed on the remote computer. The server experienced an issue processing the request. Contact the server administrator for more information.”

This occurred for me if you haven’t given the Web Management Service permissions to impersonate another user using the sc privs described above, or you have, but haven’t restarted the service yet.

Info: Updating runCommand. Warning: Access is denied. Warning: The process ‘C:\Windows\system32\cmd.exe’ (command line ‘/c “C:\Windows\ServiceProfiles\LocalService\AppData\Local\Temp\giz2t0kb.0ay.cmd”’) exited with code ‘0x1’.

This occurred for me when I had set the Management Service Delegation for runCommand, but left the service running as its built-in identity rather than “RunAs”… “Specific user”.

I hope this helps someone!