Yesterday I discovered Package Explorer for opening, validating and editing OpenXML documents. Today I found the OpenXml SDK Tool.

SDK Tool is a great piece of software from Microsoft for any developer working with OpenXml. It is similar to Package Explorer, but there is no edit functionality and the validation messages are not as good. In return you get instant access to documentation about any element you can imagine.
And the killer feature is CODE GENERATION! It generates C# code for any element available in the document.

You can create a Word document, open it in SDK Tool and it will produce a full C# listing that creates exactly the same document from code! This is just fantastic! Using this tool today I was discovering how things are done, in a simple way: add page numbers to an empty Word document, open it in SDK Tool, find the part about the page numbers (probably the hardest step), copy the generated code, refactor slightly, and I had page numbers in my generated documents. All of it took about 15 minutes and I did not use Google or any other documentation.
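To give an idea of what comes out, here is roughly what the page-numbers exercise boils down to. This is a hedged sketch written from memory rather than the exact generated listing, and it assumes mainPart is the MainDocumentPart of an open WordprocessingDocument:

// Add a footer part containing a PAGE field...
var footerPart = mainPart.AddNewPart<FooterPart>();
footerPart.Footer = new Footer(
    new Paragraph(
        new SimpleField { Instruction = "PAGE" }));
footerPart.Footer.Save();

// ...and reference the footer from the body's section properties.
mainPart.Document.Body.Append(
    new SectionProperties(
        new FooterReference
        {
            Type = HeaderFooterValues.Default,
            Id = mainPart.GetIdOfPart(footerPart)
        }));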

Anyway, download SDK Tool from here: http://www.microsoft.com/en-gb/download/details.aspx?id=30425. Select only the SDK Tool to download; you won't need anything else on that page.

Once installed, find Open XML SDK Productivity Tool in your Start menu and open a Word document.

You can select any element of the document and see its documentation, with all available child node types:

[Screenshot: Documentation]

You can run validation of any element in the document:

[Screenshot: Validation]

And the promised killer feature – Generated code:

[Screenshot: Generated code]

Another quite useful function is document comparison. Yesterday I had to do that to find out how a particular feature is implemented. Now that you have generated code this may not be so useful, but you can compare two Word documents and it will show you the differences in the underlying XML.

Stack Overflow is useful right up until you have a quite specific question and have already used Google before asking. My last 11 questions were left unanswered for one reason or another.

Today was no exception. My question about the debugging process during OpenXml development went unnoticed, so I had to figure it out for myself. Sigh!

UPDATE: I have discovered the Open XML SDK Tool, which adds a lot of features to this game.

Anyway, I digress. My current task involves generating an MS Word document from C#. For that I'm using the OpenXml library available in .Net. This is my first time touching OpenXml, and maybe I'm talking about basic stuff, but it was not easily googleable.

Many times after writing a Word file, I try to open it and see this message:

The file .docx cannot be opened because there are problems with the contents. Details: Unspecified error

This is really frustrating. The error message sometimes gives you a column inside the xml file to look at, but does not say what exactly is wrong with the document. You stare hard at the generated xml but can't see anything, because your eyes bleed from the angle brackets. Eventually you figure out that you were trying to do something silly, like adding Text to a Paragraph without a Run, or adding a Paragraph directly to a TableRow without a TableCell (if you've done some OpenXml, you'll know what I'm talking about!).
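To illustrate the kind of mistake I mean, here is a made-up minimal example (DocumentFormat.OpenXml.Wordprocessing classes, not code from a real project):

// Compiles fine, but produces a document Word refuses to open:
// a Text element is not allowed directly inside a Paragraph.
var broken = new Paragraph(new Text("boom"));

// What the schema expects: the Text lives inside a Run.
var valid = new Paragraph(new Run(new Text("fine")));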

I thought there should be a better way to validate documents, and there was! Right on the page where I took my documentation from, there was a link to a page that talked about document validation! How did I not see that!?

Here it is for your consideration: http://msdn.microsoft.com/en-us/library/office/bb497334(v=office.15).aspx

Turns out the OpenXml library has an OpenXmlValidator class that does what it says on the tin – validates your OpenXml documents.

Being strict about my tests, I instantly rewrote the sample into a test helper that takes a Stream with the generated OpenXml document and outputs the validation errors:

using System;
using System.IO;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Validation;


public static class WordDocumentValidator
{
    public static void ValidateWordDocument(Stream wordDocumentStream)
    {
        using (var wordprocessingDocument = WordprocessingDocument.Open(wordDocumentStream, false))
        {
            var validator = new OpenXmlValidator();
            var validationErrors = validator.Validate(wordprocessingDocument).ToList();
            var errorMessage = String.Format("There are {0} validation errors with document", validationErrors.Count);

            if (validationErrors.Any())
            {
                Console.WriteLine(errorMessage);
                Console.WriteLine();
            }

            foreach (var error in validationErrors)
            {
                Console.WriteLine("Description: " + error.Description);
                Console.WriteLine("ErrorType: " + error.ErrorType);
                Console.WriteLine("Node: " + error.Node);
                Console.WriteLine("Path: " + error.Path.XPath);
                Console.WriteLine("Part: " + error.Part.Uri);
                if (error.RelatedNode != null)
                {
                    Console.WriteLine("Related Node: " + error.RelatedNode);
                    Console.WriteLine("Related Node Inner Text: " + error.RelatedNode.InnerText);
                }
                Console.WriteLine();
                Console.WriteLine("==============================");
                Console.WriteLine();
            }

            if (validationErrors.Any())
            {
                throw new Exception(errorMessage);
            }
        }

    }
}

And this Validator should be used like this:

using System.IO;
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;
using Xunit;


public class ValidatorTestSample
{
    // here you generate your Word document
    public static Stream GenerateValidDocument()
    {
        var memoryStream = new MemoryStream();
        using (var wordDocument = WordprocessingDocument.Create(memoryStream, WordprocessingDocumentType.Document))
        {
            // Add a main document part. 
            var mainPart = wordDocument.AddMainDocumentPart();

            // Create the document structure and add some text.
            mainPart.Document = new Document();
            var body = mainPart.Document.AppendChild(new Body());
            var paragraph = body.AppendChild(new Paragraph());
            var run = paragraph.AppendChild(new Run());
            run.AppendChild(new RunProperties(
                    new FontSize() { Val = "40" },
                    new RunFonts() { Ascii = "Helvetica" }
                ));
            run.AppendChild(new Text("Create text in body - CreateWordprocessingDocument"));
        }

        return memoryStream;
    }

    [Fact]
    public void GenerateValidDocument_Always_CreatesValidatedDocument()
    {
        var result = GenerateValidDocument();

        WordDocumentValidator.ValidateWordDocument(result);
    }
}

This is a very simplistic approach to the test, but it shows the general idea. This exact code would give you a failed test with output like this:

There are 1 validation errors with document

Description: The element has unexpected child element 'http://schemas.openxmlformats.org/wordprocessingml/2006/main:rFonts'.
ErrorType: Schema
Node: DocumentFormat.OpenXml.Wordprocessing.RunProperties
Path: /w:document[1]/w:body[1]/w:p[1]/w:r[1]/w:rPr[1]
Part: /word/document.xml
Related Node: DocumentFormat.OpenXml.Wordprocessing.RunFonts
Related Node Inner Text: 

It took me a while to figure out why the test was failing. It says the problem is with the RunProperties node, and that the RunFonts element is unexpected inside RunProperties. Yet the document renders perfectly fine if you open it in MS Word. The issue is that the validator expects the properties to be provided in a specific order. So if you swap the order in which you add properties to the run:

run.AppendChild(new RunProperties(
        new RunFonts() { Ascii = "Helvetica" },
        new FontSize() { Val = "40" }
    ));

This test passes. Strange, but at least I know exactly where to look for the problem!

But to be honest, my validator test is not how I found out about the validator's order preference. That was the open-source project Open XML Package Explorer. The software lets you open OpenXml documents in their XML form without unleashing a zip archiver and Notepad++.

This is how exactly the same document looks in Package Explorer:

[Screenshot: Package Explorer validation]

Here you can see the generated XML in a readable format. Plus you can run a validator on your document and get a slightly better validation error message. This is very good for manually tweaking your XML, making sure it works and then converting it to code. And this is where I figured out that the order of properties does matter.

So, to conclude: for automatic validation of your documents use the provided validation class; to manually figure out what is wrong, use Package Explorer.

Entity Framework migrations are very clever. But cleverness can come out of the wrong end sometimes. Over the last two days I've spent at least a day debugging my migration scripts and not getting anywhere.

The trouble was that EF thought there was a migration pending, but gave me nothing to migrate: when I ran Add-Migration, all I got was an empty migration:

public override void Up()
{
}

public override void Down()
{
}

But still, on every attempt to use the domain context, an exception was thrown:

Unable to update database to match the current model because there are pending changes and automatic migration is disabled. Either write the pending model changes to a code-based migration or enable automatic migration. Set DbMigrationsConfiguration.AutomaticMigrationsEnabled to true to enable automatic migration.

That was soul destroying. I’ve spent hours banging my head against the keyboard with no results.

When I had to fix other problems, I kind of put out the fire temporarily by saying:

Database.SetInitializer<DomainContext>(null);

This is a hack: it makes the exception go away, but does not fix the problem with the migration. So I only put it in place as a quick fix to move forward a bit.

Today I added the Entity Framework source code into my project and stepped through the EF code. That was fun! I must say EF is very complex and very well written. I was impressed with over 13K tests!

I eventually found the issue I had with the migration. I think this is a bug in EF 6.1 and I'll try to re-create it and file a bug report.

But I've learned a lot by looking at the EF code, and I'll share some of my findings with you.

How Entity Framework Migrations work on the inside

We've all run the Add-Migration and Update-Database commands. When it works, it is a fantastic tool (did I say that before?). But when it breaks, you are pretty much screwed, and it helps to understand what is happening behind the scenes.

EF tracks all executed migrations in a database table called __MigrationHistory that looks like this:

[Screenshot: __MigrationHistory table]
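For reference, in EF6 the table has just four columns: MigrationId, ContextKey, Model and ProductVersion.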

If you run a version of EF before 6, you will not have the ContextKey column – this is new in EF6+.

Look at the Model column – there is binary data stored there. All the time I worked with EF, I thought this was a hash of the model state. Turns out it is not a hash: it is just a GZip-compressed XML string, and you can read it. Sometimes it is useful to read through the model definition to understand what is happening and why you get an incorrect result in your migration script.

To read it you need to GZip-decompress this string:

[TestCase("MyMigration")]
public void DecompressDatabaseMigration(String migrationName)
{
    const string ConnectionString = // connection string to DB with migrations
    var sqlToExecute = String.Format("select model from __MigrationHistory where migrationId like '%{0}'", migrationName);

    using (var connection = new SqlConnection(ConnectionString))
    {
        connection.Open();

        var command = new SqlCommand(sqlToExecute, connection);

        var reader = command.ExecuteReader();
        if (!reader.HasRows)
        {
            throw new Exception("Now Rows to display. Probably migration name is incorrect");
        }

        while (reader.Read())
        {
            var model = (byte[])reader["model"];
            var decompressed = Decompress(model);
            Console.WriteLine(decompressed);
        }
    }
}

/// <summary>
/// Stealing the decompressor from EF itself:
/// http://entityframework.codeplex.com/SourceControl/latest#src/EntityFramework/Migrations/Edm/ModelCompressor.cs
/// </summary>
public virtual XDocument Decompress(byte[] bytes)
{
    using (var memoryStream = new MemoryStream(bytes))
    {
        using (var gzipStream = new GZipStream(memoryStream, CompressionMode.Decompress))
        {
            return XDocument.Load(gzipStream);
        }
    }
}

This code connects to the database, reads the migration information, decompresses it and prints it out to the console. Alternatively you can review your generated migrations without a database, using only the generated migration classes:

[Test]
public void DecompressMigrationEncoding()
{
    var migrationClass = (IMigrationMetadata)new MyMigration();
    var target = migrationClass.Target;
    var xmlDoc = Decompress(Convert.FromBase64String(target));
    Console.WriteLine(xmlDoc);
}

You can view the whole class in this gist.

Looking at the plain-text schema, the way EF sees it, is useful for debugging. That's how I fixed my problem.

Schema Comparing

Before, I thought that EF looks at the database and compares its schema with what it has in the model, then generates a migration script based on the differences. Oh boy, how wrong I was!

EF does not actually care about your database state. It only cares about the __MigrationHistory table and the records in there. If you update your database to the latest migration and then drop all the tables apart from __MigrationHistory, EF will still be convinced that your database is in good shape.

When you run the Add-Migration command from the NuGet console, EF reaches for the __MigrationHistory table, grabs the latest record, decompresses the xml with the model state and compares that xml schema with what it has in the code model. If the schemas do not match, it creates a script to update your database to the latest state.

So if you like to manually edit generated migration scripts, stop it! This is a path to madness: you'll eventually hit the same problem I had, where the schema in the xml records gets disconnected from the actual database schema, and it takes a lot of time to figure out where the differences are.

Sample Solution

To make all of the above more understandable, I've created a tiny sample solution with a command line project. Look through the code for more details on how you can view the xml schema.

You can execute the sample. It will create a database in your local .\sqlexpress instance of SQL Server, execute one migration on it and then print out the xml schema, first as extracted from source code and then from the __MigrationHistory table in the freshly created database.

I certainly hope this write-up will help somebody to debug their EF Migration problem.

Download the sample: EfMigrationSample.zip (10 MB)

Unit tests are fine and dandy, but if you persist your data in a database, the persistence code must be tested as well. Some bugs will never be picked up by unit tests alone. One of the issues I've been burned by is entity validation; another is a missing foreign key relationship.

Validation issues can burn you very easily. Look at this bit of code:

public class Person 
{
    [Key]
    public Guid PersonId { get; set; }

    [Required]
    public String FirstName { get; set; }

    [Required]
    public String LastName { get; set; }
}

public void CreatePerson(string firstName, string lastName)
{
    var newPerson = new Person();
    newPerson.FirstName = firstName;

    // dbContext was injected via constructor
    dbContext.Persons.Add(newPerson); 
    dbContext.SaveChanges();
}

NB: This is just sample code. Nowhere in my projects do we do such things.

Do you see the problem with this piece of code? LastName on Person was not populated. This issue should be picked up by unit tests, but let's suppose it is not. What happens when you try to save this object to the database? A DbEntityValidationException is thrown, with one of the validation errors saying that the LastName field is required, because the [Required] attribute decorates that property. You will not see this issue until you hit the database; if the missing LastName is not picked up by unit tests (which in this case it should be), you will only see the exception at run-time.
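When this bites you, it helps to surface the actual validation messages rather than the bare exception. A minimal sketch of that, wrapping the SaveChanges() call from the sample above (it needs using System.Data.Entity.Validation;):

try
{
    dbContext.SaveChanges();
}
catch (DbEntityValidationException exception)
{
    // print every property-level validation error before re-throwing,
    // so the output tells you exactly which field is missing
    foreach (var entityErrors in exception.EntityValidationErrors)
    {
        foreach (var validationError in entityErrors.ValidationErrors)
        {
            Console.WriteLine("{0}: {1}", validationError.PropertyName, validationError.ErrorMessage);
        }
    }

    throw;
}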

There is also a bunch of other good reasons to do database tests. But most of my db-tests check whether a query is valid for the database and whether I get any exceptions with random combinations of query parameters set to null.

There have been a few write-ups about how to do database testing. The most notable are from Jimmy Bogard: Strategies for isolating the database in tests and Isolating database data in integration tests.

We have tried most of the recommended approaches and have now (I think) settled on a combination of them, so it is worth writing about.

Repeatability and Independence

One of the conditions for unit tests is repeatability. You should be able to run your tests many times over without side effects. Database tests are special in their own way: usually for DB-tests to work you need some data pre-seeded into your test database, and then you do some manipulations with that data. So for the next test to succeed, you need the same starting data. One way to do that is to delete all the data from the DB and re-seed it with a known state. Deleting ALL data from your database can be tricky – you need to know the graph of table dependencies. You can have a manual script that deletes data in the right order, but with any kind of scale in your database this becomes a chore. Imagine a hundred-odd tables and organising a delete script for that? I have tried – not my idea of fun.

You can find a script that deletes data for you in an orderly fashion, but that also did not work for me.

A better way is not to persist any data to the database at all, so you won't have to delete it later! Just start a transaction before every DB test, do your writes and then roll back after the test is finished.

NUnit does not have a built-in way of wrapping tests in transactions and rolling them back, so I rolled my own attribute:

using System;
using System.Transactions;
using NUnit.Framework;


/// <summary>
/// Rollback Attribute wraps test execution into a transaction and cancels the transaction once the test is finished.
/// You can use this attribute on single test methods or test classes/suites
/// </summary>
public class RollbackAttribute : Attribute, ITestAction
{
    private TransactionScope transaction;

    public void BeforeTest(TestDetails testDetails)
    {
        transaction = new TransactionScope();
    }

    public void AfterTest(TestDetails testDetails)
    {
        transaction.Dispose();
    }

    public ActionTargets Targets
    {
        get { return ActionTargets.Test; }
    }
}

Simple. Before every test a transaction is started; after every test the transaction is disposed, not committed. An actual test would look like this:

[Test, Rollback]
public void Do_Your_Writes()
{
    dbContext.Persons.Add(new Person());
    dbContext.SaveChanges();
} 

After the test execution, nothing is saved to the database. So if all your tests are wrapped in a transaction, you know the data in your database is always in the same state.

xUnit does have its own [AutoRollback] attribute (in the xunit.extensions package) that does the same job.
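A minimal sketch of the xUnit equivalent, assuming the xunit.extensions package is referenced:

[Fact, AutoRollback]
public void Do_Your_Writes_XUnit()
{
    // same write as above; AutoRollback wraps the test in a transaction
    // and rolls it back when the test finishes
    dbContext.Persons.Add(new Person());
    dbContext.SaveChanges();
}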

Changing Database Structure

Database structure is yet another issue with db-tests. You must make sure that the test database always has the same schema as your production database. I ignored this for a long time, until recently, when I had to come up with a better way of dealing with change.

If you run Entity Framework, it is silly not to use Database Migrations – a very useful tool. EF can build a database from scratch given the model, or you can run all your migration scripts to get the database to the latest state, exactly as in production. In the past I rebuilt the test database from the context, ignoring the migrations, and I did that for every test. The reason was to guarantee that tests would not fail if executed in random order: each test took responsibility for rebuilding the DB if required. The process for every test looked like this:

  • Check if DB is in a good shape
  • If schema does not match domain models, drop and re-create the DB
  • Run the test.

Checking the state of the database for every test was very slow. Even though only the first executed test dropped and re-created the database, every other test still checked the schema. The schema check was excessive and expensive – the schema stayed untouched during the test run.

A better approach is to always have your test database in good shape and let tests just run, presuming the schema is good.

For that, just re-build your database once after every schema change (honestly, how often do you change your schema?). And to re-build your test database, use your migrations.

Here is what I've started to use recently to re-build the test database:

using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.Migrations;
using System.Data.SqlClient;
using MyApp.Tests.Stubs;
using MyApp.Services.Configuration;
using MyApp.Data;
using NUnit.Framework;


public class DatabaseSetup
{
    // you don't want any of these executed automatically
    [Test, Ignore("Only for manual execution")]
    public void Wipe_And_Create_Database()
    {
        var connectionString = new StubConfiguration().GetDatabaseConnectionString();

        // drop database first
        ReallyDropDatabase(connectionString);

        // Now time to create the database from migrations
        // MyApp.Data.Migrations.Configuration is the migration configuration class;
        // this class is created for you automatically when you enable migrations
        var initializer = new MigrateDatabaseToLatestVersion<DomainContext, MyApp.Data.Migrations.Configuration>();

        // set DB initialiser to execute migrations
        Database.SetInitializer(initializer);

        // now actually force the initialisation to happen
        using (var domainContext = new DomainContext(connectionString))
        {
            Console.WriteLine("Starting creating database");
            domainContext.Database.Initialize(true);
            Console.WriteLine("Database is created");
        }

        // And after the DB is created, you can put some initial base data 
        // for your tests to use
        // usually this data represents lookup tables, like Currencies, Countries, Units of Measure, etc
        using (var domainContext = new DomainContext(connectionString))
        {
            Console.WriteLine("Seeding test data into database");
            // discussion for that to follow
            SeedContextForTests.Seed(domainContext);
            Console.WriteLine("Seeding test data is complete");
        }
    }

    // this method only updates your DB to the latest migration.
    // it does the same as running "Update-Database" in the NuGet console in Visual Studio
    [Test, Ignore("Only for manual execution")]
    public void Update_Database()
    {
        var connectionString = new StubConfiguration().GetDatabaseConnectionString();

        var migrationConfiguration = new MyApp.Data.Migrations.Configuration();

        migrationConfiguration.TargetDatabase = new DbConnectionInfo(connectionString, "System.Data.SqlClient");

        var migrator = new DbMigrator(migrationConfiguration);

        migrator.Update();
    }


    /// <summary>
    /// Drops the database that is specified in the connection string.
    /// 
    /// Drops the database even if the connection is open. Sql is stolen from here:
    /// http://daniel.wertheim.se/2012/12/02/entity-framework-really-do-drop-create-database-if-model-changes-and-db-is-in-use/
    /// </summary>
    /// <param name="connectionString"></param>
    private static void ReallyDropDatabase(String connectionString)
    {
        const string DropDatabaseSql =
        "if (select DB_ID('{0}')) is not null\r\n"
        + "begin\r\n"
        + "alter database [{0}] set offline with rollback immediate;\r\n"
        + "alter database [{0}] set online;\r\n"
        + "drop database [{0}];\r\n"
        + "end";

        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                var sqlToExecute = String.Format(DropDatabaseSql, connection.Database);

                var command = new SqlCommand(sqlToExecute, connection);

                Console.WriteLine("Dropping database");
                command.ExecuteNonQuery();
                Console.WriteLine("Database is dropped");
            }
        }
        catch (SqlException sqlException)
        {
            if (sqlException.Message.StartsWith("Cannot open database"))
            {
                Console.WriteLine("Database does not exist.");
                return;
            }
            throw;
        }
    }
}

You will say these tests do not assert anything. That's true! They are not really tests, only marked as tests. I use the test framework to create small snippets of code that can be executed in isolation from the rest of the application. And that is why these methods are marked as ignored.

So now, after every new db-migration is added, you manually execute the Update_Database() method. That updates the local test database, and this should work for all developers.

Granted, this approach involves one extra step for developers – they have to manually execute one of the methods to be able to run db-tests locally. But this is a small price to pay for such an optimisation. With the old approach, where every test checked the database schema, 80-ish database tests took 6 minutes to execute. Now our whole suite of 1300 tests runs in 40 seconds on dev machines. That is a huge improvement!

Execute Db-Tests on Build Server vs local execution

Every developer has their own instance of SQL Server Express running on their own rig, but we keep connection strings identical for everyone (using Windows Authentication). For db-tests we have a separate database for every developer and one extra on the build server. The connection string for the test database is taken from the app.config file.

The connection string on the build server is different from the developers' connection strings, so we transform app.config in the same way you can do web.config transformations.

To do app.config transformations you'll need the SlowCheetah NuGet package. The latest version of this package takes care of many things for you, so the transformation is a mostly trouble-free experience.

Anyway, I digress. On the build server you need something or somebody to run the Wipe_And_Create_Database() method for you. And now the black magic comes into play!

Make your test assembly a command line executable, and then you can execute whatever you like from the command line.

using System;

public static class Program
{
    public static int Main(string[] args)
    {
        if (args[0] == "UpdateDatabase")
        {
            Console.WriteLine("Wiping and restoring database");

            // this is the class with tests listed above
            var databaseRestorer = new DatabaseSetup();

            try
            {
                databaseRestorer.Wipe_And_Create_Database();
            }
            catch (Exception exception)
            {
                Console.WriteLine("Failed to wipe and restore database");
                Console.WriteLine(exception.ToString());
                return 1;
            }

            Console.WriteLine("Restoring database complete");
        }
        else
        {
            Console.WriteLine(@"Nothing is happening. The only available command is UpdateDatabase. Use this program like this: ""C:/>MyApp.Tests.exe UpdateDatabse""");
        }

        return 0;
    }
} 

And on your build server you'll have to execute a command similar to this:

MyApp.Tests.exe UpdateDatabase

This will rebuild your test database. After that you can run your database tests and be sure the schema was created by migrations.

One little catch for TeamCity users: as far as I know, TeamCity does not make compiled assemblies available until the end of the build process. So you'll have to have two build configurations. The first compiles your project, including MyApp.Tests.dll, possibly runs your fast unit tests and, if everything is fine, publishes the artefacts. The second stage depends on the first and therefore has access to your compiled .exe file. In that stage you can re-create your database for testing and run the database tests.

Setting up TeamCity for this process was trivial, so I’ll omit the description of that.

And this is how you should organise your database-dependent tests, kids!

We are hitting the deck with our site performance and optimisation. It is fast, but we want it uber-fast! So the next stage is to have IIS up and active all the time, with all the views compiled and ready before any user hits them.

By default, IIS compiles views only when a request for that view comes in. So the first time a user visits some rare page in your application, they wait a bit longer while IIS does Just-In-Time compilation. And actually, if you look under the hood, IIS does stacks of things before it shows you a web site.

Despite common belief, IIS does not run your web application from the /bin folder; it copies all required files to a temp folder. To be more specific, it copies files to c:\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files\. The reason for that is file locking: for just-in-time compilation IIS needs to update binaries, but in the /bin folder binaries can be locked.

So if you see very strange behaviour in your web app and a clean-rebuild in VS does not help any more, go to that folder and clean everything up. This will force IIS to restart your web sites and re-copy all the required files.

I digress, back to view compilation. When you do Web Publishing, you can tick a checkbox to force views to be pre-compiled.

When you publish to Azure Web Roles you don't have this option, and this makes me sad. So we need a work-around with some sort of hack.

One of the options (very popular on Stack Overflow) is to use the RazorGenerator package. I don't like that approach for the following reasons:

  1. Razor Gen requires a Visual Studio extension. This means getting all developers to install it. We already have performance issues with VS; yet another extension is not going to improve them.
  2. Razor Gen adds a generated .cs file for every view. With 600 views, that is another 600 .cs files. VS is slow enough with 180K lines of code; add another 600 generated classes? No thanks.
  3. We do some crazy things with view engines on a per-tenant basis. Will that work with RazorGen? No idea, but I'm certainly not looking forward to debugging it.

On Stack Overflow I found a few answers that talk about ClientBuildManager.PrecompileApplication() (MSDN). This looked promising.

This method basically does JIT compilation for your entire web-app and copies outputs to \Temporary ASP.NET Files. Here is what I did to get it to work:

using System;
using System.Web.Compilation;
using Microsoft.Web.Administration;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // this creates IIS Server Manager that works with IIS configuration
        // We need this to extract Site ID from IIS
        using (var serverManager = new ServerManager())
        {
            // this only works inside of Azure Role
            var siteName = RoleEnvironment.CurrentRoleInstance.Id + "_Web";

            // gets object corresponding to our IIS Site
            var mainSite = serverManager.Sites[siteName];

            // magic dance!
            // see discussion with David Ebbo http://stackoverflow.com/a/15351473/809357
            var rootVirtualPath = String.Format("/LM/W3SVC/{0}/ROOT/", mainSite.Id); 

            var clientBuildManager = new ClientBuildManager(rootVirtualPath, null);

            clientBuildManager.PrecompileApplication();
        }

        return base.OnStart();
    }
}

And this kind of worked. Only not completely. PrecompileApplication() did the job and compiled all the views into the ASP.NET temp folder. Only one issue:

[Screenshot: Temporary ASP.NET Files folders]

IIS ignores all the pre-compiled files and spins up yet another folder where it runs its own compilation. I confirmed that by browsing to pages that had not yet been compiled: the folder IIS uses did increase in size, whereas the folder created by my precompilation is simply ignored.

Maybe I'm doing something wrong with rootVirtualPath, but I did not come up with this by myself; it comes from David Ebbo, who created Razor Generator, and I'd like to think he knows what he is doing. Also, in that discussion people confirmed that this approach worked for them.

So far I’ve failed with this task. I’ll try a few other approaches and will update this as I go.

Reference links:

  1. How do I force compilation of ASP.NET MVC views?
  2. Compile Views in ASP.NET MVC
  3. Azure precompilation doesn’t seem to work
  4. How do I force compilation of ASP.NET MVC views?

HTTPS everywhere is a common theme in modern infosec. Despite that, when I google for HTTPS implementation in ASP.NET MVC applications, I find only a handful of horrible questions on Stack Overflow about how to implement HTTPS only on certain pages (i.e. the login page). There have been numerous rants about the security holes awaiting you down that path. And Troy Hunt will whack you over the head for doing that!

See that link above? Go and read it! Seriously. I’ll wait.

Have you read it? Troy explains there why you want HTTPS everywhere on your site, not just on the login page. Listen to this guy; he knows what he is talking about.

The problem I faced when I wanted to implement complete "HTTPS everywhere" in my MVC applications was the lack of implementation instructions. I had a rough idea of what I needed to do, and now that I've done it a few times on different apps, my process is ironed out and I can share it with you here.

1. Redirect to HTTPS

Redirecting to the HTTPS scheme is pretty simple in modern MVC. All you need to know about is RequireHttpsAttribute. It is named as an attribute and can be applied to individual MVC controllers and even actions – and I hate it for that, because it encourages bad practices. But luckily this class also implements the IAuthorizationFilter interface, which means it can be used globally on the entire app as a filter.

The problem with this filter is that once you add it to your app, you need to configure SSL on your development machine. If you work in a team, all dev machines must be configured with SSL; if you allow people to work from home, their home machines must be configured for SSL too. And configuring SSL on dev machines is a waste of time. Maybe there is a script that can do it automatically, but I could not find one quickly.

Instead of configuring SSL on local IIS, I decided to be a smart-ass and work around it. A quick study of the source code showed that the class is not sealed and I can just inherit from it. So I inherited RequireHttpsAttribute and added logic to ignore all local requests:

public class RequreSecureConnectionFilter : RequireHttpsAttribute
{
    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext == null)
        {
            throw new ArgumentNullException("filterContext");
        }

        if (filterContext.HttpContext.Request.IsLocal)
        {
            // when connection to the application is local, don't do any HTTPS stuff
            return;
        }

        base.OnAuthorization(filterContext);
    }
}

If you are too lazy to follow the link to the source code, I'll tell you: all this attribute does is check whether the incoming request scheme is https (that is what Request.IsSecureConnection does) and, if not, redirect GET requests to https. And if a request comes in that is not secured and is not a GET, it throws an exception. I think this is a good, aggressive implementation.

One might argue that I'm creating a security hole by not redirecting local requests to https. But if an intruder has managed to issue local requests on your server, you are toast anyway and SSL is not your priority at that moment.

I looked up what filterContext.HttpContext.Request.IsLocal does and how it can have an impact on security. Here is the source code:

    public bool IsLocal {
        get {
            String remoteAddress = UserHostAddress;

            // if unknown, assume not local
            if (String.IsNullOrEmpty(remoteAddress))
                return false;

            // check if localhost
            if (remoteAddress == "127.0.0.1" || remoteAddress == "::1")
                return true;

            // compare with local address
            if (remoteAddress == LocalAddress)
                return true;

            return false;
        }
    }

This is the decompiled implementation from System.Web.HttpRequest. UserHostAddress gets the client's IP address. If the IP is localhost (IPv4 or IPv6), it returns true. The LocalAddress property returns the server's IP address. So basically IsLocal does what it says on the tin: if the request comes from the same IP the application is hosted on, it returns true. I see no issues here.

By the way, here are the unit tests for my implementation of the secure filter. Can’t go without unit testing on this one!

And don't forget to add this filter to your list of global filters:

public static class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new RequreSecureConnectionFilter());
        // other filters to follow...
    }
}

2. Cookies

If you think that redirecting to https is enough, you are very wrong. You must take care of your cookies and set all of them by default to be HttpOnly and secure (SSL-only). Read Troy Hunt's excellent blog post on why your cookies need to be secured.

You can secure your cookies in web.config quite simply:

<system.web>
    <httpCookies httpOnlyCookies="true" requireSSL="true"/>
</system.web>

The only issue with that is the development stage. Again, if you are developing locally, you won't be able to log in to your application without https running locally. The solution to that is a web.config transformation.

So in your web.config you should always have

<system.web>
    <httpCookies httpOnlyCookies="true" />
</system.web>

and in your web.Release.config file add

<system.web>
    <httpCookies httpOnlyCookies="true" requireSSL="true" lockItem="true" xdt:Transform="Replace" />
</system.web>

This secures your cookies when you publish your application. Simples!

3. Secure authentication cookie

Apart from making all your cookies secure, you need to specifically require the authentication cookie to be SSL-only. For that you need to add requireSSL="true" to the authentication/forms part of web.config. Again, this will require you to run your local IIS with https configured, or you can do the web.config transformation only for Release. In your web.Release.config file add this to the system.web section:

<authentication mode="Forms">
    <forms loginUrl="~/Logon/LogOn" timeout="2880" requireSSL="true" xdt:Transform="Replace"/>
</authentication>

4. Strict Transport Security Header

The Strict-Transport-Security header is an http header that tells web browsers to only use HTTPS when dealing with your web application. This reduces the risk of an SSL-strip attack. To add this header to your application by default, you can add this section to your web.config:

<system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="Strict-Transport-Security" value="max-age=16070400; includeSubDomains" />
      </customHeaders>
    </httpProtocol>
</system.webServer> 

Again, the same issue as before: developers will have to have SSL configured on their local machines, or you can do it via a web.config transformation. Add the following code to your web.Release.config:

<system.webServer>
    <httpProtocol>
        <customHeaders>
           <add name="Strict-Transport-Security" value="max-age=16070400; includeSubDomains" xdt:Transform="Insert" />
        </customHeaders>
    </httpProtocol>
</system.webServer>

5. Secure your WebApi

WebApi is very cool and the default template for an MVC application now comes with WebApi activated. Redirecting all MVC requests to HTTPS does not redirect WebApi requests, so even if you have secured your MVC pipeline, your WebApi endpoints are still available via HTTP.

Unfortunately, redirecting WebApi requests to HTTPS is not as simple as it is in MVC. There is no [RequireHttps] available, so you'll have to make one yourself, or copy the code below:

using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

public class EnforceHttpsHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // if request is local, just serve it without https
        object httpContextBaseObject;
        if (request.Properties.TryGetValue("MS_HttpContext", out httpContextBaseObject))
        {
            var httpContextBase = httpContextBaseObject as HttpContextBase;

            if (httpContextBase != null && httpContextBase.Request.IsLocal)
            {
                return base.SendAsync(request, cancellationToken);
            }
        }

        // if request is remote, enforce https
        if (request.RequestUri.Scheme != Uri.UriSchemeHttps)
        {
            return Task<HttpResponseMessage>.Factory.StartNew(
                () =>
                {
                    var response = new HttpResponseMessage(HttpStatusCode.Forbidden)
                    {
                        Content = new StringContent("HTTPS Required")
                    };

                    return response;
                });
        }

        return base.SendAsync(request, cancellationToken);
    }
}

This is a global handler that rejects all non-https requests to WebApi. I did not do any redirection (not sure that term is even applicable to WebApi), because there is no excuse for clients to use HTTP in the first place.

WARNING: this approach couples WebApi to the System.Web libraries and you won't be able to use this code in self-hosted WebApi applications. There is a better way to detect whether a request is local; I have not used it because my unit tests were written before I learned about it, and I'm too lazy to fix this :-)
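For reference, here is a sketch of that better way as I understand it – treat it as an assumption, since I have not switched my own handler over to it. Web API stores an IsLocal flag in the request properties, so no reference to HttpContextBase is needed:

using System;
using System.Net.Http;
using System.Web.Http.Hosting;

public static class RequestLocalityExtensions
{
    // hypothetical helper: reads the Lazy<bool> that Web API stores
    // under HttpPropertyKeys.IsLocalKey
    public static bool IsLocalRequest(this HttpRequestMessage request)
    {
        object value;
        if (request.Properties.TryGetValue(HttpPropertyKeys.IsLocalKey, out value))
        {
            var isLocal = value as Lazy<bool>;
            return isLocal != null && isLocal.Value;
        }

        return false;
    }
}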

Don't forget to register this handler globally:

namespace MyApp.Web.App_Start
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // other configurations...

            // make all web-api requests to be sent over https
            config.MessageHandlers.Add(new EnforceHttpsHandler());
        }
    }
}

6. Set up an automatic security scanner for your site

ASafaWeb is a great tool that checks for basic security issues in your application. The best feature is the scheduled scan: I've set all my applications to be scanned on a weekly basis and, if something fails, it emails me. So far this has helped me once, when the error pages on one of the apps were messed up. If not for the automated scan, the issue could have stayed there forever. So go and sign up!

Conclusion

This is in no way a complete guide to securing your application, but it will help you with a few of the steps you need to take to lock it down.

In this Gist you can copy my web.Release.config transformation file, in case you got confused with my explanation.

Resharper is becoming more and more monstrous. Soon it will be an IDE in itself and won’t need Visual Studio at all!

When the new 8.1 version came out, I was overly excited about it. I even paid almost 100 quid for a licence out of my own pocket (for my personal use). After using it for 2-3 months, I'm not so happy with it.

The project I mostly work on has a sizeable number of MVC controllers and views: 180K lines of code excluding the views. On this codebase R# is choking badly.

Autocomplete

A lot of the time I get very random completion suggestions, even though I have almost typed the complete name of a local variable that was defined a line above. Many times I get random namespaces added to the using block because R# decided I was done typing and concluded that I needed some random namespace. I've disabled ReSharper IntelliSense and now use the native Visual Studio functionality for that. VS seems to do a decent job there. It is not as automatic as R#, so I need to press Ctrl+Space to get the completion. Yes, two extra keystrokes, but that protects you from the random crap that R# decides to add to your class.

Go Anywhere

Go Anywhere in ReSharper 7 was useful; most of the time it got me where I needed to go. In v8 this is a VERY global search, and the order of suggestions is nowhere near what it should be. When I type "web.config" in the Go Anywhere box, I expect to see the web.config file in the root of the project as the first suggestion, not some other auto-generated classes that have "web" and "config" somewhere in their name, like MyApp.Web.Infrastructure.AutomapperBootstrap.config. Most of the time the suggestion is bang on, but when it is not, it is so wrong I don't know where to start ranting about it. Also, why does it offer me auto-generated classes as the first navigation option? (Think T4MVC-generated controllers.)

It turns out that Visual Studio has its own Navigate To option, usually invoked by Ctrl+, – but R# has probably messed up the keyboard shortcuts for you, so you'll need to re-assign the mapping. Navigation in VS does not try to be very smart, and hence gets me to the places I'd like to go.

I'm so dependent on ReSharper that at the moment I can't work without it. This is bad! I'll keep adding native replacements for R# functionality here as I discover them.

Here are the features I'd like to find replacements for:

  1. Renaming of classes and controller actions
  2. Creating new classes from their name.
  3. Initialising fields from constructor parameters – for IoC injection
  4. … probably another infinite list of functions coming from R#

After the initial excitement about xUnit and how cool your tests become with AutoDataAttribute, my itch to convert all my tests to xUnit has died down, and I'm not as excited any more. xUnit is certainly not a silver bullet.

I do agree with certain things in xUnit, like not providing [SetUp] and [TearDown] and instead using the constructor and Dispose(), but I can't agree with other things. And the authors of xUnit have not been listening to their community, which is unfortunate. I'll list the annoyances I discovered in the order I faced them.

1. Unable to provide a message on failed assertion

In NUnit you could say Assert.IsFalse(true, "Yep, true is not false"). In every assertion you could add a piece of text to the failure message. I have used that in the past – not in many tests, but in a few, enough to notice when I decided to convert those tests to xUnit. In xUnit you can't do that, apart from Assert.True() and Assert.False() where you can provide a message.

This feature has been removed by the authors and it is not coming back. The reason given is "If you need messages because you can't understand the test code otherwise, then that's the problem (and not the lack of a message parameter)" (comments from Brad Wilson). I do agree with the smelly-test part. But that is not the only reason to use messages – there are cases when the message represents the error itself.

See my test that uses reflection to go through all EF models to verify the presence of an empty constructor. In that test I build a list of classes that fail the condition, and at the end, if this list is not empty, I throw an assertion exception with a comma-separated string of the types that fail. This is not your conventional unit test, but it is still a valid test. With NUnit the assertion looks like this:

var finalMessage = String.Join(", ", errors);
Assert.IsEmpty(errors, finalMessage);

As part of the message I get on a test failure, I get the list of classes that fail my test. There is no other way to pass a message about the failed test to the developer.

If I move this to xUnit, the assertion would look like this:

var finalMessage = String.Join(", ", errors);
Assert.False(errors.Any(), finalMessage);

Now this is a code smell. It is not immediately clear what we are asserting. Maybe I'm just picky, but I'm not very happy about these lines. I can predict that Brad Wilson would suggest writing an extension of the [InlineData] attribute that provides the list of Types to be tested. Yep, that sounds like a reasonable idea. But I could not find any documentation on how to do that.

2. Lack of documentation.

The previous point leads to this one. I could not find any comprehensive document or site documenting the features of xUnit. The documentation for xUnit looks like this at the moment:

[Screenshot: xUnit.net documentation page]

There is a bunch of outdated articles linked from the home page, and another bunch of blog posts which you need to find first. But there is nothing central anywhere – unlike NUnit's amazing documentation. In this sense NUnit beats many projects, even commercial ones, so the comparison isn't really fair.

To find out the specifics of some operations I needed to trawl through the xUnit source code, only to find out that I was looking at the v2 source, whereas I was using 1.9.x. I was determined enough to get the right source and find the parts I needed, but less experienced developers would not do this and will struggle.

3. Unable to have “Manual execution only” tests

You can ignore NUnit tests with the [Ignore] attribute. I would like the reason to be a mandatory parameter of this attribute, but that is secondary. Ignored tests are skipped by test runners, but you can still run a single ignored test manually. This is useful when you write exploratory tests, where you are just trying things out and they are not really tests, more like pieces of code in your system that you can execute separately from the whole thing. Or when the tests are integration tests interacting with an external API, where you need to manually undo the effects of the executed test. And I'm not the only one who uses this practice.

In xUnit you ignore tests like this: [Fact(Skip="run only manually")]. Only you can't run them at all! Not even manually. And people want that! Jimmy Bogard restores his test database with a manual execution of an ignored test. He came up with the idea of skipped tests that run only when the debugger is attached. Not a bad idea, and other people have done the same.
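A minimal sketch of that debugger-only attribute – my recreation of the pattern, not Jimmy's exact code:

using System.Diagnostics;
using Xunit;

// runs as a normal [Fact] when a debugger is attached; otherwise the test is skipped
public class RunnableInDebugOnlyAttribute : FactAttribute
{
    public RunnableInDebugOnlyAttribute()
    {
        if (!Debugger.IsAttached)
        {
            Skip = "Only running when the debugger is attached";
        }
    }
}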

Running these tests in the debugger kind of works, but it looks like a kludge to me. Why not allow manual execution instead of multiplying the hacks?

4. Unable to filter out tests by categories

NUnit has a [Category()] attribute that marks tests with some category. Usually these categories mark tests as "fast", "slow", "database", "integration", "smoke", etc. So when you run the tests you can include (or exclude) only the ones suitable for the environment. xUnit has a Trait attribute, which is a pair of strings. To create a category with Trait you have to do this: [Trait("Category", "database")]. The key-value structure gives a bit more flexibility with categories, but I can't come up with a scenario where I'd use anything other than "Category" for the trait key. Also, the code examples shipped with xUnit 1.9.2 do have a [Category("")] attribute which inherits from Trait and sets the key to "Category". But in xUnit v2 (which is in alpha just now) the Trait attribute is sealed, so you can't inherit from it any more.
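For reference, a minimal sketch of that [Category] helper – my approximation, assuming xUnit 1.9.x where TraitAttribute is not sealed:

using Xunit;

// pins the trait key to "Category" so tests can be tagged as [Category("database")]
public class CategoryAttribute : TraitAttribute
{
    public CategoryAttribute(string category)
        : base("Category", category)
    {
    }
}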

We run our tests in TeamCity, and the way to execute xUnit there is through an MSBuild script. Here is the problem: I can't filter tests by their Trait attribute when executing through MSBuild. And this is not an oversight; it is intentional. The idea behind it is "…place different test types into different assemblies, rather than use traits for the filtering." Excellent idea, I say! Let's have a bunch of test assemblies to make Visual Studio even slower. A few months back I merged a million (OK, there were 6) of our test assemblies into one project to speed up VS and reduce maintenance. Now let's revert that and create a few extra test assemblies just to filter by test type.

Consider this scenario: in one of my projects I have integration tests that need a database, and also integration tests that use the Azure emulator. On the build server I execute all non-integration tests first; then, if none fail, the next step re-creates the database and a further build step executes the database tests. See this video for the reasons behind it. For these build steps I first need to exclude the database tests, then to include only the database tests.

Because I use a hosted build server I can't run the Azure emulator on it, and all my tests using the storage emulator will fail without it, so I need to filter them out.

According to the xUnit authors, these tests should live in two separate assemblies. I have about 10 db-tests and about 20 Azure emulator tests. That is two extra assemblies with a very small number of tests. Good practice? I don't think so! It only encourages a mess – I've been there, and I did not like it. Every separate test project in your solution adds to the maintenance burden.

What about people who separate tests into "fast" and "slow", and execute fast first and slow later on their build server? Or ignore slow tests on the CI build and run them only in nightly builds? There is no clear distinction between those tests, unlike in my example, and within the same class you can have slow tests next to fast ones, all testing the same SUT. How do you propose to work that one out? Throw tests from one assembly to another when they become slow? Now there is some serious mess waiting to happen.

And if you can't filter by traits on the build server, what is the point of them? GUI runners usually allow you to choose which tests you'd like to run, and only on very rare occasions do I filter tests by their category in the GUI.

I know, you can filter by traits with the console runner. I could not make it work :-(. Also this sounds a bit hypocritical to me: the authors allow filtering by traits in the console runner, but not in the MSBuild runner, because MSBuild is meant for automated test execution.

Conclusion

While I enjoyed writing tests with the AutoData attribute from AutoFixture, I can't really say xUnit solved my issues with test execution. There is a big possibility that I don't understand a lot of the concepts behind this framework, but there is no good place to go for an explanation. If somebody has answers to my moans, please feel free to speak up in the comments! I'd love to hear you prove me wrong, because I hope I am wrong here.

So far xUnit has been a disappointment in my experience, with a lot of hype around it. I'll keep it for the cases where I benefit from AutoDataAttribute, but all other tests will be based on NUnit.

One of the tasks I have performed lately in our massive web application is restructuring the menu. And for the menu to work correctly we had to make sure that every page is somewhere on the menu.

For menu generation we use MvcSitemapProvider, and for strongly typed references to our controllers/actions we generate static classes via T4MVC. The task of making sure that every controller action (out of ~600) has a sitemap attribute is very tedious, and as new features are developed this can easily be forgotten, leading to bugs in our menu. So we decided to write a test. This turned out to be yet another massive reflection exercise and it took a while to get it right, so I'd like to share it with you.

The theory of the test is simple: find all controllers, on every controller find all the action methods, and check that every method has a custom attribute of type MvcSiteMapNodeAttribute. But in practice this was more complex, because we use T4MVC.

The way T4MVC works is by making your controllers partial classes and adding the second part of each controller somewhere else. In that second partial class it adds a load of methods. The actual controller looks like this:

namespace MyAPp.Web.Areas.Core.Controllers
{
    [Authorize(Roles = MembershipRoles.Administrator)]
    public partial class AdminMenuController : CoreController
    {
        [MvcSiteMapNode(Title = "Admin", ParentKey = SiteMapKeys.Home.Root, Key = SiteMapKeys.Home.AdminTop, Order = 999)]
        public virtual EmptyResult Menu()
        {
            return new EmptyResult();
        }
    }
}

and these are part of the generated code produced by T4MVC:

    [GeneratedCode("T4MVC", "2.0"), DebuggerNonUserCode]
    protected RedirectToRouteResult RedirectToActionPermanent(ActionResult result)
    {
        var callInfo = result.GetT4MVCResult();
        return RedirectToRoutePermanent(callInfo.RouteValueDictionary);
    }

    [NonAction]
    [GeneratedCode("T4MVC", "2.0"), DebuggerNonUserCode]
    public virtual System.Web.Mvc.JsonResult LargeJson()
    {
        return new T4MVC_System_Web_Mvc_JsonResult(Area, Name, ActionNames.LargeJson);
    }
    [NonAction]
    [GeneratedCode("T4MVC", "2.0"), DebuggerNonUserCode]
    public virtual System.Web.Mvc.ActionResult RedirectToPrevious()
    {
        return new T4MVC_System_Web_Mvc_ActionResult(Area, Name, ActionNames.RedirectToPrevious);
    }
    [NonAction]
    [GeneratedCode("T4MVC", "2.0"), DebuggerNonUserCode]
    public virtual System.Web.Mvc.ActionResult BackToList()
    {
        return new T4MVC_System_Web_Mvc_ActionResult(Area, Name, ActionNames.BackToList);
    }

See these methods? They are not even defined on the controller I'm looking at at the moment. They are all part of the abstract base controller that is the parent of all our controllers.

And when you run your reflection, all these methods pop up as actions. For our test this is just noise and must be filtered out. Notice how the generated code is marked with the [GeneratedCode] attribute – this is what we are going to use to filter these methods out of our test.

After a long time messing about with reflection I came up with this test. I hope it is mostly self-explanatory, and the comments help with the reasoning.

    // controller exclusion list
    private static readonly List<Type> ExcludedControllers = new List<Type>()
                                                        {
                                                            typeof(HelpController),
                                                            typeof(HomePageController),
                                                        };

    // Action result types that should be excluded from the test.
    // If an action returns only a partial view result, it should not have the MvcSiteMapNode attribute
    private static readonly List<Type> ExcludedReturnTypes = new List<Type>()
                                                        {
                                                            typeof(PartialViewResult),
                                                            typeof(JsonResult),
                                                            typeof(FileResult),
                                                        };


    [Fact] // using xUnit
    public void ControllerActions_Always_HaveBreadcrumbAttribute()
    {
        var errors = new List<string>();

        // exclude controllers that should not be tested
        var controllerTypes = GetControllerTypes().Where(ct => !ExcludedControllers.Contains(ct));

        foreach (var controllerType in controllerTypes)
        {
            // get all public action in the controller type
            var offendingActions = controllerType.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                // filter out all NonActions
                .Where(m => !m.IsDefined(typeof(NonActionAttribute)))
                // filter out all T4MVC generated code
                .Where(m => !m.IsDefined(typeof(GeneratedCodeAttribute)))
                // T4MVC adds some methods that don't return ActionResult - kick them out as well
                .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType))
                // if action is Post-only, we don't want to apply Sitemap attribute
                .Where(m => !m.IsDefined(typeof(HttpPostAttribute)))
                // and now show us all the actions that don't have SiteMap attributes - that's what we want!
                .Where(m => !m.IsDefined(typeof(MvcSiteMapNodeAttribute)))
                // excluding types of actions that return partial views or FileResults - see filter list above
                .Where(m => !ExcludedReturnTypes.Contains(m.ReturnType))
                .ToArray();

            // add all the offending actions into list of errors
            errors.AddRange(offendingActions.Select(action => String.Format("{0}.{1}", controllerType.Name, action.Name)));
        }

        // Assert
        if (errors.Any())   // if anything in errors  - print out the names
        {
            Console.WriteLine("Total number of controller actions without SiteMapAttribute: {0}", errors.Count);
            var finalMessage = String.Join(Environment.NewLine, errors);
            Console.WriteLine(finalMessage);
        }
        Assert.Empty(errors); // fail the test if there are any errors.
    }


    // return all types of controller in your application
    public static IEnumerable<Type> GetControllerTypes()
    {
        // MvcApplication type is defined in your Global.asax.cs
        return Assembly.GetAssembly(typeof(MvcApplication))
            .GetTypes()
            .Where(t => !t.IsAbstract && t.IsSubclassOf(typeof(Controller)))
            .Where(t => t.Namespace != null && !t.Name.Contains("T4MVC"))
            .ToList();
    }
}