Resharper is becoming more and more monstrous. Soon it will be an IDE in itself and won’t need Visual Studio at all!

When the new 8.1 version came out, I was overly excited about it. I even paid almost 100 quid for a licence out of my own pocket (for personal use). After using it for 2-3 months, I'm not so happy with it.

The project I mostly work on has a sizeable number of MVC controllers and views: 180K lines of code, excluding the views. On this codebase R# is choking badly.

Autocomplete

A lot of the time I get very random completion suggestions, even though I have almost finished typing the name of a local variable defined a line above. Many times I get random namespaces added to the using block because R# decided I was done typing and concluded that I needed some random namespace. I've disabled ReSharper IntelliSense and now use the native Visual Studio functionality instead. VS does a decent job there. It is not as automatic as R#, so I need to press Ctrl+Space to get the completion. Yes, an extra couple of keystrokes, but that protects you from the random crap that R# decides to add to your class.

Go Anywhere

Go Anywhere in ReSharper 7 was useful. Most of the time it got me where I needed to go. In v8 this is a VERY global search, and the order of suggestions is nowhere near what it should be. When I type “web.config” in the Go Anywhere box, I expect to see the web.config file in the root of the project as the first suggestion – not some other auto-generated class that happens to have web and config somewhere in its name, like MyApp.Web.Infrastructure.AutomapperBootstrap.config. Most of the time the suggestion is bang on, but when it is not, it is so wrong I don't know where to start ranting about it. Also, why does it offer me auto-generated classes as the first navigation option? (Think T4MVC-generated controllers.)

It turns out that Visual Studio has its own Navigate To option, usually invoked with Ctrl+, – but R# has probably messed up the keyboard shortcuts for you, so you'll need to re-assign the mapping. Navigation in VS does not try to be overly smart, and hence it gets me to the places I'd like to go.

I'm so dependent on ReSharper that at the moment I can't work without it. This is bad! I'll keep adding native replacements for R# functionality here as I discover them.

Here are the options I'd like to find replacements for:

  1. Renaming classes and controller actions
  2. Creating new classes from their names
  3. Initialising fields from constructor parameters – for IoC injection
  4. … probably another infinite list of functions coming from R#

After the initial excitement about xUnit and how cool your tests become with AutoDataAttribute, my itch to convert all my tests to xUnit has died down, and I'm not as excited any more. xUnit is certainly not a silver bullet.

I do agree with certain things in xUnit, like not providing [SetUp] and [TearDown] and instead using the test class constructor and IDisposable.Dispose, but I can't agree with other things. And the authors of xUnit have not been listening to their community, which is unfortunate. I'll list the annoyances I discovered in the order I faced them.
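For illustration, here is roughly what that looks like in xUnit – the constructor plays the role of [SetUp] and IDisposable.Dispose plays the role of [TearDown]. This is just a minimal sketch; the Calculator class is made up:

    using System;
    using Xunit;

    public class CalculatorTests : IDisposable
    {
        private readonly Calculator _sut; // hypothetical class under test

        public CalculatorTests()
        {
            // runs before every test, like [SetUp]
            _sut = new Calculator();
        }

        public void Dispose()
        {
            // runs after every test, like [TearDown] - clean up here if needed
        }

        [Fact]
        public void Adds_two_numbers()
        {
            Assert.Equal(4, _sut.Add(2, 2));
        }
    }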

Continue reading

One of the tasks I have performed lately in our massive web application is restructuring the menu. And for the menu to work correctly we had to make sure that every page is somewhere on the menu.

For menu generation we use MvcSitemapProvider, and for strongly-typed references to our controllers/actions we generate static classes via T4MVC. The task of making sure that every controller action (out of ~600) has a MvcSiteMapNodeAttribute is very tedious, and as new features are developed this can easily be forgotten, leading to bugs in our menu. So we decided to write a test. This turned out to be yet another massive reflection exercise and it took a while to get it right. So I'd like to share it with you.

The theory of the test is simple: find all controllers, on every controller find all the action methods, and check that every method has a custom attribute of type MvcSiteMapNodeAttribute. In practice this was more complex because we use T4MVC.
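Before the T4MVC twist, a minimal sketch of the basic idea looks something like this (written here as an xUnit test; MvcApplication is the default Global.asax type, and the T4MVC_ name filter assumes the default T4MVC template – adjust both to your project):

    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using System.Web.Mvc;
    using MvcSiteMapProvider; // namespace of MvcSiteMapNodeAttribute may differ by version
    using Xunit;

    public class SiteMapCoverageTests
    {
        [Fact]
        public void Every_controller_action_has_a_sitemap_node()
        {
            var controllerTypes = typeof(MvcApplication).Assembly
                .GetTypes()
                .Where(t => typeof(Controller).IsAssignableFrom(t)
                            && !t.IsAbstract
                            && !t.Name.StartsWith("T4MVC")); // skip T4MVC-generated controllers

            var offenders = new List<string>();

            foreach (var controller in controllerTypes)
            {
                // public instance methods declared on this controller that return an ActionResult
                var actions = controller
                    .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                    .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType));

                foreach (var action in actions)
                {
                    if (!action.GetCustomAttributes(typeof(MvcSiteMapNodeAttribute), inherit: true).Any())
                        offenders.Add(controller.Name + "." + action.Name);
                }
            }

            Assert.Empty(offenders);
        }
    }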

Continue reading

Recently we upgraded our project to run on EF 6. This is still in beta in the Dev branch, so it has not been deployed to most of our customers. And today I had to solve a bug in an older version of our system that is still on EF5.

Long story short: I tried to connect to a DB that already had EF6 migrations applied, but I was connecting with EF5. Cryptic error messages emerged – EF loves throwing cryptic error messages, and this time was no exception, excuse the pun.

This time the exception said The provider did not return a ProviderManifest instance., with an inner exception saying Could not determine storage version; a valid storage connection or a version hint is required. Which made no sense at that point – until you spend a couple of hours trying to figure out what is happening.

Let me go into the EF migrations system in a bit more detail. EF stores database state information in the __MigrationHistory table, which looks like this for EF5:

EF5_MigrationHistory

For EF6 the structure of the table has changed slightly:

EF6_MIgrationHistory

Notice that on the second screenshot there is an extra column, ContextKey, holding the DbContext type name used for the migration. This is a new feature in EF6 – it allows more than one context to live in one project, and you can run migrations for both of them. I have not tried this one yet, but the time will come pretty soon!

Also, once we have run migrations in EF6, the version recorded next to the migration record changes from 5.x to 6.x – this is also visible on the last screenshot.

And if you try to connect to this database with an older version of EF, the table structure throws the system off its tracks. I've tried updating the version number in the data, but it did not change anything, so don't bother.

I have not found a solution to this issue other than re-creating the database migrations in EF5. So be careful when you update to EF6 and run migrations – there is no way back to EF5.

Note to self: when you are registering an OData route in the Web API configuration, it must be placed before the default catch-all route, i.e.:

Global.asax.cs:

    protected void Application_Start()
    {
        // .. do stuff

        // This must be before registering all other routes    
        GlobalConfiguration.Configure(WebApiConfig.Register);

        AreaRegistration.RegisterAllAreas();

        RouteConfig.RegisterRoutes(RouteTable.Routes);

        // do other stuff...
    }

If you place the Web API route registration after the MVC route registration, the MVC routes will take priority – and your ODataController will probably never be hit, because there is no matching controller in your MVC code.

And no, there is nothing magic about OData here – these are just standard routes registered in an ASP.NET application.
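For reference, a bare-bones WebApiConfig.Register with an OData route might look something like this. It is a sketch, not our production code: the Product entity is made up, and depending on your Web API OData package version the extension method is MapODataRoute or MapODataServiceRoute:

    using System.Web.Http;
    using System.Web.Http.OData.Builder;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Product>("Products"); // hypothetical entity set

            // OData route is mapped here, before MVC's catch-all route gets registered
            config.Routes.MapODataRoute(
                routeName: "odata",
                routePrefix: "odata",
                model: builder.GetEdmModel());

            // other Web API routes go here
            config.MapHttpAttributeRoutes();
        }
    }

    public class Product // hypothetical entity, just to make the sketch compile
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }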

I've updated one of our Azure-hosted projects from MVC3 to MVC5, and at the same time bumped from .NET 4 to 4.5.1. After deployment I ran into a cryptic error:

The feature named NetFx451 that is required by the uploaded package is not available in the OS * chosen for the deployment.

A quick search revealed that if your cloud service is based on anything below Windows Server 2012 R2, you'll get this error for .NET 4.5.1. To fix it, go into all of your *.cscfg files: at the very top of the file, on the <ServiceConfiguration> node, there is an osFamily="3" attribute. Update it to osFamily="4":
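The top of the file ends up looking roughly like this (the service name and the other attributes are just an example; osFamily="4" is the part that matters – it corresponds to Windows Server 2012 R2):

    <ServiceConfiguration serviceName="MyCloudService" osFamily="4" osVersion="*"
                          xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <!-- roles, settings, etc. -->
    </ServiceConfiguration>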


Don’t forget to update this in all of your .cscfg files. By the way, this is recommended by Scott Gu in one of his announcements.

After using MVC bundling and minification for a bit I ran into a few problems. All of them are well known, and all of them will cause you trouble.

The first problem I encountered was reordering of files in the bundle. When I add Main.css to a bundle and then add MainOverrides.css, I bloody well mean them to stay in that order, because the overrides will not do anything if they are emitted before the main file. The same goes for jQuery and its plugins.
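One way to work around the reordering (a sketch, not necessarily the only option) is to plug a pass-through IBundleOrderer into the bundle, so files stay in the order they were added. Note that older versions of System.Web.Optimization use FileInfo instead of BundleFile in the OrderFiles signature:

    using System.Collections.Generic;
    using System.Web.Optimization;

    // Keeps files exactly in the order they were Include()d,
    // instead of the default ordering the bundle applies.
    public class PreserveOrderBundleOrderer : IBundleOrderer
    {
        public IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
        {
            return files;
        }
    }

    public static class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            var css = new StyleBundle("~/bundles/main")
                .Include("~/Content/Main.css", "~/Content/MainOverrides.css");
            css.Orderer = new PreserveOrderBundleOrderer();
            bundles.Add(css);
        }
    }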

Another issue I ran into was image URLs inside CSS. When CSS files are bundled, the stylesheet link is output as:

<link href="/bundles/jquery-ui_css?v=U5E9COcCYIcH9HU1Qr9aiQotqSzLGT8YYDi6qhOpObk1" rel="stylesheet"/>
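The problem is that relative url(...) references inside the CSS now resolve against /bundles/ instead of the original content folder, so background images go missing. One common fix is CssRewriteUrlTransform, which rewrites the URLs to absolute paths when the bundle is built (the bundle and file paths below are just an example):

    bundles.Add(new StyleBundle("~/bundles/jquery-ui_css")
        .Include("~/Content/themes/base/jquery-ui.css", new CssRewriteUrlTransform()));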

Continue reading

Generally, exceptions are thrown by parts of the code that encounter an unrecoverable situation. For example, the famous NullReferenceException is thrown when code tries to access members of an object that is null. I'm sure you know this – no need to repeat programming 101. My point is that exceptions are always thrown, and it is down to other parts of the code to catch them and do something with them, because if exceptions are not caught, they'll bubble up to the top level and crash your application.

I’ve never seen any methods that return exceptions. Never thought of exceptions as something that can be returned by a method. Have you?

Today I spent some time working on bundling and minification in an MVC5 project. I opened the dotPeek decompiler to check how it actually works, and came across this bit of code:

  Exception exception = this.Items.IncludePath(virtualPath, transforms);
  if (exception != null)
  {
    throw exception;
  }
  // do other stuff

I'm not posting the full source code here, but it is available from the MS Symbols server – that's where dotPeek got the sources from.

Note that in the block above a method does something and an exception is returned – it does not throw the exception. The exception is thrown a level closer to the user: the method I'm showing you is public and I'm going to use it; the method that returns the exception is internal and not accessible to the consumer.

My first reaction was that this breaks the Clean Code convention of command-query separation: a method either modifies the state of the application and returns nothing, or it returns information but does not change anything. Basically commands and queries, as in CQS/CQRS: commands for writing, queries for reading. Here we have a clear violation – the method modifies program state and returns an exception.

I explored deeper and looked through some other methods. The convention in this library is: internal and private methods return exceptions, public methods throw them. This looked strange and new to me.
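To make the convention explicit, here is a contrived sketch of the same pattern – the names are made up, this is not the actual System.Web.Optimization source:

    using System;
    using System.Collections.Generic;

    public class PathRegistry
    {
        private readonly List<string> _paths = new List<string>();

        // Public surface: throws, like any normal .NET API would.
        public void IncludePath(string virtualPath)
        {
            Exception error = TryIncludePath(virtualPath);
            if (error != null)
                throw error;
        }

        // Non-public worker: reports failure by returning the exception instead of throwing it.
        private Exception TryIncludePath(string virtualPath)
        {
            if (string.IsNullOrEmpty(virtualPath))
                return new ArgumentException("Virtual path must not be empty.", "virtualPath");

            _paths.Add(virtualPath);
            return null;
        }
    }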

Remembering my university course on machine code (a.k.a. assembler – yes, I've done that one and loved it!), throwing an exception involves walking the execution stack, which costs a lot of CPU cycles and tracing (the details escape me now). This makes a thrown exception expensive in terms of CPU cycles. A quick googling revealed a stack of answers and blog posts confirming my understanding.

Given this, I presume the technique is used in the library to avoid the performance penalty of throwing an exception – and the deeper in the execution stack the exception is thrown, the slower it is. The technique does add a lot of extra boilerplate code to methods, though. If you use a traditional throw new Exception(), then instead of this:

    Exception exception = this.IncludePath(virtualPath, (IItemTransform[]) null);
    if (exception != null)
      return exception;

you’d simply have this:

this.IncludePath(virtualPath, (IItemTransform[]) null);

where the private method would just throw the exception itself.

Bottom line: unless this is used for purposes other than optimisation, don't use this technique – it's confusing and against conventions. And if you find your exceptions to be enough of a bottleneck to be worth optimising, you are throwing way too many exceptions and need to rethink your application. If there is another purpose to this approach, please tell me!

Yesterday the NuGet server was down for a couple of hours. That wasn't nice. I got stuck for a bit, because I blew up my environment and needed to re-download all the packages again, and some of them were not in the local cache. That same night Mr. Mark Seemann blogged that auto-restore in NuGet is considered harmful, and I could not disagree with Mark. The amount of time I've spent troubleshooting missing packages or package restore in the last year amounts to probably a week. Most of that time was spent on package restore on build servers. And just for these little shits I had to dig very deep into the TFS build template and modify it so it could restore packages before attempting the build. It was not a nice experience.

Anyway, long story short: today I decided to give that advice a go and check in the packages for one medium-sized project hosted on TFS. I went to Source Control Explorer, added the existing packages folder – it added a big number of files – and then checked in.

The build server crunched for a moment and failed the build: references not found. Strange. I connected to the build server and discovered that TFS by default ignored the *.dll files but included all the other crap that comes with NuGet packages (XML files, readme.md, source code, etc.). After a bit of digging I found that TFS has a list of file types that it looks for.

Continue reading

Now, as promised: my other project is hosted and built on the hosted Team Foundation Service, and it took me a bit of fiddling before I managed to run NUnit tests on it. Fortunately, there is an excellent walk-through of how to do that: http://www.mytechfinds.com/articles/software-testing/6-test-automation/72-running-nunit-tests-from-team-foundation-server-2012-continuous-integration-build

You can do much the same for xUnit:

  1. Download xunit.runner.visualstudio-x.x.x.vsix from CodePlex.
  2. Change the file extension from .vsix to .zip and open it with a zip archiver.
  3. Copy all the xunit.*.dll files to a folder that is mapped to TFS and check them in. I prefer to keep these files in a /BuildFramework folder:

xUnit_VS

  4. Now, pretty much the same as with the NUnit set-up (if you have not done that before), you need to update the settings for the Build Controller. In VS go to Team Explorer -> Builds -> Actions -> Manage Build Controllers, and set “Version control path to custom assemblies” to the folder where you saved the adapters for xUnit:

Build_controller

And you are pretty much done. You can now check in, and your xUnit tests will run on hosted TFS (presuming TFS is set up correctly to run your tests).