I’m a big fan of unit tests. But not everything can or should be unit tested. I also love the CQRS architecture – it provides an awesome separation of reads and writes and isolates each read from the next via query classes. Usually my queries read data from a database, though they can use any persistence, e.g. files. About 90% of my queries run against a database with Entity Framework or with some micro ORM (currently I’m a fan of PetaPoco, though Dapper is also grand).

And no amount of stubs/mocks or isolation frameworks will help you test your database-access layer without an actual database. To test your database-related code you need to have a database. Full stop, don’t even argue about this. If you say you can unit-test your db-layer, I say your tests are worth nothing.
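To be clear about the terminology, here is a minimal sketch of the kind of test I mean – it talks to a real (local) database rather than a mock. MyDbContext, Order and GetOrdersQuery are made-up names for illustration:

using NUnit.Framework;

[TestFixture]
public class GetOrdersQueryTests
{
    [Test]
    public void Execute_ReturnsOrdersForCustomer()
    {
        // a real EF context pointed at a local throw-away database
        using (var context = new MyDbContext(@"Server=(localdb)\MSSQLLocalDB;Database=IntegrationTests;Integrated Security=True"))
        {
            context.Orders.Add(new Order { CustomerId = 42 });
            context.SaveChanges();

            var result = new GetOrdersQuery(context).Execute(customerId: 42);

            Assert.IsNotEmpty(result);
        }
    }
}

The interesting part is creating and cleaning up that throw-away database on every run – that is where the real work is.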

A while ago I blogged about integration tests, and that article seems to be quite popular – in the top 10 by visitors over the last 2 years. So I decided to write an update.


UPD: There is a part 2 of this blog post, explaining how to do roles and fixing a minor issue with authentication.

UPD: If you are on Windows 10 and get “System.IO.FileNotFoundException: The system cannot find the file specified”, have a look at this page. Thanks to David Engel for the link.

A while back I had to implement a login system that relied on an in-house Active Directory. I spent some time figuring out how to do this in the nicest possible way.

One of the approaches I used in the past was to slam Windows Authentication on top of the entire site and be done with it. But this is not very user-friendly – before showing anything, the site hits you with a nasty prompt for username/password, and in some cases you need to remember to include your domain name. I definitely did not want that on a new green-field project. So here are the instructions on how to do nice authentication against your Windows users or an in-house hosted Active Directory.
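To give a taste of where this is going: the credential check itself does not need the Windows Auth prompt at all. Here is a minimal sketch of one common approach (not necessarily verbatim what follows), using System.DirectoryServices.AccountManagement:

using System.DirectoryServices.AccountManagement;

// "MYDOMAIN" is a placeholder for your AD domain name.
// For local Windows users, use ContextType.Machine instead.
using (var principalContext = new PrincipalContext(ContextType.Domain, "MYDOMAIN"))
{
    bool isValid = principalContext.ValidateCredentials(username, password);
}

This lets you keep a normal, friendly login form and only talk to AD behind the scenes.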

For this project I’ll use Visual Studio 2015, but the steps for VS2013 will be the same. I can’t say anything nice about any earlier versions of Visual Studio – I don’t use them anymore.


The first time I tried the TFS Build feature was a few years ago, on the hosted TFS 2010 version. And I hated it. It was all confusing XML, and to change a build template I had to create a Visual Studio project. I barely managed to get NUnit tests to run and could not do any of the other steps I needed. So I abandoned it. Overall I spent about 3 days on TFS. Then I installed TeamCity and got the same result in about 3 hours. That is how good TeamCity was and how poor TFS Build was.

These days I’m looking for ways to remove all on-prem servers, including the source control and build servers. (Yep, it is 2016 and some companies still host version control in house.)

Visual Studio Team Services Build – now with NuGet feed

Recently I’ve been playing with Visual Studio Team Services Build (formerly Visual Studio Online). This is a hosted TFS server provided by Microsoft. The good thing about it – it is free for teams under 5 people. At the end of 2015 Microsoft announced a new Package Management service, so I decided to take it for a spin. Today I’ll be talking about my experience with VSTS Build combined with Package Management.

Disclaimer

I’m not associated with Microsoft, nor paid for this post (that’s a shame!). All opinions are mine.
VSTS is moving fast, with new features popping up every other week, so the information in this post might become out of date. Please leave a comment if things don’t work for you as described here.

C# package in private NuGet feed

Today I actually need to create a C# library for internal use and publish it to a private NuGet feed, so I’ll write down my steps for future generations.
For the purposes of this guide, I presume you already have a VSTS project with some code in it. You also know what CI is and why you need a build server.


This post is a summary of links I’ve studied about ARM.

One of the things we do for our project is automatic provisioning of services in MS Azure for new clients. A couple of years ago we did provisioning manually, and it took days. Now we have a system that does it all for us – a web-based system that talks to the Azure API and asks for new websites/databases/storage/etc. to be created for the system we work on. And I’ve been actively writing this system for the last 3-4 months.

I’ve been using the Azure Management C# libraries to access the Azure API for a couple of years now. As far as I can remember, these libraries never actually made it out of preview and got approved for production. And I had a lot of trouble with them, especially when I took half-year breaks from that project and then came back to realise that half the APIs I’d used had changed.

This time I came back to this problem and realised that I had missed the Azure Resource Management bandwagon and the plethora of new libraries. And I need to start learning from scratch again (don’t you hate that?).

Now there are 2 ways to access the Azure API: Azure Resource Management (ARM) and Classic Azure Management (the old way). ARM is the cool new kid on the block and looks like it is here to stay, because the new Azure Portal is totally based on it. See the comparison of the new and old ways.

There are a lot of differences between new and old. The old system required you to authenticate with a certificate that you attached to your HTTP requests, and you had to manage these certs and all that. I’m sure there were other ways to authenticate, but when I started working with the Azure Management API this was the only way I knew. ARM lets you authenticate via Azure AD, where you only need to know a couple of GUIDs and a password. Here is the overview of ARM.

The most radical change is that ARM is based on Resource Groups, and everything you create must be in a group. So you need to create a Resource Group first, then resources inside it. There are benefits to that: you can view billing per group – i.e. put all resources related to a project in one group and see how much that project costs you, without having to go through per-item subscription billing. Another massive benefit is access control: you can now give users access to only a specific group of resources, where before you could only give access to a whole subscription. (Read more about Role-Based Access Control and built-in roles.)

Authentication

But I digress – I’m working with the API at the moment. Authentication is slightly easier now. You’ll need to create an application in Azure Active Directory, get a “client secret” from it and execute about 3 lines of C# code to get an authentication bearer token. Read more about the authentication process here (including a code sample). And this one shows the creation of the AD application.
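Roughly, those 3 lines look like this with the ADAL library (Microsoft.IdentityModel.Clients.ActiveDirectory). The tenant id, client id and client secret are placeholders taken from the AD application you created; newer versions of ADAL expose AcquireTokenAsync instead:

using Microsoft.IdentityModel.Clients.ActiveDirectory;

// all three values below come from your AD application
var authContext = new AuthenticationContext("https://login.microsoftonline.com/{tenant-id}");
var credential = new ClientCredential("{client-id}", "{client-secret}");
var token = authContext.AcquireToken("https://management.azure.com/", credential).AccessToken;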

Then for every request to ARM you attach this token as an Authorization header on the HTTP request: request.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + token);

Requests

Now you don’t even need any libraries – you can form the requests yourself pretty easily. You need the authentication token as a header and the URL you want to work with. Then you POST/PUT a json-formatted object to create or update, and send a DELETE request to the same URL to delete. And a URL always maps to a resource – very RESTful indeed.

The URL you need to work with looks similar to this:

https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourcetype}/{resourceName}?api-version={apiversion}

Here is a sample URL for accessing a website resource:

https://management.azure.com/subscriptions/suy64wae-f814-3189-8742-b53d5a4532cd/resourceGroups/NewResourceGroup/providers/Microsoft.Web/sites/MyHappyWebsiteName?api-version=2015-08-01

But don’t worry about this – there is a Resource browser at https://resources.azure.com/ that will tell you exactly the URL you need: just navigate to an already existing item and see the generated URL. This site will also show you the JSON format/data to send to the portal.
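Putting the pieces together, here is a minimal sketch of reading that website resource with plain HttpClient (subscription id, resource group and site name are placeholders; token is the bearer token from the authentication step above):

using System.Net.Http;
using System.Net.Http.Headers;

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

    var url = "https://management.azure.com/subscriptions/{subscriptionId}" +
              "/resourceGroups/{resourceGroupName}/providers/Microsoft.Web" +
              "/sites/{siteName}?api-version=2015-08-01";

    // GET reads the resource; PUT with a json body creates/updates it;
    // DELETE removes it – the URL is the same for all of them.
    // (.Result is used here only for brevity of the sketch)
    var json = client.GetAsync(url).Result.Content.ReadAsStringAsync().Result;
}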

Templates

Another great feature of ARM is Templates. Basically you describe the resources you need in JSON, add parameters there and feed all that to ARM (either programmatically or through the Azure Portal). Though I’ve not found a good programmatic way to use templates – I have seen samples where you need to upload a template json file and a parameters json file to Azure Storage and tell ARM where to look for them. But I’m not convinced about this – sounds like too many steps with all the uploading.

Here is the definition of templates. You can create them within Visual Studio 2015, or you can copy-paste json from existing objects in the resource browser and modify it to your needs.
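For the record, the REST API also seems to accept the template inline in the deployment request body, which would avoid the uploading dance. I have not battle-tested this, so treat it as a sketch – group and deployment names are placeholders, the inline template here is an empty one, and token is again the bearer token from the Authentication section:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

var deploymentUrl = "https://management.azure.com/subscriptions/{subscriptionId}" +
                    "/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources" +
                    "/deployments/{deploymentName}?api-version=2015-01-01";

// the template sits inline under properties.template – no storage involved
var body = @"{
  ""properties"": {
    ""mode"": ""Incremental"",
    ""parameters"": { },
    ""template"": {
      ""$schema"": ""https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#"",
      ""contentVersion"": ""1.0.0.0"",
      ""resources"": [ ]
    }
  }
}";

using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
    var response = client.PutAsync(deploymentUrl, new StringContent(body, Encoding.UTF8, "application/json")).Result;
}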

Links

This post will be a dumping ground of links, problems and solutions related to Swagger. I’ll be putting updates here as I go along.


I’m starting a new project and we would love to try new things. This time we would like to jump on the whole Azure API Apps train, with Swagger as the descriptor of the API.

Great article “What is Swagger”

Here is the whole Swagger Spec – have a look at what is possible.

Here is a pretty cool page with a Swagger file editor.

Tutorials for ASP.Net


I have seen a fair number of questions on Stackoverflow asking how to prevent users from sharing their password. Previously, with the MembershipProvider framework, it was not a simple task. People went into all sorts of crazy procedures. One of the most common was to keep a static global list of logged-in users, and if a user was already in that list, the system denied their second login. This worked to an extent – until you cleaned the cookies in your browser and tried to re-login.

Luckily the Asp.Net Identity framework now provides a simple and clean way of preventing users from sharing their details or logging in twice from different computers.
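In short (a sketch against the standard Identity 2.x template – ApplicationUserManager and ApplicationUser are the template’s classes): refresh the user’s SecurityStamp on every login and make the cookie middleware re-validate the stamp on every request, so the cookie from any previous login stops working:

// In your login action: changing the SecurityStamp invalidates cookies
// issued by any previous login of this user.
var user = await UserManager.FindAsync(model.Email, model.Password);
if (user != null)
{
    await UserManager.UpdateSecurityStampAsync(user.Id);
    await SignInManager.SignInAsync(user, isPersistent: false, rememberBrowser: false);
}

// In Startup.Auth.cs: validateInterval of zero means the stamp is checked
// on every request, so the older cookie is rejected immediately.
Provider = new CookieAuthenticationProvider
{
    OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
        validateInterval: TimeSpan.Zero,
        regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
}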


I’m an avid user on StackOverflow in questions about Asp.Net Identity, and I attempt to answer most of the interesting ones. The same question comes up quite often: users try to confirm their email via a confirmation link or reset their password via a reset link, and in both cases get an “Invalid Token” error.

There are a few possible solutions to this problem.
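As a teaser, here is one frequent culprit (my guess at the most common one, not the whole story): the raw token contains characters like ‘+’ and ‘=’ that get mangled when the link is built by plain string concatenation, so it has to be URL-encoded:

using System.Web;

// if you build the link by hand, encode the token so it survives the URL
var code = await UserManager.GeneratePasswordResetTokenAsync(user.Id);
var link = "https://example.com/Account/ResetPassword?userId=" + user.Id +
           "&code=" + HttpUtility.UrlEncode(code);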


The title is quite vague, but it is the best name I could come up with for this post. Bear with me – I’ll explain what I mean.

While working with Asp.Net MVC, I use Html helpers a lot in Razor views. My views end up looking like this:

@Html.TextBoxFor(m => m.Title)
@Html.TextAreaFor(m => m.Description)

And later if I write JavaScript and need to reference one of the input fields, I usually do this:

var title = document.getElementById("Title");
var description = document.getElementById("Description");

Or if you use jQuery, you go like this:

var title = $("#Title");
var description = $("#Description");

This is fine and works – until you start renaming fields in your model. Then you need to track down every place where you hard-coded the input id. And many times these references slip away (they do from me!) and remain unchanged.

Would it not be great if these changes could be picked up automatically? Or, even better, to have some sort of strongly-typed reference to the element when it is used in JavaScript?

Just use @Html.IdFor(m => m.Name)

And your JS code would look like this:

var title = document.getElementById("@Html.IdFor(m => m.Title)");
var description = $("#@Html.IdFor(m => m.Description)");

Below I’ve re-invented the wheel. [Facepalm]

So I trawled through the MVC source code and figured out how it generates ids for elements. Here are my findings:

using System;
using System.Linq.Expressions;
using System.Web.Mvc;

public static class HtmlHelpers
{
    public static String ElementId<T, TResult>(this HtmlHelper<T> htmlHelper, Expression<Func<T, TResult>> selector)
    {
        // get the property path from the lambda, e.g. m => m.Title gives "Title"
        var text = System.Web.Mvc.ExpressionHelper.GetExpressionText(selector);

        // prefix the name with the current template context (matters inside editor templates)
        var fullName = htmlHelper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(text);

        // sanitize the name exactly the way MVC does when emitting an id attribute
        var id = TagBuilder.CreateSanitizedId(fullName);

        return id;
    }
}

As you can see, I have not written any custom code – all the components are from MVC and are applied in the same order as MVC applies them when it generates ids for elements via @Html.EditorFor(). So the id returned by this function is guaranteed to match the id in your form.

In the view you use it like this:

@Html.ElementId(m => m.Title)

And your JS code would look like this:

var title = document.getElementById("@Html.ElementId(m => m.Title)");
var description = document.getElementById("@Html.ElementId(m => m.Description)");

A bit of a mouthful, but it gives you the safety net of strong types. I have not actually used it widely yet, so I’m not sure how it’ll work out in the longer run.

A note to myself: next time I try to do view compilation in msbuild and see the exception

Can’t find the valid AspnetCompilerPath

I should either install VS2013 on my build server, or choose a different version of msbuild.exe:
C:\Program Files (x86)\MSBuild\12.0\Bin\msbuild.exe

The problem is caused by a version mismatch. The new msbuild is part of Visual Studio 2013 and not part of the .Net framework itself, and by default your PATH variable points to the .Net folders where the older msbuild lives. So in your scripts you need to explicitly specify which msbuild to use.
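For example, something like this in the build script (the solution name is made up):

"C:\Program Files (x86)\MSBuild\12.0\Bin\msbuild.exe" MySolution.sln /p:Configuration=Release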