Today, for debugging purposes, I had to capture all the information submitted by users to a specific controller. It turned out there was nothing readily implemented or provided by the framework that I could use.

But the implementation turned out to be quite simple. It is just another Action Filter that can be applied globally, per controller or per action:

using System;
using System.Linq;
using System.Web.Mvc;
using Newtonsoft.Json;

public class DebuggingFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Only interested in POSTs; HttpRequestBase has no IsPost() helper, so check the HTTP method directly
        if (!string.Equals(filterContext.HttpContext.Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
        {
            return;
        }

        // I use an instance wrapper around the NLog logger. Apply your own logic here for creating a logger.
        var logger = DependencyResolver.Current.GetService<ILoggingService>();

        var form = filterContext.HttpContext.Request.Form;

        // Need to convert the Form object into a dictionary so it serializes to JSON properly
        var dictionary = form.AllKeys.ToDictionary(k => k, k => form[k]);

        // You'll need the Newtonsoft.Json NuGet package here.
        var jsonPostedData = JsonConvert.SerializeObject(dictionary);

        logger.Debug(jsonPostedData);

        base.OnActionExecuting(filterContext);
    }
}

Then apply this attribute on the controller or action you’d like to monitor:

[DebuggingFilter]
public partial class MyController : Controller
{
    //... actions
}
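
If you would rather apply it globally (but see the warning below), here is a minimal sketch of registering it in the global filter collection of a standard MVC template:

// App_Start/FilterConfig.cs in a standard MVC template
public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        filters.Add(new DebuggingFilter()); // now logs every POST on every controller
    }
}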

This will put all POSTed data into your logs formatted as JSON – later it will be relatively easy to reproduce user actions and re-submit the data for debugging.

A Word of Warning

Adding this filter to the global filter collection can lead to a significant performance problem in your application. I set up logging into files (as a temporary solution) and, even without measuring, I could see the application was slower to respond with this filter enabled.

Another warning – you don’t want this attribute on a controller that takes passwords. Passwords are usually stored salted-and-hashed and never in plain text. But with this attribute on a controller that takes passwords, user passwords will end up in plain text in your logs. That’s a pretty big NO-NO, so don’t do it.
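
If you really must have the filter near such a controller, one option is to blacklist sensitive keys before serializing. A minimal sketch, with hypothetical key names you should adjust to your own forms, replacing the ToDictionary line in the filter above:

// Hypothetical list of field names that must never reach the logs
private static readonly string[] SensitiveKeys = { "Password", "ConfirmPassword", "OldPassword" };

// Inside OnActionExecuting, instead of the plain ToDictionary call:
var dictionary = form.AllKeys
                     .Where(k => !SensitiveKeys.Contains(k, StringComparer.OrdinalIgnoreCase))
                     .ToDictionary(k => k, k => form[k]);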

Today, in a conversation with a customer, I came up with a metaphor to explain some concepts about web-sites and hosting in Azure. I think this can be useful and should be recorded and developed further:

  • Think of a web-site as a book. Pages in a book are the web-site’s pages with text. Easy enough, and it translates well.
  • To run, web-sites need a server. A server is a book-shelf that can hold many books (sites).
  • You can have a book-shelf (server) in your office or in a library. A library is a data-centre or a cloud that contains many book-shelves and many books. Somebody looks after the library for you, but you pay for the space on a shelf to keep your books there.
  • A Domain Name is a library card that, given the name of a domain, tells you on what shelf/location the required book (site) sits.
  • A DNS server is a librarian who keeps the library cards organised and provides you with the right card when you ask for your site.
  • I also explained the two publishing slots on Azure: production and staging. Production is the current book that is served to clients. When you publish a new version of a book (site), it is placed next to the current book (site) and then you swap the two books around, making the new book (site) the one currently in production.

Though this worked really well for me this morning, the metaphor does not explain on-premises hosting well (your own tiny private library?). Nor how web-sites fetch data from databases: if you swap the sites/books, do they still serve the same data from the same database? Perhaps a database can be a filing cabinet, and every book contains a reference section on how to fetch data from the filing cabinet, but this is getting more complex than I like.

So, I’ve been selected to make a public appearance at the DDD Scotland conference on 14th May 2016 in Edinburgh. Or rather, people are interested in what I have to say about the CQRS software architecture, and nobody cares about me personally. That way it sounds less intimidating. Because I’m petrified.

I’ve done a couple of talks at the local .Net developers group, and I’ve done presentations in front of people. But the largest audience I’ve spoken to before was about 30 people. This time there will be up to 110 people – at least, the room can fit that many. If you have not got this yet: I’m not used to making public appearances in front of crowds like that. What’s more, all these people expect me to talk for an hour and not just blabber, but make sense and put my knowledge into their heads. Well, that’s exciting!

I’ve already done this talk about CQRS, and if you are interested you can go through the [slides](https://github.com/trailmax/CQRS.Talk/raw/master/CQRS.pptx) or look at the code samples. But I’ll be trying to trim it down, as the full talk is too long for a DDD slot.

CQRS

Anyway, what to expect from the talk? It is mostly aimed at .Net developers, as I’ll be showing code examples in C#. Most of my development happens in Asp.Net MVC, but the same techniques can be applied to WebForms and to non-web applications. I have successfully used the same architecture in command-line applications. Certainly the theory behind it (separate your reads and writes) can be applied every time you write any code.

Talking of theory, I’ll start with an introduction to what CQS and CQRS are and the differences between them, and I’ll show some code samples. I’ll show some diagrams of how a CQRS application differs from your typical CRUD application. I’ll explain why CQRS does not mean you have to use NoSQL or any other funky databases.

I’ll go through a refactoring exercise: how to go from a bloated repository implementation to Queries, and how the SOLID principles come into play along the way (there is a small sketch of the end result below).
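
To give a flavour, here is a minimal sketch of the query-object shape, with illustrative names and a made-up Product/Tag model – not the exact code from the talk:

public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

// One small class per question you ask the database - easy to name, test and decorate
public class ProductsByTagQuery : IQuery<List<Product>>
{
    public string Tag { get; set; }
}

public class ProductsByTagQueryHandler : IQueryHandler<ProductsByTagQuery, List<Product>>
{
    private readonly MyDbContext dbContext; // injected by your IoC container

    public ProductsByTagQueryHandler(MyDbContext dbContext)
    {
        this.dbContext = dbContext;
    }

    public List<Product> Handle(ProductsByTagQuery query)
    {
        return dbContext.Products
                        .Where(p => p.Tags.Any(t => t.Name == query.Tag))
                        .ToList();
    }
}

Each handler does exactly one thing, which is where the Single Responsibility Principle comes in.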

Next will be a similar refactoring (though shorter this time) from a service implementation to a command implementation, followed by a discussion of why this approach is better. The commands end up with a very similar shape, sketched below.
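
Again, this is an illustrative sketch under the same assumptions (made-up model, your own IoC wiring):

public interface ICommand { }

public interface ICommandHandler<TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public class DiscontinueProductCommand : ICommand
{
    public int ProductId { get; set; }
}

public class DiscontinueProductCommandHandler : ICommandHandler<DiscontinueProductCommand>
{
    private readonly MyDbContext dbContext;

    public DiscontinueProductCommandHandler(MyDbContext dbContext)
    {
        this.dbContext = dbContext;
    }

    public void Handle(DiscontinueProductCommand command)
    {
        // A command mutates state and returns nothing
        var product = dbContext.Products.Find(command.ProductId);
        product.Discontinued = true;
        dbContext.SaveChanges();
    }
}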

Towards the end of the presentation I’ll talk about the real magic here – Decorators – and how easy they make it to implement a lot of cross-cutting concerns, like logging every write-action a user takes. There will be a little demo for those who are not familiar with this pattern; a rough sketch is below.
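
Here is roughly what such a logging decorator looks like, reusing the ICommandHandler interface from the sketch above; ILoggingService is an assumed abstraction, plug in your own logger:

public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
    where TCommand : ICommand
{
    private readonly ICommandHandler<TCommand> decorated;
    private readonly ILoggingService logger;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> decorated, ILoggingService logger)
    {
        this.decorated = decorated;
        this.logger = logger;
    }

    public void Handle(TCommand command)
    {
        // Every write-action in the system gets logged here, in one single place
        logger.Debug("Executing " + typeof(TCommand).Name);
        decorated.Handle(command);
        logger.Debug("Executed " + typeof(TCommand).Name);
    }
}

Register the decorator in your container once, and every command handler gets logging for free – the same trick works for transactions, retries and auditing.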

Hopefully it will all come together nicely and I shall see you at the conference!

Most of my large projects use Entity Framework. And Entity Framework is known to sometimes create crazy SQL queries. Or rather, it easily allows developers to create crazy queries. It is really easy to throw a couple of .Include() calls into your query and then return objects as they come from EF:

var products = dbContext.Products.Include(p => p.Tags)
                                 .Include(p => p.Reviews)
                                 .ToList(); // first query
foreach(var prod in products)
{
    var reviewers = prod.Reviews.Select(r => r.Owner).ToList(); // N+1: Owner was never Include()d, so each access lazy-loads another query
}

The above code is fictional, but will most likely produce an N+1 issue. An N+1 issue is when you don’t just issue one SELECT query, but follow up every returned row with another SELECT query to retrieve related entries. This is caused by the EF feature called Lazy Loading. Read a better explanation of the Lazy Loading problem here. Basically, suppose your first query for Products returns 1000 records. Then, in the N+1 case, the operation inside the foreach loop will produce another thousand queries against your database.
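
One way to avoid it here (a minimal sketch, assuming EF6 syntax) is to Include the nested navigation property up front, so the Owners come back as part of the first query:

var products = dbContext.Products
                        .Include(p => p.Tags)
                        .Include(p => p.Reviews.Select(r => r.Owner)) // nested Include, EF6 syntax
                        .ToList(); // single query, nothing left to lazy-load in the loop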

Continue reading

A while back I had to implement a login system that relied on an in-house Active Directory. I spent some time figuring out how to make this work in the nicest possible way.

One of the approaches I used in the past was to slam Windows Authentication on top of the entire site and be done with it. But this is not very user-friendly – before showing anything, you are slammed with a nasty prompt for a username/password. And in some cases you need to remember to include your domain name. I totally did not want that on a new green-field project. So here are the instructions on how to do nice authentication against your Windows users or an in-house hosted Active Directory.

For this project I’ll use Visual Studio 2015, but the steps for VS2013 will be the same. I can’t say anything nice about any earlier versions of Visual Studio – I don’t use them anymore.
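
To give a taste of the approach, the core of validating a username/password against a domain can be as small as this – a minimal sketch using the System.DirectoryServices.AccountManagement API, where "MYDOMAIN" is a placeholder for your own domain:

using System.DirectoryServices.AccountManagement;

public class ActiveDirectoryAuthenticator
{
    public bool IsValid(string username, string password)
    {
        // "MYDOMAIN" is a placeholder - use your own AD domain name
        using (var context = new PrincipalContext(ContextType.Domain, "MYDOMAIN"))
        {
            return context.ValidateCredentials(username, password);
        }
    }
}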

Continue reading

Every now and then I come across a “puzzle” on Facebook that asks you to solve some simple maths problem, claiming something like “only smart ones know” or “only for geniuses”.

The last such “puzzle” looked like this:

3-3x6+2=??

And there were 331K comments, all with answers or discussions.

I hope the readers of my blog are all maths-literate and know the rules of multiplication and addition. And I hope nobody here is going to argue that the answer is -13.
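
Conveniently, C# follows the same operator precedence, so you can check yourself with a throwaway line:

Console.WriteLine(3 - 3 * 6 + 2); // multiplication first: 3 - 18 + 2 prints -13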

But out of curiosity I looked through the comments, and I was horrified by how many people got this wrong – and insisted that their incorrect solutions were right. Some comments got my attention:

[Screenshot: Facebook comments arguing about the order of operations]

The first guy gives an explanation of how this should be calculated. And then somebody comes along and challenges it, saying that addition should be done before subtraction. At this point I should insert a gif saying “My mind is blown”.

And then we have somebody referring to the “rules of BODMAS”:

[Screenshot: Facebook comment citing the “rules of BODMAS”]

I did not know what these rules were, so I had to look them up. Turned out that I knew the rules; I just did not know that “BODMAS” is yet another name for them. So this person clearly understood that you should do your multiplication first, but then struggled to do the additions correctly.

Another common answer was 2. That is what you get if you take a calculator and go “three minus three, multiply by six, add two”, evaluating strictly left to right. And many people did exactly that:

[Screenshot: Facebook comments answering 2]

Some people clearly never paid attention in their maths class:

[Screenshot: more Facebook comments with wrong answers]

You get the picture. The list of incorrect answers goes on and on. There are a lot of -13 answers, but they account for only roughly 20-30% of all the answers. I wish I could extract the data from all the comments and get some statistics out.

Honestly, after looking at the mass of different answers I started to question my sanity and whether I had got the answer right, so I even wrote it down on paper and re-did the “calculations”.

You may ask what I’m getting at. Yes, people have no idea about maths, and it is hard. And programming is harder. You need to know these rules – operator precedence, and that -7+4 is -3, not -11. And that in some programming languages a condition like a=null can always come out as true, because you are assigning null to the variable a and the assignment itself always succeeds. So learn your maths before trying to program.

Over the Christmas and New Year holidays I finished a book by Uncle Bob Martin, The Clean Coder. It was a pleasure to read and I finished it rather quickly. The overall feel of the book was like reading a big collection of blog posts from Uncle Bob.

The entire book is filled with advice for software developers, but I found that a lot of this advice is aimed at junior developers. Having been in a senior role for a while now, and doing project management as well, I did not find a lot of new stuff. There were a few interesting points of view on some things, like “being in The Zone is counter-productive”. I did not agree with this point at first, but thinking about it while coding, I do get some of the arguments against being in The Zone.

Another thing I did not quite agree with was “QA Should Find Nothing”, meaning that when you send your system for testing, there should be no bugs. Yes, you should not let buggy software escape your desk. But testing it over and over again – I find this counter-productive. If you release a feature, most likely you have been looking at that piece of the project for a few days or even weeks. And it works, because you tested it. But I would not rely on your testing, because you know how it works and you know what input is expected. If somebody else looks at the system and feeds it unexpected input, a lot of the time that brings a different point of view on the feature. In my view the purpose of QA is exploratory testing: what happens if I type rubbish into this field, double-click the Save button and close the browser immediately instead of waiting for the transaction to complete? Or something like that. Because all the other testing can (and should) be automated.

Also, 100% code coverage by unit tests? Please don’t do it – that’s a waste of time. All critical business logic should be tested, but there are tons of boilerplate code that there is no point in testing, e.g. controller actions in MVC when using a CQRS architecture. OK, OK, I will not go there – there have been numerous debates about the purpose of 100% code coverage. I tried 100% coverage once on a smallish project, and it was a waste of effort that the business did not appreciate.

I very much agree with continuous learning – this applies to all developers, however senior. I actually think all professionals, not just software developers, should do it. Software technology moves so fast now that you need to learn new stuff before you go out of fashion.

Though not all the advice was for novices – conflict management and “Saying No” were right to the point. I’m pretty good at saying “no”, but my diplomacy skills are lacking, so these chapters were very useful.

Overall this was a good read with some interesting points and insights, but if you have been following the industry for the last few years, you will already know quite a lot of what is in this book.

I first tried the TFS Build feature a few years ago, on the hosted TFS 2010 version. And I hated it. It was all confusing XML; to change a build template I had to create a Visual Studio project. I barely managed to get NUnit tests to run, and I could not do any of the other steps I needed. So I abandoned it. Overall I spent about 3 days on TFS Build. Then I installed TeamCity and got the same result in about 3 hours. That is how good TeamCity was, and how poor TFS Build was.

These days I’m looking for ways to remove all on-prem servers, including the source control and build servers. (Yep, it is 2016 and some companies still host version control in-house.)

Visual Studio Team Services Build – now with NuGet feed

Recently I’ve been playing with Visual Studio Team Services Build (formerly Visual Studio Online). This is a hosted TFS server provided by Microsoft. The good thing about it: it is free for teams of under 5 people. At the end of 2015 Microsoft announced a new Package Management service, so I decided to take it for a spin. So today I’ll be talking about my experience with VSTS Build combined with Package Management.

Disclaimer

I’m not associated with Microsoft, nor paid for this post (that’s a shame!). All opinions are mine.
VSTS is moving fast, with new features popping up every other week, so the information in this post might become out of date. Please leave a comment if things don’t work for you as described here.

C# package in private NuGet feed

Today I actually needed to create a C# library for internal use and publish it to a private NuGet feed, so I’ll write down my steps for future generations.
For the purposes of this guide, I presume you already have a VSTS project with some code in it. You also know what CI is and why you need a build server.

Continue reading

Last year, before Christmas, I had a jolly time in my office cleaning up old bits of code and other stuff. One of the things I “cleaned up” was a set of old, expired Azure Management Certificates. Then I went on a 2-week holiday travelling around England. Until I got a text message from my colleague saying that one of our production systems was down and not coming back up. The error was something to do with Azure authentication.

Quickly doing the math, I realised that I had accidentally removed a current management certificate that our system used. The cert was used during the start-up phase of the app to check that all Azure resources were available for the system to operate. And of course, at the time I deleted the certs there were no outages and all systems ran as normal. Until one of them got restarted (because Azure). And when it went down, it never came back up, because the certificate the system relied on had been removed by me.

After fixing the system (sorry, man!), a bit of investigation turned up that our system also held an expired certificate and could still authenticate with the service using it. So the Azure Management Portal did not check whether a management certificate had expired.

Now I’m not sure if this is a bug or a feature, but I’m certainly not going to report it, because if it gets fixed, how many other systems with expired certs will go down without a warning?

From now on, a rule of thumb – never delete management certificates, even expired ones!

Today I spent quite some time figuring out why my new Azure Subscription Settings file was not being picked up by Octopus Deploy, and I was getting an obscure error message:

Get-AzureWebsite : Communication could not be established. This could be due to an invalid subscription ID. Note that subscription IDs are case sensitive.

It turned out that the old subscription settings file was stuck in the user cache, and I had to “unstick” it by executing this script under the user account that was running PowerShell:

Remove-AzureSubscription 'Subscription Name' -Force

This does not actually do anything to the Azure subscription itself (I panicked about that at first). The documentation says this only deletes the subscription data file so that PowerShell can’t use it; it does nothing to the actual Azure subscription.
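
If in doubt, you can list the subscription data PowerShell has cached before and after the removal (a cmdlet from the same Azure Service Management module as above):

Get-AzureSubscription | Select-Object SubscriptionName, SubscriptionId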

After removing the old subscription data, you can re-import the new Subscription Settings file:

Import-AzurePublishSettingsFile 'C:/path/to/subscription.publishsettings'

Select-AzureSubscription -SubscriptionName "Subscription Name"

Hope this helps someone!