General Rules When Building OSS Libraries

Over the last couple of years I’ve built a few libraries for public and internal consumption. Most of these have been internal, but internal libraries are not much different from OSS ones.

So far the most popular library I’ve open-sourced is Cake.SqlServer, but I’m most proud of NSaga, even though it is not used as much – mostly due to a lack of marketing effort on my part. Both are still on v1.x, but then they are young libraries – both under a year old at the time of writing.

In a sense, libraries for internal consumption are easier to build – you can break things and then fix the broken bits in the downstream projects (or tell people how to fix them). If your library is publicly released, you need to be much more careful about how you handle breaking changes and how you define your public API surface. Semantic versioning is supposed to help you manage breaking changes, but you still need to minimise the chances of breaking things for other people. If you are not careful and break a lot of stuff, people will be pissed off and will eventually move away from your library.

While building OSS libraries I had to exercise a lot of care to cater for a wider audience, and the rules I draw up below mostly come from that OSS work. So here they are:

.Net Framework Versions

You need to consider what framework version you build for. By the nature of the .Net Framework, every version is an increment on the previous one, so a library built for v4.5 will work in v4.6.1 – but not the other way around. So you need to pick the lowest version of .Net you’d like to support and stick with it. I think it is fair to say that no currently developed project should be below v4.5; there is no reason to stay on a lower version, even if the upgrade from v3.5 is relatively painful. So my rule of thumb is to target v4.5 as a baseline, unless I need framework features that are only available in later versions.
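
For an old-style csproj, that baseline is just the TargetFrameworkVersion property – a purely illustrative snippet:

  <!-- target the lowest framework version you want to support -->
  <PropertyGroup>
    <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  </PropertyGroup>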


Another reason not to use TFS

The TFS source control system has a strange notion of a “Workspace”. I’ve run into problems with it numerous times, and tonight again. This time I’m trying to migrate a project from TFS into git and keep the project name intact. So I renamed my old project in VSTS to ProjectName.TFS and created a new one called ProjectName. But I was faced with this great error:

The Team Project name ProjectName was previously used and there are still TFVC workspaces referring to this name. Before you can use this name, the owner of each workspace should execute the Get command to update their workspaces. See renaming a team project for more details (https://go.microsoft.com/fwlink/?LinkId=528893). Found 2 workspace(s) using this name: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824

Well, yes. Thanks for letting me know that this project name was used before. And I obviously don’t care about these workspaces – the PCs where they were used no longer exist.

Following the link, I was advised to execute this command to delete the dead workspaces:

tf workspace /delete [/collection:TeamProjectCollectionUrl] workspacename[;workspaceowner]

Yeah, no problem. Only it took me a while to find tf.exe. It lives in the most obvious place in VS2017:

c:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\

And WTF is TeamProjectCollectionUrl? And what about workspacename[;workspaceowner]? It took me a while to figure out the expected format. Here is what worked for me:

.\tf workspace /delete /collection:mycompanyname.visualstudio.com\DefaultCollection "ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824"

The last bit comes from the error message in VSTS: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824. The workspace name is separated from the owner by ; and the different workspaces are separated by ,.

All of that bloody obvious, of course!

Publish .Net Core XUnit Test Results in VSTS

Following on from my previous post: I’m building an Asp.Net Core web application and running my tests with XUnit. The default VSTS build template for Asp.Net Core applications runs the tests, but it does not publish any results of the test execution, so the Test Results panel can look rather sad:

And even if you add a task that publishes test results after dotnet test, you will not get far.

As it turns out, the dotnet test command does not produce any xml files with test execution results. That was a puzzle for me.

Luckily there are good instructions on the XUnit page that explain how to set up XUnit with Dotnet Core properly. In your *test.csproj file you basically need to add the following:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="xunit" Version="2.3.0-beta2-build3683" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.0-beta2-build3683" />
  </ItemGroup>

</Project>
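
With the dotnet-xunit CLI tool referenced, you run the tests with dotnet xunit instead of dotnet test. The runner should accept the console runner’s -xml switch, so something along these lines ought to produce a results file that a Publish Test Results task (with XUnit selected as the format) can pick up – the file path here is purely illustrative:

dotnet xunit -xml ./TestResults/results.xml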


VSTS Private NuGet Feed and Building a Dotnet Core Application

Honestly, VSTS private NuGet feeds must be the most blogged-about topic on this blog! I’ve already mentioned them at least twice. Here goes another post:

This time I’m building a Dotnet Core application in VSTS, and the build literally goes:

  • dotnet restore
  • dotnet build
  • dotnet test
  • dotnet publish

And this worked really well for the basic application – I had it set up in minutes and got the result I was looking for.

But during the life of the application I needed NuGet packages from my private feed, and dotnet restore had no idea about it.

Even when supplied with a nuget.config file, I was getting 401 Unauthorized – because I was not keeping my password (token) in that file.

The solution was to add the feed globally on the agent for every build, and this turned out to be easier than I thought.

In your build definition, add a new task called NuGet Command:
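
The task just runs nuget.exe with whatever command and arguments you give it. Registering the feed boils down to an invocation roughly like this – the feed name and url are placeholders, and $(System.AccessToken) is the OAuth token VSTS exposes to the build when scripts are allowed to access it:

nuget.exe sources add -Name "MyFeedName" -Source "https://mycompany.pkgs.visualstudio.com/_packaging/myfeed/nuget/v2" -UserName "vsts" -Password "$(System.AccessToken)"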


Working With Attachments in DocumentDB

I’m building a prototype for a new project, and it was decided to use DocumentDB to store our data. There will be very little data and even fewer relationships between the pieces of data, so a document database is a good fit. There is also a chance that we’ll use DocumentDB in production.

There is comprehensive documentation about the structure and how it all ties together, yet there are not enough code samples showing how to use attachments, and I struggled a bit to come up with a working solution. So I’ll explain it all here for future generations.
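
To give a flavour of where we’ll end up, here is a minimal sketch of creating an attachment with the .Net SDK (the Microsoft.Azure.DocumentDB package). The account url, key, document link and file are all placeholders, so treat this as an illustration rather than a complete solution:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class AttachmentSample
{
    public static async Task UploadAttachmentAsync()
    {
        var client = new DocumentClient(new Uri("https://myaccount.documents.azure.com:443/"), "myAuthKey");

        // link to an existing document: dbs/{database}/colls/{collection}/docs/{document id}
        var documentLink = "dbs/MyDatabase/colls/MyCollection/docs/my-document-id";

        using (var stream = File.OpenRead(@"c:\temp\photo.jpg"))
        {
            // the stream is uploaded as managed media; the attachment record points to it
            Attachment attachment = await client.CreateAttachmentAsync(
                documentLink,
                stream,
                new MediaOptions { ContentType = "image/jpeg", Slug = "photo.jpg" });
        }
    }
}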

Structure

This diagram is from the documentation:

And it is correct, but incomplete. Keep that in mind for a moment – I’ll come back to this point later.


My Git Cheat-Sheet

Git can do a lot of things, but I’m too lazy to remember all the commands I need – some of them are like five words long. So I’ll put them here, and next time I won’t have to search for them.

Update list of remote branches:

git remote update origin --prune

Or set automatic pruning globally:

git config --global fetch.prune true    

Delete local and remote branch:

git push origin --delete <branch_name>
git branch -d <branch_name>

Push all branches to remote:

git push --all -u

Push this new branch to remote:

git push origin branchB:branchB

Add annotated tag:

git tag -a v1.4 -m "my version 1.4"

And push tags to the remote:

git push --follow-tags

To kill all local changes:

git reset --hard HEAD

To reset to the state of a particular commit:

git reset --hard SHA

To make sure all extra (untracked) files are removed:

git clean -f -d

Move local commits into a different branch.

Quite often I start hacking and coding while on the master branch, then I commit and try to push – only to have git policies kick in, saying I can only get changes into master via a Pull Request. So I need to move the commits to a different branch. Here is the way:
1) Create a new branch with your changes.

git checkout -b mybranch

2) Push the new branch to the remote server.

git push origin mybranch

3) Check out the master branch again.

git checkout master

4) Reset the local master branch to match the remote, removing the local commits.

git reset --hard origin/master

Stolen from this answer.

Publish to VSTS NuGet feed from CakeBuild

I’m on a roll today – second post of the day!

Some time last year I blogged about pushing a NuGet package to a VSTS package feed from a Cake script. Turns out there is an easier way to do it – one that does not involve using your own personal access token and storing it in a nuget.config file.

VSTS exposes an OAuth token to build scripts; you just need to make it available to them:

In your build definition, go to Options and tick the checkbox “Allow Scripts To Access OAuth Token”:

Then, instead of creating a nuget.config in your repository, you create a new NuGet source on the build agent machine with this token as the password. You can then push packages to that feed just by using the name of the new source. Luckily, Cake already has all the commands you need:

Task("Publish-Nuget")
    .IsDependentOn("Package")
    .WithCriteria(() => Context.TFBuild().IsRunningOnVSTS)
    .Does(() => 
    {
        var package = "./path/to/package.nupkg";

        // get the access token exposed by VSTS
        var accessToken = EnvironmentVariable("SYSTEM_ACCESSTOKEN");

        // add the NuGet source into the build agent sources list
        NuGetAddSource("MyFeedName", "https://mycompany.pkgs.visualstudio.com/_packaging/myfeed/nuget/v2", new NuGetSourcesSettings()
            {
                UserName = "MyUsername",
                Password = accessToken,
            });

        // Push the package.
        NuGetPush(package, new NuGetPushSettings 
            { 
                Source ="MyFeedName",
                ApiKey = "VSTS", 
                Verbosity = NuGetVerbosity.Detailed,
            });
    });

I like this a lot better than having to faff about with a personal access token and a nuget.config file. You can probably restore NuGet packages from private sources the same way – I have not tried it yet.
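
If you wanted to try, a sketch would look something like this – untested on my side, and the feed name and url are placeholders:

Task("Restore-From-Private-Feed")
    .WithCriteria(() => Context.TFBuild().IsRunningOnVSTS)
    .Does(() =>
    {
        // register the private feed with the OAuth token, same trick as above
        var accessToken = EnvironmentVariable("SYSTEM_ACCESSTOKEN");
        NuGetAddSource("MyFeedName", "https://mycompany.pkgs.visualstudio.com/_packaging/myfeed/nuget/v2", new NuGetSourcesSettings()
            {
                UserName = "MyUsername",
                Password = accessToken,
            });

        // nuget.exe picks the new source up from the machine-level config
        NuGetRestore("./MySolution.sln");
    });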

Nuget Version

You may have noticed that I specify the feed url in the version 2 format – i.e. ending with v2. This is because the default nuget.exe provided by VSTS does not yet support v3, even though the package feeds themselves do. Right now, if you try to push to a url with “v3” in it, you will get this error:

System.InvalidOperationException: Failed to process request. 'Method Not Allowed'. 
The remote server returned an error: (405) Method Not Allowed.. ---> System.Net.WebException: The remote server returned an error: (405) Method Not Allowed.

So downgrade the url to v2 – as I’ve done in the example above. Most of the time v2 works just fine for pushing packages. But if you really need v3, you can check in your own copy of nuget.exe and then specify where to find it, like this:

NuGetPush(package, new NuGetPushSettings 
    { 
        Source ="AMVSoftware",
        ApiKey = "VSTS", 
        Verbosity = NuGetVerbosity.Detailed,
        ToolPath = "./lib/nuget.exe",                   
    });

VSTS vs NuGet vs CakeBuild – again!

I keep migrating my build scripts to the CakeBuild system, and I keep running them on VSTS – because the VSTS build system is mostly awesome: it is free for small teams and has a lot of good stuff in it.

But working with NuGet on VSTS is, for some reason, a complete PITA. This time I had trouble restoring NuGet packages:

'AutoMapper' already has a dependency defined for 'NETStandard.Library'.
An error occurred when executing task 'Restore-NuGet-Packages'.
Error: NuGet: Process returned an error (exit code 1).
System.Exception: Unexpected exit code 1 returned from tool Cake.exe

This is because AutoMapper is way ahead of the times and VSTS uses an older version of nuget.exe. If I run the same script locally, I don’t get this error. So I need to provide my own nuget.exe file and rely on that. This is how it is done in a Cake script:

Task("Restore-NuGet-Packages")
    .Does(() =>
    {
        var settings = new NuGetRestoreSettings()
        {
            // VSTS has old version of Nuget.exe and Automapper restore fails because of that
            ToolPath = "./lib/nuget.exe",
            Verbosity = NuGetVerbosity.Detailed,
        };
        NuGetRestore("./MySolution.sln", settings);
    });

Note the ToolPath override – this is how you tell Cake to use a specific .exe file for the operation.

NSaga – Lightweight Saga Management Framework For .Net

Ladies and gentlemen, I’m glad to present NSaga – a lightweight saga management framework for .Net. This is something I’ve been working on for the last few months, and I can now happily announce the first public release. NSaga gives you the ability to create and manage sagas without having to write any plumbing code yourself.

A saga is a multi-step operation or activity that has persisted state and is driven by messages. A saga defines behaviour and state, but keeps the two distinctly separated.

Saga classes implement the ISaga&lt;TSagaData&gt; interface and consume messages; the messages are dispatched by a SagaMediator. NSaga comes with an internal DI container, but you can use your own. It also comes with SQL Server persistence, with other options to follow shortly.

A basic saga will look like this:

public class ShoppingBasketSaga : ISaga<ShoppingBasketData>,
    InitiatedBy<StartShopping>,
    ConsumerOf<AddProductIntoBasket>,
    ConsumerOf<NotifyCustomerAboutBasket>
{
    public Guid CorrelationId { get; set; }
    public Dictionary<string, string> Headers { get; set; }
    public ShoppingBasketData SagaData { get; set; }

    private readonly IEmailService emailService;
    private readonly ICustomerRepository customerRepository;

    public ShoppingBasketSaga(IEmailService emailService, ICustomerRepository customerRepository)
    {
        this.emailService = emailService;
        this.customerRepository = customerRepository;
    }


    public OperationResult Initiate(StartShopping message)
    {
        SagaData.CustomerId = message.CustomerId;
        return new OperationResult(); // no errors to report
    }


    public OperationResult Consume(AddProductIntoBasket message)
    {
        SagaData.BasketProducts.Add(new BasketProducts()
        {
            ProductId = message.ProductId,
            ProductName = message.ProductName,
            ItemCount = message.ItemCount,
            ItemPrice = message.ItemPrice,
        });
        return new OperationResult(); // no possibility to fail
    }


    public OperationResult Consume(NotifyCustomerAboutBasket message)
    {
        var customer = customerRepository.Find(SagaData.CustomerId);
        if (String.IsNullOrEmpty(customer.Email))
        {
            return new OperationResult("No email recorded for the customer - unable to send message");
        }

        try
        {
            var emailMessage = $"We see your basket is not checked-out. We offer you a 85% discount if you go ahead with the checkout. Please visit https://www.example.com/ShoppingBasket/{CorrelationId}";
            emailService.SendEmail(customer.Email, "Checkout not complete", emailMessage);
        }
        catch (Exception exception)
        {
            return new OperationResult($"Failed to send email: {exception}");
        }
        return new OperationResult(); // operation successful
    }
}

And the saga usage will look like this:

    var correlationId = Guid.NewGuid();

    // start the shopping.
    mediator.Consume(new StartShopping()
    {
        CorrelationId = correlationId,
        CustomerId = Guid.NewGuid(),
    });

    // add a product into the basket
    mediator.Consume(new AddProductIntoBasket()
    {
        CorrelationId = correlationId,
        ProductId = 1,
        ProductName = "Magic Dust",
        ItemCount = 42,
        ItemPrice = 42.42M,
    });
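
The mediator in the snippet above comes from the NSaga wireup. From memory, the setup with the internal container looks roughly like this – check the documentation for the exact builder API:

    // a rough sketch of the wireup using the internal DI container
    var mediator = Wireup.UseInternalContainer()
                         .ResolveMediator();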

There is some documentation, and it is all hosted on GitHub.

Have a look through the samples, add a star to the repository, and next time you need a multi-step operation, give it a go!