Over the last couple of years I’ve built a few libraries for public and internal consumption. Mostly these have been internal things, but they are not much different from OSS libraries.

So far the most popular library I’ve released as open source is Cake.SqlServer, but I’m most proud of NSaga, even though it is not as widely used – mostly due to a lack of marketing effort on my part. Both of these are still on v1.x, but they are young libraries – both under a year old at the time of writing.

In a sense, libraries for internal consumption are easier to build – you can break things and then fix the broken things in the downstream projects (or tell people how to fix them). If your library is publicly released, you need to be much more careful about how you handle breaking changes and how you define your public API surface. Semantic versioning is supposed to help you manage breaking changes, but you still need to minimise the chances of breaking things for other people. If you are not careful and break a lot of stuff, people will be pissed off and eventually move away from your library.

While building OSS libraries I had to exercise a lot of care to cater for a wider audience, and the rules I set out below mostly come from that OSS experience. So here they are:

.Net Framework Versions

You need to consider what framework version you build for. By the nature of the .Net Framework, every version is an increment on the previous one, so a library built for v4.5 will work in v4.6.1 – but not the other way around. So you need to pick the lowest version of .Net you’d like to work with and stick with it. I think it is fair to say that no currently developed project should be below v4.5. There is no reason to stay on a lower version, even if the upgrade from v3.5 takes a bit of effort. So my rule of thumb is to target v4.5 as a baseline, unless I need framework features that are only available in later versions.

You can build your NuGet package targeting different framework versions. However, if you stick with v4.5 there is not much point in providing a separate build for v4.6.

However, with the rise of .Net Core you should really consider building your library targeting Core as well. You can configure NuGet to build for both full-fat .Net Framework and .Net Core, and this is a relatively easy thing to do.

Provide Documentation

This has been repeated over and over again: write XML documentation for all your public methods and objects. Make your build fail on warnings and make XML documentation generation part of the build – so if you don’t put an annotation on a public method or class, your build will fail. This is a great reminder to get your documentation done from the start – otherwise it’ll never happen.
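
To make it concrete, the kind of annotation I’m talking about looks like this (a trivial made-up method, just to illustrate the XML comments):

/// <summary>
/// Sends a welcome email to a newly registered user.
/// </summary>
/// <param name="emailAddress">The address the email is sent to.</param>
/// <returns>True if the email was accepted for delivery.</returns>
public bool SendWelcomeEmail(string emailAddress)
{
    // the actual sending is omitted - this only illustrates the annotations
    return !String.IsNullOrEmpty(emailAddress);
}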

Documentation is also a great deciding factor in whether public OSS libraries survive and grow or die. If I’m picking a library for a certain piece of functionality, I’ll pick the one with the better documentation, because documentation is the first thing I see before I start using the code. So be that guy with the better documentation!

Automate Your Build

If you run an OSS project you can use AppVeyor for free – it is a great continuous integration server, it integrates well with GitHub and can help with a lot of things, like running your tests or pushing packages to NuGet.org.

AppVeyor knows how to build .Net projects, but for more complex builds you’ll have to script what you need to happen. I’m a big fan of the Cake build system – all of my recent projects are built with it and I love it.
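
A minimal Cake script looks roughly like this (solution path and task names here are just placeholders):

var target = Argument("target", "Default");
var configuration = Argument("configuration", "Release");

Task("Restore")
    .Does(() => NuGetRestore("./MySolution.sln"));

Task("Build")
    .IsDependentOn("Restore")
    .Does(() => MSBuild("./MySolution.sln", settings => settings.SetConfiguration(configuration)));

Task("Default")
    .IsDependentOn("Build");

RunTarget(target);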

Internal All the Things

Define your API first – your client’s working surface, the classes that clients should use – these are your public classes. Everything else must be marked as internal or private. This reduces the possibility of accidentally introducing breaking changes. If a class is public, even if it is not meant to be used by your clients, you don’t know how and where it is used, and refactoring becomes problematic. But if the class is marked internal, you know that refactoring can be done safely without breaking everyone’s projects.

There are a lot of arguments for marking everything public and letting consumers deal with it themselves. And there were a few cases when I wished a third-party library had a few more public classes so I could mould it to my liking. But I’d much rather people contact me on GitHub and raise issues saying they can’t do what they want to do because the classes are internal – that leads to a discussion and project evolution. Also, what is internal can always become public later, but not the other way around.

In addition, only public classes and methods need XML documentation on them, so you’ll have to write less documentation for your library – which is always a win for any developer ;-)
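
As a contrived example of what I mean (the class names are made up) – the public entry point is documented and stable, the internal helper is free to change:

namespace MyLibrary
{
    /// <summary>
    /// Part of the public API surface - documented and kept stable.
    /// </summary>
    public class ReportGenerator
    {
        /// <summary>
        /// Generates a report for the given customer id.
        /// </summary>
        public string Generate(int customerId)
        {
            var formatter = new ReportFormatter();
            return formatter.Format(customerId);
        }
    }

    // Implementation detail - internal, undocumented and free to be refactored.
    internal class ReportFormatter
    {
        public string Format(int customerId)
        {
            return $"Report for customer {customerId}";
        }
    }
}

If your unit tests need to reach the internal classes, the [InternalsVisibleTo] attribute covers that without widening the public surface.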

Have Only One Namespace

Resharper loves to suggest different namespaces for your classes depending on what folders they are in. Don’t go for it. Have only one namespace. Later, when you decide it is time to move a file from one folder to another and update the namespace, that will be a breaking change (unless we are talking about internal classes). It is much easier to have only one namespace – it will save you headaches later down the line.

The exception to that is separate NuGet packages. As a rule of thumb I use a single namespace within a NuGet package. My NSaga project has a few supporting projects: the main namespace is NSaga, but if you are using SimpleInjector you will need the NSaga.SimpleInjector namespace – these are deployed via two NuGet packages.
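
In practice that means files in different folders still declare the same root namespace (a contrived example, the file paths are illustrative):

// File: Persistence/SqlStorage.cs
namespace MyLibrary
{
    public class SqlStorage { /* ... */ }
}

// File: Composition/ContainerWireup.cs
namespace MyLibrary
{
    internal class ContainerWireup { /* ... */ }
}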

Conclusion

These things seem basic and obvious, yet I had to research them, try different things, experiment and get burnt. So I wish I had been given this advice a couple of years back, when I was scratching my head over the design of my first internal library.

The TFS source control system has a rather strange notion of a “Workspace”. I’ve run into this numerous times, and tonight again. This time I’m trying to migrate a project from TFS into git and keep the project name intact. So I renamed my old project in VSTS to ProjectName.TFS and created a new one called ProjectName. But I was faced with this great error:

The Team Project name ProjectName was previously used and there are still TFVC workspaces referring to this name. Before you can use this name, the owner of each workspace should execute the Get command to update their workspaces. See renaming a team project for more details (https://go.microsoft.com/fwlink/?LinkId=528893). Found 2 workspace(s) using this name: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824

Well, yes. Thanks for letting me know that this project name was used before. But I obviously don’t care about these workspaces – the PCs where they were used no longer exist.

Following the link I was advised to execute this command to delete the dead workspaces:

tf workspace /delete [/collection:TeamProjectCollectionUrl] workspacename[;workspaceowner]

Yeah, no problem. Only it took me a while to find tf.exe. It is in the most obvious place in VS2017:

c:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\

And WTF is TeamProjectCollectionUrl? And what about workspacename[;workspaceowner]? It took me a while to figure out the correct format expected. Here is what worked for me:

.\tf workspace /delete /collection:mycompanyname.visualstudio.com\DefaultCollection "ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824"

The last bit comes from the error message in VSTS: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824. The workspace name is separated from the owner by ; and different workspaces are separated by ,.

All that bloody obvious!

Following my previous post, I’m building an Asp.Net Core web application and running my tests in XUnit. The default VSTS template for an Asp.Net Core application runs the tests, but it does not publish any results of the test execution, so going into the Test results panel can be sad:

And even if you have a task that publishes test results after dotnet test, you will not get far.

As it turns out, the dotnet test command does not write out any XML files with test execution results. That was a puzzle for me.

Luckily there are good instructions on the XUnit page that explain how to use XUnit with .Net Core properly. In your *test.csproj file you basically need to add the following:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="xunit" Version="2.3.0-beta2-build3683" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.0-beta2-build3683" />
  </ItemGroup>

</Project>

Make sure you don’t miss the DotNetCliToolReference line – this is the key here.

Run dotnet restore and dotnet xunit inside the folder with this project. If you try to run this command outside of this folder you’ll get an error.

Command dotnet xunit has an option to output test results as XML file: dotnet xunit -xml .\test-results.xml.

Because this needs to be executed inside the tests folder, we can’t use the “.Net Core” build task from VSTS – there is no option to configure the base execution folder. Instead you can just add a “Command Line” task to execute what we need from the correct folder:

Add a “Command Line” task. As the Tool option give it dotnet, for Arguments say xunit -xml ./test-results.xml, and make sure you specify the working folder – in my case that was src/Tests.

After that add a “Publish Test Results” task and tell it to use the XUnit result format; the rest of the default parameters worked for me.

And BOOM! We have test results published at the end of the build:

Honestly, VSTS private NuGet feeds are turning into the most blogged-about topic on this blog! I’ve already mentioned them at least twice. Here goes another post:

This time I’m building a .Net Core application in VSTS and the build literally goes:

  • dotnet restore
  • dotnet build
  • dotnet test
  • dotnet publish

And this worked really well for the basic application – I had it set up in minutes and got the result I was looking for.

But during the life of the application I needed NuGet packages from my private feed and dotnet restore had no idea about my private feeds.

Even when supplied with a nuget.config file I was getting 401 Unauthenticated – this is because I was not keeping my password (token) in that file.

The solution was to add this feed globally on the agent for every build. And this turned out to be easier than I thought.

In your build add a new task called Nuget Command:

When you configure this task, give sources as the Command parameter. And for Arguments put this:

add -Name MyFeedName -Source https://MyProject.pkgs.visualstudio.com/_packaging/MyFeedName/nuget/v3/index.json -username myUsername@MyTeamName.onmicrosoft.com -password $(System.AccessToken)

My build task looks like this:

Replace the names accordingly. This adds a NuGet source to the agent. The $(System.AccessToken) part is a build variable that contains an access token – it is generated for every build you run, so there is no need to keep it around in your build script or nuget.config.

To make the token available you need to change a toggle in the build Options: set Allow scripts to access OAuth Token to Enabled:

This way, when you run dotnet restore, your private feed will be available and the OAuth token will be used to authenticate against the private package feed.

I’m building a prototype for a new project and it was decided to use DocumentDB to store our data. There will be very little data and even fewer relationships between the data, so a document database is a good fit. There is also a chance we’ll use DocumentDB in production.

There is comprehensive documentation about the structure and how it all ties together, yet not enough code samples on how to use attachments, and I struggled a bit to come up with a working solution. So I’ll explain it all here for future generations.

Structure

This diagram is from the documentation

And it is correct, but incomplete. Hold that thought for a moment, I’ll come back to this point later.

Ignore the three nodes on the left of the diagram and look at the Documents and Attachments nodes. This basically shows that if you create a document, it will be available at a URI like this:

https://{accountname}.documents.azure.com/dbs/{databaseId}/colls/{collectionId}/docs/{docId}

That’s fine – you make an authenticated request to the correctly formed URI and you’ll get JSON back as a result.

According to the schema, you will also get an attachment at this address:

https://{accountname}.documents.azure.com/dbs/{databaseId}/colls/{collectionId}/docs/{docId}/attachments/{attachId}

And this is correct. If you do an HTTP GET to this address you’ll get JSON back. Something like this:

{
    "contentType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "id": "1",
    "media": "/media/5VEpAMZpeasdfdfdAAAAAOFDl80B",
    "_rid": "5VEpAMZpeasdfdfdAAAAAOFDl80B=",
    "_self": "dbs\/5VEpAA==\/colls\/5VEpEWZpsQA=\/docs\/5VEpAMZpeasdfdfdAAAAAOFDl80B==\/attachments\/5VEpAMZpeasdfdfdAAAAAOFDl80B=",
    "_etag": "\"0000533e-0000-0000-0000-59079b8a0000\"",
    "_ts": 1493673393
}

It turns out there are two ways you can do attachments in DocumentDB – managed and (surprise!) unmanaged. Unmanaged is when you don’t really attach anything but just store a link to an external storage. To be honest, I don’t see much sense in doing it that way – why bother with an extra resource just to keep external links? It would be much easier to make these links part of the actual document, so you don’t have to make another call to retrieve them.

Managed attachments are when you actually store binaries in DocumentDB, and this is what I chose to use. And unfortunately I had to discover for myself that it is not straightforward.

Managed Attachments

If you noticed, in the JSON above there is a line "media": "/media/5VEpAMZpeasdfdfdAAAAAOFDl80B". This is actually the link to the stored binary payload, and you need to query that URI to get the payload. So, starting from a document id, you need two requests to get your hands on the attached binaries:

  1. Get the list of attachments for the document.
  2. Every attachment contains a link to the media – GET that.

So this /media/{mediaId} part is missing from the diagram above. Perhaps this is deliberate, so as not to confuse users. I’ll go with that.

Code Samples

Now to the code samples.

I’m using the NuGet package provided by Microsoft to do the requests for me:

Install-Package Microsoft.Azure.DocumentDB

Let’s start with the basics to get them out of the way:

private async Task<DocumentClient> GetClientAsync()
{
    if (documentClient == null)
    {
        var endpointUrl = configuration["DocumentDb:EndpointUri"];
        var primaryKey = configuration["DocumentDb:PrimaryKey"];

        documentClient = new DocumentClient(new Uri(endpointUrl), primaryKey);
        await documentClient.OpenAsync();
    }

    return documentClient;
}

where documentClient is a private field in the containing class and configuration is whatever gives you access to your application settings.

Now let’s create a document and attach a binary:

var myDoc = new { id = "42", Name = "Max", City="Aberdeen" }; // this is the document you are trying to save
var attachmentStream = File.OpenRead("c:/Path/To/File.pdf"); // this is the document stream you are attaching

var client = await GetClientAsync();
var createUrl = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);
Document document = await client.CreateDocumentAsync(createUrl, myDoc);

await client.CreateAttachmentAsync(document.SelfLink, attachmentStream, new MediaOptions()
    {
        ContentType = "application/pdf", // your application type
        Slug = "78", // this is actually attachment ID
    });

A few things are going on here. I create an anonymous class for the sake of the sample – use strongly typed models instead. Reading the attachment stream from the file system is also just for the sake of the sample; whatever source you have, you’ll need to provide a Stream instance to upload an attachment.
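
For anything beyond a throwaway sample, a small typed model works better. Note that DocumentDB wants a lowercase id property – which is why the anonymous object above uses id – and with a typed model you can map that via a JSON property attribute (the class shape here is made up):

using Newtonsoft.Json;

public class Customer
{
    [JsonProperty("id")]
    public string Id { get; set; }

    public string Name { get; set; }

    public string City { get; set; }
}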

Now this is worth paying attention to: var createUrl = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);. The UriFactory class is not really a factory in the broad OOP sense – it does not produce other objects that do the actual work. This class gives you a set of helpers that build URI addresses from the names of the things you use. In other words, it is a lot of String.Format with templates.

The UriFactory.CreateDocumentCollectionUri method is going to give you a link in the format dbs/{databaseId}/colls/{collectionId}/. If you look at CreateAttachmentUri, it works with this template: dbs/{dbId}/colls/{collectionId}/docs/{docId}/attachments/{attachmentId}.

The next line, await client.CreateDocumentAsync(createUrl, myDoc), is doing what you think it is doing – creating a document on Azure. No surprises here.

But when you look at the block of code with client.CreateAttachmentAsync(), not everything might be clear. document.SelfLink is a URI that links back to the document – it will be in the format dbs/{dbId}/colls/{collectionId}/docs/{docId}. The next big question is Slug – this actually works as the attachment ID. They might as well have called it Id, because this is what goes into the id field when you look at the storage.

Retrieving Attachments

Once we’ve put something into the storage, some time in the future we’ll have to take it out. Let’s get our attached file back.

var client = await GetClientAsync();
var attachmentUri = UriFactory.CreateAttachmentUri(DatabaseName, CollectionName, docId, attachId);

var attachmentResponse = await client.ReadAttachmentAsync(attachmentUri);

var resourceMediaLink = attachmentResponse.Resource.MediaLink;

var mediaLinkResponse = await client.ReadMediaAsync(resourceMediaLink);

var contentType = mediaLinkResponse.ContentType;
var stream = mediaLinkResponse.Media;

Here we have some funky things going on again. The UriFactory.CreateAttachmentUri(DatabaseName, CollectionName, docId, attachId) part will give you dbs/{dbId}/colls/{collectionId}/docs/{docId}/attachments/{attachmentId}, and GETting that address returns the same kind of JSON as at the start of the article. The value of attachmentResponse.Resource.MediaLink will look like /media/5VEpAMZpeasdfdfdAAAAAOFDl80B3 and this is the path to GET the actual attached binary – which is what we do in await client.ReadMediaAsync(resourceMediaLink). The rest should be self-explanatory.
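
If you need the raw bytes rather than a stream (say, to return a file from a controller), the last step is a straightforward copy – a minimal sketch continuing from the snippet above:

// requires System.IO; mediaLinkResponse comes from the snippet above
byte[] attachmentBytes;
using (var memoryStream = new MemoryStream())
{
    await mediaLinkResponse.Media.CopyToAsync(memoryStream);
    attachmentBytes = memoryStream.ToArray();
}

// contentType (e.g. "application/pdf") tells you what the bytes actually are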

Conclusion

To be honest, the lack of explanation of this /media/{mediaId} part in the documentation does not add kudos to the team. And I feel the provided API is not straightforward and not easy to use – I had to decompile the API library and wonder about what is actually happening inside it. There is also too much leakage of the implementation: I really could have lived without ever having to know about UriFactory.

Git can do a lot of things, but I’m too lazy to remember all the commands I need – some of them are like five words long. So I’ll put these here so that next time I don’t have to search for them.

Update list of remote branches:

git remote update origin --prune

Or set automatic pruning globally:

git config --global fetch.prune true    

Delete local and remote branch:

git push origin --delete <branch_name>
git branch -d <branch_name>

Push all branches to remote:

git push --all -u

Push this new branch to remote:

git push origin branchB:branchB

Add annotated tag:

git tag -a v1.4 -m "my version 1.4"

And push tags to remote

git push --follow-tags

I’m on a roll today – second post of the day!

Some time last year I blogged about pushing a NuGet package to a VSTS package feed from a Cake script. Turns out there is an easier way to do it that does not involve using your own personal access token and storing it in a nuget.config file.

Turns out that VSTS exposes an OAuth token to build scripts – you just need to make it available to them:

In your build definition go to Options and tick the “Allow Scripts To Access OAuth Token” checkbox:

Then, instead of creating a nuget.config in your repository, you create a new NuGet source on the build agent machine that has this token as a password, and you can then push packages to that feed just by using the name of the new source. Luckily Cake already has all the commands you need to do that:

Task("Publish-Nuget")
    .IsDependentOn("Package")
    .WithCriteria(() => Context.TFBuild().IsRunningOnVSTS)
    .Does(() => 
    {
        var package = "./path/to/package.nupkg";

        // get the access token
        var accessToken = EnvironmentVariable("SYSTEM_ACCESSTOKEN");

        // add the NuGet source into the build agent sources list
        NuGetAddSource("MyFeedName", "https://mycompany.pkgs.visualstudio.com/_packaging/myfeed/nuget/v2", new NuGetSourcesSettings()
            {
                UserName = "MyUsername",
                Password = accessToken,
            });

        // Push the package.
        NuGetPush(package, new NuGetPushSettings 
            { 
                Source ="MyFeedName",
                ApiKey = "VSTS", 
                Verbosity = NuGetVerbosity.Detailed,
            });
    });

I like this a lot better than having to faff about with a personal access token and a nuget.config file. You can probably restore NuGet packages from private sources the same way – I have not tried it yet.
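
If I were to try it, I’d expect restore to work with the same trick – add the source with the token first, then restore. An untested sketch (feed name, URL and solution path are placeholders):

Task("Restore-From-Private-Feed")
    .WithCriteria(() => Context.TFBuild().IsRunningOnVSTS)
    .Does(() =>
    {
        var accessToken = EnvironmentVariable("SYSTEM_ACCESSTOKEN");

        // same call as in the publish task above
        NuGetAddSource("MyFeedName", "https://mycompany.pkgs.visualstudio.com/_packaging/myfeed/nuget/v2", new NuGetSourcesSettings()
            {
                UserName = "MyUsername",
                Password = accessToken,
            });

        NuGetRestore("./MySolution.sln");
    });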

Nuget Version

If you noticed, I specify the URL for the feed in version 2 format – i.e. ending in v2. This is because the default nuget.exe version provided by VSTS does not yet support v3, even though the feed itself can serve v3. Right now, if you try to push to a URL with “v3” in it you will get this error:

System.InvalidOperationException: Failed to process request. 'Method Not Allowed'. 
The remote server returned an error: (405) Method Not Allowed.. ---> System.Net.WebException: The remote server returned an error: (405) Method Not Allowed.

So downgrade the URL to v2 – as I’ve done in the example above. Most of the time v2 works just fine for pushing packages. But if you really need v3, you can check in your own copy of nuget.exe and then specify where to find it like this:

NuGetPush(package, new NuGetPushSettings 
    { 
        Source ="AMVSoftware",
        ApiKey = "VSTS", 
        Verbosity = NuGetVerbosity.Detailed,
        ToolPath = "./lib/nuget.exe",                   
    });

I keep migrating my build scripts to the Cake build system. And I keep running them on VSTS, because for the most part the VSTS build system is awesome – it is free for small teams and has a lot of good stuff in it.

But working with NuGet on VSTS is for some reason a complete PITA. This time I had trouble restoring NuGet packages:

'AutoMapper' already has a dependency defined for 'NETStandard.Library'.
An error occurred when executing task 'Restore-NuGet-Packages'.
Error: NuGet: Process returned an error (exit code 1).
System.Exception: Unexpected exit code 1 returned from tool Cake.exe

This is because AutoMapper is way ahead of the times and VSTS uses an older version of nuget.exe. If I run the same code locally, I don’t get this error. So I need to provide my own nuget.exe file and rely on that. This is how it is done in a Cake script:

Task("Restore-NuGet-Packages")
    .Does(() =>
    {
        var settings = new NuGetRestoreSettings()
        {
            // VSTS has old version of Nuget.exe and Automapper restore fails because of that
            ToolPath = "./lib/nuget.exe",
            Verbosity = NuGetVerbosity.Detailed,
        };
        NuGetRestore("./MySolution.sln", settings);
    });

Note the overridden ToolPath – this is how you tell Cake to use a specific .exe file for the operation.

Ladies and gentlemen, I’m glad to present NSaga – a lightweight saga management framework for .Net. This is something I’ve been working on for the last few months and I can now happily announce the first public release. NSaga gives you the ability to create and manage sagas without having to write any plumbing code yourself.

A saga is a multi-step operation or activity that has persisted state and is driven by messages. A saga defines behaviour and state, but keeps them distinctly separated.

Saga classes are defined by the ISaga<TSagaData> interface and consume messages. Messages are dispatched by a SagaMediator. NSaga comes with an internal DI container, but you can use your own. It also comes with SQL Server persistence, with other storage options to follow shortly.

A basic saga will look like this:

public class ShoppingBasketSaga : ISaga<ShoppingBasketData>,
    InitiatedBy<StartShopping>,
    ConsumerOf<AddProductIntoBasket>,
    ConsumerOf<NotifyCustomerAboutBasket>
{
    public Guid CorrelationId { get; set; }
    public Dictionary<string, string> Headers { get; set; }
    public ShoppingBasketData SagaData { get; set; }

    private readonly IEmailService emailService;
    private readonly ICustomerRepository customerRepository;

    public ShoppingBasketSaga(IEmailService emailService, ICustomerRepository customerRepository)
    {
        this.emailService = emailService;
        this.customerRepository = customerRepository;
    }


    public OperationResult Initiate(StartShopping message)
    {
        SagaData.CustomerId = message.CustomerId;
        return new OperationResult(); // no errors to report
    }


    public OperationResult Consume(AddProductIntoBasket message)
    {
        SagaData.BasketProducts.Add(new BasketProducts()
        {
            ProductId = message.ProductId,
            ProductName = message.ProductName,
            ItemCount = message.ItemCount,
            ItemPrice = message.ItemPrice,
        });
        return new OperationResult(); // no possibility to fail
    }


    public OperationResult Consume(NotifyCustomerAboutBasket message)
    {
        var customer = customerRepository.Find(SagaData.CustomerId);
        if (String.IsNullOrEmpty(customer.Email))
        {
            return new OperationResult("No email recorded for the customer - unable to send message");
        }

        try
        {
            var emailMessage = $"We see your basket is not checked-out. We offer you a 85% discount if you go ahead with the checkout. Please visit https://www.example.com/ShoppingBasket/{CorrelationId}";
            emailService.SendEmail(customer.Email, "Checkout not complete", emailMessage);
        }
        catch (Exception exception)
        {
            return new OperationResult($"Failed to send email: {exception}");
        }
        return new OperationResult(); // operation successful
    }
}

And the saga usage will be

    var correlationId = Guid.NewGuid();

    // start the shopping.
    mediator.Consume(new StartShopping()
    {
        CorrelationId = correlationId,
        CustomerId = Guid.NewGuid(),
    });

    // add a product into the basket
    mediator.Consume(new AddProductIntoBasket()
    {
        CorrelationId = correlationId,
        ProductId = 1,
        ProductName = "Magic Dust",
        ItemCount = 42,
        ItemPrice = 42.42M,
    });
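
For completeness, the messages used above are just plain classes carrying a CorrelationId – roughly like this (the marker interface names are my assumption here; check the NSaga documentation for the exact contracts):

public class StartShopping : IInitiatingSagaMessage // assumed marker interface
{
    public Guid CorrelationId { get; set; }
    public Guid CustomerId { get; set; }
}

public class AddProductIntoBasket : ISagaMessage // assumed marker interface
{
    public Guid CorrelationId { get; set; }
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public int ItemCount { get; set; }
    public decimal ItemPrice { get; set; }
}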

There is some documentation, and it is all hosted on GitHub.

Have a look through the samples, add a star to the repository, and next time you need a multi-step operation, give it a go!