Recently I purchased an Amazon Alexa, just for fun. Initially my wife wanted a radio or a Bluetooth speaker in the kitchen, and Alexa was cheap enough to use as a speaker/radio.

On the first day I asked it a bunch of questions and got a lot of rather disappointing “hm… I don’t know that one”. Yeah, sure, it can tell me the weather and read a news briefing. The kitchen timer is rather useful (unless it mishears you and resets the timer instead of pausing it).

I’ve tried a bunch of skills and none of them were particularly good or useful. The idea was to get a cookbook, but nothing decent was available (there is Allrecipes, but it is not available in the UK).

And listening to music is another disappointment. It turns out that all the MP3s I’ve collected over the years are worthless here. To be able to listen to music I need to subscribe to one of the streaming services (Amazon Music or Spotify). But I want my music, not a stream. I’ve wasted a lot of time searching for a solution, and there were only a few options. There is a Synology skill (and I own a Synology), but it was only released in October 2017 and is not yet available. There is a Plex skill, but it does not let me stream music through Alexa – it is merely a voice remote control for another existing player.

I’ve seen a lot of developers describing their own ways of getting Alexa to stream their music, but none of them were ever approved for the Alexa skills market (why would Amazon allow that? They sell a music service!). And getting any of them working involved a lot of effort that I did not want to put in.

Another thing I did not like was the way skills are activated – I had to call each skill by name and use the exact command, e.g. Alexa, tell Hive to boost heating to 21 degrees. That’s rather a mouthful. Given that it does not always work the first time, I can turn on my heating from my phone faster. I’d much prefer Alexa, boost heating, but that did not always work. The same goes for other skills: Alexa, ask Radioplayer to suggest a station, or generally Alexa, ask [skillname] to do [command-name].
Well, yes, I understand that we are dealing with computers here, where precise commands are required, but the last thing I want to remember is some silly skill name just to get myself some radio. Granted, there is built-in TuneIn online radio with hundreds of stations to choose from – the problem there is remembering (or rather knowing) the name of the station I’d like to listen to.

I was also hoping for the advertised shopping list and to-do list to be voice-activated. But my preferred service, Wunderlist, is supported only as a second-class citizen, and to add items to my Groceries list I had to say something like Alexa, tell Wunder Link to add eggs to my Groceries list. I was hoping for Alexa, add eggs to my shopping list, but that is not available. Todoist.com has a better integration, but I’m not sure I like it more. I grew fond of Wunderlist’s simplicity, and the bells and whistles of Todoist are overkill for my simple lists.

Anyway, the device turned out to be not as useful as I was hoping it would be.

Here is what turned out to be useful for me (your mileage may vary):

  • Kitchen timer (a rather expensive one)
  • Bluetooth speaker (again, on the expensive side)
  • Online radio (if you know the names of the stations you like)
  • Hive heating control (somewhat flaky)

Disappointments:

  • Unable to play my music without buying a service subscription
  • Voice commands have to be in a certain format, and having to remember skill names and the exact wording expected is a pain
  • Not clever at all when answering non-wiki questions

So now I’m pondering: should I try and build a skill for Alexa (and what should it be?), or just send the device back to Amazon?

Last night I ran into a problem with GitVersion and the dotnet pack command while running my builds on VSTS.

The problem came up when I tried to specify the version number I’d like the package to have. There are answers on SO telling you how to do it, and I’ve used them in the past. But this time it did not work for some reason, so I had to investigate.

Here is the way that worked for me.
To begin, edit your *.csproj file and make sure your package information is there:

<PropertyGroup>
    <OutputType>Library</OutputType>
    <PackageId>MyPackageName</PackageId>
    <Authors>AuthorName</Authors>
    <Description>Some description</Description>
    <PackageRequireLicenseAcceptance>false</PackageRequireLicenseAcceptance>
    <PackageReleaseNotes>This is a first release</PackageReleaseNotes>
    <Copyright>Copyright 2017 (c) Trailmax. All rights reserved.</Copyright>
    <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>

Make sure that you don’t have any <Version> or <VersionPrefix> elements in there – these will break your stuff.

Then when you do your dotnet pack, specify /p:Version=7.7.7 as a parameter to the command, i.e. dotnet pack --configuration Release --output d:/tmp /p:Version=7.7.7.
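
If everything is wired up correctly, the output folder should then contain a package named after your PackageId and the version you passed in – something along these lines:

d:/tmp/MyPackageName.7.7.7.nupkg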

Assembly version number

You can specify the assembly version number in your *.csproj file or via the old-school [assembly: AssemblyVersion("1.2.3.4")] assembly attribute. Both will work in Core projects. However, GitVersion does not know how to update the version number in your csproj file, but it works pretty well with an AssemblyInfo.cs file. By default Core projects do not come with an AssemblyInfo.cs file, so you need to create one yourself.

To save some effort when you have multiple projects in the same solution, you can create a single SolutionInfo.cs file next to your *.sln file and add it to all your projects as a link.

Here is a sample of such a SolutionInfo.cs file:

using System;
using System.Reflection;

[assembly: System.Reflection.AssemblyCompanyAttribute("Trailmax")]
[assembly: System.Reflection.AssemblyProductAttribute("MyAwesome.Project")]
[assembly: System.Reflection.AssemblyTitleAttribute("MyAwesome.Project")]

[assembly: System.Reflection.AssemblyConfigurationAttribute("Release")]
[assembly: System.Reflection.AssemblyDescriptionAttribute("MyAwesome.Project")]

[assembly: AssemblyVersion("1.2.3.4")]
[assembly: AssemblyInformationalVersion("1.2.3")]
[assembly: AssemblyFileVersion("1.2.3.4")]

To add it as a link, right-click on your project: Add -> Existing Item -> select the SolutionInfo.cs file -> instead of Add, choose Add as Link.

Or you can add this to your csproj file:

<ItemGroup>
    <Compile Include="..\SolutionInfo.cs" Link="SolutionInfo.cs" />
</ItemGroup>

After that is done you can tell GitVersion to update this SolutionInfo.cs file with the new versions.

GitVersion pushes various formats of the version number into environment variables, and these can be used as part of the dotnet pack command.

The Whole Thing

There is a GitVersion add-on for VSTS – install it and add it as the first step of your build pipeline. Point it at your SolutionInfo.cs file so it gets updated.
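
As a side note, if you run GitVersion yourself rather than through the VSTS add-on, the same update can be done from the command line roughly like this (a sketch – check the switch name against the GitVersion version you have installed):

GitVersion.exe /updateassemblyinfo SolutionInfo.cs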

Then when you need to create a package, add another .Net Core build step. Set the command to pack and add /p:VersionPrefix=$(GitVersion.NuGetVersion) to the arguments list.
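
To illustrate, with pack selected as the command, the arguments box could end up looking something like this (the configuration and output path are just examples):

--configuration Release --output $(Build.ArtifactStagingDirectory) /p:VersionPrefix=$(GitVersion.NuGetVersion)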

Microsoft has this strange idea that anybody would want their PC to be woken up during the night to do maintenance tasks. WHY???

All my PCs have encrypted drives and a password is required for Windows to boot. So even if a PC manages to wake itself up, it just gets stuck on the password prompt. DUH!

Here is a way to check it:

powercfg -waketimers

If you see Reason: Windows will execute 'Maintenance Activator' scheduled task that requested waking the computer – then this is the culprit.

Head to Control Panel -> Security and Maintenance -> expand Maintenance. There you will find an option to disable waking your PC for maintenance.

And here is another discussion with some answers, though people do report that Windows 10 is notorious for ignoring any and all of these settings, and your PC may still get woken up at night – short of unplugging it from the mains, there is no guarantee.

And another discussion with a lot of details.

I blogged about user impersonation in Asp.Net MVC three years ago, and that article has been in the top 10 most visited pages on my blog.

Since Asp.Net Core is out and more or less ready for production, it is time to revisit that article and give guidance on how to do the same thing in Core. The approach has not changed at all – we do the same steps as before – only some of the API has changed, and it took me a bit of time to figure out all the updated parts.

Impersonation Process

Impersonation is when an admin user is logged in with the same privileges as another user, but without knowing their password or other credentials. I’ve used this in a couple of applications and it was invaluable for support cases and for debugging user permissions.

The process of impersonation in Asp.Net Core is pretty simple – we create an authentication cookie for the impersonated user and give it to the current admin user. We also add some information to the cookie to record that impersonation is happening, so the admin can go back to their own account without having to log in again.

In my previous article I used a service-layer class to do the impersonation. This time I’m doing everything in a controller, just because it is easier. Please don’t be alarmed by this – I’m still a big believer in thin controllers: they should accept requests and hand control over to other layers. So if you feel you need to add an abstraction layer, that should be pretty simple; I’m skipping it here for simplicity’s sake.
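
For context, the snippets below assume the usual constructor injection of the Identity services; a minimal sketch of that wiring could look like this (the controller name, the ApplicationUser type and the IOptions<IdentityCookieOptions> injection are assumptions for the example):

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class ImpersonationController : Controller
{
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly SignInManager<ApplicationUser> _signInManager;
    private readonly IdentityCookieOptions cookieOptions;

    public ImpersonationController(
        UserManager<ApplicationUser> userManager,
        SignInManager<ApplicationUser> signInManager,
        IOptions<IdentityCookieOptions> cookieOptions)
    {
        _userManager = userManager;
        _signInManager = signInManager;
        this.cookieOptions = cookieOptions.Value; // provides ApplicationCookieAuthenticationScheme used below
    }

    // the impersonation actions shown below live in this controller
}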

So this is the part that starts the impersonation:

[Authorize(Roles = "Admin")] // <-- Make sure only admins can access this 
public async Task<IActionResult> ImpersonateUser(String userId)
{
    var currentUserId = User.GetUserId();

    var impersonatedUser = await _userManager.FindByIdAsync(userId);

    var userPrincipal = await _signInManager.CreateUserPrincipalAsync(impersonatedUser);

    userPrincipal.Identities.First().AddClaim(new Claim("OriginalUserId", currentUserId));
    userPrincipal.Identities.First().AddClaim(new Claim("IsImpersonating", "true"));

    // sign out the current user
    await _signInManager.SignOutAsync();

    await HttpContext.Authentication.SignInAsync(cookieOptions.ApplicationCookieAuthenticationScheme, userPrincipal);

    return RedirectToAction("Index", "Home");
}

In this snippet you can see that I’m creating a ClaimsPrincipal for the impersonation victim and adding extra claims to it. This claims principal is then used to create the auth cookie.

To de-impersonate use this method:

[Authorize]
public async Task<IActionResult> StopImpersonation()
{
    if (!User.IsImpersonating())
    {
        throw new Exception("You are not impersonating now. Can't stop impersonation");
    }

    var originalUserId = User.FindFirst("OriginalUserId").Value;

    var originalUser = await _userManager.FindByIdAsync(originalUserId);

    await _signInManager.SignOutAsync();

    await _signInManager.SignInAsync(originalUser, isPersistent: true);

    return RedirectToAction("Index", "Home");
}

Here we check that the impersonation claim is present, then get the original user id and sign back in as that user. You will probably ask what the .IsImpersonating() method is. Here it is, along with the GetUserId() method:

public static class ClaimsPrincipalExtensions
{
    //https://stackoverflow.com/a/35577673/809357
    public static string GetUserId(this ClaimsPrincipal principal)
    {
        if (principal == null)
        {
            throw new ArgumentNullException(nameof(principal));
        }
        var claim = principal.FindFirst(ClaimTypes.NameIdentifier);

        return claim?.Value;
    }

    public static bool IsImpersonating(this ClaimsPrincipal principal)
    {
        if (principal == null)
        {
            throw new ArgumentNullException(nameof(principal));
        }

        var isImpersonating = principal.HasClaim("IsImpersonating", "true");

        return isImpersonating;
    }
}

Dealing with Security Stamp Invalidation and cookie refreshing

Now, this is the bit I always forget. When security stamp validation kicks in and the cookie gets refreshed, it erases the custom claims we put into the cookie when impersonation started. This is a subtle bug, because by default it happens only every 30 minutes, and I never use impersonation long enough to run into it. However it does happen, so I’d better update this post, since I have a solution.

In your Startup class, inside ConfigureServices, put this block of code:

services.Configure<IdentityOptions>(options =>
{
    // this sets how often the cookie is refreshed. Adjust as needed.
    options.SecurityStampValidationInterval = TimeSpan.FromMinutes(10);
    options.OnSecurityStampRefreshingPrincipal = context =>
    {
        var originalUserIdClaim = context.CurrentPrincipal.FindFirst("OriginalUserId");
        var isImpersonatingClaim = context.CurrentPrincipal.FindFirst("IsImpersonating");
        if (isImpersonatingClaim?.Value == "true" && originalUserIdClaim != null)
        {
            context.NewPrincipal.Identities.First().AddClaim(originalUserIdClaim);
            context.NewPrincipal.Identities.First().AddClaim(isImpersonatingClaim);
        }
        return Task.FromResult(0);
    };
});

This fixes the issue by re-adding our custom claims to the new cookie.

And this is pretty much it. You can see the full working sample on GitHub.

Impersonation in Asp.Net Core v2.0

Asp.Net Core v2 has been out for a while, but I had not had a chance to migrate my projects until today. When that happened, I had to work out impersonation for Core v2. Mostly I followed this guide: https://docs.microsoft.com/en-us/aspnet/core/migration/1x-to-2x/ and then the Identity part: https://docs.microsoft.com/en-us/aspnet/core/migration/1x-to-2x/identity-2x

One important change was in ImpersonationController.

[Authorize(Roles = "Admin")] // <-- Make sure only admins can access this
public async Task<IActionResult> ImpersonateUser(String userId)
{
    var currentUserId = User.GetUserId();

    var impersonatedUser = await _userManager.FindByIdAsync(userId);

    var userPrincipal = await _signInManager.CreateUserPrincipalAsync(impersonatedUser);

    userPrincipal.Identities.First().AddClaim(new Claim("OriginalUserId", currentUserId));
    userPrincipal.Identities.First().AddClaim(new Claim("IsImpersonating", "true"));

    // sign out the current user
    await _signInManager.SignOutAsync();

    await HttpContext.SignInAsync(IdentityConstants.ApplicationScheme, userPrincipal); // <-- This has changed from the previous version

    return RedirectToAction("Index", "Home");
}

There was also a syntax change in Startup when configuring the security stamp validator:

services.Configure<SecurityStampValidatorOptions>(options => // different class name
{
    options.ValidationInterval = TimeSpan.FromMinutes(1);  // new property name
    options.OnRefreshingPrincipal = context =>             // new property name
    {
        var originalUserIdClaim = context.CurrentPrincipal.FindFirst("OriginalUserId");
        var isImpersonatingClaim = context.CurrentPrincipal.FindFirst("IsImpersonating");
        if (isImpersonatingClaim?.Value == "true" && originalUserIdClaim != null)
        {
            context.NewPrincipal.Identities.First().AddClaim(originalUserIdClaim);
            context.NewPrincipal.Identities.First().AddClaim(isImpersonatingClaim);
        }
        return Task.FromResult(0);
    };
});

Other than that, the impersonation is mostly the same. See the full working sample in the CoreV2 branch on GitHub.

In the last couple of years I’ve built a few libraries for public and internal consumption. Mostly these have been internal things, but they are not much different from OSS libraries.

So far the most popular library I’ve open-sourced is Cake.SqlServer, but I’m most proud of NSaga, even though it is not used as much – mostly due to a lack of marketing effort on my part. Both of these are still on v1.x, but they are young libraries – both under a year old at the moment of writing.

In a sense, libraries for internal consumption are easier to build – you can break things and then fix the broken bits in the downstream projects (or tell people how to fix them). If your library is publicly released, you need to be much more careful about how you handle breaking changes and how you define your public API surface. Semantic versioning is supposed to help you manage breaking changes, but you still need to minimise the chances of breaking things for other people. If you are not careful and break a lot of stuff, people will be pissed off and eventually move away from your library.

While building OSS libraries I had to exercise a lot of care to cater for a wider audience, and the rules I set out below mostly come from that experience. So here they are:

.Net Framework Versions

You need to consider which framework version you build for. It is the nature of the .Net Framework that every version is an increment on the previous one, so a library built for v4.5 will work in v4.6.1, but not the other way around. So you need to pick the lowest version of .Net you’d like to support and stick with it. I think it is fair to say that no currently developed project should be below v4.5 – there is no reason to stay on a lower version, even though the upgrade from v3.5 can be relatively painful. So my rule of thumb is to target v4.5 as a baseline, unless I need framework features that are only available in later versions.

You can build your NuGet package targeting different framework versions. However, if you stick with v4.5 there is not much point in providing a separate build for v4.6.

However, with the rise of .Net Core you should really consider building your library to target Core as well. You can configure your package to build for both the full-fat .Net Framework and .Net Core, and this is a relatively easy thing to do.
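
For example, with the SDK-style project format, multi-targeting can be as simple as listing several target frameworks in your csproj (the monikers below are just an illustration – pick the ones you actually need):

<PropertyGroup>
    <TargetFrameworks>net45;netstandard1.6</TargetFrameworks>
</PropertyGroup>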

Provide Documentation

This has been said over and over again: write XML documentation for all your public methods and classes. Make your build fail on warnings and make XML documentation generation part of the build – so if you don’t put an annotation on a public method or class, your build will fail. This is a great way to force yourself to get the documentation done from the start – otherwise it’ll never happen.
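
For SDK-style projects, a minimal sketch of wiring this up in the csproj could look like the following – missing XML comments on public members then come through as compiler warnings, and warnings fail the build:

<PropertyGroup>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>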

Documentation is also a great deciding factor in whether a public OSS library survives and grows, or dies. If I’m picking a library for certain functionality, I’ll pick the one with the better documentation, because documentation is the first thing I see before I start using the code. So be the guy with the better documentation!

Automate Your Build

If you run an OSS project, you can use AppVeyor for free. It is a great continuous integration server: it integrates well with GitHub and can help with a lot of things, like running your tests or pushing packages to NuGet.org.

AppVeyor knows how to build .Net projects, but for more complex builds you’ll have to script what needs to happen. I’m a big fan of the Cake build system – all of my recent projects are built with it and I love it.

Internal All the Things

Define your API first – your clients’ working surface, the classes they are meant to use – and make those public. Everything else must be marked as internal or private. This reduces the possibility of accidentally introducing breaking changes. If a class is public, even if it is not meant to be used by your clients, you don’t know how and where it is used, and refactoring becomes problematic. But if the class is internal, you know it can be refactored safely without breaking everyone’s projects. There are plenty of arguments for marking everything public and letting consumers deal with the consequences themselves, and there were a few cases when I wished a third-party library had a few more public classes so I could mould it to my liking. But I’d much rather people contacted me on GitHub and raised an issue saying they can’t do what they want because a class is not public – that leads to a discussion and to project evolution. Also, what is internal can always become public later, but not the other way around.
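
To illustrate the idea (all the names below are made up for this example): the working surface stays public, the plumbing stays internal, and if your own tests need to see the internals you can open them up with InternalsVisibleTo instead of making everything public:

using System.Runtime.CompilerServices;

// let the test assembly see internal classes without exposing them to consumers
[assembly: InternalsVisibleTo("MyLibrary.Tests")]

namespace MyLibrary
{
    // part of the public API surface - clients are meant to use this
    public class LibraryClient
    {
        private readonly MessageRouter router = new MessageRouter();

        public void Send(object message)
        {
            router.Route(message);
        }
    }

    // implementation detail - free to refactor without breaking consumers
    internal class MessageRouter
    {
        public void Route(object message)
        {
            // actual routing logic lives here
        }
    }
}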

In addition, only public classes and methods need XML documentation, so you’ll have less documentation to write for your library – which is always a win for any developer ;-)

Have Only One Namespace

ReSharper loves to suggest a different namespace for a class depending on which folder it sits in. Don’t go for it – have only one namespace. Later, when you decide it is time to move a file from one folder to another and update its namespace, that becomes a breaking change (unless we are talking about internal classes). It is much easier to stick with a single namespace – it will save you headaches down the line.

The exception to this is separate NuGet packages – as a rule of thumb I use a single namespace per NuGet package. My NSaga project has a few supporting projects: the main namespace is NSaga, but if you are using SimpleInjector you will need the NSaga.SimpleInjector namespace – these are shipped as two NuGet packages.

Conclusion

These things seem basic and obvious, yet I had to research them, try different things, experiment and get burnt. I wish I had been given this advice a couple of years back, when I was scratching my head over the design of my first internal library.

The TFS source control system has a strange notion of a “workspace”. I’ve run into this numerous times, and tonight again. This time I’m trying to migrate a project from TFS into git and keep the project name intact. So I renamed my old project in VSTS to ProjectName.TFS and created a new one called ProjectName, but was faced with this great error:

The Team Project name ProjectName was previously used and there are still TFVC workspaces referring to this name. Before you can use this name, the owner of each workspace should execute the Get command to update their workspaces. See renaming a team project for more details (https://go.microsoft.com/fwlink/?LinkId=528893). Found 2 workspace(s) using this name: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824

Well, yes. Thanks for letting me know that this project name was used before. But I obviously don’t care about these workspaces – the PCs where they were used no longer exist.

Following the link I was advised to execute this command to delete the dead workspaces:

tf workspace /delete [/collection:TeamProjectCollectionUrl] workspacename[;workspaceowner]

Yeah, no problem. Only it took me a while to find tf.exe. It is in the most obvious place in VS2017:

c:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer\

And WTF is TeamProjectCollectionUrl? And what about workspacename[;workspaceowner]? It took me a while to figure out the expected format. Here is what worked for me:

.\tf workspace /delete /collection:mycompanyname.visualstudio.com\DefaultCollection "ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824"

The last bit comes from the error message in VSTS: ws_1_1;b03e2eb0-22aa-1122-b692-30097a2fa824, ws_dd5f57e41;b2345678-98a0-4f29-13692-30097a2fa824. The workspace name is separated from the owner by ; and different workspaces are separated by ,.

All that bloody obvious!

Following on from my previous post, I’m building an Asp.Net Core web application and running my tests with XUnit. The default VSTS template for an Asp.Net Core application runs the tests, but it does not publish any test execution results, so going into the Test results panel can be a sad experience:

And even if you have a task that publishes test results after dotnet test, you will not get far.

As it turns out, the dotnet test command does not produce any XML files with test execution results. That was a puzzle for me.

Luckily there are good instructions on the XUnit page that explain how to use XUnit with .Net Core properly. In your *test.csproj file you basically need to add the following:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="xunit" Version="2.3.0-beta2-build3683" />
    <DotNetCliToolReference Include="dotnet-xunit" Version="2.3.0-beta2-build3683" />
  </ItemGroup>

</Project>

Make sure you don’t miss the DotNetCliToolReference line – this is the key here.

Run dotnet restore and then dotnet xunit inside the folder with this project. If you try to run the dotnet xunit command outside of this folder, you’ll get an error.

The dotnet xunit command has an option to output test results as an XML file: dotnet xunit -xml .\test-results.xml.
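
Put together, running this locally from the test project folder looks something like the following (src/Tests is where my tests live – adjust to your layout):

cd src/Tests
dotnet restore
dotnet xunit -xml ./test-results.xml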

Because this needs to be executed inside the tests folder, we can’t use the “.Net Core” build task from VSTS – there is no option to configure the base execution folder. Instead you can add a “Command Line” task to execute what we need from the correct folder:

Add a “Command Line” task. For the Tool option give it dotnet, for the Arguments say xunit -xml ./test-results.xml, and make sure you specify the working folder – in my case that was src/Tests.

After that add a “Publish Test Results” task and tell it to use the XUnit result format; the rest of the default parameters worked for me.

And BOOM! We have test results published at the end of the build:

Honestly, VSTS private NuGet feeds must be the most blogged-about topic on this blog! I’ve already mentioned them at least twice. Here goes another post:

This time I’m building a .Net Core application in VSTS, and the build literally goes:

  • dotnet restore
  • dotnet build
  • dotnet test
  • dotnet publish

And this worked really well for the basic application – I had it set up in minutes and got the result I was looking for.

But during the life of the application I needed NuGet packages from my private feed, and dotnet restore had no idea about my private feeds.

Even when supplied with a nuget.config file I was getting 401 Unauthorized – this is because I was not keeping my password (token) in that file.

The solution was to add this feed globally on the agent for every build. And this turned out to be easier than I thought.

In your build, add a new task called NuGet Command:

When you configure this task, give sources as the Command parameter, and for the arguments put this:

add -Name MyFeedName -Source https://MyProject.pkgs.visualstudio.com/_packaging/MyFeedName/nuget/v3/index.json -username myUsername@MyTeamName.onmicrosoft.com -password $(System.AccessToken)

Replace the names accordingly. This adds a NuGet source on the agent. The $(System.AccessToken) part picks up a build variable that contains an access token – it is generated for every build you run, so there is no need to keep it in your build script or nuget.config.

To make the token available you need to change a toggle in the build Options: set Allow scripts to access OAuth token to Enabled.

This way, when you run dotnet restore your private feed will be available, and the OAuth token will be used to authenticate against the private package feed.

I’m building a prototype for a new project, and it was decided to use DocumentDB to store our data. There will be very little data and even fewer relationships between the data, so a document database is a good fit. There is also a chance we will use DocumentDB in production.

There is comprehensive documentation about the structure and how it all ties together, yet not enough code samples on how to use attachments, and I struggled a bit to come up with a working solution. So I’ll explain it all here for future generations.

Structure

This diagram is from the documentation

And it is correct, but incomplete. Keep it in mind for a moment – I’ll come back to this point later.

Ignore the three nodes on the left of the diagram and look at the Documents and Attachments nodes. This basically shows that if you create a document, it will be available at a URI like this:

https://{accountname}.documents.azure.com/dbs/{databaseId}/colls/{collectionId}/docs/{docId}

That’s fine – you make an authenticated request to the correctly formed URI and you’ll get JSON back as a result.

According to the schema, you will also get the attachments at this address:

https://{accountname}.documents.azure.com/dbs/{databaseId}/colls/{collectionId}/docs/{docId}/attachments/{attachId}

And this is correct. If you do an HTTP GET to this address, you’ll get JSON back. Something like this:

{
    "contentType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "id": "1",
    "media": "/media/5VEpAMZpeasdfdfdAAAAAOFDl80B",
    "_rid": "5VEpAMZpeasdfdfdAAAAAOFDl80B=",
    "_self": "dbs\/5VEpAA==\/colls\/5VEpEWZpsQA=\/docs\/5VEpAMZpeasdfdfdAAAAAOFDl80B==\/attachments\/5VEpAMZpeasdfdfdAAAAAOFDl80B=",
    "_etag": "\"0000533e-0000-0000-0000-59079b8a0000\"",
    "_ts": 1493673393
}

It turns out that there are two ways you can do attachments in DocumentDB – managed and (surprise!) unmanaged. Unmanaged is when you don’t really attach anything, but just store a link to an external storage location. To be honest, I don’t see much sense in doing it that way – why bother with an extra resource just to keep external links? It would be much easier to make these links part of the actual document, so you don’t have to make another call to retrieve them.

Managed attachments are when you actually store the binaries in DocumentDB, and this is what I chose to use. Unfortunately, I had to discover for myself that it is not straightforward.

Managed Attachments

If you look at the JSON above, there is a line "media": "/media/5VEpAMZpeasdfdfdAAAAAOFDl80B". This is the link to the stored binary payload, and you need to query that URI to get the payload. So, starting from a document id, you’ll need two requests to get your hands on the attached binaries:

  1. Get the list of attachments for the document.
  2. Every attachment contains a link to its media – GET that link.

So this /media/{mediaId} resource is what is missing from the diagram above. Perhaps that is deliberate, so as not to confuse users. I’ll go with that.

Code Samples

Now to the code samples.

I’m using the NuGet package provided by Microsoft to do the requests for me:

Install-Package Microsoft.Azure.DocumentDB

Let’s start with the basics to get them out of the way:

private async Task<DocumentClient> GetClientAsync()
{
    if (documentClient == null)
    {
        var endpointUrl = configuration["DocumentDb:EndpointUri"];
        var primaryKey = configuration["DocumentDb:PrimaryKey"];

        documentClient = new DocumentClient(new Uri(endpointUrl), primaryKey);
        await documentClient.OpenAsync();
    }

    return documentClient;
}

where documentClient is a field on the containing class.

Now let’s create a document and attach a binary:

var myDoc = new { id = "42", Name = "Max", City="Aberdeen" }; // this is the document you are trying to save
var attachmentStream = File.OpenRead("c:/Path/To/File.pdf"); // this is the document stream you are attaching

var client = await GetClientAsync();
var createUrl = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);
Document document = await client.CreateDocumentAsync(createUrl, myDoc);

await client.CreateAttachmentAsync(document.SelfLink, attachmentStream, new MediaOptions()
    {
        ContentType = "application/pdf", // your application type
        Slug = "78", // this is actually attachment ID
    });

A few things are going on here. I create an anonymous class for the sake of the sample – use strongly typed models instead. Reading the attachment stream from the file system is also just for the sample; whatever source you have, you’ll need to provide an instance of Stream to upload an attachment.

Now, this line is worth paying attention to: var createUrl = UriFactory.CreateDocumentCollectionUri(DatabaseName, CollectionName);. The UriFactory class is not really a factory in the broad OOP sense – it does not produce other objects that do actual work. It gives you a set of helpers that create URI addresses based on the names of the things you use. In other words, it is mostly String.Format with templates.

The UriFactory.CreateDocumentCollectionUri method is going to give you a link in the format /dbs/{databaseId}/colls/{collectionId}/. If you look at CreateAttachmentUri, it works with this template: dbs/{dbId}/colls/{collectionId}/docs/{docId}/attachments/{attachmentId}.

The next line, await client.CreateDocumentAsync(createUrl, myDoc), is doing what you think it is doing – creating a document in Azure – no surprises here.

But when you look at the block of code with client.CreateAttachmentAsync(), not everything is obvious. document.SelfLink is a URI that links back to the document – it will be in the format dbs/{dbId}/colls/{collectionId}/docs/{docId}. The next big question is Slug – this actually works as the attachment ID. They might as well have called it Id, because this is what goes into the id field when you look at the storage.

Retrieving Attachments

Once we’ve put something into the storage, at some point in the future we’ll have to take it out. Let’s get our attached file back.

var client = await GetClientAsync();
var attachmentUri = UriFactory.CreateAttachmentUri(DatabaseName, CollectionName, docId, attachId);

var attachmentResponse = await client.ReadAttachmentAsync(attachmentUri);

var resourceMediaLink = attachmentResponse.Resource.MediaLink;

var mediaLinkResponse = await client.ReadMediaAsync(resourceMediaLink);

var contentType = mediaLinkResponse.ContentType;
var stream = mediaLinkResponse.Media;

Here we have some funky things going on again. The UriFactory.CreateAttachmentUri(DatabaseName, CollectionName, docId, attachId) part will give dbs/{dbId}/colls/{collectionId}/docs/{docId}/attachments/{attachmentId}, and GETting that address returns the same kind of JSON as at the start of the article. The value of attachmentResponse.Resource.MediaLink will look like /media/5VEpAMZpeasdfdfdAAAAAOFDl80B3, and this is the path from which to GET the actual attached binary – which is what we are doing in await client.ReadMediaAsync(resourceMediaLink). The rest should be self-explanatory.
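
If you then need the payload as raw bytes – say, to return it from a controller or to write it to disk – a minimal sketch could look like this (the file path is just an example):

// copy the attachment payload out of the media response stream
using (var memoryStream = new MemoryStream())
{
    await stream.CopyToAsync(memoryStream);
    var payloadBytes = memoryStream.ToArray();

    // for example, dump it to disk to check the round-trip worked
    File.WriteAllBytes("c:/tmp/downloaded-attachment.pdf", payloadBytes);
}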

Conclusion

To be honest, the lack of any explanation of this /media/{mediaId} resource in the documentation does not earn the team any kudos. And I feel the provided API is not straightforward and not easy to use – I had to decompile the API library and wander through what is actually happening inside it. There is also too much leakage of the implementation: I really could have lived without ever having to know about UriFactory.

Git can do a lot of things, but I’m too lazy to remember all the commands I need – some of them are like five words long. So I’ll put them here, so next time I don’t have to search for them.

Update list of remote branches:

git remote update origin --prune

Or set automatic pruning globally:

git config --global fetch.prune true    

Delete local and remote branch:

git push origin --delete <branch_name>
git branch -d <branch_name>

Push all branches to remote:

git push --all -u

Push this new branch to remote:

git push origin branchB:branchB

Add annotated tag:

git tag -a v1.4 -m "my version 1.4"

And push tags to remote

git push --follow-tags

To kill all the local changes do

git reset --hard HEAD

To make sure all the extra files are removed do

git clean -f -d