How to test code for accessing Azure Storage REST Api

Update: There is an updated implementation of this code: see this blog post

One of my applications has a feature where it is given a URL with a Shared Access Signature to Azure Blob Storage, and it then uploads files to the storage via the REST API. You can do that with the Azure Storage client libraries, and originally I had this functionality implemented with them. But now I’m slimming down the application (it is a small command-line app) and getting rid of unnecessary dependencies, and for the sake of one upload method I don’t want to tie myself to a mass of extra DLLs. So I’m replacing the libraries with a plain REST API call.

I have spent some time trying to test this functionality. You may say that this is not a unit test. Yes, it is not a unit test. Other questions? You may also say that you shouldn’t test this kind of thing at all: wrap a class around the call and leave it untested. That is what I have done in the past, but that particular piece of code caused endless pain, because I got it all wrong and it was not covered by tests.

A great advantage of Azure is that when you want to test the REST API, there is the Azure Storage Emulator, so you don’t have to make calls to the real service. Before you start any kind of testing you need to start the emulator. Here is how you can do that programmatically:

using System.Diagnostics;
using System.Linq;

public static class StorageEmulator
{
    public static void Start()
    {
        // if the emulator is already running, there is nothing to do
        if (Process.GetProcesses().Any(process => process.ProcessName.Contains("DSServiceLDB")))
        {
            return;
        }

        // the environment variable does not always work, so the path is hardcoded below
        //var command = Environment.GetEnvironmentVariable("PROGRAMFILES") + @"\Microsoft SDKs\Windows Azure\Emulator\csrun.exe";
        const string command = @"c:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe";

        using (var process = Process.Start(command, "/devstore:start"))
        {
            process.WaitForExit();
        }
    }
}

Here you can see that I first check whether the emulator is already running. That speeds things up a lot, because once the emulator has started, it stays running. On our build server I set up a Scheduled Task that runs every 15 minutes: it checks that the emulator is up and running, and starts it otherwise. This also speeds up builds triggered on check-in.
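
If you want the Scheduled Task to reuse exactly this logic, a minimal console wrapper will do. This is just a sketch (EmulatorKeepAlive is a name I made up for illustration); it relies on the StorageEmulator class above returning early when the emulator is already up:

    // hypothetical console app for the Scheduled Task to run every 15 minutes
    public static class EmulatorKeepAlive
    {
        public static void Main()
        {
            // returns immediately when DSServiceLDB is already running,
            // otherwise launches the emulator via csrun
            StorageEmulator.Start();
        }
    }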

One of the rules of automated testing is that tests must be independent and repeatable. So every time I run a test against blob storage (or a queue) I create a new container (or queue), and after the test is finished I delete it. Each test run gets a new unique container name, so if two storage tests are executed on two parallel threads, they do not clash with each other.

Also, for uploading I actually need real files from the file system, and I find Path.GetTempFileName() a great fit for this purpose: it creates a uniquely named, zero-byte temporary file on disk and returns the full path of that file. Many people are against touching the file system in unit tests. I know all that; I’ve read many discussions about it. But just because somebody said so, I’m not going to wrap another abstraction around System.IO.File and leave that layer untested. I see no benefit in that abstraction just now. I’m not going to replace the file system with some other implementation. EVER. I’ve also found that even a system with 100% unit-test coverage can fail at the seams, because nobody tested the seams between components. So I’ll test the whole stack here.

I’m not using the Microsoft.WindowsAzure.Storage library in my production code, but I am using it in my tests, because I need to verify that the files have been uploaded, and that library is proven to work. Extra dependencies in tests are OK, as long as they don’t spill into production code.

This is my plumbing for the tests:

    private String containerName;
    private String tempFile;
    private byte[] uploadedBytes;

    [SetUp]
    public void SetUp()
    {
        containerName = Guid.NewGuid().ToString().ToLower();

        tempFile = Path.GetTempFileName();

        // tempFile is created empty by default, so we need to write some data into it;
        // we are going to check the contents of the file, so it must differ for every test
        using (var randomNumberGenerator = new RNGCryptoServiceProvider())
        {
            uploadedBytes = new byte[128];
            randomNumberGenerator.GetBytes(uploadedBytes);
        }
        File.WriteAllBytes(tempFile, uploadedBytes);
    }

    [TearDown]
    public void TearDown()
    {
        // delete the temp file, so it does not clog Temp folder
        if (File.Exists(tempFile))
        {
            File.Delete(tempFile);
        }

        // delete the container
        var container = GetContainerReference();
        container.DeleteIfExists();
    }


    private CloudBlobContainer GetContainerReference()
    {
        const string connectionString = "UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://127.0.0.1";

        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference(containerName);

        container.CreateIfNotExists();
        return container;
    }


    private string GetSasUrl(String filePath)
    {
        var container = GetContainerReference();

        var blob = container.GetBlockBlobReference(filePath);

        var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10),
        });

        // the SAS token returned by GetSharedAccessSignature starts with "?",
        // so it can be appended directly to the blob URI
        return String.Format("{0}{1}", blob.Uri, sas);
    }

Requirements

Now it is time to define the requirements for my uploading functionality.

  1. Given a URL with a SAS, the method should be able to upload a file to an Azure blob.
  2. I would like to be able to store metadata for the blob.
  3. The uploaded file must not be modified in the process. (One of the bugs I had was that multi-file upload headers were prepended to the files, rendering binary files unusable.)
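
Taken together, these requirements suggest a method shape along these lines (just a sketch of the contract; the full implementation follows after the tests):

    public class AzureStorageApi
    {
        // uploads the file at fullFilePath to the blob addressed by the SAS URL,
        // optionally attaching metadata; the file on disk must be left untouched
        public void UploadFile(String fullFilePath, String blobSasUri, Dictionary<String, String> metadata = null)
        {
            // implementation shown at the end of the post
        }
    }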

In the spirit of TDD I’ll give you the tests first, then show the implementation.

1. Test to check that the file is uploaded

    [Test]
    public void UploadFile_GivenSas_UploadsFile()
    {
        //Arrange
        var fileName = Path.GetFileName(tempFile);
        var sasUri = GetSasUrl(fileName);

        var sut = new AzureStorageApi();
        // Act
        sut.UploadFile(tempFile, sasUri);

        // Assert
        var container = GetContainerReference();
        var blob = container.GetBlobReferenceFromServer(fileName);
        Assert.IsTrue(blob.Exists());
    }

2. Test to check that the provided metadata is saved correctly

    [Test]
    public void UploadFile_GivenSas_SavesMetadata()
    {
        //Arrange
        var fileName = Path.GetFileName(tempFile);
        var sasUri = GetSasUrl(fileName);
        var metadata = new Dictionary<String, String>()
                         {
                             {"Key1", Guid.NewGuid().ToString()},
                             {"Key2", Guid.NewGuid().ToString()},
                         };

        var sut = new AzureStorageApi();
        // Act
        sut.UploadFile(tempFile, sasUri, metadata);

        // Assert
        var container = GetContainerReference();
        var blob = container.GetBlobReferenceFromServer(fileName);

        blob.FetchAttributes();
        Assert.AreEqual(metadata, blob.Metadata);
    }

3. Test to check that the file is not modified during upload

    [Test]
    public void UploadFile_GivenFile_DoesNotModifyFile()
    {
        //Arrange
        var fileName = Path.GetFileName(tempFile);
        var sasUri = GetSasUrl(fileName);

        var sut = new AzureStorageApi();
        // Act
        sut.UploadFile(tempFile, sasUri);

        // Assert
        var container = GetContainerReference();
        var blob = container.GetBlobReferenceFromServer(fileName);
        blob.FetchAttributes();

        var downloadedBytes = new byte[blob.Properties.Length];
        blob.DownloadToByteArray(downloadedBytes, 0);

        // compare the raw bytes; decoding random bytes as UTF-8 can turn different
        // byte sequences into identical strings and hide a corrupted upload
        Assert.AreEqual(uploadedBytes, downloadedBytes);
    }

This set of tests served me quite well when I decided to replace the implementation. And here is the implementation:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;

public class AzureStorageApi
{
    public void UploadFile(String fullFilePath, String blobSasUri, Dictionary<String, String> metadata = null)
    {
        using (var client = new HttpClient())
        using (var fileStream = File.OpenRead(fullFilePath))
        {
            HttpContent content = new StreamContent(fileStream);
            // required header for the Put Blob REST operation
            content.Headers.Add("x-ms-blob-type", "BlockBlob");
            // metadata travels as x-ms-meta-* headers
            foreach (var pair in metadata ?? new Dictionary<string, string>())
            {
                content.Headers.Add("x-ms-meta-" + pair.Key, pair.Value);
            }
            var response = client.PutAsync(blobSasUri, content).Result;
            if (response.IsSuccessStatusCode)
            {
                return;
            }
            var exceptionMessage = String.Format("Unable to finish request. Server returned status: {0}; {1}", response.StatusCode, response.ReasonPhrase);
            throw new ApplicationException(exceptionMessage);
        }
    }
}
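
For completeness, calling the method looks something like this (the SAS URL and file path below are made up for illustration; in the real application the SAS URL is handed to the app):

    var metadata = new Dictionary<String, String> { { "Source", "MyCommandLineApp" } };

    // hypothetical SAS URL; in practice it is provided to the application
    var sasUrl = "https://myaccount.blob.core.windows.net/backups/report.bin?sv=...&sig=...";

    new AzureStorageApi().UploadFile(@"C:\Temp\report.bin", sasUrl, metadata);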

You can check out the full code listing in this gist.