Upload files to Azure Blob Storage using Shared Access Signatures

I had a task of uploading files from a command line client to a private container in Azure Blob Storage. As part of the overall infrastructure I also had MVC4 and WebApi sites running on my Azure Web-Site installation.

The problem with uploading files was that the container was private. Previously, every time I used the container, I accessed it with the account key, but all the operations were contained on the server: a user submits a file through a form in a browser, the file is passed through the MVC web-site, and from there it is uploaded into Azure Blob Storage. And in MVC there is a very easy way to manage multiple uploaded files (see the sketch below).
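
For contrast, this is roughly what that server-side MVC flow looks like. A minimal sketch, not code from my app; the container field and the action name are assumptions:

[HttpPost]
public ActionResult Upload(IEnumerable<HttpPostedFileBase> files)
{
    // "container" is an assumed CloudBlobContainer field, initialised with the account key
    foreach (var file in files)
    {
        var blob = container.GetBlockBlobReference(file.FileName);
        blob.UploadFromStream(file.InputStream); // stream straight from the request into the blob
    }
    return RedirectToAction("Index");
}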

Now my task was similar, but the client was a command line application and the server side was WebApi. In WebApi it is not so easy to manage uploaded files, especially if you have multiple files to upload. You cannot (or at least I could not find a way to) simply redirect the upload stream from the controller into an Azure blob. You have to store the file somewhere and then copy it to Azure storage (see the sketch after this list). And there are a few problems with that:

  1. What happens if your instance runs out of hard drive space?
  2. In true cloud computing you should not rely on any local storage; that is bad practice. Think about load-balancing your requests: one request with one file upload goes to one server, another file is uploaded to a different server. The servers are doing the same thing, but consistency is lost and who knows what will happen to your upload procedure.
  3. Microsoft recommends using LocalResource for storing files on an instance. But Azure Web-Sites don’t have access to LocalResource, so you’ll have to use Path.GetTempPath() to store your files, and nowhere in the Azure documentation have I seen a recommendation to do that.
  4. How do you deal with VERY large uploads (4GB? 10GB?)? Save them to the HDD and then upload to blob storage? Good luck with those!
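
For reference, this is the kind of buffer-to-disk approach I wanted to avoid, sketched with WebApi's MultipartFormDataStreamProvider; error handling and the blob-copy step are omitted:

[HttpPost]
public async Task<HttpResponseMessage> Upload()
{
    if (!Request.Content.IsMimeMultipartContent())
    {
        throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
    }

    // files are buffered to local disk first, which is exactly the problem described above
    var provider = new MultipartFormDataStreamProvider(Path.GetTempPath());
    await Request.Content.ReadAsMultipartAsync(provider);

    foreach (var fileData in provider.FileData)
    {
        // fileData.LocalFileName points at a temp file that still has to be
        // copied into blob storage and then cleaned up
    }

    return Request.CreateResponse(HttpStatusCode.OK);
}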

After a lot of digging I’ve come to the conclusion that uploading directly to Azure is the best solution I can find. And here comes the permission granularity that Azure provides with Shared Access Signatures: you can create a special URL just for one blob, containing a signature with the permission level you define: read-only, write-only, and so on.

Back to my app. Now when the client executes, it requests WebApi on the server, saying “Hey, I have a file named blah.txt to be uploaded”. And the server goes “Sure, dude. Here is the url: https://blah.windows.blobs.blah/container/blob?signature=asdfasdfasd”. Then the client uploads the file. Once the upload is finished, the client goes “Hey, server, upload is done!”. Or something along these lines.

Give me the codez already!

Here is the client that sends the request to WebApi. I’m using the RestSharp library for HTTP requests:

// Asks the server for a blob URL and a Shared Access Signature for the given file
public BlobUriWithSas GetUploadUriWithSAS(String filename)
{
    var request = new RestRequest(Method.POST)
                      {
                          Resource = "project/GetBlobUrl",
                      };
    var data = new FileUploadDetailsRequest()
                   {
                       Filename = filename,
                   };
    request.AddObject(data);

    // apiKey is a field holding this client's authentication key
    return Execute<BlobUriWithSas>(request, apiKey);
}

private T Execute<T>(IRestRequest request, String apiKey) where T : new()
{
    var client = CreateClient(request, apiKey);
    var response = client.Execute<T>(request);
    // error processing is snipped
    return response.Data;
}
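
CreateClient is snipped in my code. For completeness, here is a minimal sketch of what it could look like; the base URL and the header name are assumptions, not my actual values:

private IRestClient CreateClient(IRestRequest request, String apiKey)
{
    var client = new RestClient("https://myapp.example.com/api/"); // base url is an assumption
    request.AddHeader("X-ApiKey", apiKey);                         // header name is an assumption
    return client;
}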

With the following request and response classes:

public class FileUploadDetailsRequest
{
    [Required]
    public String Filename { get; set; }

    // other properties snipped
}

public class BlobUriWithSas
{
    /// <summary>
    /// Uri of Azure storage with container name, e.g. http://127.0.0.1:10000/devstoreaccount1/containername
    /// </summary>
    public String BaseUri { get; set; }

    /// <summary>
    /// Name of the file, with version as part of the address, e.g. file.txt
    /// </summary>
    public String BlobName { get; set; }

    /// <summary>
    /// Shared Access Signature generated to access this blob on this container.
    /// </summary>
    public String Sas { get; set; }
}
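
Note that GetSharedAccessSignature (used in CreateSAS below) returns only the query string, starting with “?”, so on the client the three parts combine into a full writable blob URL like this:

// BaseUri:  http://127.0.0.1:10000/devstoreaccount1/containername
// BlobName: file.txt
// Sas:      ?sv=...&sp=w&sig=...
var writableBlobUri = new Uri(blobWithSas.BaseUri + "/" + blobWithSas.BlobName + blobWithSas.Sas);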

On the server side, the WebApi controller looks like this:

[HttpPost] // all my requests to WebApi are Post by convention
public BlobUriWithSas GetBlobUrl(FileUploadDetailsRequest id)
{
    var containerUri = containerService.GetContainerUri(); // finds required container and returns url for that container

    var azureBlobName = String.Format("{0}", id.Filename); // logic in filename processing is snipped

    var sas = containerService.CreateSAS(azureBlobName);

    return new BlobUriWithSas()
               {
                   Sas = sas,
                   BaseUri = containerUri.ToString(),
                   BlobName = azureBlobName,
               };
}
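
GetContainerUri is snipped as well; in essence it just resolves the container and returns its address. A rough sketch, where the container name "uploads" stands in for my real container-selection logic:

public Uri GetContainerUri()
{
    var container = blobClient.GetContainerReference("uploads"); // placeholder name
    return container.Uri;
}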


// Creates a Shared Access Signature with write-only permission, valid only for the next 10 minutes.
public string CreateSAS(string blobName)
{
    var container = blobClient.GetContainerReference(/* logic for container creation is snipped */);

    // Create the container if it doesn't already exist
    container.CreateIfNotExists();

    var blob = container.GetBlockBlobReference(blobName);

    var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(10),
    });

    return sas;
}
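
The blobClient above is a CloudBlobClient field. For reference, it is created from the storage account roughly like this; the connection string setting name is an assumption:

var account = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString")); // setting name is an assumption
var blobClient = account.CreateCloudBlobClient();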

So now our client has a reply from our WebApi with the blob URL and SAS. We can now start uploading on the client:

public void UploadFile(string fullFilePath, BlobUriWithSas blobWithSas)
{
    var fileName = Path.GetFileName(fullFilePath);

    // the SAS acts as the credentials here; no account key is present on the client
    var container = new CloudBlobContainer(new Uri(blobWithSas.BaseUri + blobWithSas.Sas));

    var blob = container.GetBlockBlobReference(blobWithSas.BlobName);
    blob.Metadata["FileName"] = fileName;
    blob.Metadata["DateCreated"] = DateTime.UtcNow.ToString("dd/MM/yyyy HH:mm:ss");

    using (var stream = new FileStream(fullFilePath, FileMode.Open))
    {
        blob.UploadFromStream(stream);
    }
    Console.WriteLine("Uploaded file {0}", fileName);
}
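
And finally the “Hey, server, upload is done!” call from earlier. Mine is snipped, but it is just another RestSharp request; the endpoint name here is an assumption:

public void NotifyUploadComplete(String blobName)
{
    var request = new RestRequest(Method.POST)
                      {
                          Resource = "project/UploadComplete", // endpoint name is an assumption
                      };
    request.AddParameter("BlobName", blobName);

    var client = CreateClient(request, apiKey);
    client.Execute(request); // error processing is snipped, as above
}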

And we are done. My actual logic is slightly more complex: I add a lot of meta-data to the files, and when creating the SAS I also have some logic that decides which container and which folder everything is placed in. Also I have authentication happening. All of that is snipped for ease of reading.