Auto updating webrole

by ingvar 20. februar 2012 21:17

While working on a cool Windows Azure deployment package for Composite C1 I did a lot of deployments. The stuff I did, like reconfiguring IIS, needed testing on an actual Azure hosted service and not just the emulator. Always trying to optimize, I thought of ways to get around all this redeploying and came up with the idea of making it possible to change the WebRole behavior at run time. Then I could just “inject” a new WebRole and start testing without having to wait for a new deployment. After some fiddling around I found a really nice solution and will present it here!

Solution description

The solution I came up with was to dynamically load an assembly into a newly created AppDomain and call methods on an instance of a class from that assembly.
This is a fairly simple task and all the code needed is shown here:

/* Creating domain */
AppDomainSetup domainSetup = new AppDomainSetup();
domainSetup.PrivateBinPath = folder;
domainSetup.ApplicationBase = folder;
AppDomain appDomain = AppDomain.CreateDomain("MyAssembly", null, domainSetup);

/* Creating remote proxy object */
IDynamicWebRole dynamicWebRole =
   (IDynamicWebRole)appDomain.CreateInstanceAndUnwrap(
      "MyAssembly", 
      "MyAssembly.MyDynamicWebRole, MyAssembly");

/* Calling method */
dynamicWebRole.Run();

 

Common interface: IDynamicWebRole

The code for the IDynamicWebRole interface lives in its own little assembly, and the code shown here can be changed as you wish.

public interface IDynamicWebRole
{
    void Run();
}

There is no actual need for an interface, but both the Azure WebRole project and the assembly project that contains the actual IDynamicWebRole implementation need to share a type. That is why I created this interface and put it in its own assembly. The assemblies/projects in play are shown in this figure:

Now it's time to look at the more interesting code in the WebRole. That is where all the magic happens!

The WebRole implementation

The WebRole implementation is rather complex. The WebRole needs to periodically look for new versions of the IDynamicWebRole implementation and, when there is a new version, download it, start a new AppDomain and create a remote instance of the IDynamicWebRole implementation. Here is all the code for the WebRole. After the code, I will go into more detail on how this works.

public class WebRole : RoleEntryPoint
{
    /* Initialize these */
    private readonly CloudBlobClient _client;
    private readonly string _assemblyBlobPath = 
        "mycontainer/MyAssembly.dll";
    private readonly string _dynamicWebRoleHandlerTypeFullName = 
        "MyAssembly.MyDynamicWebRole, MyAssembly";

    private AppDomain _appDomain = null;
    private IDynamicWebRole _dynamicWebRole;

    private volatile bool _keepRunning = true;
    private DateTime _lastModifiedUtc = DateTime.MinValue;


    public override void Run()
    {
       int tempFolderCounter = 0;

       while (_keepRunning)
       {
           CloudBlob assemblyBlob = 
               _client.GetBlobReference(_assemblyBlobPath);
           /* FetchAttributes is needed to populate LastModifiedUtc */
           assemblyBlob.FetchAttributes();
           DateTime lastModified = assemblyBlob.Properties.LastModifiedUtc;

           if (lastModified > _lastModifiedUtc)
           {
               /* Stop running appdomain */
               if (_appDomain != null)
               {
                   AppDomain.Unload(_appDomain);
                   _appDomain = null;
               }

               /* Create temp folder */
               string folder = Path.Combine(
                   AppDomain.CurrentDomain.BaseDirectory, 
                   tempFolderCounter.ToString());
               tempFolderCounter++;
               Directory.CreateDirectory(folder);

               /* Copy needed assemblies to the folder */
               File.Copy("DynamicWebRole.dll", 
                   Path.Combine(folder, "DynamicWebRole.dll"), true);

               File.Copy("Microsoft.WindowsAzure.StorageClient.dll", 
                   Path.Combine(folder, 
                       "Microsoft.WindowsAzure.StorageClient.dll"), true);

               /* Download from blob */
               string filename = 
                   _assemblyBlobPath.Remove(0, _assemblyBlobPath.LastIndexOf('/') + 1);
               string localPath = Path.Combine(folder, filename);
               assemblyBlob.DownloadToFile(localPath);

               string assemblyFileName = 
                   Path.GetFileNameWithoutExtension(localPath);

               /* Create new appdomain */
               AppDomainSetup domainSetup = new AppDomainSetup();
               domainSetup.PrivateBinPath = folder;
               domainSetup.ApplicationBase = folder;
               _appDomain = 
                  AppDomain.CreateDomain(assemblyFileName, null, domainSetup);

               /* Create IDynamicWebRole proxy instance for remoting */
               _dynamicWebRole = 
                   (IDynamicWebRole)_appDomain.CreateInstanceAndUnwrap(
                       assemblyFileName, _dynamicWebRoleHandlerTypeFullName);
                       
               /* Start the dynamic webrole in other thread */
               /* so we can continue testing for new assemblies */
               /* Thread will end when the appdomain is unloaded by us */
               new Thread(() => _dynamicWebRole.Run()).Start();

               _lastModifiedUtc = lastModified;
           }
           
           Thread.Sleep(30 * 1000);
       }
    }


    public override void OnStop()
    {
       _keepRunning = false;
    }
}

I have omitted all the exception handling to make the code more readable and easier to understand.

IDynamicWebRole implementation

The last thing we need is to implement the IDynamicWebRole interface and put the implementation in its own assembly. There are two important things to remember for the remoting to work: the class must inherit from MarshalByRefObject and it must override the InitializeLifetimeService method. This is shown in the following code:

public class MyDynamicWebRole : MarshalByRefObject, IDynamicWebRole
{
    public void Run()
    {
       /* Put your webrole implementation here */
    }

    public override object InitializeLifetimeService()
    {
       /* This is needed so the proxy doesn't get recycled */
       return null;
    }
}

That's all there is to it, enjoy! :)

Tags:

.NET | Azure | C#

Abstract for Danish Developer Conference 2012

by ingvar 18. januar 2012 10:59

Here is my abstract for the speech I'm going to give at Danish Developer Conference 2012.

Danish version:

Skalering og monitorering på Azure - Abstract

Den 14. oktober sidste år blev Kulturnattens officielle website besøgt af over 60.000 unikke besøgende og over 500.000 side visninger på et døgn, mens selve Kulturnatten fandt sted. På trods af den massive trafik lykkedes det at lave et website, som performede godt ved hjælp af Windows Azure. Azure’s udskalering kombineret med vores monitorering gjorde os i stand til at skrue op eller ned for antallet af maskiner i løbet af dagen, så siden altid performede, som den skulle.

Med nye cloud services som Windows Azure er det blev meget billigere at håndtere mange besøgende på et website ved hjælp af udskalering på hardware. Det skaber dog et helt nyt sæt udfordringer, bland andet at skrive website logikken, så det kan håndtere at køre på flere maskiner på samme tid.

Med udgangspunkt i Composite’s erfaringer med lanceringen af Kulturnatten.dk vil jeg i min præsentation kigge på op- vs. udskalering, og hvilke klassiske softwareproblemer der er ved udskalering. Jeg vil også komme ind på de out-of-the-box løsninger, der er i Windows Azure, så som Content delivery network (CDN) og traffic manageren, der begge er gode services, man kan bruge ved udskalering.
Til sidst vil jeg runde monitorering, som gør det muligt at skrue op og ned for antallet af maskiner, der håndterer et website. God monitorering gør det muligt at handle i tide og undgå at sitet går ned. Men mindst lige så interessant muliggør det også, at man selv kan justere antallet af servere løbende og derved spare penge i sidste ende.

 

 

English version:

Scaling and monitoring on Azure - Abstract

On 14 October last year, the official Culture Night website (kulturnatten.dk) was visited by over 60,000 unique visitors and served over 500,000 page views within the 24 hours the Culture Night took place. Despite the massive traffic we managed to create a website that performed very well by using Windows Azure. Azure's scale-out combined with our custom monitoring enabled us to increase or decrease the number of machines during the day, so the site kept performing as it should.

With new cloud services like Windows Azure, it has become much cheaper to handle huge amounts of visitors to a website by scaling out the hardware. But it creates an entirely new set of challenges, among them writing the website logic so it can run on multiple machines simultaneously.

Based on Composite's experience with the launch of kulturnatten.dk, my presentation will look at scaling up vs. scaling out and the classic software problems that come with scaling out. I will also touch on the out-of-the-box solutions in Windows Azure, such as the Content Delivery Network (CDN) and the Traffic Manager, both of which are good services to use when scaling out.
Finally I will talk about monitoring, which makes it possible to adjust the number of machines that handle a website in an intelligent way. Good monitoring makes it possible to act in time and avoid the site going down. But just as interesting, it also makes it possible to turn the number of running machines down and thereby save money in the end.

Tags:

.NET | Azure | C#

Composite C1 non-live edit multi instance Windows Azure deployment

by ingvar 7. august 2011 19:53


Introduction

This post is a technical description of how we made a non-live editing, multi datacenter, multi instance Composite C1 deployment. A very good overview of the setup can be found here. The setup can be split into three parts. The first one is the Windows Azure web role. The second one is the Composite C1 “Windows Azure Publisher” package. And the third part is the Windows Azure blob storage. The latter is the common resource shared between the first two and, except for its usage, is self-explanatory. In the rest of this blog post I will describe the first two parts of this setup in more technical detail. This setup also supports the Windows Azure Traffic Manager for handling geo DNS and failover, which is a really nice feature to put on top!

The non-live edit web role

The Windows Azure deployment package contains an empty website and a web role. The configuration for the package contains a blob connection string, a name for the website blob container, a name for the work blob container and a display name used by the C1 Windows Azure Publisher package. The web role periodically checks the timestamp of a specific blob in the named work blob container. If this specific blob has changed since the last synchronization, the web role will start a new synchronization from the named website blob container to the local file system. It is an optimized synchronization that I have described in an earlier blog post: How to do a fast recursive local folder to/from azure blob storage synchronization.
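
To make the check concrete, here is a minimal sketch of the periodic timestamp test, using the same StorageClient API as the rest of this post. The container name, trigger blob name and configuration setting name are my own assumptions, not the actual names used by the package:

/* Kept as a field in the web role in a real implementation */
DateTime lastSynchronizationUtc = DateTime.MinValue;

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("BlobConnectionString"));
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

CloudBlob triggerBlob = client.GetBlobReference("workcontainer/publish-trigger");
triggerBlob.FetchAttributes(); /* Needed to populate Properties.LastModifiedUtc */

if (triggerBlob.Properties.LastModifiedUtc > lastSynchronizationUtc)
{
    /* A new publish has happened - synchronize the website blob container
       to a local staging folder and remember the new timestamp */
    lastSynchronizationUtc = triggerBlob.Properties.LastModifiedUtc;
}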

Because it is time consuming to download new files from the blob storage, the synchronization is done to a local folder and not to the live website. This minimizes the offline time of the website. All the paths of downloaded and deleted files are kept in memory. When the synchronization is done, the live website is taken offline with app_offline.htm, all downloaded files are copied to the website and all deleted files are also deleted from the website. After this, the website is put back online. All this extra work is done to keep the offline time as low as possible.
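
A minimal sketch of that offline/copy/online swap could look like the following. The folder paths are assumed, and the two file lists come from the synchronization step described above:

private void ApplyUpdate(string websiteRoot, string stagingFolder,
    IEnumerable<string> downloadedFiles, IEnumerable<string> deletedFiles)
{
    /* Taking the site offline while files are swapped */
    string appOffline = Path.Combine(websiteRoot, "app_offline.htm");
    File.WriteAllText(appOffline, "<html><body>Updating...</body></html>");
    try
    {
        foreach (string relativePath in downloadedFiles)
        {
            string target = Path.Combine(websiteRoot, relativePath);
            Directory.CreateDirectory(Path.GetDirectoryName(target));
            File.Copy(Path.Combine(stagingFolder, relativePath), target, true);
        }

        foreach (string relativePath in deletedFiles)
        {
            string target = Path.Combine(websiteRoot, relativePath);
            if (File.Exists(target))
            {
                File.Delete(target);
            }
        }
    }
    finally
    {
        File.Delete(appOffline); /* Puts the website back online */
    }
}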

The web role writes its current status (Initialized, Ready, Updating, Stopped, etc.) in an XML file located in the named work blob container. During the synchronization (updating), the web role also includes the progress in the XML file. This XML is read by the local C1's Windows Azure Publisher and displayed to the user. This is a really nice feature because a web role located in a datacenter on the other side of the planet will take longer to finish its synchronization, and this feature gives the user a full overview of the progress of each web role. The movie below shows how this feature looks.
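
Here is a small sketch of how such a status blob could be written. The element names, container name and blob name are my assumptions and not the actual format used by the package:

private void WriteStatus(CloudBlobClient client, string status, int progressPercent)
{
    CloudBlobContainer workContainer = client.GetContainerReference("workcontainer");
    workContainer.CreateIfNotExist();

    string xml = new XElement("RoleStatus",
        new XElement("InstanceId", RoleEnvironment.CurrentRoleInstance.Id),
        new XElement("Status", status),            /* Initialized, Ready, Updating, ... */
        new XElement("Progress", progressPercent)
    ).ToString();

    CloudBlob statusBlob = workContainer.GetBlobReference(
        RoleEnvironment.CurrentRoleInstance.Id + ".status.xml");
    statusBlob.UploadText(xml);
}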

All that is needed to start a new Azure instance with this is to create a new blob storage account (or use an existing one) and modify the blob connection string in the package configuration. This is also shown in the movie below.

Composite C1 Windows Azure Publisher

This Composite C1 package adds a new feature to an existing/new C1 web site. The package is installed on a local C1 instance and all development and future editing is done on this local C1 website. The package adds a way of configuring the Windows Azure setup and a way of publishing the current version of the website.

A configuration consists of the blob storage name and access key. It also contains two blob container names. One container is used to upload all files in the website and the other container is used for very simple communication between the web roles and the local C1 installation.
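
In code, that configuration boils down to something like this small sketch. The class and property names are my own and not the actual ones used by the package:

public class AzurePublisherConfiguration
{
    public string StorageAccountName { get; set; }   /* Blob storage name */
    public string StorageAccountKey { get; set; }    /* Primary or secondary access key */
    public string WebsiteContainerName { get; set; } /* All website files are uploaded here */
    public string WorkContainerName { get; set; }    /* Trigger/status blobs live here */
}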

After a valid configuration has been made, it is possible to publish the website. The publish process is a local folder to blob container synchronization with the same optimization as the one I have described in this earlier blog post: How to do a fast recursive local folder to/from azure blob storage synchronization. Before the synchronization is started, the C1 application is halted. This is done to ensure that no changes will be made by the user while the synchronization is in progress. The first synchronization will obviously take some time because all files have to be uploaded to the blob storage, ranging from 5 to 15 minutes or even more, depending on the size of the website. Consecutive synchronizations are much faster, and if no large files like movies are added, a consecutive synchronization takes less than 1 minute!

The Windows Azure Publisher package also installs a feature that gives the user an overview of all the deployed web roles and their status. When a new publish is finished, all the web roles will start synchronizing, and the current progress of each web role synchronization is also displayed in the overview. The movie below also shows this feature.

Here is a movie that shows how to configure and deploy the Azure package, adding the Windows Azure Traffic Manager, installing and configuring the C1 Azure Publisher package, the synchronization process and overview, and finally the result.

 

 

Tags:

.NET | Azure | Blob | C#

Why is MD5 not part of ListBlobs result?

by ingvar 26. juni 2011 20:13

Edit 6th july:

It turns out that this is a bug in the Windows Azure Client Library. Read more here.

 

Original version:

For some strange reason the ContentMD5 blob property is only populated if you call FetchAttributes on the blob. This means that you cannot obtain the value of the ContentMD5 blob property by calling ListBlobs, with any parameter settings. If you have to call FetchAttributes to obtain the value, it is not feasible to use on a large set of blobs, because the accumulated request time of all those FetchAttributes calls becomes very long. This means that it cannot be used in an optimal way when doing something like the fast folder synchronization I wrote about in this blog post.

The ContentMD5 blob property is never set by the blob storage itself. And if you try to put any value in it that is not a correct 128-bit MD5 hash of the content, you will get an exception. So it has limited usage.

I have made some code to illustrate the missing population of the ContentMD5 blob property when using the ListBlobs method.

Here is the output of running the code:

GetBlobReference without FetchAttributes: Not populated
GetBlobReference with FetchAttributes: zhFORQHS9OLc6j4XtUbzOQ==
ListBlobs with BlobListingDetails.None: Not populated
ListBlobs with BlobListingDetails.Metadata: Not populated
ListBlobs with BlobListingDetails.All: Not populated

And here is the code:

CloudStorageAccount storageAccount = /* Initialize this */;
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

CloudBlobContainer container = client.GetContainerReference("mytest");
container.CreateIfNotExist();

const string blobContent = "This is a test";

/* Compute 128 MD5 hash of content */
MD5 md5 = new MD5CryptoServiceProvider();
byte[] hashBytes = md5.ComputeHash(Encoding.UTF8.GetBytes(blobContent));

/* Get the blob reference and upload it with MD5 hash */
CloudBlob blob = container.GetBlobReference("MyBlob1.txt");
blob.Properties.ContentMD5 = Convert.ToBase64String(hashBytes);
blob.UploadText(blobContent);



CloudBlob blob1 = container.GetBlobReference("MyBlob1.txt");

/* Not populated - this is expected */
Console.WriteLine("GetBlobReference without FetchAttributes: " + 
    (blob1.Properties.ContentMD5 ?? "Not populated"));



blob1.FetchAttributes();

/* Populated - this is expected */
Console.WriteLine("GetBlobReference with FetchAttributes: " + 
    (blob1.Properties.ContentMD5 ?? "Not populated"));



CloudBlob blob2 = container.ListBlobs().OfType<CloudBlob>().Single();
            
/* Not populated - this is NOT expected */
Console.WriteLine("ListBlobs with BlobListingDetails.None: " + 
    (blob2.Properties.ContentMD5 ?? "Not populated"));



CloudBlob blob3 = container.ListBlobs(new BlobRequestOptions
    { BlobListingDetails = BlobListingDetails.Metadata }).
    OfType<CloudBlob>().Single();

/* Not populated - this is NOT expected */
Console.WriteLine("ListBlobs with BlobListingDetails.Metadata: " + 
    (blob3.Properties.ContentMD5 ?? "Not populated"));



CloudBlob blob4 = container.ListBlobs(new BlobRequestOptions
    {
        UseFlatBlobListing = true,
        BlobListingDetails = BlobListingDetails.All
    }).
    OfType<CloudBlob>().Single();

/* Not populated - this is NOT expected */
Console.WriteLine("ListBlobs with BlobListingDetails.All: " + 
    (blob4.Properties.ContentMD5 ?? "Not populated"));

Tags:

.NET | Azure | Blob | C#

How to do a fast recursive local folder to/from azure blob storage synchronization

by ingvar 23. juni 2011 20:49

Introduction

In this blog post I will describe how to synchronize a local file system folder against a Windows Azure blob container/folder. There are many ways to do this, some faster than others. My way of doing this is especially fast if only a few files have been added/updated/deleted. If many are added/updated/deleted it is still fast, but uploading/downloading files to/from the blob storage will be the main time factor. The algorithm I'm going to describe was developed by me when I was implementing a non-live-editing Windows Azure deployment model for Composite C1. You can read more about the setup here. I will do a more technical blog post about this non-live-editing setup later.

Breakdown of the problem

The algorithm should only do one-way synchronization, meaning that it either updates the local file system folder to match what is stored in the blob container, or updates the blob container to match what is stored in the local folder. So I will split it into two parts: one for synchronizing to the blob storage and one for synchronizing from the blob storage.

Because the blob storage is located on another computer, we can't compare the timestamps of the blobs against timestamps of local files. The reason for this is that the clocks of the two computers (the blob storage and our local machine) will never be 100% in sync. What we can do is use file hashes like MD5. The only problem with file hashes is that they are expensive to calculate, so we have to do this as little as possible. We can accomplish this by saving the MD5 hash in the blob's metadata and caching the hash for the local file in memory. Even if we convert the hash value to a base64 string (24 characters per hash), holding the hashes in memory for 10,000 files will cost less than 0.3 megabytes. So this scales fairly okay.

When working with the Windows Azure blob storage, we have to take care not to do lots of requests. Especially we should take care not to do a request for every file/blob we process. Each request is likely to take more than 50 ms, and if we have 10,000 files to process, that alone will cost more than 8 minutes! So we should never use GetBlobReference/FetchAttributes to see if a blob exists and/or get its MD5 hash. But this is no problem, because we can use the ListBlobs method with the right options.

Semi Pseudo Algorithms

Let's start with some semi pseudo code. I have left out some methods and properties, but they should be self-explanatory enough to give an overall understanding of the algorithms. I did this so it would be easier to read and understand. Further down I'll show the full C# code for these algorithms.

You might wonder why I store the MD5 hash value in the blob's metadata and not in the ContentMD5 property of the blob. The reason is that ContentMD5 is only populated with a value if FetchAttributes is called on the blob, which would make the algorithm perform really badly. I'll go into the odd behavior of the ContentMD5 blob property in a later blog post. Edit: Read it here.

Download the full source here: BlobSync.cs (7.80 kb).

Synchronizing to the blob

public void SynchronizeToBlob()
{
    DateTime lastSync = LastSyncTime;
    DateTime newLastSyncTime = DateTime.Now;

    IEnumerable<string> allFilesInStartFolder = GetAllFilesInStartFolder();

    var blobOptions = new BlobRequestOptions { UseFlatBlobListing = true };
            
    /* This is the only request to the blob storage that we will do */
    /* except when we have to upload or delete to/from the blob */
    var blobs =
        Container.ListBlobs(blobOptions).
        OfType<CloudBlob>().
        Select(b => new
        {
            Blob = b,
            LocalPath = GetLocalPath(b)
        }).
        ToList();
    /* We use ToList here to avoid multiple requests when enumerating */

    foreach (string filePath in allFilesInStartFolder)
    {
        string fileHash = GetFileHashFromCache(filePath, lastSync);

        /* Checking for added files */
        var blob = blobs.Where(b => b.LocalPath == filePath).SingleOrDefault();
        if (blob == null) // Does not exist
        {
            UploadToBlobStorage(filePath, fileHash);
            continue; /* Newly added file - nothing more to check */
        }

        /* Checking for changed files */
        if (fileHash != blob.Blob.Metadata["Hash"])
        {
            UploadToBlobStorage(filePath, fileHash, blob.Blob);
        }
    }

    /* Check for deleted files */
    foreach (var blob in blobs)
    {
        bool exists = allFilesInStartFolder.Where(f => blob.LocalPath == f).Any();

        if (!exists)
        {
            DeleteBlob(blob.Blob);
        }
    }

    LastSyncTime = newLastSyncTime;
}

 

Synchronizing from the blob

public void SynchronizeFromBlob()
{
    IEnumerable<string> allFilesInStartFolder = GetAllFilesInStartFolder();

    var blobOptions = new BlobRequestOptions
    {
        UseFlatBlobListing = true,
        BlobListingDetails = BlobListingDetails.Metadata
    };

    /* This is the only request to the blob storage that we will do */
    /* except when we have to upload or delete to/from the blob */
    var blobs =
        Container.ListBlobs(blobOptions).
        OfType<CloudBlob>().
        Select(b => new
        {
            Blob = b,
            LocalPath = GetLocalPath(b)
        }).
        ToList();
    /* We use ToList here to avoid multiple requests when enumerating */

    foreach (var blob in blobs)
    {
        /* Checking for added files */
        if (!File.Exists(blob.LocalPath))
        {
            DownloadFromBlobStorage(blob.Blob, blob.LocalPath);
        }

        /* Checking for changed files */
        string fileHash = GetFileHashFromCache(blob.LocalPath);
        if (fileHash != blob.Blob.Metadata["Hash"])
        {
            DownloadFromBlobStorage(blob.Blob, blob.LocalPath);
            UpdateFileHash(blob.LocalPath, blob.Blob.Metadata["Hash"]);
        }
    }

    /* Checking for deleted files */
    foreach (string filePath in allFilesInStartFolder)
    {
        bool exists = blobs.Where(b => b.LocalPath == filePath).Any();
        if (!exists)
        {
            File.Delete(filePath);
        }
    }
}

The rest of the code

In this section I will go through the missing methods and properties from the semi pseudo algorithms above. Most of them are pretty simple and self explaining, but a few of them are more complex and needs more attention. 

LastSyncTime and Container

These are just get/set properties. Container should be initialized with the blob container that you wish to synchronize to/from. LastSyncTime is initialized with DateTime.MinValue. LocalFolder points to the local directory to synchronize to/from.

private DateTime LastSyncTime { get; set; }      
private CloudBlobContainer Container { get; set; }
/* Ends with a \ */
private string LocalFolder { get; set; }

UploadToBlobStorage

Simply adds the file hash to the blobs metadata and uploads the file.

private void UploadToBlobStorage(string filePath, string fileHash)
{
    string blobPath = filePath.Remove(0, LocalFolder.Length);
    CloudBlob blob = Container.GetBlobReference(blobPath);
    blob.Metadata["Hash"] = fileHash;
    blob.UploadFile(filePath);
}


private void UploadToBlobStorage(string filePath, string fileHash, CloudBlob cloudBlob)
{
    cloudBlob.Metadata["Hash"] = fileHash;
    cloudBlob.UploadFile(filePath);
}

DownloadFromBlobStorage

Simply downloads the blob

private void DownloadFromBlobStorage(CloudBlob blob, string filePath)
{
    blob.DownloadToFile(filePath);
}

DeleteBlob

Simply deletes the blob

private void DeleteBlob(CloudBlob blob)
{
    blob.Delete();
}

GetFileHashFromCache

There are two versions of this method. This one is used when synchronizing to the blob storage. It uses the LastWriteTime of the file and the time of the last sync to skip calculating the file hash of files that have not been changed. This saves a lot of time, so it's worth the complexity.

private readonly Dictionary<string, string> _syncToBlobHashCache = new Dictionary<string, string>();
private string GetFileHashFromCache(string filePath, DateTime lastSync)
{
    if (File.GetLastWriteTime(filePath) <= lastSync && 
        _syncToBlobHashCache.ContainsKey(filePath))
    {
        return _syncToBlobHashCache[filePath];
    }
    else
    {
        using (FileStream file = new FileStream(filePath, FileMode.Open))
        {
            MD5 md5 = new MD5CryptoServiceProvider();

            string fileHash = Convert.ToBase64String(md5.ComputeHash(file));
            _syncToBlobHashCache[filePath] = fileHash;

            return fileHash;
        }
    }
}

GetFileHashFromCache and UpdateFileHash

This is the other version of the GetFileHashFromCache method. This one is used when synchronizing from the blob. The UpdateFileHash is used for updating the file hash cache when a new hash is obtained from a blob.

private readonly Dictionary<string, string> _syncFromBlobHashCache = new Dictionary<string, string>();
private string GetFileHashFromCache(string filePath)
{
    if (_syncFromBlobHashCache.ContainsKey(filePath))
    {
        return _syncFromBlobHashCache[filePath];
    }
    else
    {
        using (FileStream file = new FileStream(filePath, FileMode.Open))
        {
            MD5 md5 = new MD5CryptoServiceProvider();

            string fileHash = Convert.ToBase64String(md5.ComputeHash(file));
            _syncFromBlobHashCache[filePath] = fileHash;

            return fileHash;
        }
    }
}

private void UpdateFileHash(string filePath, string fileHash)
{
    _syncFromBlobHashCache[filePath] = fileHash;
}

GetAllFilesInStartFolder

This method returns all files in the start folder given by the LocalFolder property. It lowercases all file paths. This is done because blob names are case sensitive, so when we compare paths returned from this method we want to compare all-lowercase paths. When comparing paths we also use the GetLocalPath method, which translates a blob path to a local path and also lowercases the result.

private IEnumerable<string> GetAllFilesInStartFolder()
{
    Queue<string> foldersToProcess = new Queue<string>();
    foldersToProcess.Enqueue(LocalFolder);

    while (foldersToProcess.Count > 0)
    {
        string currentFolder = foldersToProcess.Dequeue();
        foreach (string subFolder in Directory.GetDirectories(currentFolder))
        {
            foldersToProcess.Enqueue(subFolder);
        }

        foreach (string filePath in Directory.GetFiles(currentFolder))
        {
            yield return filePath.ToLower();
        }
    }
}

GetLocalPath

Returns the local path of the given blob, using the LocalFolder property as the base folder. It lowercases the result so we only compare all-lowercase paths, because blob names are case sensitive.

private string GetLocalPath(CloudBlob blob)
{
    /* Path should only use \ and no /  */
    string path = blob.Uri.LocalPath.Remove(0, blob.Container.Name.Length + 2).Replace('/', '\\');

    /* Blob names are case sensitive, so when we check local */
    /* filenames against blob names we lowercase all of it */
    return Path.Combine(LocalFolder, path).ToLower();
}

Tags:

.NET | Azure | C# | Blob

Azure cloud blob property vs metadata

by ingvar 5. juni 2011 21:45

Both properties (some of them) and the metadata collection of a blob can be used to store metadata for a given blob. But there are small differences between them. When working with the blob storage, the number of HTTP REST requests plays a significant role when it comes to performance. The number of requests becomes very important if the blob storage contains a lot of small files. There are at least three properties found in the CloudBlob.Properties property that can be used freely. These are ContentType, ContentEncoding and ContentLanguage. They can hold very large strings! I have tried testing with a string containing 100,000 characters and it worked. They could possibly hold a lot more, but hey, 100,000 is a lot! So all three of them can be used to hold metadata.
So, what is the difference between using these properties and using the metadata collection? The difference lies in when they get populated. This is best illustrated by the following code:

CloudBlobContainer container; /* Initialized assumed */
CloudBlob blob1 = container.GetBlobReference("MyTestBlob.txt");
blob1.Properties.ContentType = "MyType";
blob1.Metadata["Meta"] = "MyMeta";
blob1.UploadText("Some content");

CloudBlob blob2 = container.GetBlobReference("MyTestBlob.txt");
string value21 = blob2.Properties.ContentType; /* Not populated */
string value22 = blob2.Metadata["Meta"]; /* Not populated */

CloudBlob blob3 = container.GetBlobReference("MyTestBlob.txt");
blob3.FetchAttributes();
string value31 = blob3.Properties.ContentType; /* Populated */
string value32 = blob3.Metadata["Meta"]; /* Populated */

CloudBlob blob4 = (CloudBlob)container.ListBlobs().First();
string value41 = blob4.Properties.ContentType; /* Populated */
string value42 = blob4.Metadata["Meta"]; /* Not populated */

BlobRequestOptions options = new BlobRequestOptions
  {
     BlobListingDetails = BlobListingDetails.Metadata
  };
CloudBlob blob5 = (CloudBlob)container.ListBlobs(options).First();
string value51 = blob5.Properties.ContentType; /* Populated */
string value52 = blob5.Metadata["Meta"]; /* populated */

 

The difference shows when using ListBlobs on a container or blob directory, depending on the values of the BlobRequestOptions object. It might not seem like a big difference, but imagine that there are 10,000 blobs, all with a metadata string value with a length of 100 characters. That sums to 1,000,000 extra characters to send when listing the blobs. So if the metadata is not needed every time you do a ListBlobs call, you might consider moving it to the metadata collection. I will investigate the performance of these methods of storing metadata for a blob in a later blog post.

Tags:

.NET | Azure | C# | Blob

A basic inter webrole broadcast communication on Azure using the service bus

by ingvar 19. maj 2011 21:48

In this blog post I'll try to show a bare-bones setup that does inter webrole broadcast communication. The code is based on Valery M's blog post. The code in his blog post is based on a customer solution and contains a lot more code than is needed to get the setup working. But his code also provides much more robust broadcast communication, with retries and other things that make the communication reliable. I have omitted all this to make it as easy to understand and recreate as possible. The idea is that the code I provide could be used as a basis for setting up your own inter webrole broadcast communication. You can download the code here: InterWebroleBroadcastDemo.zip (17.02 kb)

Windows Azure AppFabric SDK and using Microsoft.ServiceBus.dll

We need a reference to Microsoft.ServiceBus.dll in order to do the inter webrole communication. The Microsoft.ServiceBus.dll assembly is a part of the Windows Azure AppFabric SDK found here.
When you use Microsoft.ServiceBus.dll you need to add it as a reference like any other assembly. You do this by browsing to the directory where the AppFabric SDK was installed. But unlike most other references you add, you need to set the "Copy Local" property for the reference to true (the default is false).
I have put all my code in a separate assembly and the main classes are then used in the WebRole.cs file. Even though I have added Microsoft.ServiceBus.dll to my assembly and set "Copy Local" to true, I still have to add it to the WebRole project and also set "Copy Local" to true there. This is a very important detail!

Creating a new Service Bus Namespace

Here is a short step-by-step guide on how to create a new Service Bus namespace. If you have already done this, you can skip it and just use the existing namespace and its values.

  1. Go to the section "Service Bus, Access Control & Caching"
  2. Click the button "New Namespace"
  3. Check "Service Bus"
  4. Enter the desired Namespace (This namespace is the one used for EndpointInformation.ServiceNamespace)
  5. Click "Create Namespace"
  6. Select the newly created namespace
  7. Under properties (To the right) find Default Key and click "View"
  8. Here you will find the Default Issuer (This value should be used for EndpointInformation.IssuerName) and Default Key (This value should be used for  EndpointInformation.IssuerSecret)

The code

Here I will go through all the classes in my sample code. The full project including the WebRole project can be downloaded here: InterWebroleBroadcastDemo.zip (17.02 kb)

BroadcastEvent

We start with the BroadcastEvent class. This class represents the data we send across the wire. This is done with the class attribute DataContract and the member attribute DataMember. In this sample code I only send two simple strings. SenderInstanceId is not required, but I use it to display where the message came from.

[DataContract(Namespace = BroadcastNamespaces.DataContract)]
public class BroadcastEvent
{
    public BroadcastEvent(string senderInstanceId, string message)
    {
        this.SenderInstanceId = senderInstanceId;
        this.Message = message;
    }

    [DataMember]
    public string SenderInstanceId { get; private set; }

    [DataMember]
    public string Message { get; private set; }
}

BroadcastNamespaces

This class only contains some constants that are used by some of the other classes.

public static class BroadcastNamespaces
{
    public const string DataContract = "http://broadcast.event.test/data";
    public const string ServiceContract = "http://broadcast.event.test/service";
}

IBroadcastServiceContract

This interface defines the contract that the web roles use when communicating with each other. Here in this simple example, the contract only has one method, namely the Publish method. In the implementation of the contract (BroadcastService), this method is used to send BroadcastEvents to all web roles that have subscribed to this channel. There is another method, Subscribe, that is inherited from IObservable<BroadcastEvent>. This method is used to subscribe to the BroadcastEvents when they are published by some web role. This method is also implemented in the BroadcastService class.

[ServiceContract(Name = "BroadcastServiceContract",
    Namespace = BroadcastNamespaces.ServiceContract)]
public interface IBroadcastServiceContract : IObservable<BroadcastEvent>
{
    [OperationContract(IsOneWay = true)]
    void Publish(BroadcastEvent e);
}

IBroadcastServiceChannel

This interface defines the channel that the web roles communicate through. This is done by adding the IClientChannel interface.

public interface IBroadcastServiceChannel : IBroadcastServiceContract, IClientChannel
{
}

BroadcastEventSubscriber

The web role subscribes to the channel by creating an instance of this class and registering it. For testing purposes, this implementation only logs when it receives a BroadcastEvent.

public class BroadcastEventSubscriber : IObserver<BroadcastEvent>
{
    public void OnNext(BroadcastEvent value)
    {
        Logger.AddLogEntry(RoleEnvironment.CurrentRoleInstance.Id +
            " got message from " + value.SenderInstanceId + " : " +
            value.Message);
    }

    public void OnCompleted()
    {
        /* Handle on completed */
    }

    public void OnError(Exception error)
    {
        /* Handle on error */
    }
}

BroadcastService

This class implements the IBroadcastServiceContract interface. It handles the publish scenario by calling the OnNext method on all subscribers in parallel. The reason for doing this in parallel is that the OnNext method is blocking, so there is a good chance of a performance gain by doing it in parallel.
The other method is Subscribe. This method adds the BroadcastEvent observer to the subscribers and returns an object of type UnsubscribeCallbackHandler that, when disposed, unsubscribes the observer. This is part of the IObserver/IObservable pattern.

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
    ConcurrencyMode = ConcurrencyMode.Multiple)]
public class BroadcastService : IBroadcastServiceContract
{
    private readonly IList<IObserver<BroadcastEvent>> _subscribers =
        new List<IObserver<BroadcastEvent>>();

    public void Publish(BroadcastEvent e)
    {
        ParallelQuery<IObserver<BroadcastEvent>> subscribers =
            from sub in _subscribers.AsParallel().AsUnordered()
            select sub;

        subscribers.ForAll((subscriber) =>
        {
            try
            {
                subscriber.OnNext(e);
            }
            catch (Exception ex)
            {
                try
                {
                    subscriber.OnError(ex);
                }
                catch (Exception)
                {
                    /* Ignore exception */
                }
            }
        });
    }

    public IDisposable Subscribe(IObserver<BroadcastEvent> subscriber)
    {
        if (!_subscribers.Contains(subscriber))
        {
            _subscribers.Add(subscriber);
        }

        return new UnsubscribeCallbackHandler(_subscribers, subscriber);
    }


    private class UnsubscribeCallbackHandler : IDisposable
    {
        private readonly IList<IObserver<BroadcastEvent>> _subscribers;
        private readonly IObserver<BroadcastEvent> _subscriber;

        public UnsubscribeCallbackHandler(IList<IObserver<BroadcastEvent>> subscribers,
            IObserver<BroadcastEvent> subscriber)
        {
            _subscribers = subscribers;
            _subscriber = subscriber;
        }

        public void Dispose()
        {
            if ((_subscribers != null) && (_subscriber != null) &&
                (_subscribers.Contains(_subscriber)))
            {
                _subscribers.Remove(_subscriber);
            }
        }
    }
}

ServiceBusClient

The main purpose of the ServiceBusClient class is to set up and create a ChannelFactory<IBroadcastServiceChannel> and an IBroadcastServiceChannel instance through the factory. The channel is used by the web role to send BroadcastEvents through the Publish method. It is in this class all the Azure service bus magic happens: setting up the binding and the endpoint. A few service bus related constants are used here and they are all kept in the EndpointInformation class.

public class ServiceBusClient<T> where T : class, IClientChannel, IDisposable
{
    private readonly ChannelFactory<T> _channelFactory;
    private readonly T _channel;
    private bool _disposed = false;

    public ServiceBusClient()
    {
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb",
            EndpointInformation.ServiceNamespace, EndpointInformation.ServicePath);

        NetEventRelayBinding binding = new NetEventRelayBinding(
            EndToEndSecurityMode.None,
            RelayEventSubscriberAuthenticationType.None);

        TransportClientEndpointBehavior credentialsBehaviour =
            new TransportClientEndpointBehavior();
        credentialsBehaviour.CredentialType =
            TransportClientCredentialType.SharedSecret;
        credentialsBehaviour.Credentials.SharedSecret.IssuerName =
            EndpointInformation.IssuerName;
        credentialsBehaviour.Credentials.SharedSecret.IssuerSecret =
            EndpointInformation.IssuerSecret;

        ServiceEndpoint endpoint = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(T)), binding,
            new EndpointAddress(address));
        endpoint.Behaviors.Add(credentialsBehaviour);

        _channelFactory = new ChannelFactory<T>(endpoint);

        _channel = _channelFactory.CreateChannel();
    }

    public T Client
    {
        get
        {
            if (_channel.State == CommunicationState.Opening) return null;

            if (_channel.State != CommunicationState.Opened)
            {
                _channel.Open();
            }

            return _channel;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                try
                {
                    if (_channel.State == CommunicationState.Opened)
                    {
                        _channel.Close();
                    }
                    else
                    {
                        _channel.Abort();
                    }
                }
                catch (Exception)
                {
                    /* Ignore exceptions */
                }


                try
                {
                    if (_channelFactory.State == CommunicationState.Opened)
                    {
                        _channelFactory.Close();
                    }
                    else
                    {
                        _channelFactory.Abort();
                    }
                }
                catch (Exception)
                {
                    /* Ignore exceptions */
                }

                _disposed = true;
            }
        }
    }

    ~ServiceBusClient()
    {
        Dispose(false);
    }
}

ServiceBusHost

The main purpose of the ServiceBusHost class is to set up, create and open a ServiceHost. The service host is used by the web role to receive BroadcastEvents by registering a BroadcastEventSubscriber instance. Like in the ServiceBusClient, it is in this class all the Azure service bus magic happens.

public class ServiceBusHost<T> where T : class
{
    private readonly ServiceHost _serviceHost;
    private bool _disposed = false;

    public ServiceBusHost()
    {
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb",
            EndpointInformation.ServiceNamespace, EndpointInformation.ServicePath);

        NetEventRelayBinding binding = new NetEventRelayBinding(
            EndToEndSecurityMode.None,
            RelayEventSubscriberAuthenticationType.None);

        TransportClientEndpointBehavior credentialsBehaviour =
            new TransportClientEndpointBehavior();
        credentialsBehaviour.CredentialType =
            TransportClientCredentialType.SharedSecret;
        credentialsBehaviour.Credentials.SharedSecret.IssuerName =
            EndpointInformation.IssuerName;
        credentialsBehaviour.Credentials.SharedSecret.IssuerSecret =
            EndpointInformation.IssuerSecret;

        ServiceEndpoint endpoint = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(T)), binding,
            new EndpointAddress(address));
        endpoint.Behaviors.Add(credentialsBehaviour);

        _serviceHost = new ServiceHost(Activator.CreateInstance(typeof(T)));

        _serviceHost.Description.Endpoints.Add(endpoint);

        _serviceHost.Open();
    }

    public T ServiceInstance
    {
        get
        {
            return _serviceHost.SingletonInstance as T;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                try
                {
                    if (_serviceHost.State == CommunicationState.Opened)
                    {
                        _serviceHost.Close();
                    }
                    else
                    {
                        _serviceHost.Abort();
                    }
                }
                catch
                {
                    /* Ignore exceptions */
                }
                finally
                {
                    _disposed = true;
                }
            }
        }
    }

    ~ServiceBusHost()
    {
        Dispose(false);
    }
}

EndpointInformation

This class keeps all the service bus related constants. I have put dummy constants for the ServiceNamespace, IssuerName and IssuerSecret. You have to find these values in the Windows Azure Management Portal [URL]. Read above how to create a new Service Bus namespace and obtain these values.

public static class EndpointInformation
{
    public const string ServiceNamespace = "CHANGE THIS TO YOUR NAMESPACE";
    public const string ServicePath = "BroadcastService";
    public const string IssuerName = "CHANGE THIS TO YOUR ISSUER NAME";
    public const string IssuerSecret = "CHANGE THIS TO YOUR ISSUER SECRET";
}


BroadcastCommunicator

This class abstracts all the dirty details away and is the main class that the web role uses. It has two methods: Publish for publishing BroadcastEvent instances, and Subscribe for subscribing to the broadcast events by creating an instance of the BroadcastEventSubscriber and handing it to the Subscribe method.

public class BroadcastCommunicator : IDisposable
{
    private ServiceBusClient<IBroadcastServiceChannel> _publisher;
    private ServiceBusHost<BroadcastService> _subscriber;
    private bool _disposed = false;

    public void Publish(BroadcastEvent e)
    {
        if (this.Publisher.Client != null)
        {
            this.Publisher.Client.Publish(e);
        }
    }

    public IDisposable Subscribe(IObserver<BroadcastEvent> subscriber)
    {
        return this.Subscriber.ServiceInstance.Subscribe(subscriber);
    }

    private ServiceBusClient<IBroadcastServiceChannel> Publisher
    {
        get
        {
            if (_publisher == null)
            {
                _publisher = new ServiceBusClient<IBroadcastServiceChannel>();
            }

            return _publisher;
        }
    }

    private ServiceBusHost<BroadcastService> Subscriber
    {
        get
        {
            if (_subscriber == null)
            {
                _subscriber = new ServiceBusHost<BroadcastService>();
            }

            return _subscriber;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Dispose(bool disposing)
    {
        if (!_disposed && disposing)
        {
            try
            {
                _subscriber.Dispose();
                _subscriber = null;
            }
            catch
            {
                /* Ignore exceptions */
            }

            try
            {
                _publisher.Dispose();
                _publisher = null;
            }
            catch
            {
                /* Ignore exceptions */
            }

            _disposed = true;
        }
    }

    ~BroadcastCommunicator()
    {
        Dispose(false);
    }
}

WebRole

This is a pretty straightforward web role. In the OnStart method an instance of BroadcastCommunicator is created and an instance of BroadcastEventSubscriber is used to subscribe to the channel.
The Run method is an endless loop with a random sleep in every iteration, for testing purposes. In every iteration it sends a "Hello world!" message including its own role instance id.
The OnStop method cleans up by disposing the disposable objects.

public class WebRole : RoleEntryPoint
{
    private volatile BroadcastCommunicator _broadcastCommunicator;
    private volatile BroadcastEventSubscriber _broadcastEventSubscriber;
    private volatile IDisposable _broadcastSubscription;
    private volatile bool _keepLooping = true;


    public override bool OnStart()
    {
        _broadcastCommunicator = new BroadcastCommunicator();
        _broadcastEventSubscriber = new BroadcastEventSubscriber();

        _broadcastSubscription =
            _broadcastCommunicator.Subscribe(_broadcastEventSubscriber);

        return base.OnStart();
    }


    public override void Run()
    {
        /* Just keep sending messages */
        while (_keepLooping)
        {
            int secs = ((new Random()).Next(30) + 60);

            Thread.Sleep(secs * 1000);
            try
            {
                BroadcastEvent broadcastEvent =
                    new BroadcastEvent(RoleEnvironment.CurrentRoleInstance.Id,
                        "Hello world!");

                _broadcastCommunicator.Publish(broadcastEvent);
            }
            catch (Exception ex)
            {
                Logger.AddLogEntry(ex);
            }
        }
    }

    public override void OnStop()
    {
        _keepLooping = false;

        if (_broadcastCommunicator != null)
        {
            _broadcastCommunicator.Dispose();
        }

        if (_broadcastSubscription != null)
        {
            _broadcastSubscription.Dispose();
        }

        base.OnStop();
    }
}


Logger

The Logger class is used in several places in the code. If a logger action has been set, logging will be done. Read more about how I did the logging below.

public static class Logger
{
    private static Action<string> AddLogEntryAction { get; set; }

    public static void Initialize(Action<string> addLogEntry)
    {
        AddLogEntryAction = addLogEntry;
    }

    public static void AddLogEntry(string entry)
    {
        if (AddLogEntryAction != null)
        {
            AddLogEntryAction(entry);
        }
    }

    public static void AddLogEntry(Exception ex)
    {
        while (ex != null)
        {
            AddLogEntry(ex.ToString());

            ex = ex.InnerException;
        }
    }
}

Simple but effective logging

When I developed this demo I used a web service on another server for logging. This web service has just one method, taking one string argument: the line to log. I also have a page for displaying and clearing the log. This is a very simple way of doing logging, but it gets the job done.
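
As a sketch, wiring the Logger shown above up to such a web service from WebRole.OnStart could look like this. The URL and its query parameter are made up for the example, not the actual service I used:

Logger.Initialize(entry =>
{
    using (WebClient webClient = new WebClient())
    {
        /* One log line per request - simple, but good enough for a demo */
        string url = "http://mylogserver.example.com/AddLogEntry.aspx?line=" +
            HttpUtility.UrlEncode(entry);
        webClient.DownloadString(url);
    }
});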

The output

Below is the output from a run of the demo project with 4 web role instances. The first two lines are the most interesting. Here you can see that only web role instances WebRole_IN_0 and WebRole_IN_2 are ready to receive (and send) events. The reason for this is the late creation (create when needed) of the ServiceBusClient and ServiceBusHost in the BroadcastCommunicator class, combined with the sleep period in the WebRole. This illustrates that web roles can join the broadcast channel at any time and start sending and receiving events.
 
20:30:40.4976 : WebRole_IN_2 got message from WebRole_IN_0 : Hello world!
20:30:40.7576 : WebRole_IN_0 got message from WebRole_IN_0 : Hello world!
20:30:43.0912 : WebRole_IN_2 got message from WebRole_IN_3 : Hello world!
20:30:43.0912 : WebRole_IN_1 got message from WebRole_IN_3 : Hello world!
20:30:43.0912 : WebRole_IN_0 got message from WebRole_IN_3 : Hello world!
20:30:43.1068 : WebRole_IN_3 got message from WebRole_IN_3 : Hello world!
20:30:45.4505 : WebRole_IN_0 got message from WebRole_IN_2 : Hello world!
20:30:45.4505 : WebRole_IN_3 got message from WebRole_IN_2 : Hello world!
20:30:45.4505 : WebRole_IN_1 got message from WebRole_IN_2 : Hello world!
20:30:45.4662 : WebRole_IN_2 got message from WebRole_IN_2 : Hello world!
20:30:59.4816 : WebRole_IN_0 got message from WebRole_IN_1 : Hello world!
20:30:59.4816 : WebRole_IN_3 got message from WebRole_IN_1 : Hello world!
20:30:59.4972 : WebRole_IN_2 got message from WebRole_IN_1 : Hello world!
20:30:59.4972 : WebRole_IN_1 got message from WebRole_IN_1 : Hello world!
20:31:59.1371 : WebRole_IN_2 got message from WebRole_IN_3 : Hello world!
20:31:59.2621 : WebRole_IN_1 got message from WebRole_IN_3 : Hello world!
20:31:59.3871 : WebRole_IN_0 got message from WebRole_IN_3 : Hello world!
20:31:59.5746 : WebRole_IN_3 got message from WebRole_IN_3 : Hello world!
20:32:03.1683 : WebRole_IN_2 got message from WebRole_IN_0 : Hello world!
20:32:03.1683 : WebRole_IN_0 got message from WebRole_IN_0 : Hello world!
20:32:03.1683 : WebRole_IN_1 got message from WebRole_IN_0 : Hello world!
20:32:03.1839 : WebRole_IN_3 got message from WebRole_IN_0 : Hello world!

Tags:

.NET | Azure | C#

Azure and CopyFromBlob performance

by ingvar 2. maj 2011 21:47

I have earlier looked at the performance of some of the methods for uploading content to an Azure blob. The results of those tests can be found here. While I was working on getting Composite C1 multi tenancy working in the cloud, I ran into a performance problem. Thanks to Henrik Westergaard, I took a look at the CopyFromBlob method and its performance.

I used the same setup as in my earlier test. The test code ran in the WebRole OnStart. The table below holds the test results. Unlike the other methods I tested, CopyFromBlob seems to be largely invariant to growing file sizes. The CopyFromBlob method is 2-3 times faster for files larger than 250 kb, and never significantly slower. So if the setup permits it, this method is preferable to the other upload methods.

Size in kb    CopyFromBlob
50            70
100           75
150           73
200           75
250           75
300           75
350           75
400           75
450           73
500           70

The code for the test loop is here:

int copyFromBlobTime1 = Environment.TickCount;
for (int i = 0; i < loopCount; i++)
{
    CloudBlob targetBlob = container.
        GetBlobReference(string.Format("TargetTestFile{0}.dat", i));
    targetBlob.CopyFromBlob(sourceBlob);
}
int copyFromBlobTime2 = Environment.TickCount;
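
The loop assumes that container, sourceBlob and loopCount have been initialized beforehand. A minimal sketch of that setup could look like this (the container name, blob name, size and configuration setting name are assumptions, not the values from my actual test):

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient client = storageAccount.CreateCloudBlobClient();

CloudBlobContainer container = client.GetContainerReference("copytest");
container.CreateIfNotExist();

const int loopCount = 10;
CloudBlob sourceBlob = container.GetBlobReference("SourceTestFile.dat");
sourceBlob.UploadByteArray(new byte[250 * 1024]); /* 250 kb test blob */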

Tags:

.NET | Azure | C# | Blob

Azure and creating website dynamically

by ingvar 28. april 2011 21:21

In this post I will show how to create a new website dynamically in an already deployed web role on Azure. This is especially interesting for multi tenant scenarios. One other way of doing this would be to create a new deployment package and upgrade the running role. But that would mean downtime for the existing sites and would take longer than dynamically creating the site.

I will start with some important things to know when creating websites dynamically from the web role, and then I will show the code that does it.

The web role needs to have elevated privileges. This is done by adding the following element to the WebRole element in the ServiceDefinition.csdef file.


<Runtime executionContext="elevated" />

Only some ports are open in the firewall. When you normally create new sites, you either run them on the same port as the deployed site, using a host header (host name), or on a different port. On Azure, using a different port is not an option. Only the ports that are specified for sites (or Remote Desktop Connection) when the package is deployed are open in the firewall. So the only option here is to use a host header (host name) for new sites. You can see which ports are open in the Windows Azure Management Portal.

Microsoft.Web.Administration reference. You need to add a reference to the Microsoft.Web.Administration assembly to use the ServerManager class. Microsoft.Web.Administration can be found in the %WinDir%\System32\InetSrv directory. You also need to change the Copy Local property to true, because the assembly is not in the GAC on the Azure host.

The code! Now let's get to the fun part, the code! Below is the code needed to create a new site. This could be done in OnStart or wherever you see fit, though it has to be in the web role to have the privileges to do it. Here is the code:

string newWebsitePath = "SOME PATH!";
using (ServerManager serverManager = new ServerManager())
{
    /* Create the app pool */
    ApplicationPool applicationPool =
           serverManager.ApplicationPools.Add("MyApplicationPool");
    applicationPool.AutoStart = true;
    applicationPool.ManagedPipelineMode = ManagedPipelineMode.Integrated;
    applicationPool.ManagedRuntimeVersion = "v4.0";

    /* Create the web site */
    Site site = serverManager.Sites.
           Add("MyNewSite", "http", "*:80:www.mynewwebsite.com", newWebsitePath);
    site.Applications.First().ApplicationPoolName = "MyApplicationPool";
    site.ServerAutoStart = true;
    serverManager.CommitChanges();
}

Testing. An easy way to test this is to edit your hosts file (C:\Windows\System32\drivers\etc\hosts). First you need to find the IP address of your web role. This can be done either by pinging it or by looking in the Windows Azure Management Portal. So let's say the IP address is 111.222.333.444 and you have used www.mynewwebsite.com as the host name. Then you need to add the following line to your hosts file:

111.222.333.444    www.mynewwebsite.com

When you have added this line and saved the file, you can open www.mynewwebsite.com in your browser and the request will go to your web role.

Tags:

.NET | Azure | C#

Azure Shared Access Signature and pitfalls

by ingvar 26. april 2011 21:03

To start with, I did not think I would write a blog post about Azure Shared Access Signatures (SAS). But after having worked with them for some time, I have stumbled upon some things I think are worth sharing. The things I found are shown below the code. Thanks to @Danielovich for pointing me in the right direction.

I'll start by showing how to create a SAS. You need to have access to the Primary Access Key (or the Secondary Access Key) for the blob storage account that you wish to use. These keys can be obtained through the Windows Azure Platform Portal. The code below shows how to create a SAS, how to use it, and what you can/can not do with it.

/* Here is how to create the SAS */
StorageCredentialsAccountAndKey masterCredentials = 
     new StorageCredentialsAccountAndKey("[Name]", "[AccessKey]");
CloudStorageAccount account = new CloudStorageAccount(masterCredentials, false);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("mytestcontainer");
container.CreateIfNotExist();

SharedAccessPolicy sharedAccessPolicy = new SharedAccessPolicy();
sharedAccessPolicy.Permissions = 
     SharedAccessPermissions.Delete |
     SharedAccessPermissions.List |
     SharedAccessPermissions.Read |
     SharedAccessPermissions.Write;
sharedAccessPolicy.SharedAccessStartTime = DateTime.UtcNow;
sharedAccessPolicy.SharedAccessExpiryTime = DateTime.UtcNow + TimeSpan.FromHours(1);

string sharedAccessSignature = container.GetSharedAccessSignature(sharedAccessPolicy);

/* Here is how to use the sharedAccessSignature */
StorageCredentialsSharedAccessSignature sasCredentials = 
    new StorageCredentialsSharedAccessSignature(sharedAccessSignature);
CloudBlobClient sasClient = new CloudBlobClient(account.BlobEndpoint, sasCredentials);

CloudBlobContainer sasContainer = sasClient.GetContainerReference("mytestcontainer");
CloudBlob sasBlob = sasContainer.GetBlobReference("myblob.txt");

/* This will work if SharedAccessPermissions.Write is used */
sasBlob.UploadText("Hello!");

/* This will work if SharedAccessPermissions.Read is used */
sasBlob.DownloadText();

/* This will work if SharedAccessPermissions.Delete is used */
sasBlob.Delete();

/* This will work if SharedAccessPermissions.List is used */
sasContainer.ListBlobs();

/* This will always fail */
sasContainer.FetchAttributes();

/* This will always fail */
sasClient.ListContainers(); 

Here are some points that I think are worth noting when working with SAS. They might even save you some time:

  • Remember to use Utc methods on DateTime. If you use anything else, the time window in which the SAS is valid might not be the one you think.
  • The FetchAttributes method does not work on the container/blob that the SAS was generated for. This is interesting because the FetchAttributes method is very often used to determine if the container/blob exists or not. But it will work for blobs inside a container if the SAS was generated for that container. 
  • A StorageClientException with the message "The specified resource does not exist" is thrown if the SAS does not grant enough access. So Azure hides the container/blob if the client does not have the right access level (see the sketch after this list). 
  • DeleteIfExists will never fail if SharedAccessPermissions.Delete is not specified. As mentioned above, Azure hides containers/blobs if access rights are missing. 
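
Here is a small sketch illustrating the last two points, reusing the sasContainer and sasBlob objects from the code above (the behavior is as described in this post; I have not verified it against newer SDK versions):

try
{
    sasContainer.FetchAttributes();
}
catch (StorageClientException)
{
    /* Thrown even though the container exists, because the SAS does not
       grant access on the container itself - Azure reports
       "The specified resource does not exist" */
}

/* When SharedAccessPermissions.Delete is missing, the blob is hidden from
   the client, so this does not throw - it simply reports that there was
   nothing to delete */
bool deleted = sasBlob.DeleteIfExists();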

Tags:

.NET | Azure | C#

About the author

Martin Ingvar Kofoed Jensen

Architect and Senior Developer at Composite on the open source project Composite C1 - C#/4.0, LINQ, Azure, Parallel and much more!
