I blogged about this about a week back, when I was thoroughly confused and thought I was imagining things. I now know I wasn't dreaming, and I can now reproduce it.
If you update an existing blob in Azure Storage Services (either by overwriting it, or by deleting it and adding a new one), there is a chance you will be served stale data when you are working across multiple servers (note that a single server is fine).
My Test Environment
To prove I wasn't mad, I ran my Azure Blob Browser application on 2 different machines (my local machine + a remote desktop machine).
All I did was add, delete, and update blobs in a container in my storage account.
Adding + Deleting a blob
If I added a blob or deleted a blob, the change was reflected instantaneously (or at least as quickly as my mouse) on both machines.
Updating a blob (replication delays)
If, however, I updated a blob (either by overwriting it or by deleting it and adding a new one), the change had not replicated to the storage server that the remote machine was connected to.
The blob was still correct in my blob browser on my local machine; on my remote machine, however, it was still displaying stale metadata (ETag + Last-Modified) and downloading stale data.
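Comparing the ETag and Last-Modified values is exactly how you can tell, programmatically, that a replica is behind. Here is a minimal sketch of that check; `BlobProperties` and the sample values are hypothetical stand-ins for whatever metadata your client library returns, not a real Azure API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlobProperties:
    """Hypothetical container for the metadata a blob read returns."""
    etag: str
    last_modified: str  # e.g. the RFC 1123 date from the storage response

def is_stale(expected: BlobProperties, observed: BlobProperties) -> bool:
    """A replica is serving stale data if it reports different
    properties than the ones the writer saw after the update."""
    return observed != expected

# After overwriting the blob, the writing machine records the new properties:
after_update = BlobProperties(
    etag='"0x8CB171613397EAB"',
    last_modified="Wed, 01 Apr 2009 10:00:05 GMT",
)
# A second machine, reading from a replica that has not caught up, still sees:
remote_view = BlobProperties(
    etag='"0x8CB171240448651"',
    last_modified="Wed, 01 Apr 2009 09:58:12 GMT",
)

print(is_stale(after_update, remote_view))   # the remote replica is behind
print(is_stale(after_update, after_update))  # same properties: not stale
```

This is also why the blob browser on the remote machine kept showing the old metadata: it was faithfully reporting what the lagging server returned.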
Although you can assume that once you add or delete a blob in Azure Blob Storage Services all instances of your application will see the change, at the moment you cannot make the same assumption for an update (unless your service runs as a single instance).
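If you do need multiple instances to agree after an update, one workaround is to have the writer publish the new ETag and have readers poll until their replica reports it. A minimal sketch, with a simulated replica standing in for the real storage call (the function names are illustrative, not an Azure API):

```python
import itertools
import time

def wait_for_replication(fetch_etag, expected_etag, attempts=10, delay=0.0):
    """Poll a replica until it reports the ETag the writer saw after
    the update, or give up after `attempts` tries. `fetch_etag` is a
    caller-supplied function that reads the blob's current ETag."""
    for _ in range(attempts):
        if fetch_etag() == expected_etag:
            return True
        time.sleep(delay)
    return False

# Simulated lagging replica: serves the old ETag twice, then catches up.
responses = itertools.chain(['"old"', '"old"'], itertools.repeat('"new"'))
caught_up = wait_for_replication(lambda: next(responses), '"new"')
print(caught_up)  # True once the replica converges
```

In a real service you would set `delay` to something non-zero and cap the total wait; the point is simply that readers can detect convergence rather than trust the first response.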
I'm not sure if this is intentional or a bug (after all, this is a CTP release).
As I said in my previous post, I need to dig into things a little more and get more details on the replication mechanism used.