This came into my head last night when uploading a couple of blobs into the cloud.
Since I developed my blob browser, I've found myself using it to transfer files from one machine to another.
For efficiency I had a remote desktop connection open to one machine while I worked on my main machine. Since this was an update to a file I had transferred before, I copied over the existing zip file in the cloud. I was being particularly lazy, and I just used a public URI to fetch the file on the other side (rather than using a private URI and my browser).
The interesting thing is that when I retrieved the file at the remote site, I didn't get the updated version of the file; it served me up the previous version.
At first I thought there was something odd with the replication mechanism, or perhaps session affinity with the posting machine. Now I think I just had some caching getting in the way, as I can't seem to reproduce what I saw last night.
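If an intermediate cache was serving the stale copy, a common way to force a fresh fetch is to append a throwaway query parameter so the cached entry doesn't match. A minimal sketch (the blob URI below is made up for illustration):

```python
import time
from urllib.parse import urlencode, urlparse, urlunparse

def cache_bust(uri: str) -> str:
    """Append a unique query parameter so intermediate caches treat
    the request as new and fetch the blob from the origin."""
    parts = urlparse(uri)
    extra = urlencode({"nocache": int(time.time() * 1000)})
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunparse(parts._replace(query=query))

# Hypothetical public blob URI:
print(cache_bust("https://myaccount.blob.core.windows.net/files/update.zip"))
```

Whether this helps depends on the cache honouring the full URI as its key, which most HTTP caches do.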
If I post a file into the blob storage service, I should be able to retrieve it straight away. This is different to Amazon's model, where the file might not be there when you attempt to access it (because it hasn't been replicated yet).
To be honest I'm not quite sure how this is achieved in Azure. I know that data must be replicated at least 3 times. I don't know whether all 3 replicas have to be written before a response is given, or whether the replication is completely loosely coupled. (I will try and do some digging.)
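The difference between the two models can be sketched with a toy store: a synchronous write acknowledges only after every replica has the new value, so a read from any replica sees it immediately, whereas an asynchronous write acknowledges first and propagates later, so a read can come back stale. This is purely a simplification for illustration, not a claim about how Azure actually does it:

```python
class ReplicatedStore:
    """Toy model: a primary plus replicas, with a switch for
    synchronous (write everywhere before ack) vs asynchronous replication."""

    def __init__(self, replicas=3, synchronous=True):
        self.synchronous = synchronous
        self.nodes = [{} for _ in range(replicas)]
        self.pending = []  # writes not yet propagated (async mode)

    def put(self, key, value):
        if self.synchronous:
            for node in self.nodes:       # all replicas updated before we ack
                node[key] = value
        else:
            self.nodes[0][key] = value    # primary only; replicas lag behind
            self.pending.append((key, value))

    def propagate(self):
        """Background replication catching up (async mode)."""
        for key, value in self.pending:
            for node in self.nodes[1:]:
                node[key] = value
        self.pending.clear()

    def get(self, key, node_index):
        return self.nodes[node_index].get(key)

sync_store = ReplicatedStore(synchronous=True)
sync_store.put("file.zip", "v2")
print(sync_store.get("file.zip", 2))   # "v2": read-after-write holds everywhere

async_store = ReplicatedStore(synchronous=False)
async_store.put("file.zip", "v2")
print(async_store.get("file.zip", 2))  # None: this replica hasn't caught up yet
async_store.propagate()
print(async_store.get("file.zip", 2))  # "v2" once replication completes
```

The stale read in the asynchronous case is exactly the behaviour I thought I was seeing last night.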
I guess my point here is that I need to look a bit more into how replication works in Azure.