We have used the Shared Access Signature (SAS) feature in Azure quite a few times to implement the Valet Key pattern. Basically, it provides temporary read/write access to a private storage entity for the outside world, without having to share the security credentials for it. We have used SAS in Azure to give remote devices, tools, and utilities temporary access to upload files.
Basically, a shared access signature is a URI that grants restricted access rights to containers, blobs, queues, and tables for a specific time interval. By providing a client with a shared access signature, you can enable them to access resources in your storage account without sharing your account key with them.
To learn more about SAS in Azure, here are the links:
How do I Achieve SAS in AWS?
Now, the interesting part: I was trying to implement a similar SAS-style feature on AWS S3 storage. The idea is the same Valet Key pattern: sharing read/write access to an S3 bucket or object with remote devices or utilities without sharing the security credentials.
For read access to an S3 object, AWS provides pre-signed object URLs, as indicated below.
For write access, that is, to upload files into S3 objects, here is the way to do it in the .NET world.
Being from the Azure world, I had to spend time to arrive at these solutions in AWS, and I am sure there are a few developers like me who are looking for a similar solution in AWS.
Hope this is helpful.
As always, Microsoft heard the pain felt by the Azure community around shared cache service quotas and throttling. We have been using the cache extensively to improve the performance and scalability of our Azure applications, and we welcome the release of the dedicated Azure Cache Service.
Let’s take a quick look at what is in store with the new Azure Dedicated Cache Service.
Dedicated Cache Service is offered in three tiers: Basic, Standard and Premium. Prices below include a 50% preview discount, and are based on cache size provisioned.
| | Basic | Standard | Premium |
| --- | --- | --- | --- |
| Price per unit (preview) | $12.50 / month | $50 / month | $200 / month |
| Cache size | 128 MB | 1 GB | 5 GB |
| Scale | Up to 8 units | Up to 10 units | Up to 30 units |
| High availability | Not available | Not available | Available |
- Dedicated – A very important upgrade. Earlier, the cache service was shared with other tenants, and hence there were quotas on transactions and connections, which caused throttling. Now the service is dedicated to each tenant, so there is no question of throttling. A big scalability relief!
- Performance and Interoperability – The cache service is available across Cloud Services and Windows and Linux VMs, with performance as high as 1 ms reads and 1.2 ms writes, including the end-to-end round trip from the requestor to the cache service. Soon we should expect it to be available for Windows Azure Mobile Services as well. Great going!
- Scale – Remember the transient exceptions when we crossed the size limit in the shared cache? When you exceeded the size of your cache, the shared caching service would evict items until the memory shortage was resolved, and during this time you could get memory-related exceptions. The scale attribute signifies auto-scaling of the cache size: X units means the cache can auto-scale up to X times the base cache size. No more worrying about losing old data or memory exceptions!
- Uniform APIs – The APIs to access the new dedicated cache service are exactly the same as those for the shared cache service, in-role cache, and on-premises AppFabric cache. The same existing session and caching providers can be used with the dedicated cache service. Easy to switch and extend!
- Cheaper – Prices are lower and there are no transaction charges, save $!
- High Availability – A Premium-tier feature, so you have to spend more for it, but useful in critical applications that need high redundancy.
- Notifications – Another paid feature (available with the Standard tier too). Add/update/delete operations on cached objects send callback notifications.
- Monitoring and Size Upgrade – The Azure Management Portal dashboard provides built-in cache service monitoring. You can also increase the size of the cache without impacting the application state. Smooth upgrade!!
Read more details about caching here
With the Premium cache as your key-value database, at $200 per month per unit, we can develop mobile or web applications talking to up to a 150 GB store (5 GB x 30 units) at superfast speed. There are already a few cases where we needed this, and we look forward to using it as our database. Be aware that it is in preview mode; at the same time, take advantage of the 50% discount on the pricing.
Since Microsoft is upgrading Azure really fast, I am hoping they apply the same concept to SQL Azure and release a Dedicated Azure SQL Database Service. Let's see how soon! 🙂
Last week, I delivered an internal talk on Scalability Design Principles. It was an interesting session, and the interactive audience made it even more interesting.
The discussion around how to handle scalability issues, illustrated with a traffic example, was something the audience liked a lot.
Basically, there are three ways to handle scalability issues:
1. Do Nothing
3. Adding Resources
a. Vertical Scaling
b. Horizontal Scaling
Attached is the presentation I used for this talk.
Disclaimer: Some of the images are taken from the web; due credit is given to the source websites in the notes sections.
High availability is one of the most important NFRs (non-functional requirements) in application deployment. We are used to measuring the availability of services in nines (three nines, five nines). For cloud services, it is imperative to be aware of the SLAs of all the different services you depend on, and to publish your application's SLA accordingly when it is deployed in the cloud.
Before I put the point across, here is a quick recap of high-availability SLA downtime figures and the Azure SLAs. The service level agreement is calculated based on the percentage of availability defined for the system.
The following table shows availability percentages and the corresponding maximum possible downtime within the SLA.
| Availability % | Downtime per year | Downtime per month | Downtime per week |
| --- | --- | --- | --- |
| 90% ("one nine") | 36.5 days | 72 hours | 16.8 hours |
| 95% | 18.25 days | 36 hours | 8.4 hours |
| 97% | 10.96 days | 21.6 hours | 5.04 hours |
| 98% | 7.30 days | 14.4 hours | 3.36 hours |
| 99% ("two nines") | 3.65 days | 7.20 hours | 1.68 hours |
| 99.5% | 1.83 days | 3.60 hours | 50.4 minutes |
| 99.8% | 17.52 hours | 86.23 minutes | 20.16 minutes |
| 99.9% ("three nines") | 8.76 hours | 43.2 minutes | 10.1 minutes |
| 99.95% | 4.38 hours | 21.56 minutes | 5.04 minutes |
| 99.99% ("four nines") | 52.56 minutes | 4.32 minutes | 1.01 minutes |
| 99.999% ("five nines") | 5.26 minutes | 25.9 seconds | 6.05 seconds |
| 99.9999% ("six nines") | 31.5 seconds | 2.59 seconds | 0.605 seconds |
| 99.99999% ("seven nines") | 3.15 seconds | 0.259 seconds | 0.0605 seconds |
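The downtime allowances in the table are just arithmetic on the availability percentage; a quick sketch (taking a 365-day year, a 30-day month, and a 7-day week):

```python
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600
MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200
MINUTES_PER_WEEK = 7 * 24 * 60     # 10,080

def downtime_minutes(availability_pct: float, period_minutes: int) -> float:
    """Maximum downtime allowed within the SLA for the given period."""
    return (1 - availability_pct / 100) * period_minutes

# 99.9% ("three nines") allows 43.2 minutes of downtime per 30-day month.
print(round(downtime_minutes(99.9, MINUTES_PER_MONTH), 2))
# 99.95% allows about 21.6 minutes per month.
print(round(downtime_minutes(99.95, MINUTES_PER_MONTH), 2))
```

(Some published tables use an average month of about 30.4 days instead, which is why a few monthly figures above differ slightly from a flat 30-day calculation.)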
Considering the Azure services, the SLA varies for each component. The following table shows the SLA for each component that Microsoft offers.
Now, let us consider a simple application developed with a few cloud services. Windows Azure is just one possible provider here, chosen due to my familiarity with the platform, but the principle is applicable to almost every hosting environment.
Web Role – 2 instances – Availability 99.95% – maximum downtime per month 21.56 minutes
SQL Database – Availability 99.9% – maximum downtime per month 43.2 minutes
Storage – Availability 99.9% – maximum downtime per month 43.2 minutes
All the above components are critical and should be available for desired usage of the application.
Now, what is the SLA of this application?
We might be under the impression that the application's SLA would be 99.9%, since that is the lowest of all the critical components, meaning the application has a maximum downtime of 43.2 minutes in a month. Think again!! These components have their own independent SLAs, so they can go down at different times. Let us take the worst-case scenario (the design-for-failure principle 🙂).
The above possibility is within each service provider's SLA, but for the application, the possible downtime in a month is 21.56 + 43.2 + 43.2, roughly 108 minutes, or 1.8 hours, which works out to approximately 99.75% availability. Coming down from the perceived three nines (99.9%) to roughly 99.75% (believe it!!)
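The worst-case arithmetic can be sketched as follows, using the component SLAs from the example above and a 30-day month:

```python
# Component SLAs for the sample application.
slas = {"web role": 0.9995, "sql database": 0.999, "storage": 0.999}

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

# Naive view: the application is only as good as its weakest component.
naive = min(slas.values())

# Worst case under design-for-failure: every component fails at a different
# time, so the individual unavailability windows simply add up.
worst = 1 - sum(1 - a for a in slas.values())

print(f"naive composite SLA:       {naive:.4%}")
print(f"worst-case composite SLA:  {worst:.4%}")
print(f"worst-case downtime/month: {(1 - worst) * MINUTES_PER_MONTH:.1f} minutes")
```

If the outages instead overlapped perfectly (the best case), the downtime would shrink back toward that of the single worst component; the truth for your application lies somewhere between the two, which is exactly why quoting the weakest component's SLA alone is optimistic.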
Please be aware of this absolute high availability of dependent services and the worst-case failure scenarios before committing to your application's SLA. Kindly note this is applicable to every deployment, irrespective of whether the host is on-premises, Amazon, Azure, or any other data center.
Please remember SLAs are not a promise; they are just a goal. There may be penalties, such as refunds, if your service provider fails to meet their SLA, and ideally you should pass similar consequences on to your application users.
It is important to think about the absolute SLA of your application's availability, and also to:
1. Monitor the application for the delivered SLA
2. Provide SLA-breach benefits to the customer
3. Handle failures gracefully (maybe another blog post)
Tags – Absolute High Availability, How to calculate high availability, Cloud Service SLA, Application SLA, Calculate Application SLA
We are working on a migration project: an ASP.NET web site to be migrated to Windows Azure. In the process, we added a cloud project to the solution and added the ASP.NET web site as a web role. After setting up the other Azure-related configuration, when we tried running the solution (the cloud project), we got a weird error, as below:
After spending a good amount of time validating web.config and assemblies, and even restarting the Azure emulator, we couldn't fix it; still the same error.
In the Event Viewer, we saw the following warnings:
A process serving application pool ‘249f00d5-fd0c-4d07-848b-3dd39e5a824b’ terminated unexpectedly. The process id was ‘7860’. The process exit code was ‘0xfffffffe’.
Site 1273337584 was disabled because the root application defined for the site is invalid. See the previous event log message for information about why the root application is invalid.
The application ‘/’ belonging to site ‘1273337584’ has an invalid AppPoolId ‘249f00d5-fd0c-4d07-848b-3dd39e5a824b’ set. Therefore, the application will be ignored
It seemed like something was wrong with the app pool, or at least with the deployment.
We resolved the problem by disabling the Web Deployment Package setting for the web site project.
Right-click the ASP.NET web site project, go to Properties, and on the Package/Publish Web tab uncheck the Create deployment package as a zip file check box, as indicated below.
After disabling the Web Deployment Package setting, the project worked like a charm.
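For reference, in a Web Application project that same checkbox is persisted in the project file; to the best of my knowledge (this mapping is an assumption, verify against your own project file), it corresponds to the `PackageAsSingleFile` MSBuild property, so the unchecked state looks roughly like this:

```xml
<!-- Inside a PropertyGroup of the web project's .csproj/.vbproj -->
<PropertyGroup>
  <!-- Corresponds to unchecking "Create deployment package as a zip file" -->
  <PackageAsSingleFile>false</PackageAsSingleFile>
</PropertyGroup>
```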
It took us a couple of hours to resolve this; hopefully this saves some of your time.
I had the opportunity to listen to leading CIOs in India and was also honored to speak in front of them. Here is the agenda of the summit. I also attended and presented at a similar event held in Delhi.
My topic was “Monetizing Platform as a Service (PaaS) and Implementation Models” where PaaS here is (what else :)) Windows Azure.
Please do read the complete post here
Also, the slides are available at ssachin7
Microsoft India TechEd 2012
Microsoft India TechEd 2012 took place in Bangalore, 21-23 March. I attended as one of the TechEd Rock Stars 2012; the image at https://twitter.com/#!/sachinsan/status/184191621083054080/photo/1 is taken from the collage of Rock Stars. I was selected as a Windows Azure Rock Star.
As usual, every TechEd gets bigger than the previous one. Moorthy Uppaluri announced, during his awesome Kolaveri D dancing entry 🙂 for the closing note, that 10,000 people attended TechEd this year.
This time TechEd was all about Windows 8, Metro-style apps, and Windows Phone. You can sense that Microsoft is no longer just a software company; it has moved into devices, services, games, and more.
I was mostly following the architecture track, and the most interesting session was "Architecting your Life" by Ranganathan, known as Ranga. It was about knowing the functional and non-functional requirements of personal life and living intelligently. Ranga spoke about the bitter realities of a software professional's life and some very funny situations that almost everybody in IT might have gone through.
Some of the other sessions I attended were from Stephen Forte, one on Agile estimation and the other on the Kanban process. He is indeed a very engaging speaker who brings a lot of energy on stage and shares very good tips from his extensive experience in IT.
The demo extravaganza sessions from Nahas and Harish were enthralling, with lots of live demos, mostly around Windows 8 and Windows Phone. These guys know how to get a high-decibel response from the audience; these were high-voltage sessions.
At the "Rock Star Felicitation" I got the opportunity to meet the Microsoft senior leadership team and the other Rock Stars. It was a humbling experience to learn about the others' contributions as Rock Stars and their views on Microsoft technologies in general. It was an honor to be among these extremely talented and technology-focused individuals. I was especially impressed with the student Rock Stars: they are only on the verge of starting their professional careers, yet they have already experienced so much. This should put them on a fast-track career lane that keeps them ahead.
Finally, thanks to Microsoft for selecting me as one of the Rock Stars, and thanks for making our lives more exciting with technology.