Friday, March 5, 2010

Google App Engine downtime postmortem, nearly a perfect model for others

Google posted one of the most detailed and well-thought-out postmortems I've seen to explain what happened around their 2/24/10 App Engine downtime. Let's run it through the gauntlet:

Prerequisites:
  1. Admit failure - Yes
  2. Sound like a human - Yes, more in some sections than others
  3. Have a communication channel - Yes, the Google App Engine Downtime Notify group. Ideally it would have been linked to from the App Engine System Status Dashboard as well.
  4. Above all else, be authentic - Yes
Requirements:
  1. Start time and end time of the incident - Yes, including GMT time, and a highly detailed timeline of the entire event
  2. Who/what was impacted - Partly; this was also partially covered during the actual incident
  3. What went wrong - Yes, yes, and yes! Incredible amount of detail here.
  4. Lessons learned - Yes! Not only are there five specific action items, they are also introducing new architectural changes and customer choice as a result.
Bonus:
  1. Details on the technologies involved - Somewhat
  2. Answers to the Five Whys - No
  3. Human elements - heroic efforts, unfortunate coincidences, effective teamwork, etc. - Partial
  4. What others can learn from this experience - Partial
Takeaways and thoughts:
  1. The vast majority of the issues were training-related. This is an important lesson: all of the technology and process in the world won't help you if your on-call team is unaware of what to do. This is especially true under the stress of a large incident. Follow Google's advice and run regular on-call drills (including rare issues), keep documentation updated, and give on-call people the authority to make decisions on the spot.
  2. Extremely impressed with the decision to use this opportunity to improve their service by giving their customers the choice between datastore performance and reliability. This is a perfect example of turning downtime into a positive.
  3. Interesting insight into their process, from detecting an incident to communicating externally. Thirteen minutes from the start of the incident at 7:48am to the first external communication at 8:01am is not too shabby. It's not clear why it took until 8:36am to post updates to the downtime forum and the health status dashboard.
  4. The amount of time and thought that went into this postmortem shows how much Google cares about its service, and about perceptions of its reliability.
What could be improved:
  1. External communication could be faster. There's no reason not to post something the moment the investigation begins, to both the forum dedicated to downtime notifications and the health status dashboard. When the incident started the dashboard showed very limited data; that data should be automatic and real-time (see the sketch after this list).
  2. A link to this postmortem from the health status dashboard would make it a lot easier to find. I didn't see it until someone sent it to me.
  3. Timelines and concrete deliverables on the changes (e.g. on-call training sessions, documentation updates, new datastore feature release) would give us more confidence that things will actually change.
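
For illustration, here's a minimal sketch of what "automatic and real-time" could look like: the moment monitoring flags an incident, a bare-bones "investigating" notice goes out, and the human-written details follow later. The endpoint URL, payload fields, and check_error_rate() helper are all hypothetical stand-ins, not App Engine's actual tooling.

```python
# Minimal sketch: publish an "investigating" notice as soon as monitoring
# detects a problem, instead of waiting for a person to draft prose.
# Everything here (endpoint, payload fields, thresholds) is hypothetical.
import json
import urllib.request
from datetime import datetime, timezone

STATUS_API = "https://status.example.com/api/v1/incidents"  # hypothetical endpoint

def check_error_rate() -> float:
    """Placeholder for a real monitoring query (hypothetical)."""
    return 0.31  # e.g. 31% of datastore requests failing

def post_initial_notice(error_rate: float) -> None:
    """Publish a bare-bones notice the moment the investigation begins."""
    payload = {
        "status": "investigating",
        "started_at": datetime.now(timezone.utc).isoformat(),
        "summary": f"Elevated error rates ({error_rate:.0%}); investigating.",
    }
    req = urllib.request.Request(
        STATUS_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # post first, refine the details later

if __name__ == "__main__":
    rate = check_error_rate()
    if rate > 0.05:  # hypothetical alerting threshold
        post_initial_notice(rate)
```

The point isn't the plumbing; it's that the first external signal shouldn't depend on anyone finishing a root cause analysis.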

Wednesday, March 3, 2010

Cloud photos for presentations and documents

I've been working on a talk I'm giving at Cloud Computing Congress in China and (I'm not ashamed to admit) I spent a lot of time looking for just the right cloud photo. I thought I'd share the best photos I came across, in case you ever need to give a talk about The Cloud (or just love photos of clouds).

Note: These photos are all licensed under Creative Commons, and came from a combination of Flickr Creative Commons search and Google Images (manually excluding non-Creative Commons images).

The photos are divided into four sections:
  1. Simple clouds
  2. Dark clouds
  3. Powerful clouds
  4. Unique clouds


Simple clouds

[photos]


Dark clouds

[photos]


Powerful clouds

[photos]


Unique clouds

[photos]


You can find all of these cloud photos here. If you have other favorites and are willing to share, please add a link to them in the comments.

Tuesday, March 2, 2010

A guideline for postmortem communication

Building on previous posts, the following is a proposed Guideline for Postmortem Communication (a rough checklist sketch follows below):

Prerequisites:
  1. Admit failure - Hiding downtime is no longer an option
  2. Sound like a human - Do not use a standard template, do not apologize for "inconveniencing" us
  3. Have a communication channel - Set up a process to handle incidents prior to the event (e.g. public health dashboard, status blog, twitter account, etc.)
  4. Above all else, be authentic - You must be believed to be heard
Requirements:
  1. Start time and end time of the incident
  2. Who/what was impacted - Should I be worried about this incident?
  3. What went wrong - What broke and how you fixed it (with insight into the root cause analysis process)
  4. Lessons learned - What's being done to improve the situation for the future, in technology, process, and communication
Bonus:
  1. Details on the technologies involved
  2. Answers to the Five Whys
  3. Human elements - heroic efforts, unfortunate coincidences, effective teamwork, etc.
  4. What others can learn from this experience
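
To make the guideline concrete, here's a minimal sketch that treats it as a checklist: given a draft postmortem as a dict of sections, it reports which required items are missing and how many bonus items are covered. The field names mirror the guideline above, but the dict format, the review() function, and the sample values are my own illustration, not an established standard.

```python
# The four required and four bonus items, taken from the guideline above.
REQUIRED = [
    "start and end time",
    "who/what was impacted",
    "what went wrong",
    "lessons learned",
]
BONUS = [
    "technologies involved",
    "five whys",
    "human elements",
    "what others can learn",
]

def review(draft: dict) -> None:
    """Print a simple pass/fail review of a draft postmortem."""
    missing = [item for item in REQUIRED if not draft.get(item)]
    covered = [item for item in BONUS if draft.get(item)]
    if missing:
        print("Missing required sections:", ", ".join(missing))
    else:
        print("All required sections present.")
    print(f"Bonus sections covered: {len(covered)} of {len(BONUS)}")

if __name__ == "__main__":
    # Example values only -- not a real incident report.
    review({
        "start and end time": "7:48am - 10:15am PST (example values)",
        "who/what was impacted": "applications relying on the datastore",
        "what went wrong": "draft pending root cause analysis",
        "lessons learned": "",  # empty -> flagged as missing
    })
```

Nothing about the format matters; the point is simply that the requirements above are concrete enough to check mechanically before hitting publish.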