Tuesday, July 20, 2010

Why Transparency Works

We've talked about the benefits of transparency. We've talked about implementing transparency. We've talked about transparency in action. What we haven't yet talked about is...why the heck does transparency work? Why does transparency make your users happier? Why do customers trust you more when you are transparent? Why does simply knowing what is going on make us OK with major problems? My theory is simple: transparency gives us a sense of control, and control is required for happiness. Allow me to elaborate.

Downtime and learned helplessness
The concept of learned helplessness was developed in the 1960s and 1970s by Martin Seligman at the University of Pennsylvania. He found that animals receiving electric shocks, which they had no ability to prevent or avoid, were unable to act in subsequent situations where avoidance or escape was possible. Extending the ramifications of these findings to humans, Seligman and his colleagues found that human motivation [...] is undermined by a lack of control over one's surroundings. (source)
Learned helplessness was discovered by accident while Seligman was researching Pavlovian conditioning. His experiment was set up to associate a tone with a (harmless) shock, to test whether the animal would learn to run away from the sound of the tone alone. In the now-famous experiment, one group of dogs was restrained and unable to escape the shock for a period of time (i.e. this group had no control over its situation). Later, this group was placed in an area that did allow them to escape the shock; unexpectedly, the dogs stayed put. The shocks kept coming, yet the dogs simply curled up in the corner and whimpered. These dogs exhibited depression, and in a sense gave up on life, because the negative events were seemingly random. Seligman concluded that "the strongest predictor of a depressive response was lack of control over the negative stimulus." What is downtime if not a lack of control over a negative stimulus?

The Cloud and loss of control
Many concerns come up when businesses consider the cloud, but as the IDC survey below shows, the overriding concern is rooted in a loss of control:

You give up a lot of control in exchange for reduced cost, higher efficiency, and increased flexibility. Yet the desire for control persists, and the remaining bits of control you maintain become even more valuable.

Downtime kills our sense of control
Downtime is, quite simply, a negative event over which you have almost no control. Especially when using SaaS/cloud services, your remaining semblance of control vanishes the moment the service goes down and you have no insight into what is going on. We are the dogs trapped in the shock machine, whimpering in the corner.

As I described in my talk, downtime is inevitable. Thanks to things like risk homeostasis, black swan events, unknown unknowns, and our own nature, there is no way to avoid failure. All we can do is prepare for it, and communicate/explain what is going on. And that is the key to keeping us from the fate of a depressed canine. Transparency gives us a sense of control over the uncontrollable.

How transparency gives us back the sense of control
Imagine walking through the park, the sun shining, the birds singing. All of a sudden you notice a strong pain in your arm. Your mind jumps to the worst. Are you having a heart attack, did something just bite you, are you getting older and sicker? Then a split second later you remember...your buddy jokingly punched you earlier in the day! The punch must have been harder than you remember, but it explains the pain. Instantly you feel better. Though the pain is the same, you understand the source. You have an explanation for the pain. Transparency delivers that explanation.

When Amazon goes down, or Gmail isn't loading, users feel pain. Part of that pain comes from the inconvenience of not being able to get done what you want, or the lost revenue that comes with downtime. But just as painful is the sense of fatalistic helplessness, especially if someone is breathing down your neck expecting you to fix the problem. Without insight into what is happening with the service, you are completely without control. If, on the other hand, the service provides an explanation (through a public health dashboard, a status blog, or a simple tweet), your fatalistic reaction turns to concrete concern. Your mind goes from assuming the worst (e.g. this service is terrible, they don't know what they are doing, it always fails) to focusing on a real and specific problem (e.g. some hard drive in the datacenter failed, they had some user error, this'll be gone soon). A specific problem is fixable; an unexplained pain is not. Transparency brings the pain down to a specific and knowable problem, while also holding the provider accountable for their issues (which indirectly gives you even more control). Or better said:
Seligman believes it is possible to change people's explanatory styles to replace learned helplessness with "learned optimism." To combat (or even prevent) learned helplessness in both adults and children, he has successfully used techniques similar to those used in cognitive therapy with persons suffering from depression. These include identifying negative interpretations of events, evaluating their accuracy, generating more accurate interpretations, and decatastrophizing (countering the tendency to imagine the worst possible consequences for an event). (source)
By providing a sense of control, transparency is one of the keys to keeping us happy, productive, and sane in an increasingly uncontrollable world.


  1. Great points. I keep hearing a lot of frenzy about "cloud security" and when it comes down to it, 99% of that is the loss of a sense of control (though often, not really actual loss of control). Transparency (and, I would argue, moving from that, which is fundamentally read-only, into interactivity) helps with that a lot.

  2. Interesting, what do you see interactivity allowing users to do? Maybe vote on how the service provider should handle the issues, or how seriously they should be taking it?

  3. Well, so we were looking at this problem as we were speccing our public health dashboard. Let's say your monitoring doesn't detect everything that could be fundamentally wrong (always happens in complex systems). How do you detect from users that there's something wrong? The formal support channel has all kinds of hurdles, so people do things like "have a monitor on twitter to see if people are mentioning you a lot," which is pretty imprecise (a monitor on hits to your public health dashboard might be interesting...). The usual point of the traditional tiers of support is to weed out problems layer by layer, often introducing deliberate delays (e.g. free support for our hardware and software products is on our forums, and forum posts are embargoed so that we don't see them internally for a little while, to give the community the chance to answer the questions). Great for functionality questions. But we want to know about downtime problems immediately - it's a conflict.

    How can you let users notify you more precisely that they think there's a system problem? We were considering some kind of "digg"-like voting feature next to the lines on the dashboard - the "I think something's wrong" button. If a bunch of people are clicking and saying "I think something's wrong" within a given time slice, it probably is.

    It could also probably be used for a "this is pissing me off" button to drive severity of fix, if you have long-standing items. But yeah, it's about applying the idea-market/digg voting patterns to downtime.
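    The time-slice idea above can be sketched in a few lines. This is a minimal illustration, not any actual dashboard's implementation; the class name, window size, and threshold are all made up for the example:

```python
from collections import deque
import time

class WrongnessDetector:
    """Sketch of the "I think something's wrong" button: count clicks
    in a sliding time window and flag a probable problem when enough
    users agree within that window."""

    def __init__(self, window_seconds=300, threshold=10):
        self.window = window_seconds   # time slice to look back over
        self.threshold = threshold     # clicks needed to raise a flag
        self.clicks = deque()          # timestamps of recent clicks

    def record_click(self, now=None):
        """Record one click; return True if the window now holds
        enough clicks to suggest a real problem."""
        now = time.time() if now is None else now
        self.clicks.append(now)
        # Drop clicks that have aged out of the window.
        while self.clicks and self.clicks[0] < now - self.window:
            self.clicks.popleft()
        return len(self.clicks) >= self.threshold
```

    The same counter could feed the "this is pissing me off" severity idea by keeping one detector per dashboard line item.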

    I'm also a fan of letting people put in their email address so that you will email them when a specific outage is over; perceived downtime of users is often from "last time I used it and it worked" to "when I come back and it works again" - time spent after you've fixed it waiting for them to poll your dashboard or whatever counts against you perception-wise. And you don't want to spam your whole user base every time there's a problem. So letting people come say "I think there's a problem" and, if there is, sign up to be contacted when it's better.
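    The per-outage notification idea could look something like this hypothetical sketch (the class and the `send_email` callback are assumptions for illustration, not a real service's code); the point is that only people who asked about a specific outage get emailed, never the whole user base:

```python
class OutageNotifier:
    """Sketch of per-outage email subscriptions: users who come to the
    dashboard to say "I think there's a problem" can leave an address,
    and are emailed when that specific outage is marked resolved."""

    def __init__(self, send_email):
        self.send_email = send_email   # callback, e.g. wrapping an SMTP client
        self.subscribers = {}          # outage_id -> set of addresses

    def subscribe(self, outage_id, address):
        self.subscribers.setdefault(outage_id, set()).add(address)

    def resolve(self, outage_id):
        # Notify everyone who asked about this outage, then clear the
        # list so nobody is emailed twice for the same incident.
        for address in self.subscribers.pop(outage_id, set()):
            self.send_email(address, "The issue you reported is resolved.")
```

    This directly attacks the perceived-downtime gap described above: the email closes the window between "when it was actually fixed" and "when the user happened to check again."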

