What We Should Mean by ‘Accountability’
The term ‘accountability’ is frequently used in the context of teams and organizations, but it is often deployed in ways that are vague, contradictory or counterproductive.
In fact, we sometimes define the term simply through its absence: it’s not uncommon for teams or organizations to identify a ‘lack of accountability’ as a problem, without clearly articulating how this void should be filled. This blog post surveys some of the more productive notions of accountability from the literature, and the role of management in implementing them.
Classifying organizational cultures
Before discussing accountability, we must be clear about what a high-performance culture looks like, so that we can assess the suitability of our preferred definition (and see why some culturally ingrained definitions should be discarded). A helpful model for this purpose is the organizational typology proposed by Ron Westrum (2004), which defines three distinct organizational climates in terms of how information flows.
| Pathological (power-oriented) | Bureaucratic (rule-oriented) | Generative (performance-oriented) |
| --- | --- | --- |
| Messengers “shot” | Messengers neglected | Messengers trained |
| Failure leads to scapegoating | Failure leads to ‘justice’ | Failure leads to inquiry |
| Low cooperation | Modest cooperation | High cooperation |
| Bridging discouraged | Bridging tolerated | Bridging encouraged |
| Responsibilities shirked | Narrow responsibilities | Risks are shared |
| Novelty crushed | Novelty leads to problems | Novelty implemented |
Westrum organizational typology model (Westrum 2004)
There is a growing body of evidence that these cultural climates predict organizational performance, and that generative cultures outperform others in a great many contexts (Westrum 2004; Edmondson 1996; Forsgren and Humble 2016). As Westrum observes:
The effects of information flow climate are pervasive. And they are pervasive because the climate shapes three key variables: alignment, awareness, and empowerment.
Literature on safety-critical industries, in which failure may lead to injury or fatality, will often examine ‘safety cultures’. Organizational development theorists and human resources professionals in other forms of knowledge work may speak of ‘learning cultures’. Regardless, while the impact of failure may vary across industries, a generative culture typically predicts organizational performance, and the methods by which we achieve one are broadly consistent. Therefore, this post uses the terms ‘generative culture’, ‘safety culture’ and ‘learning culture’ somewhat interchangeably.
A final note on terminology: some of the authors cited focus their work on self-organising teams or safety-critical industries in which individuals act relatively autonomously. My view is that we should apply these definitions widely: a customer call centre agent with structured scripts to follow must still make independent decisions (e.g. when to escalate a customer complaint) and is operating within a complex wider system consisting of department processes, customer interactions, competing business priorities, market competition, etc. Jobs in which the work is both completely and coherently defined, and also completely isolated from complex wider systems, are vanishingly few.
Flawed interpretations of ‘accountability’
In different contexts, ‘accountability’ may be used in the following ways.
- Formal performance management processes, such as performance reviews, promotions and employee gradings/rankings, may be seen to provide some degree of accountability: in theory, good performance is recognised and underperformance is sanctioned. Moreover, if someone is over-promoted despite (perceived) poor performance then this may be seen as a ‘lack of accountability’ (even if we have not yet defined what accountability should be). However, formal HR processes like annual reviews and rankings are known to often be counterproductive due to delayed feedback, unfair review processes, and a focus on criticism (Carucci 2020). While managers should facilitate useful, frequent feedback to support personal development, an annual performance review is a poor mechanism for accountability.
- Holding someone “accountable” is sometimes a euphemism for finding someone to publicly blame. It is well known that such a punitive approach undermines psychological safety, fostering a culture in which problems are hidden rather than acknowledged and learned from (Edmondson 2018). Thus, punitive approaches diminish efforts to build a generative culture.
- Sometimes the term is used to distinguish the key decision-maker from others responsible for doing the work (e.g. in a RACI matrix). This may tell us who is accountable, but it doesn’t tell us what accountability should look like.
- Sometimes “accountable” is used as a synonym for “responsible” (e.g. Fisher 2000). Certainly, accountability implies a responsibility to our work commitments, and this view may even be sufficient when things are going well. However, without also examining how we deal with failure (and especially how we learn from it), we are still missing something important.
- We also speak of accountability for serious transgressions (e.g. behaviours which contribute towards a toxic workplace culture, or which may even be illegal). These cases are important to deal with quickly (Felps 2006) but should be distinguished from cases in which failures can occur despite the best efforts of all. Treating every failure as a moral failure of character to be sanctioned will engender a pathological culture.
- Tiresomely, the term has also been co-opted from the language of therapy to gaslight an audience into thinking a mistake has been recognised. When a YouTuber promises to “take accountability” for their latest crypto scam, you know you’re witnessing a PR manoeuvre and that zero consequences will follow.
A better approach
To improve upon these limited approaches, we can take inspiration from two complementary views on what ‘accountability’ should look like in high performing organizations. We will then discuss the preconditions which management must institute in order for this approach to be effective.
‘Accountability’ as responsibility
Intuitively, accountability implies responsibility. As Kimball Fisher (2000) points out, individual clarity on roles is necessary to avoid ‘social loafing’. Management must ensure ownership and responsibilities are clear within and across teams.
Fisher also recommends individuals should be responsible for both tasks (the work being done) and results (the outcome of the work). OKRs are a popular method for implementing such systems of accountability today. Responsibility in this sense is a consequence of role clarity and transparency. It should be clear what each team member is working on, how these tasks contribute towards expected outcomes, the status of current work, and measures of results.
However, we could do all of this and still respond unproductively when things go wrong. We might regress to punitive measures, undermining efforts to build a generative culture. We might assume everyone did their best and move on, ignoring opportunities for learning. Or management might intervene directly, failing to engage frontline staff, diminishing team autonomy, and again missing important learning opportunities.
‘Accountability’ for learning
In the context of failure, Sidney Dekker (2014) suggests we should interpret ‘accountability’ literally:
You can hold people accountable by letting them tell their story, literally “giving their account.”
In other words, let people explain what happened, what should be done to fix things, and (ideally) how to improve in future. Accountability of this kind can be a great catalyst for learning.
This approach has other multiplicative benefits which are suppressed by punitive and bureaucratic interpretations of accountability:
- As Dekker observes, “Storytelling is a powerful mechanism for others to learn vicariously from trouble.” Personal stories resonate with peers more strongly than a process change or training video ever could.
- It engages frontline staff in identifying problems, from which it is a small step (with the right supporting structures) to have teams drive meaningful improvements to their own processes.
- It contributes to a culture of psychological safety (in which individuals feel confident to speak up and volunteer information), improving information flow. In general, we must strive not to stigmatise failure, or else opportunities for learning will disappear.
Systems and the myth of ‘human error’
Dekker also recommends that we take a systems view of team performance. While failures may naively be blamed on mistakes or ‘human error’, most organizational systems are driven close to the limits of what they can safely deliver (and often beyond). Deadlines, competing priorities and brittle processes requiring workarounds all create conditions in which failures are less the product of personal mistakes than a consequence of the organizational constraints and pressures which staff do their best to navigate with the limited information available to them in the moment. In this context, failure is not a moral deficiency but a product of systems complexity. Our job is to find ways to manage this complexity in order to reduce the prevalence or scale of failure, as outlined below.
Preconditions for accountability
Giving frontline staff a voice to tell their stories is a necessity, but there are other prerequisites to accountability if we wish to sustain a generative culture.
Authority
Dekker calls out the “authority–responsibility mismatch” as a common problem:
It is impossible to hold somebody accountable for something over which that person had no authority.
He observes that in most safety-critical applications, operators must continually balance competing demands for efficiency and thoroughness. If an organization doesn’t create time and space for staff to be thorough, if it doesn’t grant them the authority “to live up to the responsibility” asked of them, then the staff cannot be held accountable. The organization must first reform itself.
Team capabilities
Fisher (2000) provides a helpful model for thinking about self-organising teams: he suggests that in addition to authority and accountability, teams also require the resources (i.e. skills, personnel, tools) to do their job well, and relevant information to make informed decisions. (These four requirements — Authority, Resources, Information, Accountability — can be recalled with the mnemonic ARIA.) It is management’s job to develop these capabilities in order to grow the boundaries within which the team can operate with accountability.
Psychological safety
Another important prerequisite to a learning/safety culture is psychological safety. Writing about the crash of Valujet flight 592 in Miami and the corporate conditions which contributed to safety failures, Dekker (2014) observes:
Fear of prosecution stifles the flow of information about such conditions. And information is the prime asset that makes a safety culture work.
Without psychological safety, information won’t flow and efforts to build a generative culture will fail. Punitive forms of accountability revolving around sanctions and discipline will undermine these efforts.
Amy Edmondson (2018) provides helpful guidance for leadership to build and sustain psychological safety. She summarises the activities leaders should engage in across three categories of work (“setting the stage”, “inviting participation” and “responding productively”).
| Category | Setting the Stage | Inviting Participation | Responding Productively |
| --- | --- | --- | --- |
| Leadership tasks | Frame the Work | Demonstrate Situational Humility | Express Appreciation |
| Accomplishes | Shared expectations and meaning | Confidence that voice is welcome | Orientation toward continuous learning |

The Leader’s Tool Kit for Building Psychological Safety (Edmondson 2018, Chapter 7)
While sanctioning clear violations of agreed standards is sometimes necessary, management must make every effort to determine that no other factors are contributing to the problem.
How do we know when to sanction violations of standards? How should we reason about other contributing factors? To answer these questions we need to understand the nature of dynamic complexity.
Navigating dynamic complexity
While giving frontline staff the opportunity to give their own accounts of events is important, we must take care to draw appropriate conclusions from these accounts, since individuals are unlikely to have the full picture. To that end, we must learn how to distinguish different categories of failure.
When reasoning about outcomes in a complex dynamic system, we often erroneously apply what Dekker (2011) refers to as “linear, Newtonian-Cartesian logic” — that is, we view an outcome as the product of a linear chain of events with a “root cause”.
Allspaw (2012) points out that such linear thinking “validates hindsight and outcome bias”. Instead, failure in complex systems is typically the result of “multiple contributing factors, each necessary but only jointly sufficient”. Therefore, when we analyse and synthesise recommendations from the accounts we hear, we must guard against linear thinking. Learning from failure in dynamic systems requires diligence and structured methods to overcome human cognitive limitations in the face of systems complexity.
Edmondson offers three failure categories which we can use to orient ourselves in order to respond productively based on the nature of the failure.
| | Preventable | Complex | Intelligent |
| --- | --- | --- | --- |
| Definition | Deviations from known processes that produce unwanted outcomes | Unique and novel combinations of events and actions that give rise to unwanted outcomes | Novel forays into new territory that lead to unwanted outcomes |
| Common Causes | Behavior, skill, and attention deficiencies | Complexity, variability, and novel factors imposed on familiar situations | Uncertainty, experimentation, and risk taking |
| Descriptive Term | Process deviation | System breakdown | Unsuccessful trial |

Productive Responses to Different Types of Failure (Edmondson 2018, Chapter 7)
An examination of the techniques available to learn from ‘complex’ and ‘intelligent’ failures is beyond the scope of this post, but one well-researched example is the US military’s approach to conducting after-action reviews (AARs). The following points are worth highlighting because they contrast with the relative lack of effort some of us have probably experienced in “post-mortems” or “Agile” retrospectives.
- The role of the AAR facilitator is well defined and demanding. Facilitators are trained in how to conduct AARs. Doctrine is clear that facilitators should plan, prepare and rehearse AARs to be effective (US Army 1993).
- AARs are a “knowledge management challenge” (Morrison 1999). Data are derived from personal observations from frontline personnel, local leaders and the facilitator; from informal discussions; through interactions in the AAR; from operational models; and from analysed outcomes. The facilitator should “filter and organize the data and then prepare to present it in the AAR”.
- AARs should be held in a timely fashion while events are fresh in memory. Moreover, AARs are embedded into structured Deming/PDCA cycles (Darling et al 2005) in order to “connect past experience with future actions” — i.e. to operationalise learning. (Contrast this with how some organizations run Agile retrospectives, which are often held on an arbitrary cadence and become merely a routine.)
- Formal AARs may still be complemented with frequent, informal discussions for real-time feedback (“hot washes”), held during or immediately after events (Morrison 1999).
A complete definition
Having considered a number of perspectives and models in the literature, we can now complete our definition.
Accountability entails accepting responsibility for the tasks we perform, the outcomes of our work, and for learning from those outcomes.
It is achieved by letting those doing the work give their account when failures occur. This transparency lets individuals and teams demonstrate ownership, and is an effective means to implement organizational learning through the power of personal stories.
Prerequisites include authority, team capabilities (including skills, resources and information) sufficient to do the work, and psychological safety.
Management should support learning and improvement actions based on the category of failure, taking particular care not to conflate complex, intelligent and preventable failures. In most organizations this must include structured learning activities so that complex failures can be reasoned about.
References
- Allspaw, J. (2012). Each Necessary but only Jointly Sufficient
- Carucci, R. (2020). How to Actually Encourage Employee Accountability
- Darling, M. et al. (2005). Learning in the Thick of It
- Dekker, S. (2011). Drift into Failure
- Dekker, S. (2014). The Field Guide to Understanding Human Error (3rd Edition)
- Edmondson, A. (1996). Learning from Mistakes Is Easier Said Than Done: Group and Organizational Influences on the Detection and Correction of Human Error
- Edmondson, A. (2018). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth
- Felps, W. et al. (2006). How, When, and Why Bad Apples Spoil the Barrel: Negative Group Members and Dysfunctional Groups
- Fisher, K. (2000). Leading Self-Directed Work Teams
- Mastiglio, T. et al. (2011). Current Practice and Theoretical Foundations of the After Action Review
- US Army (1993). A Leader’s Guide To After-Action Reviews
- Westrum, R. (2004). A Typology of Organisational Cultures