This article is based extensively on an excellent course on ethical decision making by Professors Palazzo and Hoffrage, both of the University of Lausanne [1].
In the eighteenth century, the philosopher Immanuel Kant argued that to do evil is to act against reason. But Kant’s view was fundamentally challenged by the horrors of World War II. Auschwitz, for example, where over a million people were murdered, was the result of a careful application of reason. Consequently, post-war philosophy radically broke the link between evil and reason and focused instead on the social conditions that promote evil. Asch, Milgram and Zimbardo demonstrated with their famous experiments [2], [3], [4] that good people can do evil things. The conclusion is that evil is a consequence of a strong context.
There are various theories for ethical decision making, e.g. the Categorical Imperative [5] and Utilitarianism [6]. But reason-based decisions are not always possible because we might be in a strong context that switches off reason.
Palazzo and Hoffrage use the term “ethical blindness” to describe the temporary and unconscious deviation from one’s own values that can occur in a strong context. It is this ethical blindness that can lead good people to do bad things without being aware of it. They cite the Challenger space shuttle disaster, the collapse of Enron, and the failure of Lehman Brothers as examples of unethical decision making resulting from ethical blindness.
So, how does a strong context arise?
There are three contextual layers that can contribute to ethical blindness. The context is especially strong when these three layers are aligned.
· Situational context where one or more types of pressure might prevail: authority pressure (e.g. Milgram), peer pressure (e.g. Asch), role pressure (e.g. Zimbardo), and time pressure.
· Organizational context. Discussed further below.
· Societal context, which carries ideologies such as the maximization of shareholder value.
Let’s examine what organizations might be doing that promotes ethical blindness.
· Organizations adopt routines and create structures to operate efficiently in a complex world. But routines switch off reason, because we do not need to think when executing them. And while structures are a means of categorizing work for efficiency, they often produce silos and their associated problems [7].
· Organizations can also inadvertently promote ethical blindness when they set objectives, define incentives, and evaluate performance. If objectives are too tough, the risk of rule breaking increases. If incentives reward only individuals, and not teams, they create a highly individualistic competition in which people look after only themselves. And if the evaluation system is “Darwinistic” in singling out low and high performers, people fear humiliation, and fear is a strong driver of ethical blindness.
· Furthermore, authoritarian leadership can strengthen the context: aggressive language creates fear, and employees who perceive the locus of control as lying outside themselves may be led to do terrible things.
Many organizations recognise the risk of unethical decision making and attempt to address it through compliance systems. These are often highly ineffective, however, because they assume that bad things are done by bad people. This is an old-style, mechanistic view of people: that they follow the “simple model of rational crime” [8] and rationally weigh costs against benefits. While this might deter bad people, it does not work against the unconscious routines of normal people.
To protect against unethical decision making, Palazzo and Hoffrage suggest that leaders should:
1. Analyse their organization for factors that may increase the risk of ethical blindness such as time pressure, obedience, performance management systems, role expectations, and perceived locus of control.
2. Manage these factors to reduce pressure on the team.
Spider-Man’s Uncle Ben put it nicely when he said: “With great power comes great responsibility”.
References:
[1] https://www.coursera.org/learn/unethical-decision-making
[2] https://en.wikipedia.org/wiki/Asch_conformity_experiments
[3] https://en.wikipedia.org/wiki/Milgram_experiment
[4] https://en.wikipedia.org/wiki/Stanford_prison_experiment
[5] https://en.wikipedia.org/wiki/Categorical_imperative
[6] https://en.wikipedia.org/wiki/Utilitarianism
[7] Gillian Tett, “The Silo Effect: The Peril of Expertise and the Promise of Breaking Down Barriers”
[8] Dan Ariely, “The (Honest) Truth About Dishonesty”