Thinking, Fast and Slow Summary

Intuition or deliberation? Where you can (and can't) trust your brain

By Daniel Kahneman

What guides you in your decision-making process?

Thinking, Fast and Slow should be read slowly. In it, world-famous psychologist and Nobel laureate Daniel Kahneman summarizes decades of research on thought and decision making. It offers insight into the less-than-perfect performance of the rational mind, which we can use to identify the mental glitches that get us into trouble.

A large part of Kahneman's research was done collaboratively with his colleague Amos Tversky, who passed away in 1996 and to whom the book is dedicated. Kahneman argues that our rational minds fall prey to subtle irrationality without our even realizing it, and that errors trip us up even when we think we're being logical. So if you're interested in behavioral economics and cognition, or want to engage your slower, more rational self to make better decisions, you'll find this an interesting and insightful read.

The central theme of the book is what Kahneman terms our System 1 and System 2 thinking. He calls these the "fictitious characters" of our minds, which govern our judgment and decision making. He is at pains to point out that they have no actual "home" in the brain; they are nicknames that give us a language for understanding our intuitive and deliberate ways of thinking, how we use them together, and how they influence our choices.

System 1 is fast and emotional, requiring minimal effort and no voluntary control. System 2, on the other hand, is slower, deliberate, and logical, and needs more concentration. The first is automatic; the second is effortful. While we may think that System 2 is where we spend most of our time, our story's main character is System 1. System 2 has to bring self-control to the more impulsive, continuously active workings of System 1, but it isn't always available or inclined to do so.

Both systems are remarkable, with unique capabilities, but each comes with flaws. Kahneman believes that these errors are caused not by the intrusion of emotion but by glitches in our cognitive design. He explores these flaws in our thinking, such as our biases, anchors, and heuristics (the rough guesses we often make when deciding). We also have difficulty thinking statistically; we have to tone down our liking for hindsight and become more aware of how overconfidence creates a sense of certainty where none exists. Kahneman then expands this into the context of behavioral economics, arguing that our financial and practical decision making isn't as rational as economists previously thought. He concludes with an explanation of our remembering selves and our experiencing selves, and how the two systems influence both aspects of self. This leads to different ways of understanding happiness, which bears on policy questions around well-being.

We'll briefly explain the two systems that drive the way we think, the cognitive biases we're susceptible to, and how Kahneman relates his research on judgment and decision making to the fields of behavioral economics and the psychology of well-being.

The Two Systems: The Language for Understanding Our Minds

Are you ready to meet your two fictitious friends?

The mind's processes are divided into two distinct systems: the automatic "System 1" and the effortful "System 2."

System 1 operates automatically and intuitively, with little or no effort, and without much conscious control. How often do you get gut feelings? Our gut reactions and intuitive answers stem from System 1.

Let's look at simple questions such as, 'What's 2 + 2?' or 'Complete the phrase: bread and…' We would typically conclude the answers to be four and butter. Knowing the answers to these questions results from a fast mode of thinking and our ability to deal with information swiftly and efficiently.

System 1 has its evolutionary advantages: it keeps us alert and reactive to the world around us. If we hear a loud, unexpected sound, System 1 is triggered, and we can't help but divert our attention to it. Yet thanks to System 1, we don't jump at just any surprise. This is because of one of System 1's marvels: our associative memory, which distinguishes unexpected occurrences from regular events in a fraction of a second. It also generates a quick idea of what might be coming our way so that we can react rapidly.

Associative memory draws on a fantastic repertoire of acquired skills and a lifetime of practice. The more skilled we are, the more reliable our intuitive judgments and decisions become. An experienced doctor can often make an accurate diagnosis after just minutes of interaction with a patient. A master chess player can, within seconds, analyze a chessboard and intuitively know the right moves that follow. However, skills are context-dependent, and life isn't a chessboard. System 1 helps us to process quickly, but it's not always reliable. And, unfortunately, it doesn't let us know when it gets things wrong.

Enter System 2, our mind's slower, analytical mode. Whereas System 1 quickly solves 2 + 2, System 2 is activated by problems such as 17 × 24 (which, if you want to avoid the mental strain, equals 408). This kind of thinking and problem-solving is more challenging and complex, and requires a greater level of conscious thought. System 2 is the more demanding of the two systems, but it also tends to be a tad lazy.

How Do These Systems Work Together?

System 2 is the supporting character who thinks it has the lead role. Unfortunately, it generally does the bare minimum, so we spend most of our time in System 1 mode. System 1 isn't faultless either: it's a little quirky and prone to making mistakes. Let's see Systems 1 and 2 in action by looking at this problem.

'A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?'

The majority of us probably guessed 10¢. If you did, you've been fooled by your fast-thinking System 1. We tend to jump to 10¢ because the sum of $1.10 divides very naturally into a unit of $1 and a unit of 10¢; in other words, 10¢ intuitively sounds correct. However, if we slow down, call upon System 2, and put in a bit more effort to do the math, we'll realize where we went wrong. If the ball costs 10¢, then the bat would cost $1.10, bringing the total to $1.20. The correct answer is actually 5¢.
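For readers who want System 2's arithmetic spelled out, here is a minimal worked derivation. The notation (b for the ball's price, B for the bat's) is ours, not the book's:

\[
\begin{aligned}
b + B &= 1.10 && \text{(ball plus bat costs \$1.10)} \\
B &= b + 1.00 && \text{(bat costs \$1 more than the ball)} \\
b + (b + 1.00) &= 1.10 \\
2b &= 0.10 \\
b &= 0.05
\end{aligned}
\]

So the ball costs 5¢ and the bat $1.05, which together sum to $1.10 as required.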

Our mind favors the more straightforward System 1 over the more demanding System 2 because it opts for the least effort. However, our two friends work in tandem, with System 1 running automatically and System 2 idling in low-effort mode. Only when we encounter something unexpected, or experience cognitive strain, do we engage System 2's slower thinking.

As we may have experienced in the bat-and-ball puzzle, System 2 isn't always "on the ball," so to speak, when it comes to letting System 1 know it has gotten things wrong. System 1's quick responses are impressive but not always accurate, and System 2 doesn't always come to the rescue to make us pause and reflect. When it doesn't, we never realize that the two systems failed to work in tandem.

Let's unpack some of System 1's weaknesses and inconsistencies, such as priming, anchoring, heuristics, and faulty statistical thinking.

We're Usually on Autopilot, Making Us Susceptible to Priming

To understand priming, let's look at how our minds are primed to think in specific ways. For example, if you're given the word fragment SO_P, what letter fills in the blank?

Well, a lot depends on context. What if you were told to think of the word "EAT" before you were asked to fill in the blank? You'd probably think the word is "SOUP." However, if you were given the word "SHOWER" rather than "EAT," you'd be primed to think of "SOAP" instead. This is what's known as priming.

Studies on memory reveal that priming also impacts our actions and emotions. In a fascinating experiment, psychologist John Bargh asked two groups of students at New York University to assemble four-word sentences from sets of five scrambled words. One group's sets were neutral, such as 'finds, he, yellow, it, instantly.' The other group's sets contained words associated with the elderly, such as 'Florida, forgetful, bald, gray, wrinkle.'

Once they had finished this task, the young participants were sent to another venue down the hall, ostensibly to take part in a second experiment. That short walk was the heart of the study: the researchers unobtrusively measured the time it took each group of students to walk the corridor. As Bargh had predicted, the students who had made sentences from elderly-themed words walked down the hallway significantly more slowly than those given the neutral words. Slow movement had been primed by the words associated with old age.

Priming Makes Us Resort to Anchoring

Let's try this exercise on priming and anchoring.

'Was Gandhi more or less than 95 years old when he died? How old was Gandhi when he died?'

What age did you think of?

Gandhi was in fact 78 when he died, but you may have guessed older because you were primed by the number 95. If we rephrased the question as, 'Was Gandhi more or less than 40 years old when he died?' you might be tempted to suggest he died at a younger age. This example shows how we are first primed to lean a certain way, and then that lean solidifies into an anchor.

Anchoring has complex dynamics. Anchoring effects are particularly powerful in money-related transactions, such as how much we offer on a property based on the asking price, or how much we contribute to a cause. How often have you walked into a shop and been influenced by special deals and discounts built on price anchoring? Probably more often than you'd like to admit.

So, now that you're primed on how susceptible our thoughts and actions are to outside forces, let's move on to how we fall prey to making quick, inaccurate judgments.

We Often Jump to Conclusions, With Very Little Evidence

The acronym WYSIATI stands for 'What you see is all there is.'

In practice, this can sound like: 'They didn't want more information that might spoil their story.'

WYSIATI means that we treat the information we have as if there's no other information, and we don't ask ourselves what we might be missing. We use what's available and make up the best story that we can, with what's at our disposal.

WYSIATI leads to other biases such as overconfidence, framing, and base-rate neglect. Overconfidence, for example, is when we believe that our actions change outcomes more than they actually do; we then make mistakes and form unfair judgments about others. Hindsight bias is another example: after something has happened, we believe that the outcome was more predictable than it actually was, and we unfairly judge people based on evidence that no one had at the time. The more serious the consequences, the greater the hindsight bias, as was seen in the harsh judgment of the CIA for not anticipating the 9/11 attacks.

Hindsight bias is a problem because it inflates our confidence in predicting the future.

Framing, Base-Rate Neglect, and Jumping to Conclusions

How things are framed matters. For example, if we're told that something is 90% fat-free, most of us find this more appealing than being told it contains 10% fat. We prefer the first frame, and even when we're aware that other framings exist, we stick with our original formulation.

Another way we formulate opinions is through what's known as base-rate neglect. Base-rate neglect is illustrated in the following example.

'Steve is very shy and withdrawn, invariably helpful, but with little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.'

In your opinion, is Steve more likely to be a librarian or a farmer? Most of us would probably follow the stereotype and perceive Steve as a librarian. This is base-rate neglect: we ignore the statistical reality that there are more than 20 male farmers for each male librarian in the USA.
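A short Bayesian calculation makes the point concrete. The percentages below are illustrative assumptions of ours, not figures from the book: suppose 40% of male librarians fit Steve's description but only 10% of male farmers do, and keep the base rate of 20 farmers per librarian. Then:

\[
P(\text{librarian} \mid \text{description})
= \frac{0.40 \times 1}{0.40 \times 1 + 0.10 \times 20}
= \frac{0.40}{2.40} \approx 17\%
\]

Even with a description that strongly favors librarians, the base rate wins: Steve is still far more likely to be a farmer.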

Two other System 1 cognitive biases cloud judgment: the halo effect and confirmation bias.

Imagine meeting someone at a party whom you find likable. Later, suppose you're deciding whether to ask them for a donation. Because you formed a positive first impression, you'll probably believe they're likely to be generous. This is the halo effect, a cognitive bias in which an initial favorable judgment colors our whole perception of a person. In short, we often approve or disapprove of someone even when we know little about them. However, this isn't the only way our minds take shortcuts when making judgments.

Kahneman poses the following question: 'Is James friendly?'

Here we're more likely to conclude that James is friendly rather than unfriendly because, subconsciously, our minds confirm what's been suggested. In the absence of further information, we tend to scan for evidence that supports the suggestion; this is what's known as confirmation bias. As with priming, our cognitive biases unconsciously drive our judgments of people and situations. Sometimes we're right, but when we're wrong, we're often unaware of our errors. These faulty conclusions stem from the subconscious mental shortcuts we take when we make decisions and form judgments. They're known as heuristics, and while they provide adequate answers to complex questions, they aren't always correct.

Heuristics Ease Cognitive Strain

Heuristics are mental shortcuts that ease the strain of decision-making. Most of the time, heuristics are helpful, but the trouble is that System 1 overuses them, and System 2 doesn't double-check the facts. There are different types of heuristics, but let's look at one example.

Here's a relatively complex question: 'Is the price of the shares in this company likely to increase or decrease?'

When faced with a tricky question such as this, System 1 substitutes a more straightforward heuristic question, one that is easier to answer but gives an imperfect result. System 1 might jump to a question such as, 'How much do I like this company?' System 2 could reject this substitution, but because it's often lazy, it may go along with System 1 without any serious thought.

Availability is another heuristic bias, in which we judge an event's likelihood by how easily examples come to mind. Kahneman notes that strokes cause many more deaths worldwide than accidents, yet one study found that 80% of participants believed an accidental death was more likely to happen to them. We hear more about accidental deaths than strokes because of media coverage and the lasting impressions it leaves with us.

Another of System 1's glitches links to its struggle with statistics.

The Law of Small Numbers

The smaller your sample size, the more likely you are to get extreme results. This is what Kahneman calls the Law of Small Numbers.

We can all be led astray. Some years ago, statistics led the Gates Foundation to believe that the most successful schools were small schools, so it began investing heavily in developing them. Unfortunately, the statisticians had not asked, 'What are the characteristics of the worst schools?' Had they asked, they would have realized that the answer would also have been 'small schools.' The fact is that small schools aren't better; they're just more variable. The lesson is that if we succumb to the Law of Small Numbers, System 1 creates faulty links between events, and our minds jump to conclusions.
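A quick simulation makes the variability point vivid. This is an illustrative sketch of ours, not an analysis from the book; the score distribution and school sizes are arbitrary assumptions. Every student below is drawn from the same distribution, so no school is genuinely better, yet small schools crowd both the top and the bottom of the rankings:

```python
import random

random.seed(42)

def school_average(n_students):
    # Every student's score comes from the same distribution
    # (mean 500, sd 100 -- arbitrary, SAT-like numbers).
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

# 200 small schools (25 students each) and 200 large schools (400 each).
schools = [("small", 25)] * 200 + [("large", 400)] * 200
ranking = sorted((school_average(n), label) for label, n in schools)

print("small schools among the 20 best: ",
      sum(label == "small" for _, label in ranking[-20:]))
print("small schools among the 20 worst:",
      sum(label == "small" for _, label in ranking[:20]))
# Small schools typically dominate both extremes -- they aren't better
# or worse, just noisier.
```

Run it a few times with different seeds: the small schools keep landing at both extremes, exactly the pattern that fooled the Foundation's statisticians.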

When Psychology and Economics Join Forces

Psychologists view the world as one where people don't always operate in a consistently rational manner. Economists, on the other hand, have traditionally assumed that people are analytical thinkers who handle monetary decisions logically.

It's almost as if they're describing two entirely different species. Behavioral economist Richard Thaler dubbed these two species "Econs" and "Humans." Econs tend to operate using System 2: their beliefs and preferences are internally consistent, logical, and coherent, so Econs are likely to make the same decisions over and over again.

According to behavioral economists, this isn't the case. We're not Econs; we're Humans. An Econ would not be susceptible to priming, WYSIATI, or heuristics. Humans use both System 1 and System 2, with all their glitches. This doesn't mean Humans are irrational; we just need help to make better decisions, which has led to considerable research and policy work under the banner of libertarian paternalism in countries such as the USA and the UK.

For example, an Econ will read and understand the fine print before signing anything, but Humans won't; most of us run on heuristics. An unscrupulous firm can exploit this by designing a contract that buries essential information, because few clients will pay careful attention. The devil is in the details, and we need informed, non-intrusive policies and regulations to assist us.

The research behind System 1 and System 2 underpinned Kahneman's work in behavioral economics, which earned him the Nobel Memorial Prize in Economic Sciences. Central to that work is Prospect Theory, developed with Tversky, which looks at decision-making under risk: we make decisions based on our expectations of loss or gain, evaluated relative to our current situation rather than in absolute terms.
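The summary here is brief, but the core of prospect theory can be stated compactly. One common formulation values an outcome x relative to a reference point as below; the parameter estimates come from Kahneman and Tversky's later academic work, not from this summary:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
\]

The loss-aversion coefficient λ > 1 captures the asymmetry described above: a loss of a given size hurts roughly twice as much as an equal gain pleases.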

Both Economists and Psychologists Are Interested in Well-Being

Well-being is increasingly considered part of government policy. Kahneman points out that because we have two aspects of self, the experiencing self and the remembering self, we have to be careful about what we're measuring. The peak-end rule, whereby we evaluate an experience by its most intense moment and how it ends rather than by its overall duration, suggests that the remembering self ignores the passage of time and gives outsized weight to endings. It also falls prey to hindsight. This means it does not accurately reflect what the experiencing self actually went through.
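In Kahneman's experiments, the remembering self's verdict is often well approximated by a simple average; the averaging form below is a common shorthand in the research literature, our gloss rather than a formula stated in this summary:

\[
\text{remembered rating} \approx \frac{\text{peak intensity} + \text{end intensity}}{2}
\]

The episode's duration receives almost no weight, a pattern known as duration neglect.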

Indicators of well-being can inform government policy, but happiness can be measured from two perspectives. We can measure life satisfaction, which calls on the remembering self, or we can measure happiness in the here and now, by obtaining regular feedback from the experiencing self. Each captures a different facet of happiness.

In Conclusion

We live in the real world, equipped with two remarkable systems. At times, though, we're let down by our error-prone System 1 and our capable but sometimes lazy System 2. Both work together to help us form good judgments and make sound decisions, but it takes considerable effort for System 2 to stay on the lookout for System 1's tendencies toward biases, anchors, and overconfidence. It's also worth noting that these tendencies are far easier to spot in others than in ourselves.

We can sometimes recognize when System 1 is taking control, slow it down, and call in reinforcement from System 2, but this isn't always possible. A language that names the potential glitches in both systems helps us become more aware of our minds' inner workings.

Being forewarned is being forearmed: when it comes to thinking and decision-making, we need to integrate both the fast and the slow.
