
The Bias Bias

How useful are cognitive biases?

Author: Krishane Patel | 29/05/2024
Tag: Case Study

Summary

Do cognitive biases work in real-world decision-making as they do in the lab? While biases describe systematic deviations from rationality, their practical impact is often overstated. Many interventions built on biases, such as defaults or loss aversion, fail to produce meaningful change outside controlled settings. Context, adaptation, and complexity frequently override their effects, suggesting that behavioural science must move beyond listing biases and instead focus on the mechanisms that drive behaviour.

Background

Behavioural economics is gaining considerable traction in government departments across the world, as well as in the private sector. There are now over 300 behavioural science units worldwide, and the number is growing (see the full listing by the OECD). Behavioural economics emerged at the intersection of economics and the cognitive sciences. It applies psychological and cognitive descriptions of behaviour to normative models to explain departures from axiomatic (self-evident or unquestioned) principles. Many economic models assume individuals are self-interested, consistently rational, and utility-maximising, but empirical research suggests this is not always the case. Cognitive biases and heuristics provide psychological explanations for these model failures, and as a result biases have become integral to behavioural economics.

Biases represent systematic errors in information processing, often leading to sub-optimal outcomes. Identifying them has greatly improved our understanding of why individuals do not always make optimal choices, and introducing psychological mechanisms into economic theory has sharpened our models of behaviour and decision-making. For example, researchers have reduced missed NHS appointments (Hallsworth, Berry, Sanders, & Vlaev, 2015), encouraged savings through behavioural interventions (Thaler & Benartzi, 2004), helped investors choose portfolios that align with their risk tolerance (Benartzi & Thaler, 2001), and improved decision-making in healthcare contexts using behavioural insights (Loewenstein, Brennan, & Volpp, 2007).

Cognitive biases

Human "irrationality"

Information processing theory provides a framework for understanding how the human brain handles information. Decision-making is an executive function, requiring constituent processes to work together to generate an outcome. On this view, cognition can be understood in terms of how information is perceived, encoded, processed, stored, and retrieved, akin to a computer system. Processing breaks down where cognitive capacity is limited or where the assumptions of such processing are disrupted. Attention is a finite resource: allocating it to selected information comes at the cost of neglecting other information. For example, engaging in a highly stimulating conversation on a mobile phone while driving (even hands-free) impairs performance to a degree equivalent to driving under the influence of alcohol (blood-alcohol level of 0.07–0.10; Leung, Croft, Jackson, Howard & McKenzie, 2012).

Cognitive biases stem from underlying systematic errors in information processing. These errors may be bottom-up, emerging from specific cognitive constraints, or top-down, arising from limited or incomplete information. As a result, when individuals act against their own interests or fail to make the optimal choice, the resulting behaviour is often labelled a cognitive bias, such as loss aversion, ambiguity aversion, or mental accounting.

These all seem like systematic issues, so why should researchers and practitioners be cautious about leaning on biases so heavily? The answer is that cognitive biases are not mechanisms; they are descriptions of behaviour. A bias cannot tell you what has happened and why; it can neither explain nor produce behaviour.

Cognitive biases are not mechanisms; they are in fact descriptions of behaviour

Despite their prevalence, cognitive biases are not mechanisms but rather descriptive labels for behaviour. A bias cannot explain the cause of an observed behaviour nor determine the underlying cognitive process at play. Instead of treating biases as explanatory constructs, practitioners and researchers should focus on mechanisms—the fundamental processes that generate behaviour. Consider aversion behaviour, such as the so-called ostrich effect, where individuals avoid exposure to negative information. While the ostrich effect describes the observed aversion, the mechanism driving it is operant conditioning: avoiding the information removes anticipated discomfort, so the avoidance is negatively reinforced. By referring directly to the mechanisms involved, researchers and practitioners can provide more precise explanations for observed behaviours. A similar approach is being led by Prof Susan Michie at the Centre for Behaviour Change in the Human Behaviour Change Project, which looks at behaviour change techniques.
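
As a toy illustration of that reinforcement account (all payoffs and parameters below are invented, not taken from any study), a simple action-value learner that receives mild short-term relief from avoiding bad news ends up avoiding it almost exclusively:

    import random

    random.seed(1)
    q = {"check": 0.0, "avoid": 0.0}   # action values for a single toy agent
    alpha = 0.1                        # learning rate (arbitrary choice)

    for _ in range(500):
        # epsilon-greedy: mostly exploit the best-valued action, sometimes explore
        if random.random() < 0.1:
            action = random.choice(list(q))
        else:
            action = max(q, key=q.get)
        # Checking tends to reveal losses (negative payoff on average); avoiding
        # yields mild short-term relief, i.e. the negative reinforcement above.
        reward = random.gauss(-0.5, 1.0) if action == "check" else 0.1
        q[action] += alpha * (reward - q[action])

    print(q)   # "avoid" ends with the higher value and dominates choice

The point of the sketch is that nothing labelled an "ostrich effect" appears anywhere in it; the avoidance pattern falls out of an ordinary learning mechanism.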

There are many biases that map onto similar behavioural outcomes. For example, anchoring and the primacy effect both involve a disproportionate weighting of initial information, yet they arise from separate mechanisms. Similarly, risk aversion and loss aversion both stem from aversive mechanisms but apply to slightly different constructs: loss aversion refers to avoiding a reduction in value, whereas risk aversion involves aversion to probabilistic uncertainty. Both, however, likely emerge from similar underlying cognitive processes.
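
The two constructs are in fact formalised quite differently, which makes the distinction concrete. In expected-utility theory, risk aversion is the concavity of the utility function over outcomes; in prospect theory, loss aversion is a kink in a reference-dependent value function, with losses weighted by a parameter λ > 1. A minimal sketch in standard notation (the λ ≈ 2.25 figure is the commonly cited calibration, quoted purely for illustration):

    % Risk aversion: curvature of utility, u''(x) < 0.
    % With u(x) = \sqrt{x}, a 50/50 gamble over 0 and 100 has E[u] = 5,
    % below u(50) \approx 7.07, so the sure amount is preferred.
    u(x) = \sqrt{x}

    % Loss aversion: a kink at the reference point, not curvature.
    v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda(-x)^{\beta} & x < 0 \end{cases},
    \qquad \lambda \approx 2.25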

Biases are not as stable as claimed

A serious critique of cognitive biases is that they are contextual and therefore unreliable, which speaks against them as a universal phenomenon and raises questions about their validity. Consider loss aversion: despite its foundational status in behavioural economics, it faces serious challenges to its validity as a stable cognitive bias. Recent research reveals several key problems. First, experimental evidence for loss aversion has proven highly inconsistent. While Kahneman and Tversky's (1979) original work suggested people universally weigh losses more heavily than gains, subsequent studies by Gal and Rucker (2018) and Ert and Erev (2013) found that the effect varies dramatically with context and often disappears entirely in dynamic decision-making environments. These findings raise troubling questions about the methodology of early behavioural economics research and suggest that many of its foundational assumptions may need to be re-evaluated. The field's heavy reliance on one-shot experiments with small samples may have created an artificially narrow view of human decision-making.
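
A quick simulation shows how one-shot, small-sample designs can exaggerate an effect. The sketch below uses invented numbers (a true standardised effect of 0.1 and 20 participants per group) and keeps only the experiments that reach statistical significance, mimicking a literature that mostly publishes significant results; the surviving estimates average several times the true effect:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)      # fixed seed for reproducibility
    true_effect, n = 0.1, 20             # assumed small effect, small groups
    significant = []

    for _ in range(10_000):              # 10,000 one-shot experiments
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_ind(treated, control)
        if p < 0.05 and t > 0:           # only positive, significant results survive
            significant.append(treated.mean() - control.mean())

    print(f"true effect: {true_effect}")
    print(f"mean surviving estimate: {np.mean(significant):.2f}")   # several times larger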

This connects to the broader replication crisis in psychology, where many supposedly robust cognitive biases have failed to replicate under more rigorous conditions (Simmons, Nelson & Simonsohn, 2011). Just as famous effects like ego depletion (Hagger et al., 2016) and power posing (Ranehill et al., 2015; Garrison et al., 2016) have crumbled under replication attempts, loss aversion appears increasingly questionable when tested across different contexts and populations. This pattern suggests we may need to fundamentally rethink how we conceptualize and study human decision-making processes. The very notion of stable, universal cognitive biases may be more reflective of our desire for simple explanations than the complex reality of human cognition.

When biases fail to deliver

The disparity between experimental demonstrations of cognitive biases and their practical implications in real-world scenarios is notably illustrated by the phenomenon of default effects. Johnson and Goldstein (2003) showcased the significant influence of default options on organ donation choices within controlled environments. This led to considerable excitement regarding the potential of defaults to influence critical social outcomes. Their research demonstrated that simply changing the default from "opt-in" to "opt-out" could significantly increase organ donation rates, a result that seemed to provide an elegant solution to organ shortages worldwide.

However, the translation of these laboratory findings into practical healthcare outcomes has proved far more nuanced. A comprehensive review by Arshad, Anderson, and Sharif (2019) examined actual organ donation rates and found no significant difference between countries with opt-in versus opt-out systems. Such stark differences between laboratory and field findings suggest that even robust default effects in controlled settings can be overpowered by other features of complex real-world environments, including cultural norms, religious beliefs, family consent, and the characteristics of healthcare systems.
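
A toy calculation illustrates how this can happen. Every number below is hypothetical, invented for illustration rather than drawn from Arshad et al. or any dataset: the default shifts registration dramatically, but because families are consulted under both systems, and presumed consent is a weaker signal of the deceased's wishes than an explicit registration, the donation totals end up nearly identical:

    # All rates below are hypothetical, chosen only to illustrate the argument.
    def donations(eligible_deaths, registered_rate,
                  consent_if_registered, consent_if_not):
        # Families are consulted in both systems, so consent rates dominate.
        via_registered = eligible_deaths * registered_rate * consent_if_registered
        via_unregistered = eligible_deaths * (1 - registered_rate) * consent_if_not
        return via_registered + via_unregistered

    # Opt-in: few registered, but registration is a strong signal of intent.
    opt_in = donations(10_000, 0.35, 0.90, 0.45)
    # Opt-out: nearly everyone "registered" by default, a weaker signal.
    opt_out = donations(10_000, 0.90, 0.60, 0.45)

    print(f"opt-in: {opt_in:.0f}, opt-out: {opt_out:.0f}")   # ~6075 vs ~5850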

The failure of laboratory results to translate meaningfully into real-world applications is by no means limited to organ donation defaults. Similar gaps have been observed across numerous domains in which behavioural interventions based on cognitive biases have been implemented. While laboratory experiments consistently show clear and large effects, the very same interventions commonly produce smaller, inconsistent, or null effects when implemented in applied settings (DellaVigna & Linos, 2022; Hummel & Maedche, 2019; Osman et al., 2020). This suggests that the contrived conditions under which cognitive biases appear most striking also make them poor predictors of what will happen in natural environments.

Not all cognitive biases are biases

Cognitive biases are often conceptualised as part of a unified framework explaining systematic deviations from rational decision-making. However, they originate from fundamentally different cognitive mechanisms. Some biases arise from incomplete or missing information, prompting individuals to rely on heuristics in response to uncertainty rather than out of inherent irrationality (Tversky & Kahneman, 1974). Others function as heuristics themselves—evolved mental shortcuts that are generally adaptive but may appear biased in controlled experimental settings (Gigerenzer & Gaissmaier, 2011). Meanwhile, certain biases are better understood through attentional mechanisms, where disproportionate focus on particular information distorts judgement (Yechiam & Hochman, 2013). Grouping all of these under a single conceptual umbrella conflates multiple types of cognitive phenomena, such as cognitive biases and heuristics, and undermines the coherence of the category.
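
One concrete example of the heuristics-as-adaptive-shortcuts view is take-the-best, from Gigerenzer's fast-and-frugal programme: compare two options on cues ordered by validity and decide on the first cue that discriminates, ignoring all further information. A minimal sketch (the cities and cue values are invented for illustration):

    # Take-the-best: decide on the first discriminating cue, ignore the rest.
    CITIES = {  # hypothetical binary cue values for a city-size judgement
        "Avonford": {"is_capital": 1, "has_top_team": 1},
        "Brayton":  {"is_capital": 0, "has_top_team": 1},
    }
    CUES = ["is_capital", "has_top_team"]   # ordered from most to least valid

    def take_the_best(a, b):
        for cue in CUES:
            va, vb = CITIES[a][cue], CITIES[b][cue]
            if va != vb:                    # first discriminating cue decides
                return a if va > vb else b
        return None                         # no cue discriminates: guess

    print(take_the_best("Avonford", "Brayton"))   # Avonford: the capital cue decides

Ignoring information is what makes the strategy fast, and in environments with skewed cue validities it can match strategies that weigh everything, which is why calling it a "bias" misses the point.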

This variation raises concerns about the theoretical coherence of biases as a category, as well as whether they genuinely represent cognitive errors or are instead context-dependent adaptations (Gigerenzer, 2018). The tendency to group all deviations from classical economic rationality under the label of "biases" risks obscuring the underlying diversity of cognitive processes that drive decision-making. While some biases represent genuine systematic errors, others reflect ecologically rational strategies that help individuals make effective decisions under uncertainty (Todd & Gigerenzer, 2012).

Rather than treating biases as fixed distortions, behavioural science would benefit from adopting a mechanism-based approach—one that classifies biases according to their underlying cognitive processes rather than subsuming them under a broad behavioural economics framework (Lieder & Griffiths, 2020). Many so-called biases are not inherently irrational. Instead, they reflect adaptive strategies optimised for specific environments, meaning their effects vary depending on context (Todd & Gigerenzer, 2012). The observed variability of biases—for instance, the fact that loss aversion manifests in some situations but not others—suggests that a universal model of human decision-making is inadequate (Gal & Rucker, 2018).

By shifting focus from categorising biases to examining the mechanisms driving behaviour, researchers and practitioners can develop a more precise and predictive framework for decision-making. This approach helps reduce the risk of oversimplified or misapplied interventions (Hertwig & Grüne-Yanoff, 2017). Instead of treating biases as flaws in reasoning, we should view them as part of a broader cognitive system that balances efficiency and accuracy—one that is shaped by evolutionary pressures and environmental constraints.

What does this mean for behavioural science?

Critiquing cognitive biases does not mean we should discard them altogether. These biases have played a crucial role in reshaping economic and psychological models, demonstrating the ways in which human decision-making deviates from classical notions of rationality (Kahneman, 2011; Thaler, 2015). However, as behavioural science continues to evolve, it is essential to adopt a more nuanced perspective—one that recognises that cognitive biases are not mechanisms in themselves but descriptions of behaviour arising from underlying cognitive processes (Gigerenzer, 2018; Lieder & Griffiths, 2020). Instead of treating biases as fixed distortions, the focus should be on understanding the mechanisms—whether attentional, heuristic, or information-based—that generate these effects. Only by doing so can behavioural insights be applied effectively, ensuring that interventions are based on causal mechanisms rather than surface-level patterns (Hertwig & Grüne-Yanoff, 2017).

At the same time, it is crucial to guard against oversimplification and misapplication. As behavioural science gains increasing influence in policymaking, business, and technology, there is a growing temptation to rely on readily accessible lists of biases as though they represent universally applicable truths (Felsen & Reiner, 2015; Osman et al., 2020). However, identifying a bias is only the starting point; it does not necessarily lead to a deeper understanding of human behaviour. Effective behavioural interventions, like all scientific applications, require rigour, empirical validation, and adaptability (DellaVigna & Linos, 2022). By focusing on underlying mechanisms, avoiding oversimplification, and applying behavioural insights with nuance and precision, we can maximise their impact (Marchiori, Adriaanse, & De Ridder, 2017).

References

  • Arshad, A., Anderson, B., & Sharif, A. (2019). Comparison of organ donation and transplantation rates between opt-out and opt-in systems. Kidney International, 95(6), 1453–1460.
  • Benartzi, S., & Thaler, R. H. (2001). Naive diversification strategies in defined contribution saving plans. American Economic Review, 91(1), 79–98.
  • DellaVigna, S., & Linos, E. (2022). RCTs to scale: Comprehensive evidence from two nudge units. Econometrica, 90(1), 81–116.
  • Ert, E., & Erev, I. (2013). On the descriptive value of loss aversion in decisions under risk: Six clarifications. Judgment and Decision Making, 8(3), 214–235.
  • Gal, D., & Rucker, D. D. (2018). The loss of loss aversion: Will it loom larger than its gain? Journal of Consumer Psychology, 28(3), 497–516.
  • Garrison, K. E., Tang, D., & Schmeichel, B. J. (2016). Embodying power: A preregistered replication and extension of the power pose effect. Social Psychological and Personality Science, 7(7), 623–630.
  • Hagger, M. S., Chatzisarantis, N. L. D., et al. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11(4), 546–573.
  • Hallsworth, M., Berry, D., Sanders, M., & Vlaev, I. (2015). Stating appointment costs in SMS reminders reduces missed hospital appointments: Findings from two randomised controlled trials. PLOS ONE, 10(9), e0137306.
  • Hertwig, R., & Grüne-Yanoff, T. (2017). Nudging and boosting: Steering or empowering good decisions? Perspectives on Psychological Science, 12(6), 973–986.
  • Johnson, E. J., & Goldstein, D. (2003). Do defaults save lives? Science, 302(5649), 1338–1339.
  • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
  • Leung, S., Croft, R. J., Jackson, M. L., Howard, M. E., & McKenzie, R. J. (2012). A comparison of the effect of mobile phone use and alcohol consumption on driving simulation performance. Traffic Injury Prevention, 13(6), 566–574.
  • Loewenstein, G., Brennan, T., & Volpp, K. G. (2007). Asymmetric paternalism to improve health behaviors. JAMA, 298(20), 2415–2417.
  • Marchiori, D. R., Adriaanse, M. A., & De Ridder, D. T. (2017). Unresolved questions in nudging research: Putting the psychology back in nudging. Social and Personality Psychology Compass, 11(1), e12297.
  • Mellers, B., Schwartz, A., & Ritov, I. (1997). Emotion-based choice. Journal of Experimental Psychology: General, 126(1), 45–61.
  • Osman, M., McLachlan, S., Fenton, N., Neil, M., & Löfstedt, R. (2020). Learning from behavioural changes that fail. Trends in Cognitive Sciences, 24(12), 969–980.
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
  • Thaler, R. H., & Benartzi, S. (2004). Save More Tomorrow™: Using behavioral economics to increase employee saving. Journal of Political Economy, 112(S1), S164–S187.
  • Todd, P. M., & Gigerenzer, G. (2012). Ecological rationality: The normative bridge between bounded and rational analysis. The Oxford Handbook of Thinking and Reasoning, 197–212.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
  • Yechiam, E., & Hochman, G. (2013). Losses as modulators of attention: Review and analysis of the unique effects of losses over gains. Psychological Bulletin, 139(2), 497–518.
