Large-scale experiments and machine learning can bring new insights into theories of how people make "risky choices," according to a report published in the June 11 issue of Science.
Risky choices are decisions between two unequal "gambles" with uncertain outcomes, and they are among the most well-studied paradigms in psychology. A real-world example of a risky choice is deciding whether or not to purchase car or home insurance. If the insurance is purchased, we are covered with certainty, even though the coverage may never truly be needed. If it is not purchased, there is some probability that money will be lost.
A better understanding of how people make these decisions could help consumers adopt better decision strategies and alert policymakers to the suboptimal choices that lead to negative outcomes.
Understanding and predicting how people make decisions has been a longstanding goal in many fields, particularly behavioral sciences such as psychology and economics, and it has led to a proliferation of competing theories and decision-making models. Despite decades of active research, however, these theories are often difficult to distinguish from one another, and few provide discrete or novel insights into human behavior.
"Decision-making is important because it ultimately determines certain life and societal outcomes and underlies a fair amount of economic behavior," said lead author Joshua Peterson, a researcher at Princeton University.
According to Peterson, the reason there are so many competing theories is that human decision behavior is complex, and each theory often explains only a few phenomena from an ever-growing list. Because of this diversity, there remains little consensus on the best decision theory or model, and little gain in their overall predictive power.
"The reason for this is not a lack of ingenuity," said Peterson, "but there is nonetheless only so much time a theorist can devote and the lack of new tools like those emerging in other sciences makes it hard to accelerate theory development beyond the pace of past decades."
Evaluating "Risky Choice" Problems
Technological advancements like machine learning have had a big impact on many of the physical sciences — less so on certain behavioral sciences. While efforts to discover and evaluate new models of human judgments have been enhanced using modern data-driven techniques, they are often hindered by small datasets, limiting their ability to fully explain behavior.
To address this, Peterson and the research team assembled a large dataset of human decisions on nearly 10,000 "risky choice" problems — the product of the largest experiment on risky choice to date — and applied machine learning to it to discover new decision-making theories and evaluate competing ones.
Risky choice — one of the most basic and extensively studied problems in classical decision theory — examines how a person decides between two unequal gambles or lotteries: choosing between a 20% chance to receive $100 and an 80% chance to receive $50, for example.
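To make that example concrete, here is a simple expected-value comparison of those two gambles. This is only an illustrative sketch in Python using the figures above; it is the textbook baseline that behavioral theories depart from, not the model developed in the study.

    # Illustrative sketch: comparing the two example gambles by expected value.
    # The figures come from the example above (a 20% chance of $100 vs. an 80%
    # chance of $50); this is a textbook baseline, not the model from the study.

    def expected_value(probability, payoff):
        """Average payoff of a simple one-outcome gamble over many repetitions."""
        return probability * payoff

    gamble_a = expected_value(0.20, 100)  # 0.20 * $100 = $20
    gamble_b = expected_value(0.80, 50)   # 0.80 * $50  = $40

    print(f"Gamble A expected value: ${gamble_a:.2f}")
    print(f"Gamble B expected value: ${gamble_b:.2f}")
    # A strict expected-value maximizer would pick Gamble B, yet real participants
    # often weight probabilities and payoffs differently; those systematic
    # deviations are what risky-choice theories try to explain.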
"The risk term reflects that there is uncertainty in the outcome," said Daniel Reichman, Worcester Polytechnic Institute researcher and co-author of the study. "When we choose a lottery, we do not know for sure the monetary outcome we will receive.
Reichman noted that risky choices are but a single abstraction of the enormous complexity underlying the decisions people routinely make in their day-to-day lives, yet risky choices between real-world gambles are abundant.
While previous studies have focused on just a handful of choice problems at a time, Peterson, Reichman and their colleagues used Amazon's Mechanical Turk crowdsourcing service to collect human responses to roughly 10,000 unique problems.
The researchers found that deep neural networks trained on this dataset could mimic human decisions with surprising accuracy, substantially outperforming existing, human-generated risky choice models. In learning to mimic human decisions, the networks revealed many of the psychological properties underlying established behavioral theories, allowing those theories to be evaluated and refined.
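As a rough illustration of the general recipe described here (not the authors' actual architecture, code, or data), one could train a small neural network to predict which of two gambles a person chooses from each gamble's probability and payoff. In the sketch below, the synthetic data, the feature encoding, and the network size are all placeholder assumptions; the generated labels merely stand in for the crowdsourced human choices described in the article.

    # Rough illustrative sketch: fit a small neural network to predict choices
    # between two gambles. NOT the authors' model; synthetic data for illustration.
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)

    # Each problem is encoded as [p_a, payoff_a, p_b, payoff_b];
    # the label is 1 if gamble B was chosen.
    X = rng.uniform(size=(1000, 4)).astype(np.float32)
    X[:, [1, 3]] *= 100  # scale payoffs into a dollar-like range

    # Placeholder "human" labels: a noisy preference for the higher expected value.
    ev_gap = X[:, 2] * X[:, 3] - X[:, 0] * X[:, 1]
    y = (ev_gap + rng.normal(scale=10, size=1000) > 0).astype(np.float32)

    model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()

    X_t, y_t = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)
    for _ in range(200):
        optimizer.zero_grad()
        # Match the network's predicted choice probabilities to observed choices.
        loss = loss_fn(model(X_t), y_t)
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        predictions = (torch.sigmoid(model(X_t)) > 0.5).float()
        print(f"training accuracy: {(predictions == y_t).float().mean().item():.2f}")

In the study's setting, the network's accuracy on held-out problems serves as a yardstick: a human-readable theory is competitive only if it predicts choices about as well as the trained network does.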
"The best theory we discovered highlights something some psychologists have previously suspected about human decision-making: the way we assign value to choose options is deeply sensitive to the context of competing options, even if the objective value doesn't change," said Peterson.
According to Peterson, the approach highlights how much complexity, and how much data, is required before an unaided, human-understandable theory can make predictions as accurately as machine learning algorithms.
"Scientists are trained to generate the simplest possible candidate theories, and there's a very good reason for this," said Peterson. "But it's important to realize that it can also serve as a limitation of unaided theory development."
This sort of method could potentially save years — even decades — of time in discovering and refining decision-making theories, said Peterson.
"I think our study provides a really exciting new example of one way that behavioral science will be accelerated in the future."