How did you become interested in the topic of risk perception and how it impacts people’s decisions?
My PhD is in psychology from Harvard, almost 30 years ago, and when I first started studying decision making from a psychological perspective, it was the heyday of the intersection between psychology and economics. Some of my classes had to do with the rational economic models of risky decision making. Just around that time, Daniel Kahneman and Amos Tversky — two very big names in psychological models of risky decision making who later got a MacArthur and a Nobel Prize — had developed their theories, and I was really curious to what extent those models were being applied in the real world. So I went across the river to the business school and started taking some classes over there. At that point I really started to work between psychology and economics. When you do that, you very quickly realize that people don’t do what the normative models say they ought to be doing. And then the big question is, what are they doing instead? Are they all just completely irrational, or is there something that’s systematic and perhaps even understandable?
Typically when people say somebody’s risk-averse or somebody’s risk seeking, they’re talking about somebody who gets excited about risk and therefore goes for the risky option, or somebody who’s scared about the downside possibility. Since 1985 I’ve been trying to get a handle on what people mean by saying, well, I’m risk-averse, therefore I didn’t pick that option. That’s how I started to come across the finance definition of risk taking, which is a trade-off between returns and risk. But I’ve been interpreting it much more broadly, by saying, OK, returns might be something like expected value — average returns — but risk definitely is something that’s different from the variance of possible outcomes.
First of all, downside variability has much more impact on people than upside variability, even though it’s equally uncertain. This is something that Kahneman and Tversky showed, that people are loss averse. So if I say, would you like to toss a coin for $100 — if you get heads you win $100, if you get tails you lose $100 — few people want to do that. And the reason is that losing $100 hurts much more than winning $100 feels good. So one of the ways in which psychological interpretations of risk deviate from financial interpretations is that downside matters more.
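The sting of that coin toss can be made concrete with the loss-aversion parameter from Kahneman and Tversky’s prospect theory. In Tversky and Kahneman’s 1992 estimates, losses are weighted roughly 2.25 times as heavily as gains; the piecewise-linear value function below is a simplified sketch of that idea (the value function’s curvature is omitted), not the full model:

```python
# Loss aversion in the spirit of Kahneman & Tversky's prospect theory:
# losses loom larger than gains. lambda ~= 2.25 is their 1992 estimate,
# used here purely as an illustrative assumption.
LOSS_AVERSION = 2.25

def subjective_value(x: float) -> float:
    """Piecewise-linear value function (curvature omitted for simplicity)."""
    return x if x >= 0 else LOSS_AVERSION * x

# 50/50 gamble: win $100 or lose $100 (expected value = $0)
gamble_value = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)
print(gamble_value)  # -62.5
```

Even though the gamble’s expected value is exactly $0, its subjective value under these assumptions is negative, which is why most people decline it.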
But then there are other factors, and a lot of them have to do with affective reactions, which are completely out of the picture in economic models. In economic theory, the only individual difference variable or group difference variable is risk attitude, and that term is really just meant as a label for a utility function that describes people’s choices. It does not explain them. But by conceptualizing risk taking as an explicit trade-off between perceived benefits and perceived risks, you have much more flexibility to both describe and explain differences in behavior. So I might do something that looks very risky from the outside, like bungee jumping, but I might not feel that it’s very risky. There might be differences in past experience, in familiarity, in cultural exposure — all sorts of reasons why some people think something is risky and others don’t.
I do think that there’s a large emotional component associated with risk. If you look at gender differences or cultural differences in risk taking, how do you explain the differences in observed behavior? Sometimes it’s risk attitude, but sometimes — most of the time — it’s actually risk perception, or perception of the benefits, that explains the differences in behavior.
Why do people tend to make riskier decisions when they judge from experience rather than from other sources of information?
They don’t always make riskier choices. Whether choices based on past personal experience are more or less risky than choices based on statistical description of all possible outcomes depends on what the distribution of outcomes is. In particular, it depends on whether there are some rare events, and whether those are good or bad.
If somebody tells you, there’s a 99% chance that this medication will cure your ailment, but there’s also a 1% chance of some really terrible side effects if you take this medication, even if it’s a low-probability event, you probably consider the negative consequence more than it deserves by probability alone when you make that decision from description, i.e., when your doctor or the drug package insert tells you about all outcomes and their probabilities. You probably think about the terrible side effects more than just 1% of the time that you spend weighing your options, and our decisions are influenced by the attentional weight that outcomes get. That could lead to either riskier behavior or less risky behavior. In this case, you might be risk-averse and not take the medication. It’s just that a low-probability event gets weighted more than the probability warrants when you make decisions from description.
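This overweighting of described small probabilities has a standard formalization: the probability weighting function from Tversky and Kahneman’s cumulative prospect theory. A minimal sketch, using their 1992 gain-domain estimate of gamma = 0.61 as an illustrative assumption:

```python
# Probability weighting function from Tversky & Kahneman's (1992) cumulative
# prospect theory. gamma = 0.61 is their estimate for the gain domain,
# used here only as an illustrative assumption.
GAMMA = 0.61

def decision_weight(p: float, gamma: float = GAMMA) -> float:
    """Small probabilities are overweighted; moderate and large ones underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The 1% side-effect chance from the example gets far more than 1% of the weight:
print(f"w(0.01) = {decision_weight(0.01):.3f}")  # roughly 0.05-0.06
```

Under these assumptions, the 1% chance of side effects receives a decision weight several times its stated probability, consistent with the attentional account above.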
Now let’s say that you make this decision about the medication based on personal experience. You don’t know that there’s a 1% chance of negative side effects, but each day you take the pill and you find out whether you feel better or worse. If you take the pill for a week, chances are that you’re never going to experience the side effects, even if each day there was a 1% chance of the negative side effects occurring. Similarly if you buy a stock and it has a low probability of some catastrophic loss, again, if you only hold it for two or three months, the bad event might never happen.
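The arithmetic behind that intuition is simple compounding of independent daily trials. A quick check, using the interview’s 1%-per-day figure:

```python
# Chance of never experiencing a 1%-per-day side effect over various windows,
# assuming each day is an independent trial.
p_bad = 0.01

for days in (7, 90, 365):
    p_never = (1 - p_bad) ** days
    print(f"{days:3d} days: P(never see the side effect) = {p_never:.3f}")
```

Over a week you will almost certainly see nothing (about a 93% chance), so personal experience teaches that the pill is safe; only over much longer horizons does the rare event become likely to show up in your own sample.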
As a result, we tend to underweight low-probability events most of the time when we make decisions based on personal outcome feedback. We don’t think that terrorism in the United States is anything worth worrying about until, every once in a while, the low-probability event does occur, and then we tremendously overreact. As human beings who live in changing environments, we’re very sensitive to recent events because they are more likely to reflect what will happen next. Overall that is quite adaptive, but it has the consequence that our decisions from experience can be either too conservative when small-probability events are involved (like not worrying about global warming) or can be very volatile or overfocused on rare events. In some sense we are oftentimes fighting yesterday’s war.
In most real contexts, we have both description and experience. If you decide to invest in a stock or a mutual fund, chances are you get a prospectus — a description of what happened in the last 10 years — and you look at it. But then you also might have had some experience. You might have been following that stock in the newspapers. You’re buying it for a reason, so you probably have heard about it, or you have some already and it’s done well so you’re buying more. In those situations where we have personal experience, even when we have the more reliable statistical description of possible outcomes, experience wins because it’s much more vivid.
In a study you did on five different types of risk — investing, health/safety, recreational, ethical and social — you found that women are more risk-averse in all domains except social situations. How do you account for that gender gap?
This study was designed to look not just at why there are differences in people’s risk taking as a function of who they are, men versus women, but also why people aren’t consistent across situations. This has puzzled decision researchers for a long time, because if you say someone is risk-averse, you typically mean that as a personality trait, so that person should look timid across situations. That’s not typically what you find. One way we’ve been explaining this is to say, well, situations differ in the degree of familiarity. Let’s say you have a very dangerous job working in a nuclear power station and you’ve been working there for 20 years. If you are still alive and well after that length of time, then maybe the job is not that dangerous after all. Again, I think evolution has wired us to say, well, if nothing bad has happened for a long time, this is a pretty safe occupation, or this is a pretty safe activity or a pretty safe investment option.
And so you would predict that in those situations where people have more personal experience with risky choices and their typically positive consequences, they would perceive the risk to be lower, and therefore they would look like they’re more risk seeking in their choices. But they’re not really risk seeking, it’s just that they think it’s not very risky. And so we have hypothesized that in those domains where women have more familiarity with making decisions and getting feedback by experiencing their consequences, they might look like they’re more risk seeking or less risk averse. And in those situations where men have more experience, culturally or historically, they would be the ones who look more risk seeking or less risk averse. Which is essentially what we found.
Can you talk a little bit about your research methodology?
I use a broad range of methods — in my lab, we think of it as triangulating on a given result. So obviously we look at people’s behavior. But in addition to that, there are certain mental constructs, like this notion of perceived risk, that don’t have behavioral measures. Preference is easy to measure. I give you two options and I say, choose one, or give me a price for this investment option and if I’m willing to sell it to you for that price, you can have it. Perceived risk is a trickier construct, as it doesn’t really covary with anything that can be externally observed. So the way we’ve approached such constructs is through a model-based approach. Let’s say I give you a number of investment options that differ in expected value and variability of outcomes. I can ask you about perceived benefits or returns of each option: on average, how much do you think this option is going to pay next year? And about perceived riskiness: on a scale from 0 to 100, how risky is this option compared to other investment options that you have?
And then you make certain assumptions. You say, OK, if people have accurate introspection into the return expectations and risk perceptions that drive their choices, then I should be able to take their reported risk and return judgments for the 15 or 20 options and predict from those the price they’re willing to pay for these options from a regression model. You assume that the constructs are real and that your model fits, and then the proof is in the pudding — it’s a good model and the constructs are useful constructs if the model fits, and if it fits better than other models.
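A minimal sketch of that model test in Python: regress stated prices on the respondent’s own return and risk judgments, then check the fit. All numbers below are fabricated for illustration; the study’s actual stimuli and estimates are not reproduced here:

```python
import numpy as np

# Illustrative risk-return model:
#   price_i ~= b0 + b1 * expected_return_i + b2 * perceived_risk_i
# Every number below is fabricated purely for illustration.
rng = np.random.default_rng(0)
n = 20                                   # e.g. 20 investment options
expected_return = rng.uniform(2, 12, n)  # judged average return (%)
perceived_risk = rng.uniform(0, 100, n)  # judged riskiness (0-100 scale)

# Simulate prices from a risk-averse trade-off plus response noise:
price = 50 + 4.0 * expected_return - 0.5 * perceived_risk + rng.normal(0, 2, n)

# Fit the trade-off model by ordinary least squares.
X = np.column_stack([np.ones(n), expected_return, perceived_risk])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

fitted = X @ coef
r2 = 1 - np.sum((price - fitted) ** 2) / np.sum((price - price.mean()) ** 2)
print(f"return weight = {coef[1]:.2f}, risk weight = {coef[2]:.2f}, R^2 = {r2:.2f}")
```

If the regression recovers a positive weight on judged returns, a negative weight on judged risk, and a high R², then the reported perceptions explain the prices well — the sense in which the proof is in the pudding.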
The other thing that we’re starting to do now is to look at brain activity related to perceptions of risk using functional magnetic resonance imaging (fMRI), which measures blood flow in specific regions of the brain. There is an emerging research area, called neuroeconomics, that looks at the neural basis of economic behavior. In such a study, I would present you with 20 investment options while you are in the fMRI machine and ask you to tell me for each one what you would be willing to pay. They might vary in returns and in variability and thus will be perceived as differing in return and in risk.
Again, you make certain assumptions. You assume that if people actually make a pricing decision based on a trade-off between perceived risks and benefits, then probably somewhere in the brain there’s a representation of riskiness, and somewhere there’s a representation of benefits or expected value. We already know for expected benefits where that center is, and right now we’re looking for the center that represents perceived riskiness. What tends to be true is that these areas that represent mental constructs light up in proportion to the magnitude of the feeling or the perception. The blood flow in the region depends on how much this center is activated by mental processing. As a result we can test whether our construct of what perceived risk ought to be like actually corresponds to the brain activation in that region of the brain.
Was there anything about your research findings that has surprised you?
One aspect that has fascinated me over the years is how much of individual differences in behavior is actually accounted for by differences in the perception of the risk rather than differences in what you might truly call risk attitudes. Once you start measuring people’s perceptions of the risk, then you can take out of the equation those differences in the perception, and then what’s left — what is relatively stable across situations — is how much benefit you’re going to give up to not take a certain unit of risk.
It turns out that most of the time, the perception of risk is a much larger causal variable in determining differences in behavior than the attitudes per se. In a way it’s a confirmation of the old microeconomic notion that people for the most part are risk averse. Some people are more risk averse, some people are less risk averse, some people are moderately risk seeking, and so there are differences in the population. But these differences tend to be small and oftentimes don’t explain very much of the variability in observed behavior, which is much more affected by what people think is risky.
Elke Weber is the Jerome A. Chazen Professor of International Business and chair of the Management Division at Columbia Business School. She is also co-director of the Center for Decision Sciences and director of the Center for Environmental Decisions at Columbia University.