Why do we worry too much about some environmental risks and not enough about others?
By David Ropeik
Michael, a 55-year-old friend of mine, has cut way back on eating certain species of seafood because the government says those species may carry high levels of mercury. But the levels of mercury in those fish pose almost no risk to 55-year-old males, although they can be risky for fetuses and infants. What’s more, the fish Michael is forgoing are rich in omega-3 fatty acids, which protect against heart disease—a very real threat for my friend.
——
Nancy loves the sun and is deeply tanned, even in the winter. She also fears and opposes nuclear power. Yet studies of atomic-bomb survivors have shown that nuclear radiation, while carcinogenic, isn’t nearly as much a cancer threat as radiation from the sun. Of roughly 100,000 bomb survivors, about 600 (fewer than one percent) have died from radiation-induced cancer, and there have been no multigenerational genetic effects. Solar radiation, on the other hand, causes 12,000 melanoma deaths in the U.S. each year.
——
And then there’s my friend Andrea, who eats only organic food and resists taking prescription drugs because she worries they haven’t been tested enough. Yet she has no problem taking all sorts of herbal remedies, many of which have not been tested for safety or efficacy and several of which have been found to do serious harm. Some have even killed people. Kava root can damage the liver. Some Ayurvedic medicines contain heavy metals at thousands of times the levels deemed safe. Ephedra is linked to more than 100 deaths.
——
Why are my friends (and I) more afraid of some environmental threats than the evidence warrants, and less afraid of some perils than the evidence warns of? Why don’t our fears match the facts? And more importantly, what does the gap between our fears and the facts, a phenomenon I call “the perception gap,” do to human and environmental health?
A growing body of research into the neuroscience and psychology of fear and risk perception offers some provocative answers. Investigators are discovering that our health and safety rely on a system of risk perception that is instinctive—and mostly subconscious. It seems that no matter how hard we try to reason carefully and objectively, our brains are hardwired to rely on feelings as well as facts to figure out how to keep us alive.
The system has worked well throughout most of human history. But in the face of modern and complex environmental threats, it can make dangerous mistakes. Perhaps it’s time to let go of our Enlightenment-based faith in the power of rational analysis and attempt to better understand how risk perception works. It’s time to learn how to avoid the risks that the perception gap creates.
Your Brain on Fear
Feel first, think second
Imagine you’re walking down a country path in the shadows of late afternoon. There are wetlands on either side. Thin, curving tree roots occasionally run across the path at your feet. You sense that one of them moves—even slithers. You freeze, and your heart races just a bit. You have an instinctive reaction to a potential threat before you are even conscious of it. This is the neural beginning of risk perception, and it is bad news for those who think we can objectively think our way to the right decisions about keeping ourselves safe.
In the 1980s, using rats, neuroscientist Joseph LeDoux conducted a series of pioneering experiments on fear. After ringing a bell, he shocked the rats’ feet, conditioning them to jump with fear at the sound of the bell—even when no shock was applied. Using microelectrodes implanted in the rats’ brains, LeDoux identified where fear begins. Even before the sound of the bell reached those parts of the brain associated with hearing, it sped to a group of cells called the amygdala. In both rats and people, the amygdala is the brain’s 24/7 “Could there be danger?” radar. It quickly screens incoming data and, when it senses danger, sends out an alert, triggering a “fight or flight” response.
After the stimulus (the bell’s ring or the sight of the root that could be a snake) has been perceived by the amygdala—setting off a fear response—it finally makes its way to the cortex, the outer layer of the brain responsible for higher-order factual analysis and purposeful thinking. The cortex thinks things over, then sends its thoughtful two cents’ worth to the amygdala. But all this takes time—about 20 milliseconds in humans. During that split second, the instinctive reaction is already under way. The system is set up to be fast rather than smart. Our brains are hardwired to feel first and think second.
That’s great if your reaction time might mean the difference between life and death. But it’s not the most effective system for coping with risks such as mercury or nuclear power or herbal drugs. Thankfully, after these first moments of the risk response, the cortex and its powers of reason do indeed get to add their analysis. But LeDoux found that even though both instinct and reason have their say in risk assessment, instinct and emotions have the decided edge. As LeDoux puts it in The Emotional Brain, “. . . the wiring of the brain at this point in our evolutionary history is such that connections from the emotional systems to the cognitive systems are stronger than connections from the cognitive systems to the emotional systems.” So not only do we feel first and think second—in general, we feel more and think less.
Mental Shortcuts
Big decisions based on little information
Answer the following questions with yes or no:
1. Are pesticides a serious threat to public health?
2. Is genetically modified food a serious threat to public health?
3. Is bisphenol A (a chemical ingredient of plastics, also used to line food cans) a serious threat to public health?
Now, the most important question:
4. Did you have all the facts you needed to make a fully informed, analytical, reasoned decision about any of the first three questions?
I have posed these sorts of questions to thousands of people at various speaking engagements and in educational settings. Typically, I get a mix of yeses and nos to the first three questions. The only question that gets a unanimous answer is the fourth question. No one thinks they have all the facts necessary to answer any of the other questions. Nevertheless, most people make such judgments about risks without having all the facts.
This represents the second part of our risk response. Faced with a situation that the amygdala doesn’t have the built-in tools to recognize (like many of the risks we face in our more complex, modern world), the brain draws upon a set of subconscious, mental shortcuts to help us quickly judge whether we are in danger.
The researchers who first identified these hidden shortcuts (known as heuristics or biases) initially focused on economics. Daniel Kahneman, a Princeton professor, Nobel laureate, and author of the seminal book Thinking, Fast and Slow, devised (with many collaborators) a variety of experiments that revealed why people make apparently irrational choices about money. But these mental shortcuts apply to risk perception and decision-making in general.
For example, people are more sensitive to, and more troubled by, loss than they are pleased by an equivalent gain—a mental shortcut called loss aversion. Kahneman asked doctors to choose between two treatments, one that would save 10 percent of patients and one that would let 90 percent die. The two options are statistically identical, but doctors overwhelmingly preferred the one framed as saving 10 percent, because losing patients feels terrible. Losing polar bears feels bad, too, as does losing rainforest and soil and the Arctic ice—in part because the very word “losing” evokes a mental shortcut that makes circumstances feel more painful.
Another shortcut is known as the “representative effect.” We routinely assess bits of information based on how they fit into patterns of what we already know and believe. When we make quick initial judgments about people, we term this stereotyping. So if I told Michael, “There’s a new industrial chemical called bovine growth hormone that’s being used to increase milk production,” he would instantly place that statement into a context based on what he already knows and feels about “industrial chemicals.” Consequently, his first response to the mention of bovine growth hormone (BGH) would probably be worried and negative. He knows practically no facts about the chemical itself, but he is able to make an initial judgment by fitting it into a framework of what the first few facts represent.
The way information “feels” is also powerfully influenced by the context and meaning in which it is initially presented. This is known as “framing.” If I tell my environmentalist friend, Nancy, “There are new power-plant designs that most environmentalists support,” she will feel differently than if I say, “There are new power-plant designs that the energy industry supports.” She may not know anything about those designs, but she will be more worried about them simply because of how I first framed the few facts I presented.
Fear Factors
What skews our sense of danger?
Some risks may be instinctive—the dark, enclosed spaces, snakes, spiders. But we aren’t born afraid of mercury or nuclear power or “chemicals.” What makes them particularly scary? And why are huge environmental threats such as climate change, particulate air pollution, and ocean acidification perceived as less scary than the evidence indicates?
In the 1970s, some of Kahneman’s colleagues—including Paul Slovic and Baruch Fischhoff—conducted a series of studies, finding that risks essentially have personality traits: psychological “risk-perception factors” that make factual information feel more, or less, frightening. Risk-perception factors give the cold, hard evidence its affective meaning, its emotional valence. More than a dozen of these factors have been identified; those with the strongest influence on our perception of environmental risks include:
Is it natural or human-made? If a risk is human-made, it will feel scarier than if it’s natural. That’s one reason why radiation from nuclear power scares Nancy more than radiation from the sun, and why Andrea worries more about industrial pharmaceuticals than herbal remedies. It’s a big part of why some people worry about genetically modified food but not about hybrids created “the natural way.”
Is the risk imposed or voluntary? A risk we choose to take won’t worry us as much as a risk that’s imposed on us. Nancy is less afraid of exposing herself to the carcinogenic radiation of the sun because she chooses to do so. The radioactivity from Chernobyl or Fukushima, which is actually less carcinogenic than solar radiation, is imposed on her. That makes it feel scarier.
Risk versus benefit. The greater the benefit of a choice or behavior, the less afraid we’ll be of any risk that it may involve. Nancy enjoys looking tanned, a benefit that makes exposing herself to carcinogenic radiation less scary. She perceives no direct personal benefit from nuclear power, which is part of the reason why that risk feels greater.
Trust. Trust has a huge influence on our risk perceptions. Most of what we claim to know has actually only been learned from others—sources that, for various reasons, we trust. Michael has not studied the toxicology of mercury. Nancy has no first-hand knowledge of the science regarding the biological effects of nuclear radiation. But sources they trust have warned them about these threats, and trust has made those warnings powerful.
Can it happen to me? We worry much more about dangers we think will affect us personally. This is why concern about climate change is broad but thin. Polls have found that most people believe that the negative impacts of climate change will affect only polar bears—or somebody else. Perhaps the recent spate of extreme weather will change that risk perception.
Risks to kids. We’re much more concerned about risks to children than risks to adults. Recently, environmental groups successfully pressured Johnson & Johnson to remove potentially carcinogenic ingredients from its products, by specifically targeting the company’s baby shampoo.
Safety in Numbers
For tribal cohesion, open minds are dangerous
So how do we sense whom to trust? Research in cultural cognition theory by Yale University law professor Dan Kahan and colleagues offers some important answers. They found that how people see facts depends on which groups they belong to. In other words, people routinely cherry-pick the evidence that supports the opinions of the group(s) with which they most strongly identify. According to the theory of cultural cognition, people can be sorted into four groups, based on deep underlying worldviews about how society should operate.
Hierarchists prefer a predictable, “the way it’s always been” approach to issues as well as a rigid hierarchical ladder of social and economic class. They don’t like change, and they don’t want government shaking things up and leveling the playing field in an effort to make things fair. Hierarchists tend to deny climate change because the solutions will require government intervention and economic change.
Egalitarians prefer a society that feels fairer and more flexible and not stuck in rigid hierarchies. They like government intervention that challenges the entrenched power of the economic status quo, so the seriousness and urgency of climate change is music to an egalitarian’s ears.
Individualists prefer that society and government let individuals decide things for themselves. Communitarians, on the other hand, prefer a “we’re all in this together” society. Individualists generally deny the evidence of climate change and communitarians generally accept it, because solving such an immense problem demands the kind of collective action that fits the communitarian worldview and clashes with the individualist one.
This tribalism is reinforced by the trusted “thought leaders” who carry a tribe’s ideological banner. What Al Gore or Bill McKibben says about climate change is taken as a matter of faith by egalitarians and communitarians, not because Gore or McKibben is a scientific expert, but because they champion the sorts of egalitarian and communitarian approaches to the problem that ring true to those worldviews. To hierarchists and individualists, climate-change deniers such as George Will or Senator James Inhofe speak “the truth” because they are the intellectual standard-bearers of a more conservative/libertarian worldview that appeals to those groups.
Cultural cognition explains why bright, educated people can see the same facts in such different ways. It also explains why we argue about them so fiercely. We are social animals, and we have come to rely on our particular group for our health and safety. By agreeing with our group, we are accepted as members in good standing. That helps keep us safe, and it reinforces the solidarity and influence of our group within the larger society. If society is functioning in the way we prefer, we feel safer; when society isn’t operating by our preferred rules, we feel threatened.
Narrowing the Gap
There is no doubt that getting risks wrong can itself be risky. No matter how smart we like to think we are, our instincts sometimes produce such dangerous mistakes. The perception gap can lead to dangerous personal choices and behaviors, such as Nancy’s getting too much sun or Michael’s giving up healthy foods that “feel” scary. The perception gap can also lead to dangerous policies. Nuclear-power fears have profoundly shaped national energy strategies. In the absence of other large-scale energy alternatives, we continue to burn coal—which kills tens of thousands of people, sickens millions due to particulate pollution, and plays a major role in perturbing the climate. Furthermore, worrying more than the evidence warrants creates unnecessary stress. Chronic stress can raise blood pressure, weaken the immune system, and increase the likelihood of suffering from clinical depression and Type 2 diabetes.
So where does this leave us? Are Michael and Nancy and Andrea, and you and I, the powerless victims of what environmental writer Andy Revkin calls our “Inconvenient Mind”? Was Scottish philosopher David Hume right when he observed, “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them”?
No. There is hope. The existentialist philosopher Nicola Abbagnano put it better: “Reason itself is fallible, and this fallibility must find a place in our logic.” We can fight back against this fallibility by intelligently applying what we’ve learned about why our perceptions of risk sometimes don’t match the evidence. But that will require some difficult leaps. Here are a few places for each of us to start.
Slow down. Think things through. Give yourself more time than you usually take. Don’t just go with your initial gut instinct, which is shaped by all those risk-perception factors and mental shortcuts that may get you into trouble.
Get more information. Having more facts will give reason a bit more say in the process. Also, use your own brain; don’t just rely on somebody else’s. Do a little digging yourself.
Expand the range of sources from which you get information. Don’t just rely on the ones from your tribe because they feel trustworthy. And be just a little more cautious about what your trusted sources say. After all, environmental groups and leaders have agendas too, just like politicians and corporations and government agencies. Just because their views might match yours and feel good doesn’t mean your sources aren’t spinning the facts to advance their point of view—rather than to honestly and objectively inform you.
Will thinking more carefully make us perfectly objective Cartesian rationalists? Hardly. The instinctive, subjective way we interpret things is powerful and deeply embedded in the way our brain works; it actually operates subconsciously and beyond our free will. We can’t completely overcome it. But if we can achieve a post-Enlightenment acceptance of the limits to reason and realize the very real danger those limits create, we can engage our powerful mental capacities to combat the instincts that send us tumbling into the perception gap.
David Ropeik is an instructor in Harvard University’s Continuing Education Program and a former award-winning broadcast journalist. He coauthored Risk: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You, and he has written articles for the New York Times, the Washington Post, USA Today, the Los Angeles Times, and the Boston Globe. His latest book, How Risky Is It Really? Why Our Fears Don’t Always Match the Facts, is published by McGraw Hill. David lives in Concord, MA.