How cognitive psychology can inform conservation biology
By Judith L. Anderson
It was an affair only a statistician could love, but close to 100 regional biologists toughed it out, spending the day in Seattle listening and commenting as the [National Marine Fisheries Service] described its latest work in the arcane world of extinction mathematics….By late afternoon, Lloyd Moody, policy wonk from Washington Governor Gary Locke’s salmon advisory team, admitted he had a hard time following most of the discussion. (1)
It’s not just extinction mathematics. Conservation science in general is tending toward ever-increasing units of analysis, more mathematical and comprehensive analyses, and more complex decision processes (e.g., metapopulations, multivariate statistics, and ecological risk assessment). Integrating multiple objectives, explicit considerations of uncertainty, and discussions of probability and risk have become daily fare.
These approaches seem appropriate, even essential, for moving conservation science forward into the new millennium. But there’s one problem — our minds were designed back in the Stone Age. They are not well equipped to think about problems this way.
The purpose of this article is to show how understanding the stone-age mind can help us communicate science more effectively as we address complex conservation problems. I introduce some recent developments in cognitive psychology emerging from the hypothesis that the human mind includes an array of evolved cognitive skills that were useful in solving day-to-day problems faced by our ancestors. Thus, though most people may have a hard time with highly mathematical and complex science, they can do a number of closely related mental tasks very easily and efficiently.
For example, we package the world into distinct whole things and events, we manage future uncertainties by noting past frequencies of random events, and we capture our experiences by structuring them as cases and stories that can be “filed” and retrieved from memory. The trick is to bring these evolved skills on-line in scientific contexts.
If we are mindful of these skills, we can recognize opportunities to activate them. Sometimes a problem can be restructured to harmonize with mental processes that come easily to people, without sacrificing scientific rigor. Often we can enlist these skills to communicate a complex result more efficiently. It may even be possible to use them to enhance conservation biology as a discipline.
Cognitive Psychology: A Dose of Humility
Before the 1980s, cognitive psychology was great fun for anyone who thought humanity needed to be taken down a peg. At every turn, psychological research demonstrated that people were not smart at all, even those with advanced degrees and training in mathematics and logic. People couldn’t use decimal probabilities and elementary statistical concepts correctly. They were terrible at simple problems in abstract logic, and they failed to find optimal solutions in decision-making problems. The mistakes people made with these problems were so automatic, pervasive, and incorrigible that some were labeled “cognitive illusions,” a parallel to optical illusions. Given widespread mental incompetence, it seemed a wonder humanity had survived at all.
But that “wonder” was the key to understanding the cognitive shortcomings. We had survived, after all, having evolved from the common ancestor we shared with apes about 5 million years ago. For most of our evolutionary history, humans were highly social hunter-gatherers, living in small egalitarian groups and having low reproductive rates and long childhoods. Somewhere along the way, we developed language and culture, which vastly increased our ability to store and share information and alter our behavior and our environment. This long, successful history was achieved without the help of writing and mathematics, which have probably been around for less than 6000 years. Our brains must have been doing something right.
How People Turned Out to Be Smart After All.
In the last 15 years, a number of researchers in cognitive psychology, including Leda Cosmides, John Tooby, and Gerd Gigerenzer, began to apply evolutionary thinking to the paradox of human mental “disabilities.” They reasoned that cognitive illusions and other errors did not arise from bad problem solving and decision making. Instead, they argued that the traditional problems people had been given in the psychology lab failed to tap into abilities that had evolved by natural selection. So the researchers looked for evidence of aptitudes that might have been useful as ancestral, preliterate humans interacted with each other and with the natural world.
This evolutionary approach is bearing fruit. Often, the same problems that had proven so difficult previously became easy for people when they were restructured in ways that took advantage of the corresponding evolved ability. Table 1 shows examples of some of these cognitive skills. These abilities seem to proceed from our intuitive tendency to parse our environment and experience into discrete, bounded wholes.
Several arguments support the hypothesis that these skills may be, at least partly, “hard-wired” into our brains as evolved adaptations. They appear as children grow, without being taught explicitly. We employ them easily and automatically, often without being aware of it, just as we are not aware of the complex processing that produces vision. These skills — problem solving, dealing with uncertainty, and decision-making in the real world — clearly could have contributed to our ancestors’ survival.
The world doesn’t always accommodate our evolved skills. We often must grapple with problems that don’t package easily because they have no clear boundaries or are hard to define. For example, in conservation science, we need to understand ecosystems and biodiversity, work with decimal probabilities in risk analysis, and make predictions from theory (Table 1). We find these problems difficult precisely because they require the human mind to process inputs it is not designed to expect. It is a little like asking a calculator to do addition using Roman numerals.
Packaging Information in Discrete Units
Our brains can’t be hard-wired for both discrete and continuous approaches to understanding the world. The ability to divide up material objects and experience into separate things and events and to classify and name them is the basis for language and, hence, for much of human understanding. We pay the price for this spectacular achievement in our corresponding “disabilities” with continuous processes and with things that are poorly bounded or hard to define.
Understanding this constraint suggests ways to get around it. For example, Bunnell and Huggard (2) show how to report a cross-scale spatial analysis of habitat use while satisfying the reader’s desire for discrete units at a particular scale. They repeated their analysis of habitat use at four different forest spatial scales — patch, stand, landscape, and region — and presented the results for each scale within concentric circles. The reader can focus on each scale separately, picturing bounded, appropriately sized units, while still visualizing how they are nested. Bunnell and Huggard also suggest that analyzing the world as discrete units on limited spatial scales may actually be a good choice for applied conservation research. If research is to be useful for managers, in most cases it must relate to politically defined populations, species, and units of habitat.
People’s ability to classify the organisms around them seems to be an evolved skill, according to Scott Atran, who has studied “folk taxonomy” in a variety of cultures. Regardless of their cultural background, children effortlessly recognize and learn names for the creatures around them, and they classify them into a predictable structure whose main divisions are “kingdoms” (plants and animals), life-form “classes” (trees, birds), and “species” (oak, gull).
The existence of this intuitive folk taxonomy is important for conservation scientists. First, it suggests that a term such as “evolutionarily significant unit,” however important it may be in technical discussions, is likely to be problematic for non-scientists who may feel more comfortable with units representing levels of the folk taxonomy structure. The United States would probably never have passed an “Endangered Evolutionarily Significant Units Act.” Second, it suggests that difficult concepts such as ecosystem may be more easily managed when connected with a familiar folk taxonomy level. For example, despite theoretical criticisms of the idea, conservation biologists continue to be attracted to umbrella or indicator species as bellwethers for otherwise ill-defined ecosystems or habitats, especially when practical management considerations loom large.
Focus on Frequencies
We find probabilities difficult. But like many non-human animals, we easily and automatically track and compare frequencies of significant events. This ability had obvious survival value for our ancestors. A hunter’s plan for the day may have depended heavily on frequency information such as, “In my last 10 trips to that valley, I killed a deer six times.”
In mathematics, probabilities are closely related to frequencies. For our brains, however, the difference between probabilities and frequencies is dramatic. A variety of serious errors crop up when human brains grapple with decimal probabilities; this is the favored habitat of cognitive illusions. One of the most famous is the “Linda problem”:
Linda is 31 years old, single, outspoken, and bright. She is interested in political issues, concerned about human rights, and active in her local community. Which of the following statements about Linda is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and active in the feminist movement.
If you chose statement two, as most people do, you have been fooled by a cognitive illusion. Statement one is actually more probable because the group of bank tellers includes both feminists and non-feminists. Probability theory tells us that a person is more likely to be a member of the larger group of statement one than the smaller group of statement two. This illusion is called the conjunction fallacy because statement two is a conjunction (joining) of two smaller statements. Statement two can be true only if both of its component statements are true.
Most people easily avoid the conjunction fallacy when information is presented in frequency format: Picture 100 women like Linda. How many do you think are bank tellers? How many are bank tellers and active in the feminist movement?
What’s happening here? If people were good at intuitively using probabilities, you might expect that asking whether a statement is “more probable” should set off a mental process of estimating and comparing decimal probabilities. But that does not occur. Instead, people apparently substitute plausibility for probability. Once they have made that unconscious substitution, they think that statement two is the right answer. It is indeed more plausible than statement one, but it is not more probable! In contrast, the use of frequency format — asking, How many (out of 100)? — seems to keep people firmly in the quantitative realm of counting and classifying. In this context, they are much more likely to give the answer that probability theory dictates, statement one.
In the context of conservation biology, the conjunction fallacy might crop up if you asked an expert, What is the probability that this population of the Pacific giant salamander (Dicamptodon tenebrosus) is declining? What is the probability that it is declining and disappearing from marginal habitats? The same question, stated in frequency format, shows clearly that the second estimate should be smaller: Imagine 100 populations of the Pacific giant salamander in forested mountain streams of British Columbia. How many are declining? Of those, how many are disappearing from marginal habitats?
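The counting logic behind the frequency format can be made concrete with a short simulation. This is a hypothetical sketch: the decline and disappearance rates below are invented for illustration, not drawn from any salamander data.

```python
import random

random.seed(42)

# Hypothetical sketch: imagine 100 salamander populations.
# Each may be declining; of the declining ones, some are also
# disappearing from marginal habitats. Both rates are invented.
N = 100
populations = []
for _ in range(N):
    declining = random.random() < 0.4
    disappearing = declining and random.random() < 0.5
    populations.append((declining, disappearing))

n_declining = sum(1 for d, _ in populations if d)
n_both = sum(1 for d, m in populations if d and m)

# In frequency format the conjunction can never be the larger count:
# every population counted in n_both is also counted in n_declining.
assert n_both <= n_declining
print(f"declining: {n_declining} of {N}")
print(f"declining AND disappearing from marginal habitats: {n_both} of {N}")
```

Counting makes the fallacy impossible to commit: the conjunction is a subset, so its tally can never exceed the tally of the larger group.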
In situations where frequency format can’t be brought in to help, we should be wary of the ease with which people confuse probability and plausibility. For example, it is tempting to include a lot of detail in an ecological model because the detail seems to improve its plausibility. However, the conjunction (joining) of a model and its assumptions implies that a model’s results can be true only if all its explicit and implicit assumptions are true. A cruder model that assumes less and promises less may produce more reliable results. Hilborn and Mangel (3) suggest that “… the optimal model size is much smaller than intuition dictates.” They cite an example in which fisheries management actions were better predicted by a model with no age structure than by one that included age structure, even though information was available about age-specific processes.
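The same conjunction logic can be sketched numerically. In this illustrative calculation (not from the cited studies; the 0.9 reliability figure is invented), each assumption is treated as independently holding with probability 0.9, so a model that needs all of them is only as reliable as their product:

```python
# Illustrative only: if each assumption independently holds with
# probability 0.9, the joint reliability of a model requiring all
# of them shrinks multiplicatively as assumptions accumulate.
p_each = 0.9
for n_assumptions in (2, 5, 10, 20):
    joint = p_each ** n_assumptions
    print(f"{n_assumptions:2d} assumptions -> joint reliability {joint:.2f}")
```

Under these assumptions, ten plausible-sounding assumptions leave the model's conclusions resting on roughly a one-in-three chance that all of them hold, which is one way to see why a cruder model that assumes less may promise more.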
Despite our problems in dealing with probability, it is entwined in most discussions of risk and uncertainty. In conservation biology, where risk and uncertainty abound, we might expect to encounter many probability analyses. However, commenting on a recent feature on uncertainty in Conservation Biology, Ralls and Taylor (4) observed that only 2 percent of papers in the journal explicitly analyzed risks. Perhaps one reason for this neglect is that working with probability is daunting for most people, especially when risks are involved. If conservation biologists are to feel more comfortable about embracing uncertainty, it is important to recognize why decimal probabilities are so problematic (5).
Telling Stories and Counting Cases
The easiest way to recognize an expert is by the relevance and richness of the stories he tells. Roger Schank, a researcher in cognitive science and artificial intelligence, suggests that a good part of intelligence consists of being reminded of an appropriate case or story from the collection in one’s memory. In conversation, for example, people often respond to the topic at hand by telling a story from their own experience; a good story is one that relates to, and whose details augment, what the other person has said.
Storytelling as a method of sharing information sounds a bit unscientific, and, as the “Linda problem” demonstrates, our predilection for stories can lead us astray. Nonetheless, storytelling can enhance science. An effective working group is usually one with an attractive coffee room, where science becomes partly an oral tradition. Anyone who has been to professional meetings regularly over the last few decades will have noticed the trend toward shorter presentations that tell a story (without weakening the science). The reason for telling stories at scientific meetings is only partly that they’re entertaining. Stories work because people are good at understanding, remembering, and retelling them. Eventually these vivid cases are stored in many individual human memories.
Schank explains that humans store experiences in discrete chunks. He suggests that many experiences seem to be structured as “scripts” or “schemas” — series of routine steps with decision points, alternatives, and room for experimentation. For example, a “bird census script” might include the following steps: 1. Drive truck to site. 2. Walk to first sampling location. 3. Listen and observe for 15 minutes while recording in notebook. 4. Walk to the second sampling location with an option for coffee break. And so on. Cases of the bird census script that involve departures from the routine, such as “the time I ran into a bear,” are stored in memory as “stories,” which rely on the script for their structure.
Our ability to use cases and stories grows directly out of our evolved skills with discrete wholes. The human mind sees a case or a story as a bounded unit, even though the experience it represents may not be bounded in reality at all. This enables us to count cases and note their frequency.
Using Cases to Solve Problems.
When trying to solve a real-world problem, people often look for a case similar to the problem at hand and adapt a solution that has worked previously (6). This approach to problem solving seems simple-minded because it is automatic and effortless. But it is useful — especially where there are few rules to fall back on.
Solutions drawn from previous cases come with an assurance that they’re likely to be easily understood and used by scientists and non-scientists alike. In their book, Berkes and Folke have collected examples of traditional societies in which people, unaided by computers and calculus, have relied on their own cognitive resources to solve resource management problems and share their solutions with others (7). One chapter in the book, authored by Janis Alcorn and Victor Toledo, provides an example: Milpa is an agricultural system that specifies rotation between crop production and forest regeneration on small patches within the tropical forests of Central America. Milpa has sustained both agriculture and the forest over many generations. The milpa process is transmitted from one generation to the next as a script, they suggest, embodied in religious and social customs.
Over the centuries, each farmer could call upon many cases of the milpa process from his own experience, that of his neighbors, and stories from the past. Relying only on their collective experiences with milpa, the farmers have found solutions to new problems by modifying the milpa script appropriately. For example, the Huastec people successfully modified the milpa process by shortening the fallow period to maintain some forest cover despite pressure to clear land due to high population density.
While having many cases to draw from is advantageous for problem solving, the case-based approach can be useful even when there is only one case, as with global climate change. No one alive has experience with global climate change, but it is possible to look for solutions by discussing stories or cases that are relevant to part of the problem. For example, David Suzuki, a leading Canadian environmentalist, suggests that an “energy revolution” to diminish the rate of greenhouse gas emission sounds less formidable when we remember earlier cases of energy revolutions — from wood to coal and from coal to oil and gas.
Indexing Cases: A Tool for Intelligent Reminding.
Smart people are good at indexing stories, Schank suggests. Indexing is the key to remembering and retrieving stories from memory. Our minds automatically and unconsciously associate each case in storage with index variables that help to identify what category of experience the case falls under, what aspects it shares with other kinds of experiences, and when it would be appropriate or useful to retrieve it.
A group of people with common interests, such as conservation biologists, should be able to help its members index their experiences intelligently to improve communication and problem solving. The critical step is to agree explicitly on index variables. They are more than just keywords writ large. Index variables for conservation biology should be both easy to assess in the field and reasonable indicators of important ecosystem variables. In general, such variables are easier to use if they are constrained to only a few levels — qualitative states or ranges of data values. For example, Keddy and Drummond (8) proposed a set of 10 simple indicators of forest condition for eastern deciduous forests, such as tree size and numbers of large carnivores. If adopted generally, such variables could be easily tracked in every forest stand of that type under study or management.
The failure to agree on a core set of standardized variables currently impedes information sharing. For example, Bunnell and Huggard compared 11 studies of habitat use by shrews in western forests of North America (2). Of a total of 120 habitat variables measured, 76 percent were unique to a single study, and only 8 percent were used in more than two studies. Even high-profile efforts like the National Audubon Society Christmas Bird Count and the USGS North American Breeding Bird Survey produce data sets that cannot be compared (9).
Choosing common measurements and achieving widespread use of good index variables are daunting tasks. However, they can be done. Consider as an example the fields of law and medicine where teaching and information sharing have been case-based to a considerable extent. In law, indexing includes, among other variables, a name for each case. Naming helps people remember, talk about, and use cases. In medicine, physicians explicitly agree on a small group of variables (e.g., vital signs and diagnoses) so that they can index their experiences similarly.
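A minimal sketch suggests what such indexing might look like in practice. The index variables, their levels, and the case names below are invented for illustration; the structure simply files each case under every (variable, level) pair so it can be retrieved from any of its agreed-upon index variables, in the spirit of Keddy and Drummond's few-level indicators.

```python
from collections import defaultdict

# Hypothetical sketch: index conservation "cases" by a few
# agreed-upon variables, each constrained to a few discrete levels.
index = defaultdict(list)

def add_case(name, **levels):
    # File the case under every (variable, level) pair so that any
    # index variable can trigger a reminding of the stored case.
    for var, level in levels.items():
        index[(var, level)].append(name)

add_case("Cedar Creek thinning trial",
         forest_type="deciduous", carnivores="absent", tree_size="small")
add_case("Basin old-growth reserve",
         forest_type="deciduous", carnivores="present", tree_size="large")

# Retrieval: which remembered cases share a given condition?
print(index[("forest_type", "deciduous")])
print(index[("carnivores", "present")])
```

Because each variable has only a few qualitative levels, two observers describing the same stand are likely to file their cases under identical keys, which is what makes the resulting collections comparable across studies.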
Of course, many conservation cases will not resemble court decisions or patients. If we are to use case-based problem solving, it will have to be purposefully adapted to the needs of conservation science. Several authors have begun to do just that. For example, Brunner and Clark (10) suggest that a management discipline that successfully uses cases will progress by setting many prototypes in motion, sharing information about their performance, and selecting the most successful cases as models for the future. In effect, this process is another kind of evolution — the evolution of collective knowledge and effective practice.
Dragging Conservation Science Forward into the Stone Age
Some mental skills come to us for free, apparently as a legacy of natural selection. Knowledge of our evolved skills can make conservation science more user-friendly by helping us package information and structure problem solving to fit our stone-age abilities. As scientists with an interest in ecosystems, we decry the human tendency to draw boundaries where they don’t occur naturally, to focus on frequencies when a more sophisticated analysis might demand probabilities, and to engage in storytelling. Perhaps we need not be so hard on ourselves. Our limited yet impressive evolved abilities may help more effective conservation practice evolve in our lifetime.
We should beware of drifting heedlessly toward the kind of science only a statistician could love. A successful conservation program must inspire positive responses from people who quite rightly depend on sophisticated processes for communication, judgment, and problem solving that date back a long, long way.
Table 1. Our Evolved Abilities & Difficult Cognitive Tasks Demanded in Conservation Science
| Evolved Abilities | Examples | Related Cognitive Tasks that Are Difficult | Examples |
|---|---|---|---|
| Packaging information in discrete units | Classifying organisms into a folk taxonomy | Dealing with continuous processes | Understanding ecosystems and biodiversity |
| Focusing on frequencies | Tracking past frequencies of significant events | Using decimal probabilities | Probability estimates in risk analysis |
| Telling stories and counting cases | Storing and retrieving cases from memory | Making decisions in the absence of experience | Making predictions from theory alone |
1. Rudolph, B. 2000. NMFS extinction analysis questioned at workshop. NW Fishletter. (www.newsdata.com/enernet/fishletter/fishltr110.html).
2. Bunnell, F. L. and D. J. Huggard. 1999. Biodiversity across spatial and temporal scales: Problems and opportunities. Forest Ecology and Management 115:113-126.
3. Hilborn, R. and M. Mangel. 1997. The Ecological Detective. Princeton, N.J., Princeton University Press.
4. Ralls, K. and B. L. Taylor. 2000. Better policy and management decisions through explicit analysis of uncertainty: New approaches from marine conservation. Conservation Biology 14(5):1240-1242.
5. Anderson, J. L. 1998. Embracing uncertainty: The interface of Bayesian statistics and cognitive psychology. Conservation Ecology 2(1):2 (http://www.consecol.org/vol2/iss1/art2).
6. Schank, R. C. 1999. Dynamic memory revisited. Cambridge University Press, Cambridge.
7. Berkes, F. and C. Folke (eds). 1998. Linking social and ecological systems: Management practices and social mechanisms for building resilience. Cambridge University Press, Cambridge.
8. Keddy, P. A. and C. G. Drummond. 1996. Ecological properties for the evaluation, management, and restoration of temperate deciduous forest ecosystems. Ecological Applications 6(3):748-762.
9. Boarman, W. I. and S. J. Coe. 2000. Finding value in pre-existing data sets. Conservation Biology in Practice 1(1):32-34.
10. Brunner, R. D. and T. W. Clark. 1997. A practice-based approach to ecosystem management. Conservation Biology 11(1):48-58.
About the author:
Judith L. Anderson is an adjunct professor in the Department of Psychology at Simon Fraser University in British Columbia. She works on the evolutionary psychology of decision-making processes.
Illustration ©Eric Westbrook/SIS