
Note: This article is from Conservation Magazine, the precursor to Anthropocene Magazine. The full 14-year Conservation Magazine archive is now available here.

Virtual Ecosystems

July 29, 2008

Animated by a few simple yet baffling rules, virtual ecosystems growing in supercomputers bear an uncanny resemblance to real ones. The simulations challenge conventional wisdom about extinctions and invasions. It is time to start thinking about how these models could be used—or misused—to inform conservation decisions.

By W. Wayt Gibbs

Conservation biology is an endless struggle with complexity. Most people try to focus their work sharply enough that they can ignore the myriad external factors beyond the scope of their project or theory. But a few step back, survey the larger landscape, and try to envision an underlying code, a simple set of rules that creates complex phenomena by mere iteration and interaction—much like the rules that build millions of species from just four bases of DNA and 22 amino acids. Such theorists dream of having a massively complex matrix of empirical data. If their programs can generate patterns that are indistinguishable from those in the matrix, then they might just describe the real system well enough to control it.

It would be a great boon to conservation biologists, for example, to be able to model how the extinction of a threatened species or the addition of an invasive species will alter an ecosystem. Such changes ripple through the food web, changing who eats whom in ways that are sometimes profound, yet are rarely obvious. Any ecosystem worth preserving typically has hundreds or thousands of plant and animal species, and nearly all the animal species are both predator and prey. (Even an eagle is prey to its parasites, for example.) A graph of these interrelationships—a food web—is intimidating in its complexity.

Food webs, in other words, are just the sort of matrices that appeal to mathematically minded theorists such as Neo D. Martinez, Visiting Professor of Nonlinear Systems at Cornell University’s Center for Applied Mathematics, who is based at the Rocky Mountain Biological Laboratory (RMBL). Undeterred by the earlier failure of famous ecologists to devise accurate theories of food webs, Martinez has spent the past dozen years both adding to the matrix of data and, more significantly, peering through the fog of complexity to the simple, invariant rules underneath. Using statistics and computer simulations, Martinez and his current and former RMBL postdoctoral researchers1 seem to have uncovered some fundamental properties shared by most—perhaps even all—natural ecosystems.

The virtual food webs that Martinez’s models generate on supercomputers look remarkably similar in their structure to those documented by field biologists. He and his collaborators are already expanding the simulations so that they can also explore the dynamics of the webs. What happens when human activity removes or adds species to these virtual worlds? Which species are most important to sustainability: the predators at the top or the primary producers at the bottom? Are highly connected food webs more or less vulnerable to extinctions?

The answers that emerge from the supercomputer studies are often counter-intuitive, although they await verification by fieldwork. And it is still far from clear what the basic mathematical rules mean in biological terms. It does seem clear, however, that ecological computer models are approaching the level of sophistication that will thrust them into the public eye, ready or not. Even as he hopes to apply them to solve real problems in conservation management, Martinez worries about how others may misuse the technology.

Simple Rules

Ecology was ripe for the new theory of food webs that Martinez published in 2000. Until the 1990s, the dominant theory held that food webs of all sizes and sorts share eight or so characteristics—some scientists in the field even referred to them as laws of nature. But these “scale invariant properties” turned out to vary quite a bit in large, high-quality food web data that was collected (some of it by Martinez) in the early and mid-1990s (see sidebar: A Brief History of Food Webology).

Williams and Martinez claimed in Nature that a disarmingly simple model could generate the wide variety of trophic structures seen in real-life ecosystems (1). By plugging two simple parameters—the number of species and the “connectance” of the web (links divided by the square of the species number)—into the model, they were able to reproduce the structure of seven large, well-studied food webs with tenfold greater accuracy than previous theories.

Martinez calls their theory the “niche model” because at its core lies the idea that species eat and are eaten by other species within a certain niche. Models are simplifications of nature, and the “niche” here is not a conventional ecological niche but rather a mathematical abstraction. Imagine a number line stretching from zero to one. Each species is assigned a random spot on that line and is allowed to eat only those species that fall within a certain (randomly chosen) distance from that spot. Cannibalism is allowed, as are food “loops,” in which A eats B, B eats C, and C eats A.
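The rules above are simple enough to sketch in a few lines of code. This is a minimal illustration of the published niche model, not Martinez’s actual software; the use of a beta distribution to draw the feeding-range width (so that its expected value matches the target connectance) follows the 2000 Nature paper, and the function name is mine.

```python
import random

def niche_model(n_species, connectance, seed=None):
    """Generate a food web with the Williams-Martinez niche model.

    Returns a dict mapping each predator index to the set of
    indices of the species it eats.
    """
    rng = random.Random(seed)
    # Feeding-range width is the niche value times a Beta(1, b)-distributed
    # fraction; b is chosen so the expected connectance equals the target.
    b = 1.0 / (2.0 * connectance) - 1.0
    niche = sorted(rng.random() for _ in range(n_species))  # spots on the 0-1 line
    web = {}
    for i, n_i in enumerate(niche):
        r = n_i * rng.betavariate(1.0, b)   # width of the feeding range
        c = rng.uniform(r / 2.0, n_i)       # center, at or below own niche value
        web[i] = {j for j, n_j in enumerate(niche)
                  if c - r / 2.0 <= n_j <= c + r / 2.0}
    return web
```

Cannibalism and loops arise naturally here: a species’ own niche value can fall inside its feeding range, and the ranges of two species can each contain the other.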

Martinez and Williams wrote a computer simulation based on the model. They tell the machine how many species to include and the desired connectance value; the program then turns out thousands of randomly generated food webs based on those two basic criteria. The interesting, almost magical result came when they compared the simulated, one-dimensional food webs to the matrix of experimental data carefully collected from real ecosystems: Skipwith Pond, the Coachella desert, the Ythan estuary, the Chesapeake Bay, and so on. By any of a dozen different statistical measures, the computer-generated webs look amazingly similar to real food webs that have the same number of species and the same level of connectance.

Further tests have confirmed that the niche model really does seem to mimic or tap into some fundamental property of ecosystems. But neither Martinez nor anyone else has yet worked out what “niche distance” means in the real world. The model is anonymous: species are tossed onto the line at random. Before the model can be directly applied to conservation, biologists will have to work out where each particular species in an ecosystem under study should go on the number line.

So far, Martinez says, he only knows what “niche distance” isn’t. It is not a simple body size relationship, although body size seems to be one factor. Nor is it just trophic level, although that too seems to figure in somehow. Rather, the position and breadth of each niche is a complex combination of many ecological and physiological factors.

Although the model is not yet able to serve as an oracle that predicts the future of particular food webs, it has led to the discovery that these networks operate in strikingly similar ways to other kinds of networks—but not the “small worlds” that many had assumed.

Not a Small World After All

You have no doubt heard of the notion that each person is no more than “six degrees of separation” from every other person on Earth. There are many such examples of highly connected networks: Hollywood movie co-stars, U.S. airline routes, chain letters, Internet web sites. All of these collections share two properties. They are “small worlds,” meaning that any element (actor, airport, web site, etc.) can be reached from almost any other in just a few steps, with elements clumping into tightly linked clusters of various sizes. And these networks are also “scale-free,” which is a technical way of saying that the system is dominated by hubs that have a huge number of links (Kevin Bacon, Chicago O’Hare, Google, etc.).

Scale-free networks tend to be more resistant to widespread collapse than are systems in which elements are linked at random. Because food webs seem stable, many theorists naturally assumed that they shared these properties as well.

In fact, food webs usually exhibit neither characteristic, according to recent studies by Martinez and his collaborators including Albert-László Barabási, a leading network theorist. “People wanted to see food webs as scale-free and small world,” Martinez says. “But they just aren’t.”

That is not to say that species aren’t tightly connected. In large communities, both terrestrial and aquatic, more than 95 percent of species are within three links of each other. It is just that they don’t seem to group into clusters the way that social networks typically do.
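The “three links” claim is easy to check on any web expressed as a predator-to-prey mapping: a breadth-first search over the feeding links, treated as undirected, gives the fraction of species pairs within a given number of links. A minimal sketch (the function name is mine):

```python
from collections import deque

def fraction_within(web, k=3):
    """Fraction of ordered species pairs separated by at most k
    feeding links, with links treated as undirected."""
    adj = {}
    for pred, prey in web.items():
        adj.setdefault(pred, set())
        for p in prey:
            adj[pred].add(p)
            adj.setdefault(p, set()).add(pred)
    close = total = 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue            # don't expand beyond k links out
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for other in adj:
            if other != start:
                total += 1
                close += other in dist
    return close / total if total else 0.0
```

On a five-species chain (each species eating only the next one down), every pair except the two endpoints lies within three links, so the function returns 0.9.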

Rather, food webs tend to take on shapes more reminiscent of the branching patterns seen in river systems and blood vessels. In an intriguing paper this past May, Diego Garlaschelli of the University of Rome and coworkers analyzed the same seven food webs Martinez had studied and found that the exponent in a simple statistical equation was very nearly constant in all seven communities (2). That equation holds for many kinds of natural transportation networks, and Garlaschelli speculates that perhaps food webs are optimized to distribute energy through the ecosystem as efficiently as possible. But as with the niche model, no one yet knows just how to interpret the statistical equation in biological terms.

Inflection Points

It may inhabit the realm of the abstract, but the niche model can still make predictions about the real world. One important prediction of Martinez’s simulations is that ecosystems can react to external pressures in nonlinear ways—in other words, a small push at the wrong place or time can have disproportionately large effects. “It’s a prevalent hypothesis in ecology,” Martinez notes. The idea was popularized decades ago by Paul Ehrlich’s “rivet hypothesis,” which states that removing species from an ecosystem is like extracting rivets from an airplane in midair. The plane can lose a few rivets without failure, but at some point, a wing falls off. “There hasn’t been much data to support the idea, though,” Martinez points out.

That is still true, and yet the niche models clearly predict that large ecosystems, almost regardless of their initial structure, will exhibit such thresholds. The position of the critical points varies quite a bit from one ecosystem to another. But cross that line—remove one species more—and the system takes a much bigger hit than was inflicted by the previous extinction. Small food webs, or those with low connectance, are already very fragile and can tolerate few or no removals.

Testing these predictions will not be easy. Field biologists would have to reconstruct the order and relationship of extinctions in a community so that they could separate primary extinctions caused directly by human activity from secondary extinctions that occur as the effects ripple through the food web. As an intermediate step, Martinez, Williams, Brose, and Dunne have been working to add another dimension—time—to their ecosystem simulations.
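One common way the food-web literature quantifies that distinction (as in the Dunne, Williams, and Martinez paper listed under Suggested Reading) is to delete species directly and then count the cascade of secondary losses: a consumer disappears once every one of its prey is gone. A minimal sketch, with my own function and parameter names:

```python
def secondary_extinctions(web, basal, removals):
    """Count secondary extinctions triggered by a set of primary removals.

    web:      dict mapping predator -> set of prey
    basal:    primary producers, which need no prey to persist
    removals: species removed directly (the primary extinctions)
    """
    alive = set(web) - set(removals)
    changed = True
    while changed:              # let the losses cascade until stable
        changed = False
        for sp in list(alive):
            if sp in basal:
                continue
            # a cannibalistic self-link doesn't count as sustenance
            if not (web[sp] & alive - {sp}):
                alive.remove(sp)
                changed = True
    primary = len(set(removals) & set(web))
    return len(web) - len(alive) - primary
```

In a four-species example where species 2 eats producers 0 and 1, and species 3 eats species 2, removing both producers wipes out the whole chain: two primary extinctions trigger two secondary ones.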

Animating Virtual Life

Adding time means adding a huge amount of biological complexity to the model. A snapshot-like model can ignore many ecological processes that a movie-like simulation must capture. But Martinez knows that if simulation is ever to find use as a predictive tool for improving conservation decisions, it will have to incorporate the many dynamic forces that pull food webs into their distinctive shapes and that drive their responses to outside perturbation.

At Cornell’s supercomputer center, Martinez and his colleagues have been running programs that modify the niche model to include competition for plant nutrients and fighting among predators, avoidance behavior among prey, and the saturation and frustration effects that occur when prey is superabundant or terribly scarce. The programs now track biomass, accounting for metabolic rates and carrying capacities. “This makes for very dynamic models that are much more like what we see in nature,” Martinez says.

Many of these underlying phenomena are themselves nonlinear, which translates into a great deal of prey-switching among predators and counterintuitive results. Because people—even mathematicians—are not good at reasoning about nonlinear systems, computer models such as these could be a big help in understanding the potentially dramatic effects of what may seem minor environmental changes. But nonlinear problems also tend to be exceedingly sensitive to small errors in input data, so it can be dangerous to rely on them until they have been thoroughly tested by field experiments.

When one adds time and all its consequent ecological dynamics to previous food web models, the ecosystems quickly crash, Martinez reports. “But with the niche model, very few species disappear.” That fact in itself, he argues, suggests that the niche model has tapped into some kind of natural law that gives rise to sustainable ecosystems. “The more biology we add into the model, the more stable the food web appears.” The remarkable robustness of food webs governed by these rules, he says, “explains why nature works this way.”

A Silicon Crystal Ball?

Clearly, the next challenge is to move from description toward prediction. “We won’t be able to say: stop fishing this species at this rate and its population will double,” Martinez cautions. “But we might be able to get qualitative predictions that work.”

Martinez has already begun collaborating with Eric Berlow and Sarah Harper-Smith of the University of California White Mountain Research Station and Roland Knapp at the Sierra Nevada Aquatic Research Laboratory. Knapp has spent years collecting data on thousands of lakes in the Sierra Nevada to understand the ecological consequences of fish stocking by the California Department of Fish and Game. Knapp was able to show that the nonnative fish were decimating mountain yellow-legged frog (Rana muscosa) populations, among other species. (The state has already begun removing fish from some lakes.)

Berlow, Martinez, and Harper-Smith fed Knapp’s matrix of data into the computer simulation software and were able to visualize how the addition of fish had dramatically changed these aquatic communities. “When we looked at the webs, we realized the community was getting simplified to a much greater extent than we expected just from looking at the lists of species,” Berlow reports. Taxonomic species dropped by 30 percent in the stocked lakes—but the number of distinct trophic actors fell by 40 percent. And whereas in nonstocked lakes the food webs typically broke into several clusters, fish often reduced that structure to a single large group of links.

“Our next step is to incorporate dynamics and use the model to generate predictions of how virgin lakes would change when fish are added,” then to compare those predictions to the actual experience in the Sierra Nevada, Berlow says. If the model proves accurate, it should be able to help managers calculate how many fish, or which species, need to be removed from a lake to avoid damaging the food web.

This year, the National Science Foundation awarded a group of researchers including Martinez $3.8 million to develop ecoinformatic tools for putting together an Internet database of food webs. The “Webs on the Web” project will also refine software (which will soon be publicly available) for analyzing and visualizing ecological networks. That kind of funding tends to make a science more credible and more visible—ready or not.

A Sword with Two Edges

For all the potential usefulness of ecosystem models, there is a potential for misuse as well. The debate over climate change exemplifies how nonlinear models can confuse rather than clarify the understanding of an issue in the minds of the public and political leaders. The day may not be far off when a lobby group or Congressional representative trots out the results of a food web simulation to support some controversial policy decision. What if the models predict that the extinction of a snail darter will not affect the stability of its ecosystem or that only the habitat-wide protection of a spotted owl will prevent a catastrophic collapse of a forest food web?

“I am hyperaware of this potential problem,” Martinez says. “In ecology we tend to brush off the possibility that people could use our good science for bad purposes. But it happens all the time. I think we need to have a lot more discussion about social ethics in ecology. It should be a focus of study.”

A Brief History of Food Webology

As he looks back at the dissertation he filed on food web structure in 1991, Martinez seems to take guilty pleasure in his role in grounding the dominant theories of the time. “Throughout the 1980s,” he recalls, “the idea of scale invariance was rising.” Theorists thought they had identified the equivalent of pi for food webs, mathematical properties that remained constant regardless of the size of an ecosystem.

In a widely cited 1991 Nature review, “Food web patterns and their consequences,” the noted ecologists Stuart L. Pimm, John H. Lawton, and Joel E. Cohen identified more than half a dozen such scale-invariant patterns. At least in small communities, the ratio of links among species to the number of species seemed roughly constant, for example. The same was thought to be true for the fraction of top (predator-only) species, bottom (prey-only, primary producer) species, and intermediate (both predator and prey) species.

Some empiricists pointed out that the food web surveys on which the theories of fundamental constants had been “proved” were, at best, small and incomplete—in fact, the largest food web cited in the Nature review contained just 33 trophic species. (One trophic species includes all taxa that share the same predators and prey.)
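Collapsing taxa into trophic species is mechanical once the web is in hand: group every taxon by the pair of sets (its prey, its predators). A sketch, assuming the web is given as a predator-to-prey mapping (function name mine):

```python
def trophic_species(web):
    """Group taxa that share exactly the same sets of predators and prey."""
    # Invert the web to find each taxon's predators.
    predators = {sp: set() for sp in web}
    for pred, prey in web.items():
        for p in prey:
            predators.setdefault(p, set()).add(pred)
    # Taxa with identical (prey, predators) pairs form one trophic species.
    groups = {}
    for sp in predators:
        key = (frozenset(web.get(sp, ())), frozenset(predators[sp]))
        groups.setdefault(key, []).append(sp)
    return sorted(groups.values(), key=len, reverse=True)
```

Two algae eaten only by the same snail, for instance, collapse into a single trophic species, so a three-taxon web yields just two trophic species.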

In conjunction with the U.S. Environmental Protection Agency’s project to alter the pH of Little Rock Lake in Wisconsin, Martinez had assembled an unusually comprehensive food web for the lake, comprising 92 species. “My data and another web of 44 species on the Caribbean island of St. Martin came out around the same time,” Martinez recalls. “Both were collected specifically for food web analysis.”

The new and improved data showed that, contrary to the predictions of scale invariance theory, “there is almost nothing out there that doesn’t get eaten, either in juvenile stage or by parasites,” Martinez observes. Omnivory was thought to be rare; the new webs found it to be common. And whereas conventional wisdom held that every species should have about two links to predators and prey, the average number was ten in the Little Rock Lake web and about five in the St. Martin web. Exit the “laws” of scale invariance.


1 Richard J. Williams at the National Center for Ecological Analysis and Synthesis, Ulrich Brose at San Francisco State University, and Jennifer A. Dunne at the Santa Fe Institute.

Literature Cited

1. Williams, R.J. and N.D. Martinez. 2000. Simple rules yield complex food webs. Nature 404:180-183.

2. Garlaschelli, D., G. Caldarelli, and L. Pietronero. 2003. Universal scaling relations in food webs. Nature 423:165-168.

Suggested Reading

Williams, R.J. et al. 2002. Two degrees of separation in complex food webs. Proceedings of the National Academy of Sciences of the United States of America 99:12913-12916.

Dunne, J.A., R.J. Williams, and N.D. Martinez. 2002. Network structure and biodiversity loss in food webs: Robustness increases with connectance. Ecology Letters 5:558-567.

Brose, U., R.J. Williams, and N.D. Martinez. 2003. A comment on “Foraging adaptation and the relationship between food-web complexity and stability”. Science 301:918.

Dunne, J.A., R.J. Williams, and N.D. Martinez. 2003. Network structure and robustness of marine food webs. Santa Fe Institute Working Paper 03-04-024 and in press at Marine Ecology Progress Series.

About the Author:

W. Wayt Gibbs is senior writer at Scientific American magazine, where he has worked as an editor and reporter since 1992.


Illustration ©Michael Gibbs
