How AI is reshaping wildlife conservation — for better or worse


Over the wetlands of Senegal, researcher Alexandre Delplanque pilots a drone to count waterbirds: pelicans, flamingos, and terns. He flies the drone, but AI analyzes the images and counts the individuals in each flock, saving thousands of hours of analysis per survey, he estimates. And time is of the essence.

Since 1970, the average size of monitored wildlife populations has plummeted by more than seventy percent. The world is in the throes of a biodiversity crisis and, according to some researchers, undergoing its sixth mass extinction. The planet has previously endured five mass extinction events, the last of which ushered in the end of the Cretaceous period: the time of the infamous asteroid impact that unleashed an impact winter and killed the dinosaurs. That was sixty-six million years ago.

To rescue species from the brink of extinction, first you have to know what you have, and how many – which is often easier said than done, especially in fields where there is a staggering amount to count. Scientists estimate that less than 20 percent of the insect species on Earth have been identified. After AI reviewed just a week’s worth of camera trap footage in Panama, researchers say they found over 300 species previously unknown to science.

Pelicans in Senegal.
Image: Alexandre Delplanque

The premise of AI in scientific research is not without its critics. Proponents of high-tech conservation cite AI’s ability to analyze in seconds datasets that would otherwise take months, to decipher patterns in species’ interactions and distributions undetectable to humans, and to unravel a dizzying array of genomes. Critics point to its environmental impact, potential for bias, and insufficient ethical standards.

Much of the AI work in conservation focuses on analyzing thousands of hours of footage from remote cameras or aerial surveys, but it’s unlikely to end there. For now, researchers are concentrating on processing footage with object detection models, a type of AI that can identify and locate objects within an image or video. These models are often built with convolutional neural networks (CNNs) and are trained to identify species or detect their presence or absence.
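
To make that concrete, here is a minimal sketch of that kind of pipeline, using a general-purpose pretrained detector from torchvision rather than a conservation-specific model (projects like these typically fine-tune purpose-built detectors on labeled camera-trap data; the image file name is a placeholder):

```python
# Minimal object-detection sketch: flag camera-trap frames that contain animals.
# Illustrative only; real pipelines fine-tune detectors on labeled wildlife data.
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # generic COCO-pretrained detector
model.eval()

# Hypothetical camera-trap frame; converted to a [C, H, W] tensor in the 0-1 range.
image = to_tensor(Image.open("camera_trap_frame.jpg").convert("RGB"))

with torch.no_grad():
    prediction = model([image])[0]  # dict with 'boxes', 'labels', and 'scores'

# Keep confident detections; each box marks a candidate animal to review or count.
keep = prediction["scores"] > 0.8
print(f"{int(keep.sum())} detections above threshold")
print(prediction["boxes"][keep])
```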

Projects employing AI to “save species” often generate a media frenzy. Researchers in South Africa prompted a flurry of headlines asking whether AI can save “the world’s loneliest plant.” Scientists deployed drones over inaccessible swathes of the dense Ngoye Forest in search of a female partner for a male cycad held at London’s Royal Botanic Gardens, Kew. AI scanned the footage for signs of a species considered extinct in the wild, which researchers hope isn’t truly extinct – just obscured beneath the canopy. But some say these headlines are overblown and gloss over the consequences.

Counting pelicans using a drone equipped with cameras and AI in Senegal.
Image: Alexandre Delplanque

“There is a tidal wave of enthusiastic research about the applications of AI and much less critical research that looks at the costs, environmentally and socially,” said Hamish van der Ven, head of the Business, Sustainability, and Technology Lab at the University of British Columbia.

The training process for an AI model, such as a large language model (LLM), can consume over a thousand megawatt hours of electricity. The less obvious problem, says Shaolei Ren, whose research focuses on minimizing the health impacts of AI, is the water consumption of data centers.

Data centers house the infrastructure that provides the processing power for AI, and all that hardware must be cooled, usually with freshwater drawn from the local supply. Due to its cooling needs, AI is projected to withdraw between 4.2 billion and 6.6 billion cubic meters of water annually by 2027, much of which is lost to evaporation. And the environmental impact is not felt equally, as tech giants export their data centers overseas. Google’s plans to construct new data centers in Latin America sparked massive protests in Chile and Uruguay, biodiverse regions already suffering from severe drought.

“Data centers also create a public health crisis due to the air pollutants emitted, including fine particulate matter (PM2.5) and nitrogen oxides (NOx),” said Ren. The public health burden triggered by data centers in the U.S. – primarily situated in low-income areas – is projected to cost twenty billion dollars by 2030.

“The models we’re running aren’t huge – they’re big for us, but it’s not like Social Network Big Data.”

Yet the footprint of most biologists’ AI work is, for the moment, negligible. For his part, Delplanque has a single local computer processing the images, and his HerdNet model – which aids population counts of densely packed animals, such as elephants and antelopes on the savannah – took around twelve hours to train, compared with LLMs that train for weeks on massive servers.
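
HerdNet’s actual architecture is described in Delplanque’s published work; the sketch below is not that code, only an illustration of the general density-map idea often used for counting tightly packed animals, where a small CNN predicts a per-pixel density whose sum approximates the number of individuals (the layers and numbers here are invented for illustration):

```python
# Density-map counting sketch (illustrative, not HerdNet's actual architecture):
# a CNN maps an aerial image tile to a density map; summing the map estimates the count.
import torch
import torch.nn as nn

class TinyDensityCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one-channel density map
        )

    def forward(self, x):
        return torch.relu(self.features(x))  # densities are non-negative

model = TinyDensityCounter()
aerial_tile = torch.rand(1, 3, 256, 256)   # stand-in for one drone-image tile
density = model(aerial_tile)
estimated_count = density.sum().item()     # training targets are maps whose sum equals the true count
print(f"Estimated animals in tile: {estimated_count:.1f}")
```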

“We have this concern as scientists all the time: are we actually harming the environment that we’re trying to help? At least for the cases we’re talking about, I don’t think so, because the models we’re running aren’t huge – they’re big for us, but it’s not like Social Network Big Data,” says Laura Pollock, Assistant Professor in quantitative ecology at McGill University, who aims to deploy AI to extrapolate species interactions.

But computational ecologist Tanya Berger-Wolf argues current low-power applications aren’t harnessing the full potential of the technology, referring to image recognition as “old-school AI.” Berger-Wolf and Pollock co-authored a paper exploring the “unrealized potential of AI” to expand biodiversity knowledge.

“We want to go beyond scaling and speeding up what people already do to something new, like generating testable hypotheses or extracting unseen patterns and combinations,” says Berger-Wolf.

“What we’ve been doing with AI so far is obvious, which is all of this rapid image detection and acoustic monitoring, but we should be doing much more than that: using AI to ask the right ecological questions,” says Pollock.

One potential application that attracts both applause and denunciation is the concept of using AI to decode animal communication. The Earth Species Project is using generative AI and LLMs in hopes of building a translator to communicate with non-human life. There is also Project CETI, which uses a similar approach to understand sperm whales, which communicate via Morse-code-like clicks that, theoretically, can be deciphered. Already, scientists have used machine learning to suggest that elephants address individual family members with unique names. But the larger premise of decoding animal communication raises ethical questions as well as doubts about its prospects. In other words: Will it work? Is it a waste of resources to try? Should we talk to animals at all?

Counting elephants in Ivory Coast using AI and cameras attached to lightweight aircraft.
Image: Alexandre Delplanque

“We have to choose where these models will make a difference, not just use them because you have a shiny new toy,” Berger-Wolf cautioned. Applications like LLMs carry a large environmental footprint, so it’s “irresponsible to spend resources if the research outcome does not change. And data is a resource.”

Models are only as good as the data they’re trained on, which can potentially lead to bias and a misprioritization of conservation actions. Among the most common issues are spatial bias, where certain regions are overrepresented in datasets, and taxonomic bias, where charismatic species like pandas receive more funding and thus more data is readily available on them than on, say, an obscure beetle. But AI can also bias our perceptions and even subtly shape the questions we’re asking, argued van der Ven, who authored a paper on how LLMs downplay environmental challenges.
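
Both kinds of imbalance can be surfaced with a simple audit of the records before any model is trained; the sketch below assumes a hypothetical table of occurrence records with “species” and “region” columns:

```python
# Quick audit of taxonomic and spatial imbalance in occurrence records.
# The DataFrame and its column names ('species', 'region') are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "species": ["giant panda", "giant panda", "giant panda", "dung beetle"],
    "region":  ["Sichuan", "Sichuan", "Sichuan", "KwaZulu-Natal"],
})

# Taxonomic bias: how unevenly are records spread across species?
per_species = records["species"].value_counts()
print(per_species / len(records))  # here, 0.75 of the data is one charismatic species

# Spatial bias: how unevenly are records spread across regions?
per_region = records["region"].value_counts()
print(per_region / len(records))
```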

“There are far more options for AI to offer bias, extract resources, and drive overconsumption than there are conservation applications. If I could wave a wand and uninvent AI, I would,” he said. “If we weigh the benefits for conservation against how effectively Amazon uses AI to get consumers to buy more things, it’s a vastly uneven scale.”

In 2024, for its part, Google announced the deployment of an AI model to listen to coral reefs: SurfPerch. Bioacoustics play a key role in assessing reef stability – healthier reefs sound different – and SurfPerch analyzes audio signatures to measure the success of coral restoration efforts or identify impending threats. Around the time of the tool’s deployment, Google also announced it was falling short of pledged climate targets due to the environmental demands of AI.
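
SurfPerch’s internals aren’t described here; the sketch below only illustrates the typical first step in a bioacoustic pipeline of this kind – turning a recording into a spectrogram that a trained classifier can then score (the sample rate and the synthetic stand-in audio are placeholders):

```python
# Bioacoustics sketch: convert a reef recording into a spectrogram, the usual
# input for an audio classifier. Synthetic noise stands in for a real hydrophone clip.
import numpy as np
from scipy.signal import spectrogram

sample_rate = 22050                        # placeholder sample rate (Hz)
audio = np.random.randn(sample_rate * 10)  # 10 seconds of stand-in audio

# Short-time spectral content: rows are frequencies, columns are time windows.
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=1024)

# A trained model would score these windows for snaps, grunts, and other cues
# associated with healthy reefs; here we just report the spectrogram's shape.
print(power.shape)
```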

“It’s not hypocritical to use AI in conservation – it just needs to be used responsibly,” said Berger-Wolf. But when it comes to regulation, neither biodiversity nor AI neatly conform to geopolitical boundaries, she mused.
