How eavesdropping on elephants is keeping them safe


Rachel Nuwer, The BBC



A low rumble reverberates from a rainforest clearing in the Central African Republic. Occasionally, piercing roars and haunting wails emanate from among the trees. 

These are the calls of forest elephants that inhabit this tropical landscape. Hidden by the dense vegetation, they are the smaller and more enigmatic cousins of East and Southern Africa’s savannah elephants. 
They are more commonly heard than seen, but their diminishing populations are endangered by high levels of poaching.

Now, the calls these elusive elephants use to communicate with each other through the thick forests could provide researchers with the new tools they need to protect the animals.

“Our goal is to better understand and protect forest elephants, a keystone species roaming the second largest tropical rainforest on earth,” says Peter Wrege, a behavioral biologist at Cornell University who is part of a team attempting to decipher the elephants’ calls.

“We are using technology to improve their chance of survival and, in doing so, to conserve the biodiversity of their forests.”

Wrege and his colleagues recently teamed up with a company called Conservation Metrics to leverage technology on behalf of elephant survival.

The aim: to find the location of the elephants – and the poachers who seek to kill them – so the animals can be kept safe.

Wrege and his colleagues have collected around 900,000 hours of recordings from central African forests, thousands of hours of which include elephant vocalisations.

They have found, for example, that low frequency rumbles keep groups in contact with each other, while long, overlapping rumbles serve as greetings.

Such insights provide not only clues about elephant communication, but also an early warning to rangers that something might be amiss if the sensors pick up on elephant alarm calls or noises made by poachers, such as gunshots and human speech.

It remains to be seen, Wrege says, “whether technology can make it possible to do this at a truly meaningful landscape scale – tens of thousands of square kilometers where standard methods just won’t work.”

But the researchers are off to a strong start. Their largest current project includes a grid of 50 sensors monitoring 1,243 sq km (480 sq miles) of forest, recording the equivalent of two million noises from the forests every 3-4 months.

With the help of a form of artificial intelligence known as deep learning, analysing this huge volume of recordings, and picking out the 15,000 or so elephant calls, can be done in about 22 days.
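The deep-learning models themselves require the project's trained networks and data, but the first stage of any such pipeline can be sketched simply: flag audio windows whose energy is concentrated in the low-frequency band where forest-elephant rumbles sit, and pass only those candidates to a classifier. The cutoff and threshold below are illustrative values, not the project's:

```python
import numpy as np

def low_frequency_energy_ratio(signal, sample_rate, low_cutoff_hz=40.0):
    """Fraction of spectral energy at or below low_cutoff_hz.

    Forest-elephant rumbles are largely infrasonic, so a window with a
    high ratio is a candidate worth sending to a trained classifier.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs <= low_cutoff_hz].sum() / total

def flag_candidate_windows(audio, sample_rate, window_s=4.0, threshold=0.5):
    """Return start times (in seconds) of windows dominated by low frequencies."""
    n = int(window_s * sample_rate)
    starts = []
    for i in range(0, len(audio) - n + 1, n):
        if low_frequency_energy_ratio(audio[i:i + n], sample_rate) > threshold:
            starts.append(i / sample_rate)
    return starts
```

A real system would run a neural network over spectrograms of the flagged windows; this pre-filter only illustrates why the low-frequency character of the calls makes automated triage tractable.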

Wrege and his colleagues are also now testing prototypes for real-time detection.

“AI just makes us so much more efficient in all of these things,” says Lucas Joppa, chief environmental officer at Microsoft, which is supporting around 200 AI-based research projects, including the elephant listening one, through its AI for Earth program. 

“No human would be able to sit there and listen to two million songs in a language they don’t understand.”

Conservationists are increasingly turning to the power of technology to expand their work to a previously unimaginable scale. 

According to Joppa, advances in artificial intelligence in particular are opening up a suite of tools that could fundamentally alter the way we study and protect wildlife.

“We’ve been talking about machine learning and conservation for a long time,” he says. “But what’s happened over the past several years is we’ve made incredible strides not just in core level algorithms – things like deep neural networks – but we’ve also gotten a lot better at training algorithms in the conservation space.”

Machine learning and other types of AI provide a means for processing the increasingly huge amounts of data collected through camera traps, acoustic recorders, sensors, satellites and people on the ground. 

Analysing all this information would be overwhelmingly time-consuming if undertaken by hand, but with AI, it can be done with a few keystrokes. 

The efficiency and scale that AI offers conservationists can give them unprecedented insight into the natural world, and it also helps to solve one of their field’s chronic problems: lack of funding and manpower.

As Enrico Di Minin, a conservation scientist at the University of Helsinki, puts it, “If the resources for conservation were plentiful, we wouldn’t be facing a biodiversity crisis.”

Di Minin is creating machine learning algorithms capable of identifying posts on social media that are related to illegal wildlife trade. 

He is applying natural language processing – a form of AI that allows machines to extract information from written or spoken language – to process messages on platforms such as Instagram and Twitter to understand their sentiment.
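Di Minin's system relies on trained language models, but the core idea, scoring a post's text for sentiment about wildlife products, can be illustrated with a toy lexicon approach. The word lists here are invented for illustration only:

```python
# Toy lexicon-based sentiment scorer. Real systems use trained NLP
# models; these word lists are invented purely to show the idea.
POSITIVE = {"beautiful", "prestigious", "effective", "valuable"}
NEGATIVE = {"cruel", "illegal", "poaching", "extinction", "ban"}

def sentiment_score(post):
    """A positive score suggests approval of the product, a negative
    one disapproval -- a crude proxy for public perception."""
    words = post.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

Aggregated over many posts, even crude scores like these hint at how perception of a product shifts over time, which is the signal demand-reduction campaigns need.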

Initially, this method could shine a light on public perception of rhino horn use in places like China and Vietnam, for example – information that could then be used to design more effective demand-reduction campaigns.

Perhaps further down the line, law enforcement agencies could also use the program to help them elucidate how goods flow from the countries where the animals are poached to where they are used. It could provide a new way to identify emerging trends in the trade. 

“Most of the current work done by enforcers requires manual classification,” Di Minin says. “AI will help us elevate this to the next level, in which the crisis is analysed in real time.”

The possibilities only expand from there. A non-profit organization called Wild Me, for example, is using computer vision algorithms to provide instant identification of individual animals – including cheetah, giraffe, zebras, whale sharks and others – in camera traps and citizen scientists’ photographs. 

It is revolutionising researchers’ ability to follow the movements of animals without the use of costly, cumbersome tracking devices.

“I think there’s a dynamic here with machine learning that’s really well suited to conservation,” says Ted Schmitt, the conservation technology lead for Vulcan Philanthropic Initiatives in Seattle.

Many such initiatives are either led or supported by for-profit technology companies, including Microsoft, Google and several others. 

Through its AI for Earth program, for example, Microsoft is building and piloting robotic field agents to collect blood-feeding insects, sequence their samples using advances in genetic analysis and then spit out information about disease presence, insect feeding patterns and more. 
“What these for-profit companies can provide is a platform,” Schmitt says. “Then we and others can leverage those tools to build bespoke solutions.” 

The benefits for conservation have already begun to roll in. 

On iNaturalist, one of the world’s largest biodiversity citizen science monitoring applications, anyone can post a photo of a plant or animal they stumble across in the field, which experts then identify. 
This collaborative tool has led to discoveries of new species to science as well as to significant range expansions for known ones. But with hundreds of thousands of users, experts previously took an average of 18 days to provide identifications.

Through collaborations with Cornell University and Caltech, iNaturalist built a computer vision algorithm into the app that identifies a species’ genus with nearly 90% accuracy and presents users with its top five species suggestions based on where and what time of day the photo was taken. 

Citizen scientists or experts then apply human logic to determine the correct answer in just a few seconds.
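The reweighting idea, combining a vision model's raw scores with a prior based on where and when the photo was taken, can be sketched as follows. This is a hypothetical simplification, not iNaturalist's actual code:

```python
def top_suggestions(vision_scores, prior, k=5):
    """Combine classifier scores with a geographic/seasonal prior.

    vision_scores and prior both map species name -> probability-like
    weight; species absent from the prior get a tiny default weight.
    Returns the k best (species, normalised weight) pairs.
    """
    combined = {s: vision_scores.get(s, 0.0) * prior.get(s, 1e-6)
                for s in vision_scores}
    total = sum(combined.values()) or 1.0
    ranked = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    return [(s, w / total) for s, w in ranked[:k]]
```

The effect is that a species the camera finds plausible but that has never been recorded in that region drops down the list, leaving the human identifier a much shorter set of realistic options.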

Others are using AI not to make new discoveries, but to help protect the wildlife we already know exists.

At the University of Southern California’s Center for Artificial Intelligence and Society, researchers are honing “Paws” (Protection Assistant for Wildlife Security), a set of advanced algorithms that analyse landscapes and animal movements alongside information about past poaching activities and other factors to predict potential incursion locations.

When complete, the program will be able to instruct managers about the optimal places to send patrols, helping them to use their limited supply of rangers and resources to best protect a given area. 
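The underlying idea, scoring locations by predicted poaching risk and sending the limited number of patrols to the riskiest ones, can be sketched like this. The feature names and weights are invented for illustration; Paws itself learns its models from ranger data and adds game-theoretic reasoning about how poachers respond to patrols:

```python
# Hypothetical sketch of risk-ranked patrol planning; Paws learns
# these relationships from data rather than using fixed weights.
def risk_score(cell, weights):
    """Weighted sum of a grid cell's features, e.g. past incidents,
    animal density, distance to roads."""
    return sum(w * cell.get(feature, 0.0) for feature, w in weights.items())

def rank_patrol_cells(cells, weights, patrols=3):
    """Return the highest-risk cells, one per available patrol."""
    return sorted(cells, key=lambda c: risk_score(c, weights),
                  reverse=True)[:patrols]
```

Ranking cells this way is what lets a park stretch a small ranger force: patrols concentrate where an incursion is most likely rather than being spread evenly across thousands of square kilometres.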

More and more protected areas are rolling out software solutions that can strengthen traditional boots-on-the-ground protection. 

EarthRanger – a park management program built by Vulcan that analyses real-time data collected by animal collars, ranger radios, sensors, vehicles, drones and more – is one popular example, as is Smart, the Spatial Monitoring And Reporting Tool that allows wildlife managers to better monitor, evaluate and plan patrols.

Improvements in these tools mean they “can now be put into the hands of people working in the field by basically just adding a few lines of code,” says Joppa.

Back in 2012, exuberant media headlines and public relations campaigns declared that conservationists in Africa had finally found the silver bullet to stop poachers: drones.

Park managers, the stories claimed, were using unmanned aerial vehicles (UAVs) equipped with thermal imaging and night vision technologies to spot intruders overhead and stop them before they could kill elephants or rhinos.

Resources and attention were diverted from other pertinent efforts, and drone programs began popping up in Kenya, South Africa, Namibia and more.

What didn’t make the news, however, was the fact that field trials largely failed to get off the ground. Finicky technology broke in the rugged African terrain, and hardier models cost too much for parks to afford.

When UAVs did get off the ground, poachers proved difficult, if not impossible, to locate in expansive protected areas, some of which are the size of small countries.

“There were all these hidden costs to parks when people would show up with some piece of hardware, take a lot of staff time on the ground and disrupt operations,” Schmitt says. “A lot of mistakes were made right in front of managers and turned them sour on the technology.” 

In the end, nearly all the drone projects were called off.

Declarations of failure were premature, however.

According to Schmitt and other experts, it’s not that drones have no role to play in anti-poaching operations – it’s simply that they were deployed prematurely.

Now, seven years after the initial fanfare, their eventual use in conservation is looking more promising.

Hardware is becoming cheaper and the visual data they collect can be integrated into Paws, Smart and EarthRanger. With drones, however, there is an additional challenge: how to automate the detection process.

UAV and Drone Solutions (UDS), a South African company, weathered the early anti-poaching drone failures and is now the primary group in Africa putting the technology to use.

Their drone pilots fly in a number of parks, but to do so they must stay up all night, watching live video streams and trying to detect intruders by sight. This makes for a monotonous, error-prone and time-consuming task.

“We want to find a way to do this automatically, because detection is such a difficult process to do manually,” says Elizabeth Bondi, a graduate student in computer science at the University of Southern California.

To do that, Bondi and a team of computer scientists, including Joppa, are building SPOT, a deep learning system to automatically detect humans and animals in thermal videos captured by drones.

While the task sounds straightforward, training the program is incredibly challenging because of the vast amounts of data that are needed. 

“To learn you have to be taught, and to teach computers you need examples from the past,” Joppa says. 

Bondi began by manually labelling around 60 of UDS’ videos from the field by drawing boxes around objects of interest. Six months later, she and her colleagues had tallied around 180,000 animals and poachers.

After performing a field trial in Botswana with conservation charity Air Shepherd, however, they realised those 180,000 data points weren’t nearly enough: the program was producing too many false positives, and picking up on just 40% of poachers.
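In machine-learning terms, “picking up on just 40% of poachers” means the detector's recall was 0.4, while the false positives drag down its precision. A minimal sketch of how such detections are typically scored, by matching predicted boxes to ground-truth boxes via intersection-over-union, follows; the 0.5 matching threshold is a common convention, not SPOT's code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def recall_precision(predicted, truth, iou_threshold=0.5):
    """A prediction counts as a true positive if it overlaps an
    as-yet-unmatched ground-truth box by at least iou_threshold."""
    matched = set()
    tp = 0
    for p in predicted:
        for i, t in enumerate(truth):
            if i not in matched and iou(p, t) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    recall = tp / len(truth) if truth else 0.0
    precision = tp / len(predicted) if predicted else 0.0
    return recall, precision
```

Framed this way, the team's problem was twofold: recall too low (missed poachers) and precision too low (false alarms that waste rangers' time), which is why simply gathering more labelled examples mattered so much.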

Manually labelling more videos was time- and cost-prohibitive, so the team used AirSim to build a high-fidelity simulation of an African savanna as a drone would see it, complete with poachers, animals and lifelike features like bushes and trees – all with the appropriate heat signatures.

In laboratory tests, drones trained in the AirSim now pick up on 80% of poachers. Bondi and her colleagues plan to continue to improve detection rates and to eventually integrate the program with Paws and other management tools.

Organisations like the charity Over and Above Africa are now attempting to increase the use of drones by conservation parks by funding projects that use them with AI.

In ten or twenty years, AI may give conservationists radical advantages compared to today. They will likely be able to perform highly accurate, regular counts of wildlife through overhead surveys. 

Satellites may monitor fishing vessels from space to ensure they do not stray into protected areas or engage in illegal activities like pair trawling. 

And so-called smart parks will use cameras, drones, sensors, fences and roving robots to send automated real-time alerts to rangers.

As such solutions ramp up, there may be some drawbacks, especially for national parks in Africa, Southeast Asia and other poaching hot-spots, says Serge Wich, the founding director of Conservation Drones and a biologist at Liverpool John Moores University in the UK.

“Large wilderness areas might become very highly technologically monitored, which may take away some of the charm of going to those areas,” he says. “But when it comes to protecting animals and their habitat, I think extreme monitoring and management is becoming essential.”

Even so, this does not mean that AI alone can save wildlife from extinction and habitats from degradation and development.

“People still need to solve conservation problems – to do what we’ve always known we need to do,” says Joppa. Without that will, all the smart machines in the world won’t be enough.