New acoustic tool identifies fish species from their underwater sounds

Researchers have developed a novel underwater camera system that can pinpoint the source of individual fish sounds, a breakthrough that could revolutionize the acoustic monitoring of marine ecosystems. A study published in Methods in Ecology and Evolution details the new technology, which successfully identified the unique sounds of 46 fish species in the coral reefs of Curaçao.

The innovation, developed by a team from the conservation-technology nonprofit FishEye Collaborative, Cornell University’s K. Lisa Yang Center for Conservation Bioacoustics, and Aalto University in Finland, addresses a fundamental challenge in marine biology: knowing which fish is making which sound. By creating a reliable way to identify species by their acoustic signatures, the tool paves the way for using sound to monitor biodiversity, assess ecosystem health, and inform conservation efforts for the world’s increasingly threatened coral reefs.

What the New Report Says

The new tool, called the Omnidirectional Underwater Passive Acoustic Camera (UPAC-360), provides the most extensive collection of identified fish sounds in their natural habitat to date. During a four-day deployment in the Caribbean, the research team collected over 20 hours of simultaneous audio and video. Their analysis successfully attributed specific sounds—a chorus of thumps, pops, and grunts—to 46 distinct species. Notably, more than half of these species were not previously known to produce sounds, highlighting how little is understood about the underwater acoustic world.

Lead author Marc Dantzker, Executive Director of FishEye Collaborative, explained the core difficulty the tool overcomes. “When it comes to identifying sounds, the same biodiversity we aim to protect is also our greatest challenge,” Dantzker stated. “The diversity of fish sounds on a coral reef rivals that of birds in a rainforest.” Before this technology, the cacophony of sounds on a busy reef made it nearly impossible to decipher which species was responsible for a particular call.

Methods, Evidence, and Limits

The UPAC-360 system integrates a 360-degree video camera with a compact array of underwater microphones, known as hydrophones. The key innovation is its use of spatial audio, which captures the direction from which a sound wave originates. By visualizing the audio data and overlaying it onto the immersive video, researchers can see the precise location of a sound’s source, effectively matching a sound to the fish that produced it.
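The study does not publish the UPAC-360's localization code, but the underlying principle of spatial audio — inferring a sound's direction from tiny arrival-time differences across closely spaced hydrophones — can be sketched in a few lines. The following is a hedged illustration only; the hydrophone spacing, sample rate, and two-channel geometry are invented for this example, not taken from the actual device.

```python
# Illustrative sketch: estimating a sound's direction of arrival (DOA)
# from the arrival-time difference between two hydrophones, using
# cross-correlation. Geometry and parameters are hypothetical.
import numpy as np

SPEED_OF_SOUND = 1500.0   # m/s in seawater (approximate)
FS = 48_000               # sample rate, Hz (assumed)
SPACING = 0.20            # hydrophone spacing in metres (assumed)

def estimate_bearing(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Bearing in degrees from broadside, from the lag of the
    strongest cross-correlation peak between the two channels."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    # Positive lag means channel A received the sound *later* than B.
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    tdoa = lag / FS                                  # seconds
    # Far-field geometry: tdoa = (spacing / c) * sin(theta)
    ratio = np.clip(tdoa * SPEED_OF_SOUND / SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

# Synthetic check: an impulsive "click" reaching hydrophone A
# three samples before hydrophone B (source on A's side).
click = np.zeros(1024)
click[500] = 1.0
delayed = np.roll(click, 3)
print(estimate_bearing(click, delayed))
```

A real 360-degree system would use more than two hydrophones and resolve a full azimuth and elevation rather than a single bearing, but the time-difference-of-arrival idea is the same.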

This method improves upon previous techniques. Traditional passive acoustic monitoring could record soundscapes but could not identify the sources. Other approaches, such as divers or remote cameras with a narrow field of view, risked misattributing sounds, and their small footprint limited the observation area. The UPAC-360 provides a large detection zone with a low impact on fish behavior.

The primary limitation is the current state of the global fish sound library. While this study made a significant contribution, the sounds of the vast majority of soniferous, or sound-producing, fish remain undocumented. Researchers note they are a long way from building a comprehensive underwater acoustic identification system comparable to those for birds.

Data at a Glance

  • 46: Fish species with sounds identified in the study off the coast of Curaçao.
  • 700+: Estimated number of fish species that produce sounds in the Caribbean alone.
  • 360°: Field of view of the integrated video and audio system.
  • 20+: Hours of footage and sound collected over four days for the study.

Independent Context from Additional Research

Passive acoustic monitoring (PAM) has been a valuable tool in marine science for years, used by agencies like the National Oceanic and Atmospheric Administration (NOAA) since at least 2006 to monitor remote or degraded coral reefs. Scientists have long understood that healthy reefs have a rich and diverse soundscape, while degraded ones are often quieter. However, the inability to reliably connect sounds to species has limited the depth of these acoustic assessments.

The data generated by the UPAC-360 is critical for the next step in marine acoustics: machine learning. The lack of validated sound data for individual species has been a primary bottleneck in applying artificial intelligence to automatically analyze vast underwater recordings. With a growing library of identified sounds, researchers can train AI models to detect and classify fish species from audio alone, much like smartphone apps identify birdsong. This would allow for continuous, automated, and scalable monitoring of fish populations, tracking everything from biodiversity to spawning behavior without direct human observation. Research at institutions like Cornell is already developing machine learning toolkits, such as one named Koogu, designed specifically for bioacoustics applications.
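To make the training step concrete, here is a minimal sketch of how a labeled sound library lets a classifier map acoustic features to species. This is not Koogu's API or the researchers' pipeline: the species names, call frequencies, and the simple peak-frequency feature are all invented for illustration, and a real system would use spectrogram-based deep learning on far richer data.

```python
# Hedged sketch: training a toy classifier from a labeled call library.
# Species names and call pitches below are hypothetical.
import numpy as np

FS = 8_000  # sample rate, Hz (assumed)

def make_call(freq_hz: float, rng: np.random.Generator) -> np.ndarray:
    """Synthesize a short tonal 'call' at freq_hz plus background noise."""
    t = np.arange(int(0.25 * FS)) / FS
    return np.sin(2 * np.pi * freq_hz * t) + 0.3 * rng.standard_normal(t.size)

def peak_freq(signal: np.ndarray) -> float:
    """Feature: the frequency bin holding the most energy."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
    return float(freqs[int(np.argmax(spectrum))])

rng = np.random.default_rng(0)
# Toy labeled library: two invented species with distinct call pitches.
library = {"grunt-A": 150.0, "pop-B": 600.0}
centroids = {
    name: np.mean([peak_freq(make_call(f, rng)) for _ in range(20)])
    for name, f in library.items()
}

def classify(signal: np.ndarray) -> str:
    """Assign the species whose mean peak frequency is closest."""
    f = peak_freq(signal)
    return min(centroids, key=lambda name: abs(centroids[name] - f))

print(classify(make_call(150.0, rng)))
```

The point of the sketch is the workflow, not the model: validated species-to-sound labels are what turn raw recordings into training data, which is exactly the bottleneck the UPAC-360's library addresses.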

The Bottom Line

By creating a reliable method to identify which fish make which sounds, the UPAC-360 system provides the missing link needed to decode complex marine soundscapes. This technology transforms passive acoustic monitoring into a more powerful and precise tool for conserving vital and vulnerable coral reef ecosystems worldwide.