AI-supported bioacoustics
How does artificial intelligence help recognize animal sounds?
Modern technology goes one step further: Today, artificial intelligence (AI) is being used to assist in bioacoustics. But why is AI necessary? Imagine you have weeks of audio recordings from a forest—hours of birds chirping, frogs croaking, insects buzzing. Listening to and evaluating all of this by hand would take forever. This is exactly where AI-supported bioacoustics helps: computer algorithms take over most of the “listening work.”
Here are a few examples of how AI supports bioacoustics:
Automatic evaluation of large amounts of data: AI systems can analyze huge volumes of audio recordings far faster than any human listener. For several years now, algorithms have been available that automatically evaluate recorded bird calls. This automatic call recognition is not yet perfect, but it already makes the work much easier: researchers no longer have to listen to every recording in full; instead, they can have the AI highlight the passages worth a closer look. From weeks of forest recordings, for example, it is possible to determine which animal species were present without listening to every minute manually. The AI filters out the “exciting” moments, so to speak (the first code sketch after this list of examples illustrates the idea).
Species recognition through sound patterns: AI can assign animal sounds to specific species by recognizing characteristic sound patterns. To do this, the AI is “trained” on thousands of known animal voices; when it hears a new sound, it checks which learned pattern fits best. One example is the BirdNET app, a joint project between German and US researchers, which uses artificial intelligence to recognize birdsong. A recording of a bird call is first converted into a spectrogram – a visual representation of how the pitch and loudness of the call change over time (see the spectrogram sketch after this list). The AI, trained on an extensive audio archive that includes recordings from the Cornell Lab of Ornithology, then compares this sound image with the patterns it has learned and determines the bird species. From a short chirp, BirdNET can figure out, “Aha, that’s a redstart!” – and it does so in a matter of seconds.
Citizen science and apps: AI-based animal voice recognition isn’t just for scientists – it is also built into smartphone apps for nature lovers. Amateur birdwatching is booming right now, because increasingly sophisticated birdsong apps open up a whole new way of experiencing nature. Even complete novices can find out in real time which birds are singing around them: simply point your phone’s microphone toward the song, and the app (such as Merlin Bird ID or BirdNET) analyzes what it hears using AI. The display then shows, for example: “Song recognized: blackbird.” This even works if you can’t see the bird at all. Apps like these inspire many people to take an interest in birdlife and turn an ordinary walk into an exciting discovery tour.
Monitoring and nature conservation: AI opens up completely new possibilities in research and conservation projects. The Landesbund für Vogelschutz (LBV), for example, has launched a project to better study and protect nocturnal bird migration. At night, the flight calls of migrating birds are captured by automatic recorders and identified by AI programs such as BirdNET. This provides new insights into the extent and routes of bird migration and allows targeted conservation measures to be derived – for example, against light pollution, which disorients birds on their migration. The example shows how digital tools and AI help to protect biodiversity in very concrete ways. More generally, AI-supported bioacoustics allows endangered species to be monitored more effectively without disturbing them: trends (such as the decline of a species) show up more quickly, and countermeasures can be started earlier (the last sketch below shows how such detections could be tallied night by night).
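The “pre-sorting” idea from the first example can be illustrated with a minimal Python sketch: a long recording is cut into short windows, each window gets a confidence score, and only the windows above a threshold are kept for human review. The scoring function below is just a stand-in based on signal energy (an assumption for the sake of illustration); a real system would run a trained classifier such as BirdNET at this point.

```python
import numpy as np

def score_window(window: np.ndarray) -> float:
    """Placeholder 'detector': root-mean-square energy scaled to 0..1.
    A real system would run a trained neural network here instead."""
    rms = float(np.sqrt(np.mean(window ** 2)))
    return min(1.0, rms * 10.0)

def flag_segments(samples: np.ndarray, sample_rate: int,
                  window_s: float = 3.0, threshold: float = 0.7):
    """Return (start_time_in_seconds, score) for every window worth a listen."""
    hop = int(window_s * sample_rate)
    flagged = []
    for start in range(0, len(samples) - hop + 1, hop):
        score = score_window(samples[start:start + hop])
        if score >= threshold:
            flagged.append((start / sample_rate, score))
    return flagged

# Toy example: one minute of faint noise with a louder chirp at second 30.
sr = 22050
audio = np.random.normal(0.0, 0.001, size=sr * 60)
audio[sr * 30 : sr * 33] += 0.2 * np.sin(2 * np.pi * 4000 * np.arange(sr * 3) / sr)
print(flag_segments(audio, sr))  # -> [(30.0, 1.0)]
```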
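The spectrogram step from the BirdNET example can be reproduced in a few lines of Python. The sketch below assumes the widely used librosa and matplotlib libraries and a placeholder file name (“bird_call.wav”); it converts a recording into a mel spectrogram – the kind of “sound image” that a classifier is then applied to.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# "bird_call.wav" is a placeholder; use any short recording of a bird call.
samples, sr = librosa.load("bird_call.wav", sr=None, mono=True)

# Mel spectrogram: loudness over time and (perceptually spaced) pitch.
mel = librosa.feature.melspectrogram(y=samples, sr=sr, n_mels=128, fmax=12000)
mel_db = librosa.power_to_db(mel, ref=np.max)  # convert power to decibels

img = librosa.display.specshow(mel_db, sr=sr, x_axis="time", y_axis="mel", fmax=12000)
plt.colorbar(img, format="%+2.0f dB")
plt.title("Mel spectrogram of the recording")
plt.tight_layout()
plt.show()
```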
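Finally, for the monitoring example: once the calls have been recognized automatically, the detections still have to be summarized so that trends become visible. The short sketch below counts detections per species and per night; the detection records are invented for illustration and would in practice come from a recognizer such as BirdNET.

```python
from collections import Counter, defaultdict

# (night, species, confidence) – invented records, standing in for the
# output of an automatic recognizer.
detections = [
    ("2024-09-20", "Song Thrush", 0.91),
    ("2024-09-20", "European Robin", 0.83),
    ("2024-09-20", "Song Thrush", 0.77),
    ("2024-09-21", "Song Thrush", 0.88),
    ("2024-09-21", "Redwing", 0.95),
]

MIN_CONF = 0.8  # ignore uncertain detections

per_night = defaultdict(Counter)
for night, species, conf in detections:
    if conf >= MIN_CONF:
        per_night[night][species] += 1

for night in sorted(per_night):
    summary = ", ".join(f"{sp}: {n}" for sp, n in per_night[night].most_common())
    print(f"{night}  ->  {summary}")
# 2024-09-20  ->  Song Thrush: 1, European Robin: 1
# 2024-09-21  ->  Song Thrush: 1, Redwing: 1
```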
In short: AI is like a turbo hearing aid with built-in super memory for bioacoustics. It listens tirelessly, recognizes patterns, and constantly learns. Humans and machines form a team: AI provides clues and pre-sorting, and experts review the results and draw biological conclusions. This allows us to “translate” the language of animals more and more accurately. In the future, such technologies could perhaps even help us to understand complex animal communication—such as whale songs or monkey calls—in even greater depth. Exciting, isn’t it?