Finding whale songs in the ocean is extremely difficult. However, researchers at UNSW Sydney have developed a new artificial intelligence system that makes the job far simpler. The system can find blue whale songs in huge underwater audio archives that span many years and entire ocean regions.
The work, published in the journal Nature, was led by UNSW PhD researcher Ben Jancovich. The team built a neural network, a type of deep learning model that learns patterns in data using layers of connected digital “neurones”. In this case, it was trained to recognise blue whale songs.
What makes this study unusual is the amount of training data used. Most machine learning systems need thousands of examples to perform well. But this model was trained using only one recorded blue whale call. From that single example, the system learned to detect similar sounds in real ocean recordings. It achieved high accuracy despite the limited data. Researchers say this could help scientists study marine life more effectively.
Ocean researchers hold huge collections of passive acoustic recordings, made using underwater microphones that record sound continuously for long periods, often covering decades. But most of this information is never fully analysed because doing so takes too much time and effort. The new method helps solve that problem: it scans large audio datasets automatically and finds whale calls faster and more efficiently than manual methods, allowing researchers to analyse anything from a single call to thousands of calls in a fraction of the time.
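To give a sense of what automatic scanning involves, here is a minimal sliding-window detector. It uses a classical matched-filter (template correlation) approach on synthetic data, not the UNSW neural network; the sample rate, call shape, and detection threshold are all illustrative assumptions.

```python
import numpy as np

def detect_calls(recording, template, sr, threshold=0.8, hop_s=1.0):
    # Slide the template across the recording and score each window with
    # normalised cross-correlation; windows above the threshold are hits.
    hop = int(hop_s * sr)
    n = len(template)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    hits = []
    for start in range(0, len(recording) - n + 1, hop):
        win = recording[start:start + n]
        w = (win - win.mean()) / (win.std() + 1e-12)
        score = float(np.dot(tpl, w) / n)   # Pearson correlation
        if score > threshold:
            hits.append((start / sr, score))
    return hits

# Synthetic example: a 2 s, 25 Hz "call" buried in one minute of noise.
sr = 250                                  # Hz, assumed hydrophone rate
t = np.arange(2 * sr) / sr
call = np.sin(2 * np.pi * 25 * t)
rng = np.random.default_rng(1)
recording = rng.normal(0.0, 0.1, 60 * sr)
recording[30 * sr : 30 * sr + len(call)] += call

hits = detect_calls(recording, call, sr)
```

A neural detector like the one in the study replaces the correlation score with a learned classifier, which copes far better with pitch and timing variation, but the scan-and-score loop is the same basic shape.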
From a single call to thousands
The main challenge was the lack of training data. Rare animals, like blue whales, do not have many labelled recordings available. To overcome this, the researchers developed a new training approach. They began with just one real blue whale call.
Then they expanded it into a large dataset by altering the original sound in different ways: shifting its pitch, stretching it in time, and adding background ocean noise. These changes created many new versions of the same call.
This method produced thousands of “semi-real” whale songs. These artificial examples helped the model learn what whale calls sound like in different conditions. The system was also based on an existing model that was originally designed to recognise human speech. The team adapted it for underwater sounds instead.
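The augmentation steps described above can be sketched in a few lines of code. This is a hedged illustration, not the team's actual pipeline: the sample rate, shift ranges, and the synthetic stand-in for the recorded call are all assumptions.

```python
import numpy as np

def time_stretch(x, rate):
    # Stretch or compress the clip by resampling with linear interpolation;
    # rate > 1 shortens the clip (faster playback).
    n = max(2, int(len(x) / rate))
    idx = np.linspace(0, len(x) - 1, n)
    return np.interp(idx, np.arange(len(x)), x)

def pitch_shift(x, semitones):
    # Naive pitch shift by resampling: raising pitch also shortens the
    # clip (real pipelines decouple pitch from duration).
    return time_stretch(x, 2 ** (semitones / 12))

def add_ocean_noise(x, snr_db, rng):
    # Add Gaussian noise at a chosen signal-to-noise ratio as a crude
    # stand-in for background ocean sound.
    noise_power = np.mean(x ** 2) / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), len(x))

# One synthetic "call": a 4 s, 25 Hz tone (illustrative values only).
sr = 250                          # Hz, assumed hydrophone sample rate
t = np.arange(4 * sr) / sr
call = np.sin(2 * np.pi * 25 * t)

rng = np.random.default_rng(0)
augmented = []
for _ in range(1000):
    y = pitch_shift(call, rng.uniform(-1.0, 1.0))   # up to ±1 semitone
    y = time_stretch(y, rng.uniform(0.9, 1.1))      # ±10% duration
    y = add_ocean_noise(y, rng.uniform(0.0, 10.0), rng)
    augmented.append(y)
```

Each pass through the loop yields a slightly different version of the same call, which is the core idea behind growing one recording into thousands of training examples.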
When tested on real ocean recordings, the system performed very well. In one case involving pygmy blue whales, it correctly identified 99.4% of calls. This showed that the method works even though it was trained on only one original example.
Why focus on blue whales?
Blue whales are suitable for this type of system because their calls are very consistent. Whales from the same region often produce nearly identical songs. For example, blue whales near Madagascar share one type of call.
Blue whales in Antarctic waters produce a different pattern. This makes their sounds easier to model. Because their vocal patterns are stable, it is possible to generate realistic variations from a single recording. The AI can then learn what to look for across different ocean conditions.
However, this approach does not work for all animals. Dolphins, for example, have very individual whistles. Their sounds vary too much between individuals. That makes them harder to model using this technique.
A lighter footprint
Training large AI systems usually requires powerful computers and a lot of electricity. Such tasks can take weeks of processing time and consume significant energy. The researchers wanted a more efficient approach. Their system was designed to run on much smaller computing resources. It can be trained in a few hours on a normal laptop.
This was possible because they did not build a model from scratch. Instead, they fine-tuned an existing system and used smart data creation methods. This reduced both training time and energy use. The result is a simpler and more accessible tool that still performs well.
Revealing decades of data
Around the world, scientists have collected huge amounts of underwater recordings, with some archives spanning 20 to 30 years. However, most of this data has not been fully analysed. The main reason is the lack of tools that can quickly detect animal calls without large training datasets.
The new model could change this. It can process long audio recordings and automatically detect whale songs, helping scientists track changes in whale populations over time. The researchers plan to test the system on a 25-year dataset from the central Indian Ocean, which may reveal long-term shifts in blue whale behaviour and migration patterns.
The same method could also be used for other species. Birds, insects, and other animals that produce repeating sounds could be monitored in a similar way. If just one clear recording is enough to train a working detector, it could greatly improve how scientists study rare and hard-to-find wildlife.