Researchers may have developed a dolphin-to-English translator that works in real time, and by this summer, they may have a two-way communication system.
Back in August of last year, Denise Herzing of the Wild Dolphin Project heard a dolphin say “sargassum.” To put it another way: While wearing a prototype Cetacean Hearing and Telemetry (CHAT) device in the Caribbean, a member of the dolphin pod she had been tracking for 25 years whistled. And CHAT translated it into “sargassum,” a genus of seaweed.
The dolphin whistle for sargassum was one she and her team invented while playing with the pod, hoping the dolphins would adopt the whistles and incorporate them into their own natural repertoire. So when Herzing heard the word “sargassum,” it was her own voice saying it, she explains to New Scientist.
In addition to listening for the invented whistles, the team hopes to figure out what the dolphins' natural communication means, too. Drawing on his Google Glass expertise, Thad Starner at the Georgia Institute of Technology built CHAT around pattern-discovery algorithms designed to analyze dolphin whistles and extract features that we wouldn’t know to look for. After all, dolphins produce sounds with frequencies up to 200 kilohertz, roughly ten times higher than the highest pitch we can hear.
The software analyzes these whistles by sifting through the data and labeling features that deviate from an assumed average state. It then groups similar types of deviations (distinct sets of clicks or whistles) until all potentially interesting patterns have been extracted. The secret is repetition: if dolphins are exchanging information, their behavior won’t be random. There will be discoverable patterns, and information-processing tools can find them.
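The idea described above, label deviations from a baseline, group similar ones, and keep only what repeats, can be illustrated with a minimal sketch. Everything here is hypothetical: the toy "whistle" feature vectors, the `discover_patterns` function, and the rounding-based grouping are illustrative stand-ins, not the actual CHAT algorithms.

```python
# Minimal sketch of repetition-based pattern discovery, assuming each
# whistle has already been reduced to a short numeric feature vector.
from collections import Counter

def discover_patterns(whistles, baseline, threshold=2.0):
    """Label whistles that deviate from a baseline, group similar
    deviations, and keep only the patterns that repeat."""
    deviants = []
    for w in whistles:
        # Deviation score: summed absolute difference from the baseline.
        score = sum(abs(f - b) for f, b in zip(w, baseline))
        if score > threshold:
            # Quantize features so similar whistles share one signature.
            signature = tuple(round(f) for f in w)
            deviants.append(signature)
    counts = Counter(deviants)
    # Repetition is the key: keep only signatures seen more than once.
    return {sig: n for sig, n in counts.items() if n > 1}

# Toy data: each whistle is a feature vector (e.g. peak frequencies in kHz).
baseline = [5.0, 5.0, 5.0]
whistles = [
    [5.1, 4.9, 5.0],  # near the baseline -> ignored
    [8.0, 2.1, 5.0],  # deviant pattern A
    [8.1, 1.9, 5.2],  # pattern A again -> it repeats, so it is kept
    [3.0, 9.0, 9.0],  # deviant pattern B, seen only once -> dropped
]
patterns = discover_patterns(whistles, baseline)
# patterns -> {(8, 2, 5): 2}
```

In a real system the grouping step would use a proper clustering algorithm over extracted acoustic features rather than simple rounding, but the logic is the same: only deviations that recur are treated as candidate signals.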
For now at least, the sargassum whistle is a single instance that hasn’t been repeated. The audio profile also differs from the whistle the researchers taught the dolphins: the shape is the same, but it came in at a higher frequency. Still, during that same summer’s work, Starner’s algorithms discovered eight distinct components in a sample of 73 whistles, and the team has matched certain parts of those whistles to mother-calf interactions. Coming up this summer: attempting to confirm two-way communication between humans and dolphins.
The work will be presented at the International Conference on Acoustics, Speech, and Signal Processing in Florence this May.
Image: Wild Dolphin Project