Rolf Mueller, a professor of mechanical engineering at Virginia Tech with a long fascination for the workings of the bat's ear, has created a bio-inspired technology that determines the origin of a sound.

Mueller's development is a simpler and more accurate model of sound localization than previous approaches, which were traditionally modeled on the human ear. His work offers the first new insight into sound localization in 50 years.

The results were published in Nature Machine Intelligence in a paper written by Mueller and former doctoral student Xiaoyan Yin, the first author.

"I've long admired bats for their uncanny ability to navigate complex natural environments based on ultrasound, and I suspected that the unusual mobility of the animal's ears might have something to do with it," Mueller said.

A new model of sound localization

Bats navigate in flight using echolocation, constantly judging the proximity of objects by emitting sounds and listening for the echoes. An ultrasonic call leaves the bat's mouth or nose, reflects off elements of the environment, and returns as echoes that carry information about the surroundings. Comparing sounds to determine where they came from is called sound localization.

Sound localization works differently in human ears. An observation made in 1907 showed that people can localize sound because they have two ears: two receivers that pass audio data to the brain for processing. Working with two or more receivers makes it possible to tell the direction of sounds that contain only a single frequency, something familiar to anyone who has heard a car horn go past. The horn is a single frequency, and the two ears, together with the brain, build a map of where the car is.
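The two-receiver idea can be sketched in a few lines of Python. This is an illustration of the general geometric principle, not code from the study: a source off to one side reaches the nearer ear first, and the arrival-time difference encodes the angle. The 20 cm receiver spacing used below is an arbitrary example value, not a figure from the article.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def direction_from_delay(delay_s: float, ear_spacing_m: float) -> float:
    """Estimate a source's angle (degrees off straight ahead) from the
    arrival-time difference between two receivers.

    The extra path length to the farther receiver is delay * speed of
    sound; far-field geometry then gives sin(angle) = extra / spacing.
    """
    extra_path = delay_s * SPEED_OF_SOUND
    # Clamp to the valid range of asin to guard against measurement noise.
    ratio = max(-1.0, min(1.0, extra_path / ear_spacing_m))
    return math.degrees(math.asin(ratio))

# A source directly ahead arrives at both receivers at the same time:
print(direction_from_delay(0.0, 0.2))  # 0.0
# A delay corresponding to half the spacing puts the source 30 degrees off axis:
print(round(direction_from_delay(0.2 * 0.5 / 343.0, 0.2), 1))  # 30.0
```

A single static receiver gets no such time difference, which is why conventional systems need either a second receiver or, as the next paragraph describes, multiple frequencies.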

A 1967 discovery then showed that when the number of receivers is reduced to one, a single human ear can still localize sounds if they contain multiple frequencies. In the case of the passing car, this could be the horn combined with the roar of the engine.

According to Mueller, the workings of the human ear inspired previous approaches to sound localization, which used pressure receivers such as microphones paired with the ability either to collect multiple frequencies or to use multiple receivers. From his research career studying bats, Mueller knew their ears to be far more dynamic sound receivers than human ears. This led his team to aim for a single frequency and a single receiver instead of multiples of either.

Creating the ear

While working toward a single-receiver, single-frequency model, Mueller's team sought to replicate the bat's ability to move its ears.

They created a soft synthetic ear inspired by horseshoe bats and Old World leaf-nosed bats, and attached it to a simple string-and-motor actuator that flutters the ear as it receives incoming sound. These particular bats have ears capable of complex transformations of sound waves, so nature's finished design was a logical choice. The transformation begins with the shape of the outer ear, called the pinna, which changes as the ear moves while receiving sound, creating several different reception patterns that funnel the sound into the ear canal.

The biggest challenge Yin and Mueller faced with the single-receiver, single-frequency model was interpreting the incoming signals. How do you turn incoming sound waves into readable, interpretable data?

The team placed the ear over a microphone, creating a mechanism similar to a bat's. The fast motions of the fluttering pinna created Doppler shift signatures that were clearly related to the source direction but were not easy to interpret because of the complexity of the patterns. To solve this, Yin and Mueller adopted a deep neural network: a machine-learning approach that mimics the many layers found in the brain. They implemented such a network on a computer and trained it to produce the source direction associated with each received echo.
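A rough sketch, not taken from the paper, of why a fluttering receiver imprints direction-dependent Doppler signatures: the ear's oscillation adds a time-varying velocity along the line to the source, so the depth of the resulting frequency swing depends on the angle of arrival. All parameter values below (call frequency, flutter rate, peak surface speed) are illustrative assumptions, and the real signatures are far richer, which is why the team needed a neural network to decode them.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def doppler_shift(f_emit_hz: float, receiver_speed_ms: float) -> float:
    """Frequency observed by a receiver moving at receiver_speed_ms
    toward (positive) or away from (negative) a stationary source."""
    return f_emit_hz * (SPEED_OF_SOUND + receiver_speed_ms) / SPEED_OF_SOUND

def flutter_signature(f_emit_hz: float, flutter_rate_hz: float,
                      peak_speed_ms: float, angle_deg: float, n: int = 8):
    """Sample the Doppler-shifted frequency over one flutter cycle.

    The ear surface oscillates sinusoidally; only the velocity component
    along the source direction matters, so the swing of the signature
    varies with the angle of arrival -- the cue a network could learn.
    """
    along = math.cos(math.radians(angle_deg))
    return [
        doppler_shift(
            f_emit_hz,
            peak_speed_ms * along * math.sin(2 * math.pi * flutter_rate_hz * t),
        )
        for t in (i / (n * flutter_rate_hz) for i in range(n))
    ]
```

For example, a 40 kHz tone arriving head-on (`angle_deg=0`) produces a wider frequency swing over the flutter cycle than the same tone arriving from 80 degrees off axis, so the shape of the signature encodes direction.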

To test the performance of the combined ear and machine-learning system, they mounted the ear on a rotating device that also carried a laser pointer. Sounds were then played from a loudspeaker placed at different directions relative to the ear.

Once the direction of a sound was determined, the control computer rotated the device so that the laser pointer hit a target attached to the loudspeaker, pinpointing the location to within half a degree. Human hearing, working with two ears, typically localizes sounds to within about 9 degrees, and the best prior technology has reached about 7.5 degrees.

"The capabilities completely exceed what is currently available in technology, and yet all of this is accomplished with much less effort," Mueller said. "Our hope is to bring reliable and capable autonomy to complex outdoor environments, including precision agriculture and forestry, environmental surveillance such as biodiversity monitoring, and defense and security applications."


Story source:

Materials provided by Virginia Tech. Original written by Alex Parrish. Note: Content may be edited for style and length.
