On a bus or in a bar, cell phone conversations can lead to major frustration. But it's not just volume or poor reception that makes it so hard to hear the person on the other end of the line, according to a new study.
Cell phones also cut off the highest-pitched ranges of our voices, and those high-frequency sounds turn out to convey a surprising amount of information.
The results suggest that we may be missing the full meaning of what people say when we talk to them on our mobile devices.
"The prevailing thought was that, because high frequencies are not as loud in the voice, the brain must not pay much attention to them," said Brian Monson, a speech and hearing scientist at the University of Utah in Salt Lake City. "But if the brain is paying that much attention to high frequencies, there must be some kind of perceptual information there."
A typical male voice has a fundamental frequency of about 100 hertz (Hz), and an average female voice about 200 Hz. Unlike a pure tone such as a whistle, voices also contain quieter overtones at frequencies reaching as high as 20,000 Hz. But because most of the energy in our voices falls below 5,000 Hz, scientists have long assumed that those high-pitched sounds are irrelevant.
Monson, who is also a singer with experience as a sound engineer, began to question that assumption a few years ago. While working with other singers, he noticed that they improved the quality of their voices by adjusting very high-frequency overtones. In a follow-up project, he found that people could detect tiny differences in the volume of high-frequency sounds – on the scale of just a few decibels.
For the new study, Monson recorded people speaking and singing "The Star-Spangled Banner." He filtered the recordings to keep only the sounds above 5,000 Hz, then played them to about 50 people across a handful of experiments and asked the listeners to identify details about what they heard.
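The kind of filtering described above, discarding everything below 5,000 Hz, can be sketched in a few lines. This is a minimal illustration, not Monson's actual processing: it assumes Python with NumPy and SciPy, uses a synthetic two-tone signal in place of a real voice recording, and the helper name `highpass_5khz` is hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_5khz(signal, sample_rate, cutoff_hz=5000, order=8):
    """Hypothetical helper: remove content below cutoff_hz, keeping only
    the high-frequency part of the signal."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, signal)

# Synthetic stand-in for a voice: a 200 Hz fundamental
# plus a much quieter 7,000 Hz overtone.
fs = 44100
t = np.arange(fs) / fs  # one second of audio
voice = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 7000 * t)

filtered = highpass_5khz(voice, fs)
# The loud 200 Hz fundamental is suppressed; only the faint
# 7 kHz overtone passes through.
```

What remains after such a filter is the quiet high-frequency residue that Monson's listeners heard, which is why the recordings sounded so unlike normal speech.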
He was surprised at how well people did. Even though the filtered recordings sound much like cricket chirps, nearly everyone quickly distinguished talking from singing, he'll announce next week at the annual meeting of the Acoustical Society of America in San Diego. It took listeners slightly longer to tell whether a voice was male or female, but they performed that task accurately, too.
Most surprising of all, given the current understanding of sound recognition, Monson said, listeners could tell they were hearing "The Star-Spangled Banner," not just when the voices were singing but also when they were simply speaking. Listeners could even identify key information about the recordings when distracting noises were added to make the task harder.
"If they can understand what's being said, that means there's an ability to extract intelligible information from high frequencies, and nobody would have predicted that," Monson said. "If you're in a situation where there's low-frequency noise covering all of the information you're used to getting from a voice, as long as you have the high-frequency stuff, you can still figure out what the person is saying and get the information you need."
That may be why talking on a cell phone in noisy places is so tough. Most mobile phones and landlines transmit sounds up to about 3,500 Hz, mostly because higher-frequency sounds were never thought to be very important.
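The roughly 3,500 Hz phone limit described above can be simulated the same way, this time with a low-pass filter. Again, this is a rough sketch under the same assumptions (Python with NumPy and SciPy, a synthetic two-tone "voice," and a hypothetical helper name `telephone_band`), not a model of any real phone codec.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def telephone_band(signal, sample_rate, cutoff_hz=3500, order=8):
    """Hypothetical helper: crude sketch of a phone channel that
    discards everything above about 3,500 Hz."""
    sos = butter(order, cutoff_hz, btype="lowpass", fs=sample_rate, output="sos")
    return sosfilt(sos, signal)

# Synthetic stand-in for a voice: a 200 Hz fundamental
# plus a faint 7,000 Hz overtone.
fs = 44100
t = np.arange(fs) / fs  # one second of audio
voice = np.sin(2 * np.pi * 200 * t) + 0.05 * np.sin(2 * np.pi * 7000 * t)

phone = telephone_band(voice, fs)
# The overtone above the cutoff is stripped out; only the
# low-frequency content survives the simulated phone channel.
```

In this toy version, exactly the high-frequency information Monson's study found useful is what the channel throws away.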
According to other research, Monson said, our brains have to work harder to extract information from limited-bandwidth sound, which may explain why phone conversations can be more fatiguing than talking in person. And studies in children have shown that they learn new words three times more quickly from recordings that range up to 9,000 Hz than from ones capped at 4,000 Hz.
The new findings suggest that it may be time for a technology upgrade to improve the quality of our mobile phone conversations.
"We listen to things over cell phones in pretty adverse situations, and I think their data strongly suggests you can give the listener more information by keeping high frequencies salient," said William Yost, an auditory perception researcher at Arizona State University in Tempe.