How to Make Podcasts Better for People With Hearing Loss

Here are some easy ways to make audio more accessible for everyone.

The average American listens to over 16 hours of online audio content—like podcasts—each week. That’s 17 percent more than last year.

But not everyone finds it easy to listen.

Most people with hearing loss can still access podcasts: while 1 in every 6 adults in the UK is affected by some hearing loss, only around 12 percent of those adults are severely or profoundly deaf. For most of the rest, whether a podcast is enjoyable comes down to the audio quality, the listening environment, and access to hearing aids or noise-canceling headphones.

However, people with auditory processing disorder can also find listening to podcasts a challenge. JN Benjamin, an audio producer with auditory processing disorder, describes it as causing her to “hear too much.”

“In short,” she says, “I've got no control over what my brain chooses to process, and there's all sorts of things that trigger it and create stress.” And so when it comes to podcasts, sound design is especially important for Benjamin and other people with auditory processing disorder, because they hear lots of sounds that other people may not pick up on.

Auditory processing disorder may, on the surface, look like the opposite of hearing loss—with one, the listener picks up on sounds that others may not, and with the other, the listener can hear less than other people.

But when it comes to podcasts, the challenges are much the same.

Luckily, there are things that podcasters and other audio creators can do to make their content more accessible for hearing-impaired listeners and those with auditory processing disorder. Even more luckily, many of those adjustments will make the experience better for all listeners.

Crisp and Clear Speech, Always

Professional-grade recording equipment and editing software may not be available to everyone, but you can get set up with the basic tools of the trade for a few hundred dollars.

Recording equipment isn’t the only thing that determines sound quality, though.

Karen Shepherd, director of professional standards at Boots HearingCare and former president of the British Academy of Audiology (BAA), stressed the importance of good quality sound production, with very little competing sound. When you have multiple presenters, for example, it's important that they don't speak over one another.

Beyond a technically clear recording, speech clarity matters too. Lauren Ward, who researches media accessibility at the University of York, says that we find it easier to understand accents we are familiar with.

This doesn’t rule out podcasting for people with a strong regional accent, but speaking slightly more slowly and enunciating can be especially helpful for listeners who are hard of hearing.

Pay Attention in Post-Production

There are a number of things that creators can do in post-production to make the audio sound clearer.

Independent podcast and BBC radio producer Callum Ronan advises producers to take steps in recording and editing:

  • Balance audio between the left and right channels of headphones or speakers
  • Remove microphone bleed to avoid echoes or delays
  • Mix content to balance sound levels across multiple hosts
  • Master to a loudness target of -16 to -18 LUFS to prepare the file for publishing (see the sketch after this list)
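That last step is easy to script. The sketch below is a minimal, single-pass example that drives ffmpeg's loudnorm filter from Python; it isn't a workflow any of the producers quoted here describe, and the file names, the -1.5 dBTP true-peak ceiling, and the loudness-range value are illustrative assumptions.

```python
# Minimal sketch: bring a finished mix to roughly -16 LUFS integrated loudness
# with ffmpeg's loudnorm filter (assumes ffmpeg is installed and on the PATH).
import subprocess

def normalize_episode(src: str, dst: str, target_lufs: float = -16.0) -> None:
    """Single-pass loudness normalization; file names here are placeholders."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            # I = integrated loudness target, TP = true-peak ceiling,
            # LRA = loudness range (the TP and LRA values are illustrative)
            "-af", f"loudnorm=I={target_lufs}:TP=-1.5:LRA=11",
            "-ar", "44100",  # resample to a standard rate for publishing
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    # MP3 output assumes an ffmpeg build with an MP3 encoder.
    normalize_episode("episode_mix.wav", "episode_master.mp3")
```

A two-pass run, which measures the file first and then normalizes using the measured values, lands closer to the target, but even the single-pass version keeps episodes at a consistent, predictable loudness.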

Watch Your Backing Tracks and Ambient Sound

For most people, auditory scene analysis, or the ability to pick out one sound amid a noisy environment, is second nature.

Ward suggests thinking about the last time you were at a party, with multiple conversations, low background music, and clinking glasses. Most people with regular hearing are able to “zoom in” on the conversation that they're interested in and block out the other sounds.

"That's something that becomes a lot more difficult when you start to lose your hearing, or if your auditory processing isn't working optimally. You're getting all of that information, and you can't pick out the bit of interest, which might be overwhelming, but also meaningless."

The central auditory cortex, Shepherd explains, is the “tuning in” center of the brain. In people with hearing loss, it struggles to sort multiple signals simultaneously, so they can't filter out much of the surrounding background noise and have to strain to pick out the speech.

"If you start to miss some of those speech cues, it makes it very difficult to ‘tune in’ to the word that you're trying to hear," Shepherd says. While your brain is trying to figure out whether the host just said "share" or "chair," because you didn’t quite hear that syllable correctly, you're also trying to catch up to what else the host just said.

In audio production, layering backing tracks and contextual sounds on top of speech can make listening very fatiguing. "A lot of people who have a hearing loss find that level of concentration too exhausting to sustain for a period of time," says Shepherd. They end up “tuning out” of the content.

In a rich audio drama or highly produced content, a lot could be going on in the soundscape. Which of those sounds actually add value, and which are just getting in the way?

Don't Forget the Context

Anything that gives us context or familiarity—an accent we know well or a topic we are familiar with—can help people to better understand the content. “You have a 'mental model' for the sound of your spouse's voice, for example,” says Ward, “and you can fill in the blanks if you miss something.”

Familiar topics also make speech easier to hear—and an abrupt change in subject matter can make it hard for listeners to follow. Cueing in a topic change can prime listeners to switch the vocabulary that they’re focused on.

This familiarity with the topic or voice affects the producer too, says Ward. "When someone's producing a piece of content, they will have listened to it a million times." Because you know what to expect, you can keep turning up the background levels and still pick out the speech. Get someone who doesn't know the script to listen through and check the balance between foreground and background sound.

Equally, regular listeners get used to the host’s voice, speech patterns, production style, and subject-specific language. Particularly for podcasts with limited seasons, producers could simplify the sound design and speak more slowly and clearly in early episodes, to allow listeners to become familiar before building up to more complex sound design later.

Always Add Transcripts

A lot of people with hearing loss find it much easier to enjoy audio content when it is supported by a transcript they can follow at the same time. "Having the transcript and the audio file working simultaneously," says Shepherd, "gives you a more enriched experience," because you're still benefiting from the audio context.

In fact, any additional visual or multimedia content that sits alongside the audio, such as images, video, or graphs, can be helpful. "It gives you more context to work with, which can make understanding speech easier," says Ward, "because you've got that context of it being visually in front of you."

It's relatively easy these days to create transcripts of your audio content via an AI transcription service. Services like Otter and HappyScribe are able to transcribe podcasts, either live or recorded, quite accurately. And services like GoTranscript offer the accuracy of human transcription.
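If you'd rather draft a transcript locally instead of using a hosted service, open-source speech-to-text can do the first pass. The sketch below uses OpenAI's open-source Whisper package, which none of the people quoted here mention; the model size, file names, and timestamp format are illustrative assumptions, and the output still needs a human check for names, punctuation, and speaker labels.

```python
# Minimal sketch: draft a timestamped transcript locally with the open-source
# Whisper package (pip install openai-whisper). File names are placeholders.
import whisper

def draft_transcript(audio_path: str, out_path: str) -> None:
    model = whisper.load_model("base")     # small model; runs on CPU, trades accuracy for speed
    result = model.transcribe(audio_path)  # returns full text plus timed segments
    with open(out_path, "w", encoding="utf-8") as f:
        for segment in result["segments"]:
            start = int(segment["start"])
            f.write(f"[{start // 60:02d}:{start % 60:02d}] {segment['text'].strip()}\n")

if __name__ == "__main__":
    draft_transcript("episode_master.mp3", "episode_transcript.txt")
```

However the draft gets made, publishing the corrected transcript on the same page as the episode is what makes the difference for listeners.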

Thinking about accessibility shouldn't be an afterthought or a compliance exercise in the creative process, Ward impresses on us. "There are so, so many little things that are very easy to do right from the beginning, but are harder to reverse-engineer into a project," she says. A lot of these accessibility adjustments are actually just good practice that will make the audio experience better for everybody.

