Brain-controlled hearing device boosts one speaker in real time


A new brain-controlled hearing device from Columbia is pushing a familiar frustration into new territory: the noisy room where every voice seems to compete at once. In a real-time demonstration, researchers showed a system that can detect which speaker a listener is focusing on and then boost that voice while lowering others.

That matters because the challenge is not simply making sound louder. In crowded settings, many listeners struggle because they need help separating one voice from the rest. Columbia’s approach goes after that problem directly, using brain activity to figure out who the listener actually wants to hear.

For some of the volunteers, the effect was startling. One participant reportedly thought the researchers must be secretly adjusting the audio by hand. Instead, the system was reading attention in real time and changing the balance between overlapping conversations as the listener focused on one stream of speech.

Columbia’s real-time brain-controlled hearing device

Columbia researchers demonstrated a real-time brain-controlled hearing system, marking a notable step for work on selective listening. In the study, the system identified which speaker a person was attending to in a noisy environment and automatically amplified that voice while suppressing competing speech.

The work was published in Nature Neuroscience. Just as important, its timing set it apart from earlier studies, which decoded attention only offline, after the listening session was over. This system responded while the listener was hearing two overlapping conversations, adjusting volume in real time based on the person’s brain signals.

That turns the idea from a decoding exercise into something closer to an active listening aid. A real-time speech amplification system that follows attention could, in principle, help with one of the hardest parts of hearing in everyday life: deciding what matters in a crowd and making that sound easier to follow.

How the Columbia hearing system was tested

The study involved epilepsy patients undergoing brain surgery who already had electrodes implanted in their brains. Those electrodes were part of their medical care, and researchers used them to measure brain activity during the listening tasks.

Senior author Nima Mesgarani and colleagues had patients listen to two overlapping conversations played at the same time. As the patients focused on one conversation, the system tracked their brain activity through the implanted electrodes.

Machine-learning algorithms then examined the brainwaves and identified which conversation the listener was paying attention to. Once the system detected the attended speaker, it adjusted the audio balance in real time, turning up the chosen conversation and quieting the other one.
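The decode-and-remix loop described above can be sketched in a few lines. This is a toy illustration only: the function names, the linear "templates" for each speaker, and the gain values are all assumptions for the example, not the study's actual algorithm, which worked from implanted-electrode recordings with far more sophisticated models.

```python
import numpy as np

def decode_attended_speaker(neural_features, template_a, template_b):
    """Toy linear decoder: score brain features against a hypothetical
    template for each speaker and pick the better match."""
    score_a = float(np.dot(neural_features, template_a))
    score_b = float(np.dot(neural_features, template_b))
    return "A" if score_a >= score_b else "B"

def remix(audio_a, audio_b, attended, boost=2.0, cut=0.5):
    """Boost the attended conversation and attenuate the other one."""
    gain_a, gain_b = (boost, cut) if attended == "A" else (cut, boost)
    return gain_a * audio_a + gain_b * audio_b

# Simulated single time step: features that happen to match speaker A
features = np.array([1.0, 0.5, -0.2])
template_a = np.array([1.0, 0.4, 0.0])   # illustrative values
template_b = np.array([-0.5, 0.1, 0.9])  # illustrative values

attended = decode_attended_speaker(features, template_a, template_b)
mix = remix(np.ones(4), np.ones(4), attended)  # speaker A boosted to 2.5x
```

In a real system this loop would run continuously, re-estimating attention every few seconds and smoothing the gain changes so the audio does not jump abruptly between speakers.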

The researchers tested the setup in two ways:

  • when subjects were guided toward a particular conversation
  • when subjects chose freely which conversation to follow

The system worked in both situations, which matters because real conversations do not come with instructions about where to direct attention.

Why the brain-controlled hearing device matters in noisy rooms

The scientists found that the system correctly identified which conversation volunteers were paying attention to, and it improved the intelligibility of the speech they focused on.

Just as important, it reduced listening effort.

That combination helps explain why volunteers consistently preferred the assisted listening experience over conversations without the system’s help. Better clarity is one thing; needing less mental strain to keep up is another. Together, those findings suggest the technology is not only functional but also easier to use from the listener’s point of view.

This is one reason the research is drawing attention beyond the lab. Many hearing technologies make sound more available. A brain-controlled hearing device points toward something more personalized: sound filtered according to intention, not just volume.

What the Nature Neuroscience study suggests next

The broader significance of the Nature Neuroscience study is that it tackles the “who do you want to hear?” problem directly. In noisy places, that can be the difference between following a conversation and giving up on it.

The work also shows how quickly neuroscience and machine learning are moving together. Here, brain signals were not just recorded and studied later. They were used in the moment to shape what the listener heard next. That real-time loop is the core breakthrough behind the Columbia hearing system.

For people interested in hearing impairment research, the result opens up a more ambitious vision of assistive listening: not simply louder sound, but smarter sound. For the scientists behind the Columbia hearing system, the finding suggests that attention itself can become a useful control signal when speech competes for the listener’s ear.

One volunteer framed the promise in deeply personal terms, thinking of a family member with hearing problems and imagining a more peaceful life if such technology became accessible. That reaction helps explain why this research resonates. It turns a dense neuroscience experiment into something immediately human: the possibility of hearing the voice you care about most, even when the room will not cooperate.
