Artificial intelligence turns brain activity into speech

Recent studies that use artificial neural networks to generate audio from brain signals have shown promising results, producing sounds that listeners could identify up to 80% of the time.

Participants in these studies first had their brain signals recorded while they either read specific words aloud or listened to them.

That data was then fed to a neural network, which learned to interpret the brain signals and reconstruct the corresponding sounds for listeners to identify.
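In broad strokes, that pipeline amounts to training a model that maps brain-signal features to audio features which can then be turned back into sound. The sketch below is only an illustration of that general idea, not the actual model from any of the studies: the array shapes, layer sizes, and names (brain_features, target_spectrogram, decoder) are all assumptions, and random arrays stand in for real recordings.

```python
import torch
import torch.nn as nn

N_FRAMES = 2000    # time frames recorded while a participant spoke or listened (assumed)
N_CHANNELS = 128   # brain-signal feature dimension per frame (assumed)
N_MEL = 80         # mel-spectrogram bins to reconstruct (assumed)

# Placeholder data: a real study would align neural recordings with the
# audio the participant heard or produced.
brain_features = torch.randn(N_FRAMES, N_CHANNELS)
target_spectrogram = torch.randn(N_FRAMES, N_MEL)

# A small feed-forward decoder from neural features to spectrogram frames.
decoder = nn.Sequential(
    nn.Linear(N_CHANNELS, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_MEL),
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train the decoder to predict each spectrogram frame from the
# simultaneously recorded brain-signal frame.
for epoch in range(50):
    optimizer.zero_grad()
    predicted = decoder(brain_features)
    loss = loss_fn(predicted, target_spectrogram)
    loss.backward()
    optimizer.step()

# At test time, the predicted spectrogram would be passed to a vocoder
# to synthesize audio for listeners to try to identify.
with torch.no_grad():
    reconstructed = decoder(brain_features[:10])
print(reconstructed.shape)  # torch.Size([10, 80])
```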

These results are a hopeful sign for the field of brain-computer interfaces (BCIs), where thought-based communication is quickly moving from science fiction to reality.

The idea of connecting the human brain to a computer is far from new. Notable milestones in recent years include enabling paralyzed individuals to operate tablet computers with their brain waves.

Elon Musk has also famously drawn attention to the field with Neuralink, his BCI company. As the technology expands and new ways of fostering communication between brains and machines are developed, studies like the one originally highlighted by Science Magazine will continue to demonstrate a steady march of progress.

Written by Udit Agarwal

Startup/Tech News Correspondent, responsible for gathering information on tech companies working on IoT, AI, ML, cloud, and mobile technologies.

Udit can be reached at [email protected]
