
Facebook is giving up on brain typing as an AR glasses interface



A Facebook-backed initiative to let people type by thinking has ended, with new findings published today.

Project Steno was a multi-year collaboration between Facebook and the University of California San Francisco's Chang Lab, aimed at building a system that translates brain activity into words. A new research paper, published in The New England Journal of Medicine, demonstrates the technology's potential for people with speech impairments.

But alongside the research, Facebook made clear that it is abandoning the idea of a commercial head-mounted brain reader and pursuing a wrist-based interface instead. The new research has no clear application in a mass-market product, and in a press release, Facebook said it is refocusing its priorities away from head-mounted brain-computer interfaces.

“To be clear, Facebook has no interest in developing products that require implanted electrodes,” the company said in the press release. Elsewhere in the release, it noted that “while we still believe in the long-term potential of head-mounted optical BCI technologies, we have decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market.”

Chang Lab’s ongoing research uses implanted brain-computer interfaces (BCIs) to restore people’s ability to speak. The new paper focuses on a participant who lost the ability to speak after a stroke more than 16 years ago. The lab fitted the man with implanted electrodes that could detect brain activity, and he then spent 22 hours (spread across sessions over more than a year) training a system to recognize specific patterns. In that training, he attempted to speak isolated words from a set of 50 words. In another task, he tried to produce full sentences using those words, which included basic verbs and pronouns (like “am” and “I”) as well as useful concrete nouns (like “glasses” and “computer”) and commands (like “yes” and “no”).
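The paper’s actual pipeline isn’t published in this coverage, but the setup described above, mapping recorded windows of brain activity to one of 50 candidate words, is at heart a supervised classification problem. Below is a minimal, purely illustrative Python sketch of that framing; the vocabulary subset, data shapes, fake data, and model choice are all assumptions, not the study’s method.

```python
# Illustrative only: a toy stand-in for the kind of word classifier the
# study describes. Every detail (feature shapes, fake data, the
# logistic-regression model) is an assumption, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

VOCAB = ["am", "I", "glasses", "computer", "yes", "no"]  # 6 of the 50 words

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_timesteps = 600, 128, 50

# Fake data: one flattened window of neural features per attempted-word
# trial, labeled with the word the participant tried to say.
X = rng.normal(size=(n_trials, n_electrodes * n_timesteps))
y = rng.integers(0, len(VOCAB), size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~chance on random data
```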

This training built a model that could recognize when the man intended to say certain words, even though he could not actually speak them. The researchers refined the model to predict which of the 50 words he was attempting, and integrated a probability system for English-language word patterns, similar to a smartphone’s predictive keyboard. In the final experiments, the researchers report, the system could decode a median of 15.2 words per minute including errors, or 12.5 words per minute counting only correctly decoded words.
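To make the predictive-keyboard analogy concrete, here is a toy Python sketch of the general idea: rescoring a decoder’s per-word probabilities with an English-language prior before committing to each word. The vocabulary, bigram table, and probabilities below are invented, and the real system presumably uses a more principled search than this greedy loop.

```python
# Illustrative only: combining a neural decoder's word probabilities with
# a language prior, in the spirit of a predictive smartphone keyboard.
import numpy as np

VOCAB = ["i", "am", "thirsty", "yes", "no"]

# Toy bigram prior P(next | prev); unlisted pairs get a small floor.
BIGRAM = {("i", "am"): 0.8, ("am", "thirsty"): 0.7}
FLOOR = 0.02

def lm_prob(prev: str, nxt: str) -> float:
    return BIGRAM.get((prev, nxt), FLOOR)

def decode(step_probs):
    """Greedy decode: at each step, rescore the decoder's word
    probabilities by the bigram prior given the previously chosen word."""
    sentence, prev = [], None
    for probs in step_probs:  # probs[i] = decoder's P(VOCAB[i]) this step
        scores = [p * (lm_prob(prev, w) if prev else 1.0)
                  for w, p in zip(VOCAB, probs)]
        prev = VOCAB[int(np.argmax(scores))]
        sentence.append(prev)
    return " ".join(sentence)

# On its own, the decoder would output "i thirsty thirsty"; the language
# prior corrects the second word to "am", yielding "i am thirsty".
steps = [
    [0.45, 0.40, 0.05, 0.05, 0.05],
    [0.10, 0.35, 0.40, 0.10, 0.05],
    [0.05, 0.10, 0.45, 0.20, 0.20],
]
print(decode(steps))  # -> "i am thirsty"
```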

Chang Lab previously published Project Steno research in 2019 and 2020, showing that electrode arrays and predictive models could produce relatively fast and sophisticated thought-to-text typing. Many earlier thought-typing approaches involved mentally moving a cursor around an on-screen keyboard via a brain implant, though some other researchers have experimented with methods such as visualizing handwriting. Where the lab’s previous research decoded brain activity in people who could speak normally, this latest paper shows the system works even when subjects do not (and cannot) speak aloud.

The Facebook Reality Labs headset, which was not used in the study.

In a press release, UCSF neurosurgery chair Eddie Chang says the next step is to improve the system and test it with more people. “On the hardware side, we need to build systems that have higher data resolution to record more information from the brain, and more quickly. On the algorithm side, we need systems that can translate these very complex signals from the brain into spoken words, not text but actually oral, audible spoken words.” A major priority, Chang says, is greatly expanding the system’s vocabulary.

Today’s research is valuable for people who are not served by keyboards and other existing interfaces, since even a limited vocabulary can help them communicate more easily. But it falls far short of the ambitious goal Facebook set in 2017: a non-invasive BCI system that lets people type 100 words per minute, matching the top speeds achievable on a traditional keyboard. The latest UCSF research relies on implanted technology and does not come close to that number, or even to the speeds most people can reach on a phone keyboard. That does not bode well for the commercial prospects of a technology like the external headset that optically measures oxygen levels in the brain, which Facebook Reality Labs (the company’s virtual and augmented reality hardware wing) unveiled in prototype form.

Meanwhile, Facebook acquired the electromyography (EMG) wristband company CTRL-Labs in 2019, giving it an alternative control option for AR and VR. “We are still in the early stages of unlocking the potential of wrist-based electromyography (EMG), but we believe it will be the core input for AR glasses, and applying what we have learned about BCI will help us get there faster,” says Facebook Reality Labs research director Sean Keller. Facebook is not giving up on head-mounted brain interfaces entirely: it plans to open-source the software and share its hardware prototypes with external researchers, while winding down its own research.

