Facebook Reality Labs (FRL), the division of Facebook devoted to long-term hardware and technology research, is supporting a team of researchers at the University of California, San Francisco (UCSF) who are working to help patients with neurological damage speak again by studying their brain activity in real time.

In an article published in the journal ‘Nature Communications’, the UCSF team “has shared how far we have to go to achieve fully non-invasive BCI as a potential input solution for AR glasses”, Facebook said in a blog post on Tuesday describing the brain-computer interface device.

The company first introduced its Brain-Computer Interface (BCI) programme at its F8 developer conference in 2017. Back then, it painted a fantastical picture of a non-invasive, wearable device that would let people type more than 100 words per minute without manual text entry or speech-to-text transcription.

“Today we’re sharing an update on our work to build a non-invasive wearable device that lets people type just by imagining what they want to say,” Andrew “Boz” Bosworth, Facebook’s vice president of AR/VR, said in a tweet. “Our progress shows real potential in how future inputs and interactions with AR glasses could one day look.”
For the experiment, the researchers temporarily implanted electrodes in the brains of three epilepsy patients. The patients were asked to answer, out loud, nine simple questions such as “How is your room currently?” and “When do you want me to check back on you?” At the same time, machine learning algorithms were able “to decode a small set of full, spoken words and phrases from brain activity in real-time,” according to Facebook. The algorithms correctly identified the question being asked about 75 percent of the time, and the answer the patient gave about 61 percent of the time.

“Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases,” explained David Moses, PhD, a postdoctoral researcher and the study’s lead author. “It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”

The company, however, does not expect the technology to reach consumers anytime soon. “We don’t expect this system to solve the problem of input for AR anytime soon. It’s currently bulky, slow, and unreliable. But the potential is significant, so we believe it’s worthwhile to keep improving this state-of-the-art technology over time. And while measuring oxygenation may never allow us to decode imagined sentences, being able to recognize even a handful of imagined commands, like ‘home,’ ‘select,’ and ‘delete,’ would provide entirely new ways of interacting with today’s VR systems — and tomorrow’s AR glasses,” the researchers said in the blog post. “Imagine a world where all the knowledge, fun, and utility of today’s smartphones were instantly accessible and completely hands-free… A decade from now, the ability to type directly from our brains may be accepted as a given. Not long ago, it sounded like science fiction. Now, it feels within plausible reach.”

Facebook isn’t the only Silicon Valley company interested in building a brain-computer interface. Just earlier this month, Elon Musk’s brain-computer interface start-up Neuralink revealed its own device, which is designed to read signals from the brain, amplify them, and transmit high volumes of data directly to computers. “This has the potential to solve several brain-related diseases. The idea is to understand and treat brain disorders, preserve and enhance your own brain and create a well-aligned future,” Musk said. Neuralink intends to begin human trials of its brain-machine chip before the end of 2020.
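To make the decoding task concrete: as described above, the system essentially picks which member of a small, fixed vocabulary of questions and answers was spoken, based on a window of neural activity. The short Python sketch below illustrates only that closed-vocabulary classification framing. The synthetic data, the feature count, and the logistic-regression classifier are all illustrative assumptions, not the UCSF team’s actual pipeline, which used intracranial recordings and far more sophisticated models.

# Toy sketch, not the UCSF team's actual pipeline: closed-vocabulary
# decoding framed as classifying which of a small set of phrases was
# spoken, given a window of neural features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_PHRASES = 9      # e.g. the nine scripted questions in the experiment
N_TRIALS = 50      # synthetic repetitions of each phrase
N_FEATURES = 128   # stand-in for per-electrode neural features

# Give each phrase a distinct but noisy "signature" so the task is learnable.
signatures = rng.normal(size=(N_PHRASES, N_FEATURES))
X = np.vstack([sig + 0.8 * rng.normal(size=(N_TRIALS, N_FEATURES))
               for sig in signatures])
y = np.repeat(np.arange(N_PHRASES), N_TRIALS)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

On clean synthetic data like this, accuracy comes out near perfect; the roughly 75 and 61 percent figures reported above reflect how much harder the problem is with real neural signals.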