This is because scientists at the University of California, San Francisco, in a research project funded by Facebook, have managed to take our brain signals, decode them, and translate them into text. In a paper published in the journal Nature, Edward Chang, a neurosurgeon and the project's lead researcher, says, “To date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of a human conversation.”
David Moses, another researcher on the team adds, “This is the first time this approach has been used to identify spoken words and phrases. It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate.”
While the system is still in its rudimentary stages, it is already proficient enough to let patients answer simple questions, such as how they are feeling, or whether the room is too hot or cold, too bright or dark, and so on.