In tests, the man was able to achieve writing speeds of 90 characters per minute (about 18 words per minute), with approximately 94 percent accuracy (and up to 99 percent accuracy with autocorrect enabled).
As he did this, electrodes implanted in his motor cortex recorded his brain activity, and algorithms running on an external computer interpreted those signals, decoding the imagined pen trajectories with which T5 mentally traced the 26 letters of the alphabet and some basic punctuation marks.
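For intuition only, here is a toy Python sketch of what "decoding a pen trajectory" can mean: match an imagined stroke against per-letter reference trajectories. This is not the study's actual method (its decoder was a trained neural network working on neural signals, not clean 2D traces), and the templates below are hypothetical.

```python
import numpy as np

def resample(traj, n=50):
    """Resample a (points, 2) trajectory to n evenly spaced points along its arc."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.c_[np.interp(t, d, traj[:, 0]), np.interp(t, d, traj[:, 1])]

def decode(traj, templates):
    """Return the letter whose reference trajectory is closest to the input."""
    x = resample(np.asarray(traj, dtype=float))
    return min(templates, key=lambda ch: np.linalg.norm(x - templates[ch]))

# Hypothetical per-letter templates, e.g. averaged from calibration trials.
templates = {
    "l": resample(np.array([[0.0, 1.0], [0.0, 0.0]])),             # down-stroke
    "c": resample(np.array([[1.0, 1.0], [0.0, 0.5], [1.0, 0.0]])),  # open curve
}
print(decode([[0.05, 0.95], [0.0, 0.5], [0.02, 0.0]], templates))  # -> "l"
```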
This is fascinating to me. I wonder what the speed bottleneck is? Is it human thought or the computer's ability to process his thoughts?
And could this one day be used to parse entire words instantly (I think of the word "hamburger" and the computer types it out without the need to spell out individual letters)?
I would be surprised if a bottleneck would be on the computer side, or if that would be a significant technological hurdle.
There's some evidence that the information throughput of spoken languages ends up being roughly the same across languages, but that doesn't necessarily apply to a BCI system.
In this case, the brain activity that is being detected and translated is the intent to move a paralyzed limb in a pattern corresponding to writing letters. The bottleneck for that would likely be the speed at which the individual is able to think about those writing motions, and that would depend on the language/alphabet (and possibly other things).
If I were freshly paralyzed, I might find that I "think" of those motions more slowly because, as a lefty, I've learned to slow those movements to avoid hand cramps in a writing system not designed for left-handers. That slowness might go away as I got used to the BCI keyboard.
Lots of keyboards are already predictive, like swiping on Gboard or Pinyin input. To borrow an example: "typing wxhchbqlin becomes 我喜欢吃冰淇淋 (= I like eating ice cream. Wo xihuan chi bingqilin)".
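As a rough sketch of how that kind of abbreviation input can work (invented phrase list and frequencies, not Gboard's or any Pinyin engine's actual algorithm): index candidate phrases by their initial letters and rank matches by frequency.

```python
from collections import defaultdict

# Candidate phrases with made-up frequency counts.
phrases = {"ice cream": 120, "instant coffee": 85, "I can": 300}

# Index each phrase under the initial letters of its words.
index = defaultdict(list)
for phrase, freq in phrases.items():
    key = "".join(w[0].lower() for w in phrase.split())
    index[key].append((freq, phrase))

def expand(abbrev):
    """Return candidate phrases for an initial-letter abbreviation, most frequent first."""
    return [p for _, p in sorted(index.get(abbrev, []), reverse=True)]

print(expand("ic"))  # ['I can', 'ice cream', 'instant coffee']
```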
You could probably get away with thinking of writing "hambu" and trusting the autocorrect to fill in the rest, but that might not always work.
In situations where you're predicting what someone wants to type, you're going to have a tradeoff between accuracy and speed. You may end up having a secondary system that reads through a corpus of all English articles written that year, or an individual's emails, to find the prevalence of words. Or you may have dictionaries for medical terms that can be loaded, letting the user set the context.
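A minimal sketch of that idea, assuming invented frequency counts and a hypothetical "medical" dictionary the user can switch on to re-rank completions:

```python
# Prefix completion over a general frequency table, with an optional
# domain dictionary that boosts context-specific terms. All counts invented.
general = {"hamburger": 900, "hamstring": 400, "hamlet": 150}
medical = {"hamstring": 5000, "hamartoma": 800}

def complete(prefix, context=None, k=3):
    scores = dict(general)
    if context:  # e.g. the user has selected the "medical" context
        for word, count in context.items():
            scores[word] = scores.get(word, 0) + count
    hits = [(c, w) for w, c in scores.items() if w.startswith(prefix)]
    return [w for _, w in sorted(hits, reverse=True)[:k]]

print(complete("ham"))                    # ['hamburger', 'hamstring', 'hamlet']
print(complete("ham", context=medical))   # ['hamstring', 'hamburger', 'hamartoma']
```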
You may have a system that auto-corrects on a sentence or paragraph level, turning "How to Wreck a Nice Beach Using Calm Incense" into "How to recognize speech using common sense" based on some statistical model of which words make sense together.
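Something like that can be sketched with a toy bigram model: score each candidate transcription by how plausible its word pairs are, and keep the higher-scoring one. All counts below are invented; real systems use much larger language models.

```python
import math

# Invented bigram counts for illustration.
bigram_counts = {
    ("recognize", "speech"): 50, ("wreck", "a"): 2, ("a", "nice"): 30,
    ("nice", "beach"): 5, ("common", "sense"): 80, ("calm", "incense"): 1,
}

def score(words, smoothing=0.5):
    """Sum of smoothed log bigram counts; higher means more plausible."""
    return sum(math.log(bigram_counts.get(pair, 0) + smoothing)
               for pair in zip(words, words[1:]))

a = "how to wreck a nice beach using calm incense".split()
b = "how to recognize speech using common sense".split()
print(score(a), score(b))  # b scores higher, so it wins
```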
The real bottleneck appears when you consider speed after typo corrections are made. The only difference between the physical input of someone unparalyzed and someone paralyzed is how good the BCI is at mapping thoughts about physical movement onto a virtual movement.
> could this one day be used to parse entire words instantly
Pretty sure you can't with this approach, but I'm guessing the underlying question is whether BCIs will ever get to the point where they can detect thoughts of complete "words" or "sentences", and what the brain activity corresponding to that looks like.
I had some (unimpressive) experience as an undergrad with brain-computer interfaces. Part of that was reviewing literature in the area, and part was some coding with Emotiv (with a diversion into OpenEEG because shipments were delayed 5 months...). My understanding of the topic is a bit dated and shaky, but I'll take a stab at answering.
There are a lot of different styles of BCIs, and they have different tradeoffs, especially in terms of how invasive they are.
Some improvements with BCIs are made on the hardware end:
Temporal resolution (the rate at which you sample that activity)
Spatial resolution (how close you get to sampling the activity of a single neuron)
Areas of the brain they can sample from (e.g., EEGs ain't getting much besides cortical activity)
Safety of more invasive approaches
Costs per use/device/areas sampled
Other improvements are on the software or human end:
The regimen for adapting to a cochlear implant could be where the most improvement can be made: reshaping the brain rather than improving the hardware
Emotiv had training features where a user would be told what to think about and the software would try to calibrate (a sketch of that kind of loop follows this list)
Predictive typing is mostly a BCI-agnostic technology. Livescribe pens kinda do the same thing, but with a physical pen instead.
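Here's a rough sketch of what that kind of calibration loop looks like, with a stand-in record_epoch() generating synthetic data in place of real hardware; the feature extraction and classifier choice are my assumptions, not Emotiv's actual pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def record_epoch(prompt):
    """Stand-in for hardware capture: returns synthetic (channels, samples) 'EEG'."""
    offset = {"push": 1.5, "relax": 0.0}[prompt]
    return rng.normal(offset, 1.0, size=(14, 128))

def band_power(epoch):
    """Crude per-channel power feature (real systems band-pass filter first)."""
    return (epoch ** 2).mean(axis=1)

# Calibration loop: prompt the user, record an epoch, collect labeled features.
X, y = [], []
for prompt in ["push", "relax"] * 20:
    X.append(band_power(record_epoch(prompt)))
    y.append(prompt)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([band_power(record_epoch("push"))]))  # likely ['push']
```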
I would guess that at some point--maybe not in our lifetimes--there will be electrodes-on-a-chip that don't screw with regular brain function, that can sample small numbers of neurons, and that can reliably report activity regardless of location in the brain. At that point you'd be pretty close to some sort of mind-reading being possible.
Along the road to that, you may find that a system requires reteaching the brain, the way making someone read faster might involve training them not to subvocalize and to read chunks of words at a time. Or you may have a system where you can reliably signal a 1-5 with brain activity, paired with sophisticated statistical approaches that present a 1-5 menu of the words/sentences you'd probably like to use, like Google's canned replies do. Or hardware may improve past some critical threshold that allows sampling the relevant part of the brain at the relevant resolution to pick up common thoughts. Improvements will probably come on all fronts, and progress may be a bit jagged in where it succeeds/fails.
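A toy sketch of that 1-5 menu idea: the predictive model does the heavy lifting by surfacing five candidates, so the BCI only has to decode which slot the user selects. The word counts below are invented.

```python
# Hypothetical next-word counts from some predictive model.
next_word_counts = {
    "ice": 40, "a": 120, "the": 90, "lunch": 35, "dinner": 30, "tea": 10,
}

def menu(k=5):
    """Top-k candidates, most likely first."""
    ranked = sorted(next_word_counts, key=next_word_counts.get, reverse=True)
    return ranked[:k]

choices = menu()
for i, word in enumerate(choices, 1):
    print(f"{i}: {word}")

selected = 2                      # stand-in for the decoded 1-5 brain signal
print("typed:", choices[selected - 1])
```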
The thing that makes this interesting to me is, aside from it hopefully not being used by capitalism to track and advertise, wouldn't something like this be helpful for describing concepts one just can't put into words, or for using more accurate words than one knows?
I, personally, sometimes have interesting ideas I feel like sharing, but am limited by not knowing the accurate words, so a lot of my writing ends up "half baked". If something like this can take that idea and put it into words accurately, it might be very useful to dumb people like me :p