They still haven't overcome the worst part of these wearable tech devices: that you look like a complete %^&* wearing one.
I do not want a record made of the things I say in my head. No way.
I imagine this would be for people with certain difficulties?
Ttaskmaster
I imagine this would be for people with certain difficulties?
Yeah, there could be that…
It wouldn't understand anything I say unless it had a dictionary based on grunts and swearing…
So whilst that guy was wandering around the supermarket, did they definitively answer the age-old cliché of whether men think about sex every few seconds? Did his inner voice make lewd comments?
A trivial example, but if someone said such things out loud they would be guilty of harassment; ‘hearing’ the inner voice brings up all kinds of issues about thought crime and privacy. What if a disabled user says something with their inner voice that was never meant to be repeated in a Stephen-Hawking voice, and it is then broadcast unintentionally…
What about stuff you're saying subconsciously?
Minority Report right here.
It's obviously what will happen in the future. I mean, the new tech overlords will have such devices embedded in their minds, creating a neural/digital interface. That's why I find it funny that Elon Musk has closed a FB page over data harvesting. What does he imagine will happen when the interface is within our minds? As has always been the way, new tech is tested and developed in conjunction with the military, so if the military want to create an enhanced soldier they will have to use the interface to change the perceptions of the soldier, i.e. it will develop as a two-way interface. I know Stephen Hawking was interested in this idea, but he had got so used to his cheek-switch interface that it had become second nature.
Men Against Fire (Black Mirror, season 3, episode 5) explored the consequences of this idea.
https://www1.gomovies.film/tv/black-mirror-season-3/watching.html/?episode=7218
There are enough internal verbalisations on the likes of Twitter!
I see these external devices as part of the development towards internal augmentation, not only in terms of the technology itself but also in getting the public used to the idea. I think it would only function like a keyboard (i.e. with an OFF switch). I think as our tech advances exponentially, direct control will give humans the speed they need to keep up with the tech.
I think it's exciting. Imagine when gaming grows up, using VR and entering another world where you control the character and get feedback through a neural interface. That's the future of entertainment and all simulators.
All this idea of internal chatter is strange. I don't believe people would think or type ninety percent of what they do if there wasn't a medium that encouraged it. It's interesting: companies can only supply the tech, or a social media concept, and watch what people do with it. So I suppose the worst of human behaviour can get exaggerated if the medium suits that.
I'm still not sure how this works. Surely internal vocalisations don't leave your brain, so there's no nerve impulses to detect around the jawline? Or is this reliant on the user training themselves to make tiny motions of tongue & vocal cords without actually vocalising?
Xlucine
I'm still not sure how this works. Surely internal vocalisations don't leave your brain, so there's no nerve impulses to detect around the jawline? Or is this reliant on the user training themselves to make tiny motions of tongue & vocal cords without actually vocalising?
If you read the article it says that electrodes pick up the brain signals whilst still in the brain
3dcandy
If you read the article it says that electrodes pick up the brain signals whilst still in the brain
1) Where?
2) EEG from the jaw? Why?
Xlucine
1) Where?
2) EEG from the jaw? Why?
Underneath the first picture in the Hexus excerpt for starters! But yes - all a bit pointless
3dcandy
Underneath the first picture in the Hexus excerpt for starters! But yes - all a bit pointless
Explaining how this device works in more detail, the MIT News blog reports that the wearable contains electrodes in the jaw and face areas that "are triggered by internal verbalizations — saying words ‘in your head’ — but are undetectable to the human eye". Machine learning has been leveraged so the system understands a certain array of useful vocabulary.
The bolded bit implies that we're talking about conscious muscle activations, not a completely internal monologue.
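So the pipeline is basically: electrodes pick up faint neuromuscular signals from the jaw/face, and a machine-learning model maps each signal window onto one of a limited set of vocabulary words. Purely as an illustration of that idea (synthetic signals, a toy two-word vocabulary, and a simple nearest-centroid classifier; nothing here is from the actual AlterEgo implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def features(window):
    """Simple per-channel features: RMS amplitude and zero-crossing rate."""
    rms = np.sqrt(np.mean(window ** 2, axis=1))
    zcr = np.mean(np.abs(np.diff(np.sign(window), axis=1)) > 0, axis=1)
    return np.concatenate([rms, zcr])

def synth_window(word, n_channels=4, n_samples=250):
    """Synthetic stand-in for an electrode-signal window; each 'word' gets
    its own characteristic amplitude pattern across the channels."""
    base = {"yes": [1.0, 0.2, 0.8, 0.1], "no": [0.1, 0.9, 0.2, 1.0]}[word]
    scale = np.array(base)[:, None]
    return scale * rng.standard_normal((n_channels, n_samples))

# "Training": average feature vector (centroid) per vocabulary word.
vocab = ["yes", "no"]
centroids = {w: np.mean([features(synth_window(w)) for _ in range(20)], axis=0)
             for w in vocab}

def classify(window):
    """Assign a window to the nearest word centroid in feature space."""
    f = features(window)
    return min(vocab, key=lambda w: np.linalg.norm(f - centroids[w]))

print(classify(synth_window("yes")))  # -> yes
print(classify(synth_window("no")))   # -> no
```

The real system reportedly uses a neural network over many electrode channels and a larger vocabulary, but the "limited vocabulary of known words" framing in the article is why a closed-set classifier like this is even feasible.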
I swear I can't distinguish sci-fi from reality any more. There's an overlap. I think the wearable one could only pick up internal ‘in the head’ verbalisations. They do have those caps that can pick up mental activity. It seems they've already tried the Neuralink idea on mice. Here's a vaguely informative clip.
https://www.youtube.com/watch?v=S6n26bBBr80
We present a wearable interface that allows a user to silently converse with a computing device without any voice or any discernible movements - thereby enabling the user to communicate with devices, AI assistants, applications or other people in a silent, concealed and seamless manner. A user's intention to speak and internal speech is characterized by neuromuscular signals in internal speech articulators that are captured by the AlterEgo system to reconstruct this speech. We use this to facilitate a natural language user interface, where users can silently communicate in natural language and receive aural output (e.g - bone conduction headphones), thereby enabling a discreet, bi-directional interface with a computing device, and providing a seamless form of intelligence augmentation. The paper describes the architecture, design, implementation and operation of the entire system. We demonstrate robustness of the system through user studies and report 92% median word accuracy levels.
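That "92% median word accuracy" from the abstract is the middle value of per-user word accuracies across the user studies. As a quick illustration of what that statistic means (the per-user numbers below are made up, not the paper's data):

```python
# Hypothetical per-user results: (correctly recognised words, total words).
# These figures are illustrative only, not from the AlterEgo paper.
results = [(46, 50), (45, 50), (48, 50), (40, 50), (47, 50)]

accuracies = sorted(correct / total for correct, total in results)
n = len(accuracies)
# Median: middle value (or mean of the two middle values for even n).
median = (accuracies[n // 2] if n % 2
          else (accuracies[n // 2 - 1] + accuracies[n // 2]) / 2)
print(f"median word accuracy: {median:.0%}")  # -> median word accuracy: 92%
```

Worth noting that a median hides the spread: one user at 80% and another at 96% still report as "92% median", which is why papers usually give the per-user breakdown as well.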