Saturday, January 19, 2019

MIT's "Mind Reading" Wearable Lets You Silently Interact With All Your Devices

 Also, see this Guardian article on the same topic. For past posts about classified technology go here and here. Read the article below the two videos. 



WHY THIS MATTERS IN BRIEF

As computing becomes ubiquitous and embedded in the devices around us, we won’t always want to talk out loud to use them. That’s one of the many use cases for this technology.


MIT researchers have developed a new form of computer interface called AlterEgo that lets users silently converse with a computing device and that can transcribe words that the user verbalizes internally but doesn’t actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the user's jaw and face that are triggered by internal verbalization, that is, by saying words "in your head" in a way that is undetectable to the human eye. Those signals are then fed to a machine learning system that has been trained to correlate particular signals with particular words, which lets the user "silently" converse and interact with, for example, Google, as the clip below shows.
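To make the signal-to-word idea concrete, here is a minimal, hypothetical sketch: one vector of electrode readings is mapped to a word with a simple nearest-centroid rule. The word list, the seven-channel "signatures," and the training data are all invented for illustration; the real AlterEgo system trains a neural network on recorded neuromuscular signals.

```python
import math
import random

# Hypothetical sketch: classify a subvocalized word from electrode readings.
random.seed(0)
WORDS = ["yes", "no", "plus", "times"]
CHANNELS = 7  # the paper reports seven reliable electrode locations

# Fake per-word signal signatures, plus noisy training samples around them.
signature = {w: [random.gauss(0, 1) for _ in range(CHANNELS)] for w in WORDS}
train = [([s + random.gauss(0, 0.1) for s in signature[w]], w)
         for w in WORDS for _ in range(20)]

def centroid(word):
    """Average training vector for one word."""
    samples = [x for x, label in train if label == word]
    return [sum(col) / len(samples) for col in zip(*samples)]

def classify(signal):
    """Return the word whose training centroid is nearest to the signal."""
    return min(WORDS, key=lambda w: math.dist(signal, centroid(w)))

print(classify(signature["plus"]))  # classifies a clean "plus" signature
```

The real classifier is far more sophisticated, but the shape of the problem is the same: a fixed vocabulary, labeled signal examples, and a rule that assigns a new signal to the closest-matching word.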

The device is thus part of a complete “silent computing system” that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device, an ‘Intelligence Augmentation’ device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system.

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers described their device in a paper presented at the Association for Computing Machinery’s Intelligent User Interfaces (IUI) conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known. But subvocalization as a computer interface is largely unexplored.

The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws. But in more recent experiments, the researchers are now getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.
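The electrode-selection step above can be sketched in miniature: score each of 16 candidate channels by how well it separates two subvocalized words across four repetitions, then keep the best seven. Everything here, including the fake data, the word-dependent offset on channels 0 through 6, and the mean-gap scoring rule, is invented for illustration; the paper's actual analysis is more involved.

```python
import random

# Hypothetical sketch of picking the most discriminative electrode channels.
random.seed(1)
N_CHANNELS, N_REPS = 16, 4  # 16 electrode locations, each word repeated 4 times

# Fake recordings: channels 0-6 carry a word-dependent offset, the rest are noise.
def record(word):
    return [(2.0 if ch < 7 and word == "a" else 0.0) + random.gauss(0, 0.2)
            for ch in range(N_CHANNELS)]

trials = {w: [record(w) for _ in range(N_REPS)] for w in ("a", "b")}

def separation(ch):
    """Gap between the two words' mean readings on one channel."""
    mean = lambda w: sum(t[ch] for t in trials[w]) / N_REPS
    return abs(mean("a") - mean("b"))

# Keep the seven channels whose readings best distinguish the two words.
best = sorted(range(N_CHANNELS), key=separation, reverse=True)[:7]
print(sorted(best))
```

The same ranking idea scales down naturally, which is why the researchers could later drop to four electrodes along one jaw with comparable results.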

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies of about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them in turn, and so on. The output of the final layer is the result of the classification task.
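The layered structure described above can be shown in a few lines: each node weights all of the previous layer's outputs, adds a bias, and applies an activation, and the data flow bottom to top. The weights, sizes, and activations below are arbitrary placeholders, not the paper's architecture.

```python
# Minimal forward pass through a layered network of simple processing nodes.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases, activation):
    """One layer: each node weights all inputs, adds a bias, applies activation."""
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(signal, layers):
    """Feed data into the bottom layer and pass each result to the next layer."""
    out = signal
    for weights, biases, act in layers:
        out = layer(out, weights, biases, act)
    return out

# Tiny two-layer example: 3 inputs -> 2 hidden nodes -> 1 output score.
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1], relu),
    ([[1.0, -1.0]], [0.0], lambda x: x),
]
print(forward([1.0, 2.0, 3.0], net))
```

In the real system the final layer would have one output per vocabulary word, with the largest score taken as the transcribed word.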

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
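The per-user customization step amounts to freezing the generic network's early layers and retraining only the last two. A hedged sketch of that bookkeeping, with an invented `Layer` class standing in for real network layers:

```python
# Sketch of per-user customization: freeze all layers except the last two.

class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = False

def customize(network):
    """Mark only the last two layers trainable; return their names."""
    for layer in network:
        layer.trainable = False
    for layer in network[-2:]:
        layer.trainable = True
    return [l.name for l in network if l.trainable]

net = [Layer(f"layer{i}") for i in range(5)]
print(customize(net))
```

Retraining only the top of the network keeps the generic signal features learned from many users while adapting the final word decision to one person's neurophysiology, which is why the customization step in the study took only about 15 minutes.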

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies.

“We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

As ever, the potential for the technology could be as interesting as it is huge.



Friday, January 11, 2019

Watch Video: Nanotubes Self-Assemble

See here for more about the inevitability of Smart Dust. Here is a good introduction post about Smart Dust and here is the Wikipedia entry on it. See here for more about the world's smallest robots: nanotechnology. See here for 'Nanotechnology and the Brain.' See here for the 'Smart Dust' section of my blog. (Be sure to scroll through all of the articles.) 

See here for more about "Extreme Genetic Engineering: an Introduction to Synthetic Biology." See here for a documentary called "Playing God." See here for more about creating the Borg; molecular biology and nanotechnology. 

See here for weaponizing nanotechnology- creating viruses and bacteria with RNA. Go here to see what the FBI is saying about the dangers of this technology. See here for more articles located under the "Bio-terrorism" category of my blog, (be sure to scroll down and go through all of the articles there.)  

See here for more about Electronic Warfare on Wikipedia. See here for more about NASA talking about using nanotechnology and microwaves as weapons. See here for more about the DARPA Control Grid. See here for more about the "Five-Eyes Intelligence" and Echelon. See here for more about classified Scalar Waves.

See this video with Jose Delgado controlling a bull with a brain-computer-interface in the 60's. See this video from CNN from the 80's about mind control.


Thursday, January 3, 2019

Question Posed to Some Israelis: Do You Believe That Gentiles (Non-Jews) Will Be Slaves For The Jews?

For previous posts on this topic, see hereThe reality is, this contempt for non-Jews comes right out of the Talmud. See here for more articles about the Talmud, be sure to scroll through them all. Also, be sure to watch this short excellent video from an honest Israeli talking about Judaism. Jews are also the largest group involved with Transhumanism, (see here and here for more about this.) This is interesting to consider once you know Judaism's history with slavery and usuryThis being so, that doesn't mean that all Jews have crazy beliefs like this. But, this mentality is the driving force behind militant Zionism and Homeland Security. For one example of this, see here for the former head of Homeland Security Michael Chertoff and Chabad Lubavitch = Crazy Beliefs. Donald Trump's son in law Jared Kushner is a part of Chabad

Wednesday, January 2, 2019

More People Who Like To Gang Up On Truth-Tellers

Must-See Pictures Below, Remember 
These People. They Are Traitors To Canada! They Are Evil Organized Stalking Psychopaths, They All Know This Is Happening To Me

The scumbags below are all working with psychos like thisthisthis, and this. They are working with community watch and the RCMP. See here and here for RCMP censorship, see hereherehereherehere, and here for more about targeting political dissidents. See here for more about the Anti-Defamation League that works with the RCMP, (be sure to scroll down and go through all of the articles.)

See here for a neighborhood watch Stasi book. See here for slavery by satellite and here for satellite and microwave torture with Dr. John Hall. See here for a Canadian intelligence agency "recruiting" video featuring stalking and surveillance. 

This stalking, surveillance, and torture comes down from intelligence agencies in the United States and Canada. See here and here for the connections between Zionism and Homeland Security, see here for a lecture from a former employee from Homeland Security talking about some of the tactics used by the Stasi, See here for vigilante justice "Zionist" style.

See here for more about classified technology, see here for a list of links about microwave weapons, learn more about intelligence tactics here, see here for discrediting people with hi-technology.

All of the people below live in Mission, British Columbia. They are all stupid enough to be involved in organized stalking and to have Facebook accounts.


1. Henry Tusi - here is his LinkedIn page. (The coward took down his Facebook account and his picture on LinkedIn, I will contact people at your school scumbag.) This piece of crap is at York University so look for him there. He should be kicked out of University. Hey Henry, you like science so much, tell us the truth, you lying psychopath. You like stalking and torturing people, don't you? You don't like people who tell the truth?  


2. Cameron Snow - lives in Mission, B.C. 


3. Cole Matt - lives in Mission B.C. 


4.  Colby Noa- Address- #20- 32705 Fraser Crescent, Mission B.C.  



5. Liam Noa and hereAddress- #20- 32705 Fraser Crescent, Mission B.C.