The Military is Using Human Brain Waves to Teach Robots How to Shoot

Modern sensors can see farther than humans. Electronic circuits can shoot faster than nerves and muscles can pull a trigger. Humans still outperform armed robots in knowing what to shoot at — but new research funded in part by the Army may soon narrow that gap.

Researchers from DCS Corp and the Army Research Lab fed datasets of human brain waves into a neural network, a type of artificial intelligence, which learned to recognize when a human is making a targeting decision. They presented their paper at the annual Intelligent User Interfaces conference in Cyprus in March.
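The paper's actual architecture and data pipeline aren't described here, but the general setup, training a classifier to label brain-wave epochs as target-related or not, can be sketched roughly. The example below is a minimal illustration with synthetic arrays standing in for real EEG recordings; the model choice, dimensions, and preprocessing are assumptions for illustration, not the authors' method.

```python
# Minimal sketch (not the paper's model): train a small neural network to
# label EEG epochs as "targeting decision" vs. "no target".
# The data here is random noise standing in for real brain-wave recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_epochs, n_channels, n_samples = 600, 64, 128           # hypothetical EEG dimensions
X = rng.normal(size=(n_epochs, n_channels, n_samples))   # stand-in brain-wave epochs
y = rng.integers(0, 2, size=n_epochs)                     # 1 = targeting decision, 0 = not

# Flatten each epoch into a feature vector; a real pipeline would filter,
# downsample, and extract time-locked features first.
X_flat = X.reshape(n_epochs, -1)
X_train, X_test, y_train, y_test = train_test_split(
    X_flat, y, test_size=0.25, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```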

Why is this a big deal? Machine learning relies on highly structured data, numbers in rows that software can read. But identifying a target in the chaotic real world is incredibly difficult for computers. The human brain does it easily, structuring data in the form of memories, but not in a language machines can understand. It’s a problem that the military has been grappling with for years.

“We often talk about deep learning. The challenge there for the military is that that involves huge datasets and a well-defined problem,” Thomas Russell, the chief scientist for the Army, said at a recent National Defense Industrial Association event. “Like Google just solved the Go game problem.”

Last year, Google’s DeepMind lab showed that an AI could beat the world’s top player in the game of Go, a game considered exponentially harder than chess. “You can train the system to do deep learning in a [highly structured] environment but if the Go game board changed dynamically over time, the AI would never be able to solve that problem. You have to figure out…in that dynamic environment we have in the military world, how do we retrain this learning process from a systems perspective? Right now, I don’t think there’s any way to do that without having the humans train those systems.”

The DCS and Army Research Lab work grew out of a multi-year, multi-pronged program called the Cognition and Neuroergonomics Collaborative Technology Alliance.

“We know that there are signals in the brain that show up when you perceive something that’s salient,” said researcher Matthew Jaswa, one of the authors of the paper. These are called P300 responses: bursts of electrical activity that the parietal lobe of the brain emits in response to stimuli. Discovered in the 1960s, the P300 response is essentially the brain’s answer to a quick-decision task, such as whether an object that appears suddenly is a target.
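To make that concrete: a P300 is conventionally made visible by averaging many EEG epochs time-locked to stimulus onset, so random background activity cancels out and the response to salient (target) stimuli stands out roughly 300 milliseconds after the stimulus. The sketch below illustrates that averaging step with synthetic data that has a bump injected at 300 ms; the sampling rate, amplitudes, and epoch counts are made up for illustration.

```python
# Illustration of the classic averaging approach behind P300 detection:
# average target and non-target epochs separately and compare them.
import numpy as np

fs = 256                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)  # 0-800 ms epoch window after each stimulus
rng = np.random.default_rng(1)

def make_epochs(n, is_target):
    """Generate n synthetic EEG epochs; target epochs get a P300-like bump."""
    noise = rng.normal(scale=5.0, size=(n, t.size))                  # background EEG (microvolts)
    if is_target:
        noise += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # positive peak near 300 ms
    return noise

target_avg = make_epochs(200, True).mean(axis=0)
nontarget_avg = make_epochs(200, False).mean(axis=0)

peak_ms = t[np.argmax(target_avg - nontarget_avg)] * 1000
print(f"largest target/non-target difference at ~{peak_ms:.0f} ms post-stimulus")
```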

The researchers hope their new neural net will enable experiments in which a computer can easily recognize when a soldier is evaluating targets in a virtual scenario, rather than having to spend lots of time teaching the system how to structure each individual’s data: eye movements, P300 responses, and so on. The goal, one day, is a neural net that can learn instantaneously, continuously, and in real time by observing the brainwaves and eye movements of highly trained soldiers doing their jobs.

“If you can improve this to the point where you can put it on guys in the field, you can get to the point where they’re just looking at things and doing their normal tasks,” Jaswa said. “All their years of experience that feed into that normal situational awareness. We’re peeking into what their brains are doing. If you can have enough guys in a squad looking at similar things, then we can say, ‘Three or four guys looked at this thing. It’s probably important.’”
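Jaswa’s squad scenario amounts to a simple voting scheme: if several soldiers produce target-like brain responses while looking at roughly the same spot, that spot gets flagged. The sketch below is a hypothetical rendering of that idea; the data format, grid bucketing, and three-soldier threshold are invented for illustration and are not taken from the research.

```python
# Hypothetical voting sketch: flag a location when several soldiers produce a
# target-like brain response while looking near the same spot.
from collections import defaultdict

# (soldier_id, (x, y) gaze location in the scene, brain response flagged as target-like)
detections = [
    ("A", (10.2, 4.1), True),
    ("B", (10.5, 3.9), True),
    ("C", (10.3, 4.3), True),
    ("D", (55.0, 20.0), True),
    ("A", (30.1, 8.8), False),
]

def grid_cell(point, cell_size=1.0):
    """Bucket a gaze point into a coarse grid cell so nearby looks group together."""
    x, y = point
    return (round(x / cell_size), round(y / cell_size))

votes = defaultdict(set)
for soldier, point, flagged in detections:
    if flagged:
        votes[grid_cell(point)].add(soldier)

for cell, soldiers in votes.items():
    if len(soldiers) >= 3:
        print(f"location near grid cell {cell} flagged by {len(soldiers)} soldiers -- probably important")
```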

The research does not mean that robots can now outshoot humans. There’s a lot more work to do. But the neural net could make that study go a lot faster.

* * *
