Sunday, June 8, 2014

Controlling a Hex Bug with my Brain Waves

Ever since my effort with OpenBCI began, I've been looking to control something with my brain.  Sure, a while back, I was successful in lighting an LED with my brain waves, but that's pretty simple.  I wanted something more.  And now I can do it.  I can control a robot with my mind!  Yes!

Approach:  My robot has just a few actions it can perform: turn left, turn right, walk forward, and fire.  To make this brain-controlled, I need a way to invoke these commands using signals from my brain.  Ideally, I'd just think the word "Fire!" and the robot would respond.  Unfortunately, those kinds of brain waves are too hard to detect.  Instead, I need to use brain waves that are easy to detect.  For me, "easy" brain waves include the Alpha waves (10 Hz oscillations) that occur when I close my eyes, as well as the brain waves that occur when I watch my blinking movies (a.k.a. visual entrainment).  So, my approach is to use OpenBCI to record my brain waves, to write software to detect these specific types of brain waves, and to issue commands to the robot based on which brain waves are detected.

Here are all the pieces that you see in the video
Hardware Setup:  The core hardware for this hack is similar to my usual OpenBCI setup: EEG electrodes, an OpenBCI board, an Arduino Uno, and my computer. Added to this setup is the Hex Bug itself and its remote control, which I hacked so that the remote can be controlled by an Arduino.  So, as shown below, my brain wave signals go from my head all the way to the PC.  The PC processes the EEG data looking for the Alpha waves or the visually-entrained waves.  If any are detected, it decides what commands to give the robot.  The commands are conveyed back to the Arduino, which then drives the remote control, which the Hex Bug receives over its usual IR link.
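To make the PC-to-Arduino leg of that chain concrete, here is a minimal sketch of what the command link might look like.  This is not my actual code: the one-character protocol and the byte values below are hypothetical stand-ins, but some simple serial scheme like this is what carries each command down to the Arduino that drives the remote.

```python
# Hypothetical one-character protocol between the PC and the Arduino.
# The real byte values and port settings in my setup may differ.
COMMAND_BYTES = {
    "TURN_LEFT": b"L",
    "TURN_RIGHT": b"R",
    "FORWARD": b"W",
    "FIRE": b"F",
}

def encode_command(command):
    """Translate a command name into the single byte sent over serial."""
    return COMMAND_BYTES[command]

# With pyserial, the PC side would then look something like:
#   import serial
#   port = serial.Serial("COM3", 115200)   # port name and baud are guesses
#   port.write(encode_command("FIRE"))
```

On the Arduino side, the matching sketch just reads one byte at a time and toggles the pins wired into the hacked remote control.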

Here is the schematic of how the pieces work together.

EEG Setup:  I'm going to be measuring my Alpha waves and the brain waves induced through visual entrainment.  Based on my previous experience, I know that both are best recorded using an electrode on the back of the head (at the "O1" position, if you're familiar with the 10-20 electrode placement standard).  I do not need electrodes all over my head; that one sensing electrode is it.  Of course, EEG also requires a reference electrode, which I put on my left earlobe.  And, finally, EEG often has a third electrode ("bias" or "driven ground"), which I placed on my right earlobe.

Looking at the Frequency of my Brain Waves:  As mentioned above, my approach is to control my robot by detecting Alpha waves and by detecting visually-entrained brain waves.  These are easily detectable because they occur at specific frequencies.  Alpha waves occur around 10 Hz and the visually-entrained brain waves occur at the blink rate(s) of whatever movies I use (my best results were from 5 Hz and 7.5 Hz movies). So, to control my robot, I will be looking for EEG signals at these frequencies: 5 Hz, 7.5 Hz, and 10 Hz.  I'm going to "look" for these frequencies by writing some EEG processing software that'll examine the frequency content of my EEG signal to see if these frequencies are present.

EEG Processing:  The flow chart above shows the steps that I use to process the EEG signal (my software is here).  Once the PC gets EEG data from the OpenBCI board, the first step is to compute the spectrum of the signal, which tells me the content of the EEG signal as a function of frequency.  I then search through the relevant part of the spectrum (4-15 Hz) to find the peak value.  I note both its frequency value and its amplitude.  In parallel, I also compute the average EEG amplitude across the 4-15 Hz frequency band.  This average value is my baseline for deciding whether my peak is tall (strong) or short (weak).  By dividing the amplitude of my peak by this baseline value, I get the signal-to-noise ratio (SNR) of the peak.  The SNR is my measure of the strength of the peak.  The output of the EEG processing, therefore, is two values: the frequency of the peak and the SNR of the peak.
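In code, that processing chain might look something like this.  This is a minimal Python/NumPy sketch of the steps described above, not my actual Processing code; the sample rate, window length, and FFT details are stand-ins.

```python
import numpy as np

def detect_peak(eeg, fs=250.0, f_lo=4.0, f_hi=15.0):
    """Return (peak_freq_hz, peak_snr_db) for one window of EEG data.

    A sketch of the processing described in the text: compute the
    spectrum, find the peak in the 4-15 Hz band, and measure its SNR
    against the average amplitude across that same band.
    """
    n = len(eeg)
    window = np.hanning(n)                      # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(eeg * window))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Restrict attention to the band where my target brain waves live.
    band = (freqs >= f_lo) & (freqs <= f_hi)
    band_amp = spectrum[band]
    band_freqs = freqs[band]

    i_peak = np.argmax(band_amp)                # tallest peak in the band
    peak_amp = band_amp[i_peak]
    baseline = band_amp.mean()                  # average amplitude, 4-15 Hz

    snr_db = 20.0 * np.log10(peak_amp / baseline)
    return band_freqs[i_peak], snr_db
```

Feeding this function a two-second window containing a strong 10 Hz oscillation (like eyes-closed Alpha) should return a peak frequency near 10 Hz with an SNR comfortably above the 6 dB threshold discussed below.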

Deciding My Robot's Action:  Once my EEG processing finds the frequency and SNR of the peak in my EEG spectrum, I now have to decide how to act on that information.  After some trial and error, I settled on the algorithm shown in the flow chart above.  It's got three steps:
  • SNR Check:  First, I decide whether the current peak in the spectrum is legitimate, or if it is likely to be just noise.  I don't want to issue a command if it is just noise, because then my robot would take all sorts of actions that I didn't intend.  So, to decide if the peak is likely to be legitimate, I look at the SNR of the peak.  If it has a big SNR, I'll accept it as a legitimate peak.  If it is too small, I'll take no further action.  Right now, my threshold for this decision is at 6 dB.  Setting a higher threshold results in fewer false commands (which would be good), but it also makes the system less sensitive to legitimate commands (which is bad).  This 6 dB threshold resulted in an OK (but not great) balance.
  • Frequency Check:  If the peak seems legitimate, I decide how to command the robot based on the frequency of the peak.  If the peak is between 4.5-6.5 Hz, I must be looking at the right side of my 2-speed blinking movie (i.e., the portion that blinks at 5 Hz), so the computer prepares the "Turn Right" command.  Alternatively, if the EEG peak is 6.5-8.5 Hz, I must be looking at the left side of my 2-speed blinking movie (i.e., the portion that blinks at 7.5 Hz), so it prepares the "Turn Left" command.  Finally, if the EEG peak is 8.5-12 Hz, it must be my eyes-closed Alpha waves, so the computer prepares the "Move Forward" command.
  • New Command Check:  Before issuing the command, I check to see whether this command is the same as the last command that was extracted from my brain waves.  If the latest command is different, I hijack the command and, instead, issue the "Fire!" command.  If the latest command is the same, I go ahead and issue the left / right / forward command like normal.  The reason for this hijack is that I have no other type of easily-detected brain wave that I can use for commanding the robot to fire.  This approach of issuing "Fire!" on every change in command seemed like a decent way of getting a 4th command out of 3 types of brain waves.
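The three steps above can be sketched as a single decision function.  This is a simplified illustration, not my actual code; the thresholds and frequency bands come from the text, while the command names are made up for the example.

```python
SNR_THRESHOLD_DB = 6.0  # detection threshold from the text

def decide_command(peak_freq_hz, snr_db, last_command):
    """Map a detected spectral peak to a robot command (or None)."""
    # Step 1: SNR check -- ignore weak peaks, they are probably noise.
    if snr_db < SNR_THRESHOLD_DB:
        return None

    # Step 2: frequency check -- map the peak frequency to a command.
    if 4.5 <= peak_freq_hz < 6.5:
        command = "TURN_RIGHT"   # 5 Hz blinking (right side of movie)
    elif 6.5 <= peak_freq_hz < 8.5:
        command = "TURN_LEFT"    # 7.5 Hz blinking (left side of movie)
    elif 8.5 <= peak_freq_hz <= 12.0:
        command = "FORWARD"      # eyes-closed Alpha waves (~10 Hz)
    else:
        return None

    # Step 3: new-command check -- any *change* of command is hijacked
    # into "Fire!", which is how a 4th command comes from 3 brain waves.
    if last_command is not None and command != last_command:
        return "FIRE"
    return command
```

Note that the caller should remember the frequency-derived command (not "FIRE") as the last command, since the hijack is triggered by a change in the brain-wave-derived command itself.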
Putting It All Together:  As you can see in the movie, I was eventually able to get all of these pieces working together to allow me to command the Hex Bug using just my brain waves.  Of course, it didn't work the first time.  Even once I got all the hardware working, I still needed to tune a bunch of the software parameters (FFT parameters and the detection threshold) until I got something that worked somewhat reliably.  To help with this tuning process, I used the spectrum display that is in my Processing GUI.  Some screen shots are below.

Example EEG spectrum when I stared at the right side of my two-speed blinking
movie.  It induced 5 Hz brain waves.  I programmed 5 Hz to mean "Turn Right".
The SNR here is between 6 and 7 dB.

Here's an example EEG spectrum when I stared at the left side of my two-speed
blinking movie.  It induced 7.5 Hz brain waves.  When the GUI detected 7.5 Hz,
it issued a "Turn Left" command to the Hex Bug.  The SNR is only 6-7 dB.

Finally, here's an example EEG spectrum with my eyes closed so that I was
exhibiting Alpha waves, which are near 10 Hz.  When it detected 10 Hz, I
programmed it to issue a "Forward" command.  The SNR is > 8 dB.

Weak Peaks:  In the screenshots above, the red line shows the current EEG spectrum.  The heavy black circle shows the spectral peak that my software algorithms have detected.  The black dashed line is the "background noise" from which the SNR is computed.  To be declared a legitimate detection, the peak must be 6 dB higher than the black dashed line (unfortunately, I don't show this on the plot...sorry!).  As can be seen, the 5 Hz and 7.5 Hz examples are not very strong (the SNR is only 6-7 dB).  Other peaks within the plots are very close to being the same size, which would cause false commands to be sent to the robot.  In my movie at the top of this post, there were several false commands.

Balancing Sensitivity with False Commands:  To reduce the number of false commands, I could raise my detection threshold above 6 dB.  Unfortunately, as seen in the first two spectrum plots above, my 5 Hz and 7.5 Hz peaks are usually pretty weak (< 7 dB).  Therefore, any attempt to raise my detection threshold above 6 dB would cause me to no longer detect my legitimate brain waves.  I know because this is exactly the tuning process that I tried.  Bummer!  So, if I want more reliable performance, I'll need to develop signal processing that's fancier than this simple FFT-threshold approach.  Future challenges!

Wrapping Up:  Even with the false commands seen in my movie, I was still able to command the robot to move around the table.  I could get it to go (roughly) where I wanted it to go.  And, I did it all with just my brain waves.  I think that this is pretty exciting!  Yay!  What are the next steps?  Well, now that I have this under my belt, maybe I can move on to controlling a flying fish, or maybe a quadcopter!  Do you have any other cool ideas for things I can control with my brain?

Coolness:  This hack got picked up by IEEE Spectrum as part of an article on OpenBCI.  Cool!  Check it out here.

More Coolness:  This hack also got picked up by Wired.  Fun!

Follow-Up: I got to share this hack with Joel and Conor of OpenBCI.  You can see their luck with controlling the robot here.

Another Follow-Up:  We used a similar approach to get a 5-person team to brain-control a swimming shark balloon.  It's cool.  Check it out here.