Monday, August 18, 2014

First Data with OpenBCI V3

It's here!  It's here!  The (near) future has arrived!  I have received an early OpenBCI V3 board from my friends over at OpenBCI.  Check out the photo below.  It's a little smaller than I expected.  Very cool!  Let's get it running and see what it can do!

The OpenBCI V3 board is smaller than I expected.  Looks great!

Note that the V3 board includes a built-in microprocessor.  This means that you no longer have to buy a separate Arduino to act as host.  That's pretty sweet.  OpenBCI gives you two choices of microprocessor: an ATmega328 (the 8-bit option) or a PIC32 (the 32-bit option).  While the power of the 32-bit PIC is appealing, I chose the ATmega because that allows me to program the OpenBCI board as if it were an Arduino Uno.  I hear that the 32-bit PIC version can also be programmed from the Arduino IDE, but the Uno is my friend, and I chose to stick with him.  

Another change with the V3 version is that it has a built-in Bluetooth module.  In fact, to maximize electrical safety for the user, the wireless Bluetooth link is now the only way to get data off the device in real-time (though it does have a built-in SD card for those looking to simply log data).  This is quite a change...and a change for the better, in my opinion.

OpenBCI says that the Bluetooth module is compatible with standard protocols (to enable connection to your mobile device) and that it has a special high-speed mode, if you have a mating BT module for your PC.  To enable these high-speed modes, OpenBCI includes a BT USB dongle, which is shown at the bottom of the picture below.

Using the new OpenBCI V3 board to record my ECG.  I used one disposable ECG electrode
on each wrist.  The OpenBCI board was powered by a 9V battery, which is in the black
battery case.  The OpenBCI board transferred the data to the PC via Bluetooth.
OpenBCI includes a BT dongle for the PC, which is shown in the bottom-left.

Once I got my hardware, I started in.  Unfortunately, as part of the deal with me getting this hardware so early, the software to run the hardware is not yet complete.  So, I had a little work to do.

When diving into new hardware, it's best to take baby steps.  Start from something that works and then add features incrementally, with lots of tests along the way.  This makes it much easier to identify and squash the bugs as they pop up.  For my first work with OpenBCI V3, here's my approach:

  1. Test the wireless link using pre-defined dummy data
  2. Test getting data from the ADS1299 using its built-in test signals
  3. Test getting real data from the ADS1299 by recording my ECG
  4. Test the full system by recording my EEG

Dummy Data:  To get this process started, Joel (of OpenBCI) provided some example code that exercised the wireless link using dummy data.  He even pre-loaded this software on to the V3 board for me.  So, all I had to do was plug in the USB BT dongle, connect a battery to the OpenBCI board, and I was good to go.  I started my Terminal program on my PC and was immediately interacting with the OpenBCI V3 board.  It correctly transferred the pre-defined dummy data.  Success!

Built-In Test Signals:  Building from this working code base, and building from OpenBCI's initial code for configuring the ADS1299 EEG chip, I added the ability to grab data from the ADS1299 and send it out the wireless link.  I started with the ADS1299's built-in test signals.  After some fiddling with 24-bit vs 32-bit number formats (and then discovering that Joel already programmed the solution for me), I got the nice square wave signal as shown below.  This proves that I can communicate with the ADS1299 chip and that I've got all the number formats correct.  Success again!

Data from my OpenBCI V3 board...this is a built-in test signal being generated
by the ADS1299 EEG chip.  Since it looks beautiful, it means that I have confirmed
that I can configure the chip and that all my number formats are correct.
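For those curious about the number-format fiddling mentioned above: each ADS1299 sample arrives as three bytes of 24-bit two's complement, which has to be sign-extended before it fits in an ordinary 32-bit (or larger) integer.  Here's a minimal sketch of that conversion in Python, for illustration only (the function name is mine, not OpenBCI's):

```python
def int24_to_int32(b):
    """Convert 3 big-endian bytes (24-bit two's complement, as sent by the
    ADS1299) into a signed integer."""
    value = (b[0] << 16) | (b[1] << 8) | b[2]
    if value & 0x800000:        # sign bit set -> negative number
        value -= 0x1000000      # extend the sign into the upper byte
    return value

print(int24_to_int32([0x7F, 0xFF, 0xFF]))   # 8388607 (full-scale positive)
print(int24_to_int32([0x80, 0x00, 0x00]))   # -8388608 (full-scale negative)
print(int24_to_int32([0xFF, 0xFF, 0xFF]))   # -1
```

Get this wrong and all negative samples read as huge positive numbers, which makes the built-in square wave look very strange indeed.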

ECG Data:  As you may know from my previous "getting started" post, I like to start my collection of real data by recording my ECG (ie, heart signals).  I do this because ECG signals are so much stronger and simpler than EEG signals.  Having strong signals makes it more obvious when the system is working correctly (and when it is not).  So, I got out my disposable ECG electrodes, put one on each wrist, and started recording.  As you can see in the plot below, I got a nice sample of my ECG.  My code for OpenBCI V3 might be rough, and it's definitely not feature-complete, but it does work.

My wrist-to-wrist ECG looks pretty good.

The sample of data shows that my heart rate was about 80 beats per minute, which is a little high for simply sitting in a chair at my computer.  Maybe I was just excited to be having success playing with the new hardware!  I'm like that.
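Estimating heart rate from an ECG trace like this is simple: find the tall R-peaks and average their spacing.  Here's a rough sketch in Python on synthetic data (this is not my analysis code, just an illustration of the idea):

```python
def heart_rate_bpm(ecg, fs, thresh):
    """Estimate heart rate by finding R-peaks (samples above thresh that are
    local maxima) and averaging the peak-to-peak intervals."""
    peaks = [i for i in range(1, len(ecg) - 1)
             if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]
    if len(peaks) < 2:
        return None
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic "ECG": one narrow spike every 0.75 s (ie, 80 bpm) at fs = 250 Hz
fs = 250
ecg = [0.0] * (10 * fs)
for beat in range(0, 10 * fs, int(0.75 * fs)):
    ecg[beat] = 1.0
print(round(heart_rate_bpm(ecg, fs, 0.5)))   # 80
```

Real ECG needs a little more care (baseline drift, T-waves sneaking over the threshold), but for clean wrist-to-wrist data this gets you in the right ballpark.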

So, returning to my four-step process described earlier, I've got 3 of the steps completed.  Before I do the last step -- collecting actual EEG data -- I'd like to revise the code a little more.  Right now, I cannot view the streaming data in real time because the data format is a little different than before.  To move forward, I need to adjust the code in my Processing GUI so that it can interpret the data packets and plot the data in real time.  Once I get that to happen, I'll hook up some electrodes to my head and maybe make my robot dance some more!  Wish me luck!

Follow-Up: Raw data and analysis code is here.

Sunday, June 8, 2014

Controlling a Hex Bug with my Brain Waves

Ever since my effort with OpenBCI began, I've been looking to control something with my brain.  Sure, a while back, I was successful in lighting an LED with my brain waves, but that's pretty simple.  I wanted something more.  And now I can do it.  I can control a robot with my mind!  Yes!


Approach:  My robot has just a few actions that it can do...turn left, turn right, walk forward, and fire.  To make this brain-controlled, I need a way to invoke these commands using signals from my brain.  Ideally, I'd just think the word "Fire!" and the robot would respond.  Unfortunately, those kinds of brain waves are too hard to detect.  Instead, I need to use brain waves that are easy to detect.  For me, "easy" brain waves include the Alpha waves (10 Hz oscillations) that occur when I close my eyes, as well as the brain waves that occur when I watch my blinking movies (a.k.a. visual entrainment).  So, my approach is to use OpenBCI to record my brainwaves, to write software to detect these specific types of brain waves, and to issue commands to the robot based on which brain waves are detected.

Here are all the pieces that you see in the video.
Hardware Setup:  The core hardware for this hack is similar to my usual OpenBCI setup: EEG electrodes, an OpenBCI board, an Arduino Uno, and my computer. Added to this setup is the Hex Bug itself and its remote control, which I hacked so that the remote can be controlled by an Arduino.  So, as shown below, my brain wave signals go from my head all the way to the PC.  The PC processes the EEG data looking for the Alpha waves or the visually-entrained waves.  If any are detected, it decides what commands to give the robot.  The commands are conveyed back to the Arduino, which then drives the remote control, which the Hex Bug receives over its usual IR link.

Here is the schematic of how the pieces work together.

EEG Setup:  I'm going to be measuring my Alpha waves and I'm going to be measuring the brain waves induced through visual entrainment.  Based on my previous experience, I know that both are best recorded using an electrode on the back of the head (at the "O1" position, if you're into your 10-20 electrode placement standard).  I do not need electrodes all over my head.  That's the only sensing electrode that I'm using.  That's it.  Of course, EEG also requires a reference electrode, which I put on my left earlobe.  And, finally, EEG often has a third electrode ("bias" or "driven ground"), which I placed on my right earlobe.

Looking at the Frequency of my Brain Waves:  As mentioned above, my approach is to control my robot by detecting Alpha waves and by detecting visually-entrained brain waves.  These are easily detectable because they occur at specific frequencies.  Alpha waves occur around 10 Hz and the visually-entrained brain waves occur at the blink rate(s) of whatever movies I use (my best results were from 5 Hz and 7.5 Hz movies). So, to control my robot, I will be looking for EEG signals at these frequencies: 5 Hz, 7.5 Hz, and 10 Hz.  I'm going to "look" for these frequencies by writing some EEG processing software that'll examine the frequency content of my EEG signal to see if these frequencies are present.


EEG Processing:  The flow chart above shows the steps that I use to process the EEG signal (my software is here).  Once the PC gets EEG data from the OpenBCI board, the first step is to compute the spectrum of the signal, which tells me the content of the EEG signal as a function of frequency.  I then search through the relevant part of the spectrum (4-15 Hz) to find the peak value.  I note both its frequency value and its amplitude.  In parallel, I also compute the average EEG amplitude across the 4-15 Hz frequency band.  This average value is my baseline for deciding whether my peak is tall (strong) or short (weak).  By dividing the amplitude of my peak by this baseline value, I get the signal-to-noise ratio (SNR) of the peak.  The SNR is my measure of the strength of the peak.  The outputs of the EEG processing, therefore, are two values: the frequency of the peak and the SNR of the peak.
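The peak-and-SNR computation can be sketched in a few lines.  This is a pure-Python illustration using a direct DFT (my actual GUI is in Processing and uses an FFT; all the names here are mine):

```python
import cmath, math

def spectrum_mag(x, fs):
    """Magnitude spectrum via a direct DFT (fine for short illustrative
    signals; real code would use an FFT)."""
    n = len(x)
    freqs = [k * fs / n for k in range(n // 2)]
    mags = [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]
    return freqs, mags

def peak_and_snr(freqs, mags, f_lo=4.0, f_hi=15.0):
    """Find the tallest peak in [f_lo, f_hi] and report its frequency plus
    its SNR in dB relative to the band's mean amplitude (the baseline)."""
    band = [(f, m) for f, m in zip(freqs, mags) if f_lo <= f <= f_hi]
    peak_f, peak_m = max(band, key=lambda fm: fm[1])
    baseline = sum(m for _, m in band) / len(band)
    snr_db = 20 * math.log10(peak_m / baseline)
    return peak_f, snr_db

# 2 seconds of a 10 Hz "alpha" sine wave sampled at 128 Hz
fs, n = 128, 256
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(n)]
freqs, mags = spectrum_mag(x, fs)
f, snr = peak_and_snr(freqs, mags)
print(f, snr > 6)   # 10.0 True
```

On this clean synthetic signal the SNR is huge; on real EEG, as you'll see below, it hovers uncomfortably close to the detection threshold.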


Deciding My Robot's Action:  Once my EEG processing finds the frequency and SNR of the peak in my EEG spectrum, I now have to decide how to act on that information.  After some trial and error, I settled on the algorithm shown in the flow chart above.  It's got three steps:
  • SNR Check:  First, I decide whether the current peak in the spectrum is legitimate, or if it is likely to be just noise.  I don't want to issue a command if it is just noise because then my robot will be taking all sorts of actions that I didn't intend.  That is not what I want.  So, to decide if the peak is likely to be legitimate, I look at the SNR of the peak.  If it has a big SNR, I'll accept it as a legitimate peak.  If it is too small, I'll take no further action.  Right now, my threshold for this decision is at 6 dB.  Setting a higher threshold results in fewer false commands (which would be good), but it also makes the system less sensitive to legitimate commands (which is bad).  This 6 dB threshold resulted in an OK (but not great) balance.
  • Frequency Check:  If the peak seems legitimate, I decide how to command the robot based on the frequency of the peak.  If the peak is between 4.5-6.5 Hz, I must be looking at the right-side of my 2-speed blinking movie (ie, the portion that blinks at 5 Hz), so the computer prepares the "Turn Right" command.  Alternatively, if the EEG peak is 6.5-8.5 Hz, I must be looking at the left-side of my 2-speed blinking movie (ie, the portion that blinks at 7.5 Hz), so it prepares the "Turn Left" command.  Finally, if the EEG peak is 8.5-12 Hz, it must be my eyes-closed Alpha waves, so the computer prepares the "Move Forward" command.
  • New Command Check:  Before issuing the command, I check to see whether this command is the same as the last command that was extracted from my brain waves.  If the latest command is different, I hijack the command and, instead, issue the "Fire!" command.  If the latest command is the same, I go ahead and issue the left / right / forward command like normal.  The reason for this hijack is that I have no other type of easily-detected brain wave that I can use for commanding the robot to fire.  This approach of issuing "Fire!" on every change in command seemed like a decent way of getting a 4th command out of 3 types of brain waves.
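The three checks above can be condensed into one small decision function.  This is a Python sketch (names are mine; whether the real code treats the very first detection as a "change" that triggers "Fire!" is my assumption):

```python
def decide_command(peak_freq, snr_db, last_cmd, snr_thresh=6.0):
    """Map a detected EEG spectral peak to a robot command: SNR gate,
    then frequency bins, then the fire-on-change rule.
    Returns (command_to_issue, new_last_cmd)."""
    if snr_db < snr_thresh:
        return None, last_cmd            # probably noise: do nothing
    if 4.5 <= peak_freq < 6.5:
        cmd = "right"                    # 5 Hz visual entrainment
    elif 6.5 <= peak_freq < 8.5:
        cmd = "left"                     # 7.5 Hz visual entrainment
    elif 8.5 <= peak_freq <= 12.0:
        cmd = "forward"                  # eyes-closed Alpha waves
    else:
        return None, last_cmd            # peak outside all command bins
    if cmd != last_cmd:
        return "fire", cmd               # hijack any change of command
    return cmd, cmd

cmd, last = decide_command(10.2, 8.0, last_cmd=None)
print(cmd)                               # fire (first detection is a change)
cmd, last = decide_command(10.2, 8.0, last)
print(cmd)                               # forward
```

The fire-on-change rule is the clever bit: it squeezes a fourth command out of only three detectable brain-wave types, at the cost of firing every time you switch direction.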
Putting It All Together:  As you can see in the movie, I was eventually able to get all of these pieces working together to allow me to command the Hex Bug using just my brain waves.  Of course, it didn't work the first time.  Even once I got all the hardware working, I still needed to tune a bunch of the software parameters (FFT parameters and the detection threshold) until I got something that worked somewhat reliably.  To help with this tuning process, I used the spectrum display that is in my Processing GUI.  Some screen shots are below.

Example EEG spectrum when I stared at the right side of my two-speed blinking
movie.  It induced 5 Hz brain waves.  I programmed 5 Hz to mean "Turn Right".
The SNR here is between 6 and 7 dB.

Here's an example EEG spectrum when I stared at the left side of my two-speed
blinking movie.  It induced 7.5 Hz brain waves.  When the GUI detected 7.5 Hz,
it issued a "Turn Left" command to the Hex Bug.  The SNR is only 6-7 dB.

Finally, here's an example EEG spectrum with my eyes closed so that I was
exhibiting Alpha waves, which are near 10 Hz.  When it detected 10 Hz, I
programmed it to issue a "Forward" command.  The SNR is > 8 dB.

Weak Peaks:  In the screenshots above, the red line shows the current EEG spectrum.  The heavy black circle shows the spectral peak that my software algorithms have detected.  The black dashed line is the "background noise" from which the SNR is computed.  To be declared a legitimate detection, the peak must be 6 dB higher than the black dashed line (unfortunately, I don't show this on the plot...sorry!).  As can be seen, the 5 Hz and 7.5 Hz examples are not very strong (the SNR is only 6-7 dB).  Other peaks within the plots are very close to being the same size, which would cause false commands to be sent to the robot.  In my movie at the top of this post, there were several false commands.

Balancing Sensitivity with False Commands:  To reduce the number of false commands, I could raise my detection threshold above 6 dB. Unfortunately, as seen in the first two spectrum plots above, my 5 Hz and 7.5 Hz peaks are usually pretty weak (< 7 dB).  Therefore, any attempt to raise my detection threshold above 6 dB would cause me to no longer detect my legitimate brain waves.  I know because this is exactly the tuning process that I tried.  Bummer!  So, if I want more reliable performance, I'll need to develop fancier signal processing beyond this simple FFT-threshold approach.  Future challenges!

Wrapping Up:  Even with the false commands seen in my movie, I was still able to command the robot to move around the table.  I could get it to go (roughly) where I wanted it to go.  And, I did it all with just my brain waves.  I think that this is pretty exciting!  Yay!  What are the next steps?  Well, maybe now that I have this under my belt, I can move on to control flying fish, or maybe a quadcopter!  Do you have any other cool ideas for things I can control with my brain?

Follow-Up:  This hack got picked up by IEEE Spectrum as part of an article on OpenBCI.  Cool!  Check it out here.
Follow-Up:  This hack also got picked up by Wired.  Fun!

Thursday, May 15, 2014

Arduino Control of a Hex Bug

Based on my previous success with visual entrainment, I'm moving forward with my plans to make a brain-computer interface (BCI) using my blinky lights.  To really kick this project into high gear, though, I need a good goal -- I need something that would be really fun to control with my brain.  Luckily, my OpenBCI friend Conor found these cool remote-controlled 6-legged robots that can walk around and fire a little gun.  Bingo!  As you can see below, today's post shows how to hack the robot's remote to make it controllable from an Arduino.  Once it is controllable from an Arduino, it's only one more step until it's controllable from my brain!



The Hex Bug Battle Spider:  The remote-controlled robot that Conor found is a Hex Bug Battle Spider.  They are available in two colors and you can have them do battle.  Hex Bug makes smaller and cheaper versions of this robot, but I believe that only the Battle Spider has the ability to do battle.  I'm looking forward to facing off cerebro-a-cerebro with Conor, so I'm sticking with the Battle Spider.

Hex Bug Battle Spider - My Hacking Target for Today

IR Remote Control:  The Hex Bug is commanded using an infrared (IR) remote control.  It is the remote control that I will hack so that the Hex Bug can be commanded from an Arduino.  As you can see in the picture below, I was so anxious to hack the remote that I never got a picture of it while it was still in one piece...I just couldn't wait to smell the solder!  Regardless, as you can see, the remote is simply a bunch of plastic pieces, a couple of coin cell batteries (not shown), and a printed circuit board (PCB).

The Hex Bug Infrared Remote Control (in Pieces)

Thank You, Test Points!  Flipping over the PCB, I was very pleased to see that this board has test points for everything.  Oh, the joy!  Because of these test points, it is much easier to probe the board with my multimeter or with an oscilloscope to figure out how this thing works.  The test points also make it much easier to attach wires to control this thing from the outside (like from my Arduino).  It turns out that there is a test point for each of the four user buttons on this board.  The relevant test points are shown in the picture below.  These test points will be the focus of my work.

All those test points enable easy hacking.  The test points
with the arrows are for the user buttons.

How the Buttons Work:  After a bit of probing of these test points, I learned that the buttons are used like most buttons in small devices like this (see "Button" demo by the Arduino folks for more info).  The buttons on this remote control are simply switches that are normally open-circuit.  When you press a button, the switch becomes closed-circuit.  The "low" side of each button is tied to ground.  The "high" side is connected to 3.3V via a pull-up resistor (probably inside the microcontroller).  The microcontroller is continually sensing the voltage on the high side of the switch.  When the button is not pressed, no current flows through it nor through the pull-up resistor, so the voltage seen by the micro is high.  When the button is pressed, current flows through the button, which drops the voltage seen by the micro.  As a result, the micro knows that the button was pressed.  Easy!

Hacking Approach:  Based on this discussion, it is clear that the microcontroller on the remote control knows nothing about the buttons...it only knows about the voltage being controlled by the buttons.  When one of those lines goes from high to low, the microcontroller thinks that a button has been pressed.  My hacking approach, therefore, is to wire the Arduino to the remote control so that the Arduino can pull the lines low for me.  This requires me to attach a wire to the high side of each button and to attach a wire to the remote control's ground.  Normally, I'll keep the Arduino's pins in a high-impedance state so that no current flows.  When I want it to "press" a button for me, I'll command the relevant pin to go into a low-impedance state to allow it to conduct current to ground.  Electrically, this will mimic the behavior of the buttons themselves.  No extra components will be necessary!

Connecting Ground:  OK, we're mostly done talking.  Now let's start soldering.  First, I connected a wire to the remote's ground.  After looking around the PCB, I decided that I liked the solder that was on the low side of the "fire" button.  So, as you can see below, I sneaked a black wire into that location and soldered it in place.

A blurry picture showing where I soldered a black wire to
attach to the remote control's ground.

Connecting Each Button:  Then, I flipped the board over and soldered a colored wire to each button's test point.

All of my wires are now soldered to the test points.

Snip a Pass-Through for the Wires:  While it is not necessary to do this, I like the idea of re-assembling the remote so that I can use it with my fingers (as if it were not modified) or so that I can use it with the Arduino.  To enable the reassembly of the remote, one simply has to cut a hole in the plastic housing to get the wires out.  I used a "nibbler" tool to cut a small hole.  As you can see below, it worked really well!

For extra credit, I used a nibbler to cut a hole in the plastic housing so that
I can get the wires out, even after I fully re-assemble the remote control.

Attach a Pin Header:  To ease the connection of these 5 wires to the Arduino, I decided to solder the free ends of the wires to a piece of basic pin header.  With these pins, I can easily insert the five wires as a single unit into the sockets on the Arduino board.

To make it easier to connect the wires to an Arduino, I attach the wires to
a basic pin header.

The Hacked Remote:  The picture below shows the hacked remote after I reassembled it.  I tested it by pressing the buttons with my fingers and the robot moved.  So far, so good!

My hacked remote control is now re-assembled and ready for testing.

Software:  If I'm going to command this robot from my Arduino, the Arduino needs software.  So, I plugged my remote into the Arduino (because the Arduino is really flexible, I used the Analog Input pins even though these signals are neither analog nor inputs...but that doesn't matter; you can use the Analog pins as digital inputs and outputs, too) and then began coding.  My code is available on my GitHub as "TestHexBugController".  This code tells the Arduino to listen to commands coming over Serial from the PC.  I assign one key on the PC's keyboard to each of the Hex Bug's four functions: "forward", "turn left", "turn right", and "fire".  When the Arduino receives one of these commands, it toggles the relevant pin to pull it LOW for 500 msec.  That's all it takes!

Using the hacked remote so that I can use an Arduino to control my Hex Bug via
commands entered from the PC.
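The Arduino side lives in "TestHexBugController"; the PC side of the protocol is just a key-to-byte lookup.  Here's an illustrative sketch in Python (the key and byte assignments are mine, purely hypothetical, not the ones in the real sketch):

```python
# Hypothetical mapping from PC keyboard keys to single-byte serial commands
# that the Arduino would act on by pulling the corresponding pin LOW.
KEY_TO_COMMAND = {
    "w": b"f",   # forward
    "a": b"l",   # turn left
    "d": b"r",   # turn right
    " ": b"x",   # fire
}

def command_for_key(key):
    """Return the serial byte for a key press, or None for unmapped keys."""
    return KEY_TO_COMMAND.get(key)

# With pyserial installed, the byte would then be written to the Arduino
# with something like:
#   import serial
#   port = serial.Serial("COM3", 115200)
#   port.write(command_for_key("w"))
print(command_for_key("w"), command_for_key("q"))   # b'f' None
```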

Testing it Out:  As you can see in the video at the top of this post, this setup works pretty well for controlling the Hex Bug.  I can make it walk anywhere and shoot its gun on command.  It's pretty fun.  If I cared to, I could now script a whole series of maneuvers.  If I attached a pen to the Hex Bug, maybe I could make it walk around on a big piece of paper so that it would draw out a funny picture.  That could be fun.  Or, I could combine it with some computer vision on my PC and have it chase my cat around.  That would definitely be fun.  Sadly, I don't know anything about computer vision.

Brain Control:  Really, though, my next step is to control this with my brain.  So, I'll attach my OpenBCI shield to this same Arduino and I'll have the Arduino pipe my EEG signal to the PC.  On the PC, I'll process the EEG signal and, if it detects the right brainwave signatures, I'll have the PC send robot commands back to the Arduino.  The Arduino will then convey those commands to the robot via the IR remote.  All of the hardware pieces are in place...now it's time to put it all together!

Follow-Up:  I finally did put all the pieces together.  I can now control the Hex Bug with my brain waves!

Sunday, May 11, 2014

EEG as WAV Files, Go Spectrograms!

OK, let's say that I just finished some cool new EEG experiment where I recorded my EEG response to watching cat videos while listening to the Pink Panther at half speed.  My next step would be to take a quick look at the data to get the overall big picture.  My favorite way of getting that overall view is to make a spectrogram (see example below).  My love for these oh-so-colorful plots runs deep.  The question is, how does one make spectrograms?  Well, in my opinion, if you don't have Matlab (and are afraid of Python), the next best way to make spectrograms is to use one of the multitude of audio editing software packages out there.  Many audio editing programs provide a spectrogram view.  This post is about getting EEG data into an audio program so that you can see your data.

A Spectrogram of EEG data that was Made in Matlab.  This shows data from my previous post,
where I was watching a movie with two different blink rates.  You can see how my brainwaves
entrained with the changing blink rate in the movie.

Converting to a WAV File:  The first step in using an audio program for EEG analysis is to convert one's EEG data into an audio file.  Since I usually work in Windows, I tend to convert all of my EEG data into WAV files.  I choose WAV because it is uncompressed.  I never choose MP3 because it is very unclear what its "perceptual coding" would do to my precious brainwave data.  So, a WAV file is what I would recommend.  But how do you get EEG data into a WAV format?  If your EEG data is in text format (such as is logged by the OpenBCI GUI), you could use my Processing sketch "ConvertToWAV".  This sketch will read in an OpenBCI log file and write each EEG channel out as its own WAV file. You can get the sketch on my GitHub.
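If you'd rather roll your own converter, the whole trick fits in a few lines using Python's standard-library wave module.  This is a sketch of the same idea as my "ConvertToWAV" Processing sketch, not the sketch itself (names are mine):

```python
import math
import struct
import wave

def eeg_to_wav(samples, fs, out_path):
    """Write one channel of EEG samples (floats) to an uncompressed 16-bit
    mono WAV file, scaling the data to use the full int16 range."""
    full_scale = max(abs(s) for s in samples) or 1.0
    ints = [int(32767 * s / full_scale) for s in samples]
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)      # mono: one WAV file per EEG channel
        w.setsampwidth(2)      # 16-bit samples (uncompressed PCM)
        w.setframerate(fs)     # the EEG sample rate, e.g. 250 Hz
        w.writeframes(struct.pack("<%dh" % len(ints), *ints))

# Demo: 1 second of a 10 Hz sine "EEG" sampled at 250 Hz
fs = 250
samples = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
eeg_to_wav(samples, fs, "eeg_ch1.wav")
```

Note that the WAV header stores the EEG sample rate (say, 250 Hz), so audio programs will happily treat your brain waves as very low-pitched audio.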

Audacity:  Once the data is in WAV format, you can open it in any audio program.  A popular (and free!) audio editing program is Audacity.  While it is not my favorite audio editing program, it is perfectly sufficient for working with EEG data.  After opening your EEG data, the trick is to figure out how to switch the display from waveform to spectrogram.  The screen shot below shows how to do it.

Changing to Spectrogram View in Audacity

Once Audacity is in spectrogram mode, you need to zoom in on the vertical axis in order to see the interesting EEG features, which are usually focused in the lower frequencies.  In Audacity, you zoom simply with a click-and-drag on the vertical axis.  Then, after manipulating the spectrogram settings under the "Preferences" menu, you can get a spectrogram like the one shown below. While the color scheme hurts the eyes a bit, this spectrogram is good enough to see the same kind of EEG entrainment as seen in my original Matlab plot. Furthermore, the tools in Audacity let you further analyze the EEG data through zooming, filtering, and amplifying; and, if you change the file's sample rate to increase the playback speed, you can even use Audacity to listen to your own brain waves!  Audacity is definitely a useful tool for working with EEG data.

In Audacity, a Spectrogram of my EEG Data

The spectrogram settings that I used are shown in the screen shot below.

My Display Settings for Making EEG Spectrograms in Audacity.  I changed
the Window Size, the Gain, and the Range.

Cool Edit Pro:  I first started getting into spectrograms in the late 90's because this is when I started working with audio and music on the computer. What got me hooked on spectrograms was a piece of shareware called Cool Edit.  It was a stupid name for an otherwise outstanding program.  It was so useful that I spent the extra dollars and bought its upgrade -- Cool Edit Pro. Cool Edit Pro has a *great* spectrogram display, as shown below.  Unlike Audacity, which requires lots of manipulation of the spectrogram settings to get a useful view, the Cool Edit Pro display always seems just right.  Unfortunately, Cool Edit Pro isn't available anymore -- it was bought by Adobe in the early 2000s and became Adobe Audition.  Audition is also fine for making spectrograms (I have only used up to Audition 3.0), but it is expensive.

An EEG Spectrogram in Cool Edit Pro V1.2a.  It's an old school program that totally rocks.

In Cool Edit Pro, the only display parameter that you need to change is the "Resolution" (ie, FFT size).  You do that under the "Settings" menu.

My settings for viewing EEG spectrograms in Cool Edit Pro.
I changed the Resolution value.

Raven Lite:  A third option for making spectrograms is a bit more obscure.  A bunch of years ago, I came across a program called "Raven Lite", which is produced by the Ornithology Lab (ie, bird science) at Cornell University.  The "Lite" version is free.  You can download it and immediately use it for spectrograms, though it is crippled in other ways until you email them for a free (non-commercial) key. What I really like about Raven is that, as shown in the screen shot below, its spectrogram controls are right on the main window for easy manipulation.  Also, I like its color map options way better than what is available in Audacity.  Finally, Raven is one of the few programs that let you see both the spectrogram view and the waveform view at the same time (not shown).  It is really nice to have that capability.

Raven Lite 1.0 from the Cornell Laboratory of Ornithology. It's a pretty good viewer.  The settings for
the display are right here in the main window.

Other Options:  Because I have Matlab and Cool Edit Pro (and Audacity and Raven) I haven't spent a lot of time looking at other options.  Does Garage Band offer a spectrogram view?  Is there a plug-in for iTunes or Windows Media Player that gives spectrograms?  I'm curious to hear what you folks use.  Drop a comment and let me know!

Saturday, May 10, 2014

Controlling Entrainment Through Attention

In a previous post, I showed that I could induce (entrain) brain waves at different frequencies simply by staring at blinking movies playing on my computer.  Having demonstrated this basic feasibility, my goal now is to exploit this phenomenon to make a brain-computer interface (BCI) to control future hacks.  My idea is to play two blinking movies simultaneously -- one at a slow speed and one at a fast speed.  I'm hoping that my brainwaves will only entrain with the blinking from the one movie that I choose to focus on.  Does my brain work this way?  Will my brain successfully reject the blinking from the movie that I'm ignoring?  Let's find out!



Simultaneous Blinking at Two Speeds:  Previously, I made some blinking movies where the whole screen would blink black or white at a given speed.  To make this idea work for a BCI, I want my screen to blink at two different rates at the same time.  So, as you can see in the video above, I made the left side of my screen blink at one rate while the right side of my screen blinks at a different rate.  I'm hoping that, if I focus my attention on the left side of my screen, my brainwaves will only become entrained at the left-side blink frequency, whereas if I were to focus on the right side of the screen, my brainwaves would follow the right-side blink frequency.

Swapping Sides:  To help with this test, I wanted to remove any effect of turning my head to change my gaze between the two sides of my screen.  So, in creating my dual-rate blinking movie, I had the movie automatically swap sides every 20 seconds.  As a result, it starts with fast blinking on the left and slow blinking on the right.  After 20 seconds, it swaps so that slow is on the left and fast is on the right.  It does this swap a few times.  The Matlab code that I used to make these movies is here.

I created a movie where the left and right sides blink independently -- left is fast and right is slow.
For this test, the two blink rates swapped sides every 20 seconds.

Choosing my Blink Rates:  Based on my previous results, it looks like my brain (coupled with my computer's limited ability to blink steadily) is most easily entrained in the 6-10 Hz frequency range. So, for this dual-rate movie, I chose "slow" to toggle between black and white at 10 Hz (ie, a 5 Hz white-white rate) and "fast" to toggle at 15 Hz (ie, a 7.5 Hz white-white rate).  In truth, I made a bunch of movies at different rates, but the 10/15 Hz movie worked the best, so I'll only show its results.
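As a sanity check on those numbers: the entrainment frequency is the white-to-white rate, which is half the toggle rate, since one full cycle needs one white frame and one black frame.  A one-liner makes the bookkeeping explicit:

```python
def entrainment_hz(toggle_hz):
    """One full white-black cycle takes two toggles, so the white-to-white
    (entrainment) frequency is half the toggle rate."""
    return toggle_hz / 2

print(entrainment_hz(10), entrainment_hz(15))   # 5.0 7.5
```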

EEG Setup:  With my movies prepared, I gathered up my EEG stuff.  Like usual, I used my OpenBCI board and a few cup electrodes with Ten20 paste. I put one electrode on the left side of my forehead (Fp1), one on the left side of the back of my head (O1), and one on the right side of the back of my head (O2).  Using the impedance measuring feature, my impedances were 11 kOhm, 67 kOhm, and 28 kOhm (I seem to have an on-going problem getting a low impedance at O1).  My reference electrode was on my left ear lobe and my bias electrode was on my right ear lobe.  My OpenBCI board was connected to the PC via USB and I was logging data using my OpenBCI GUI in Processing.  For this test, I also used my photocell to confirm that my computer's blinking was sufficiently steady.

I used my OpenBCI V1 board to record my EEG into the computer.  I also attached a
photocell to confirm that the screen was blinking at the right rate.

Results:  After setting everything up, I started recording my EEG data and then I started playing the dual-rate blinking movie.  It was night time, so my room was pretty dark.  I focused my attention at the center of the left-hand movie. As described above, the left movie toggled fast-slow-fast-slow every 20 seconds, while the right movie played the opposite -- slow-fast-slow-fast.  Spectrograms of the EEG signals from my head are shown in the figure below.  As you can see, there was no entrainment seen in the signals from my forehead (as expected) but there was entrainment in the back of my head (also as expected).  The best entrainment was seen on the left side of my head.

Spectrograms of my EEG signals recorded while watching my dual-rate blinking movie.  The left-back
of my head exhibited the strongest entrainment to the blinking of my movie.  


Only Seeing the Left Blink Rate:  Because the left-back of my head (O1) gave the best entrainment, let's just focus on its results.  The figure below shows just the results for the left-back of my head.  Note how, once the movie starts playing, my EEG signals seem to toggle between a fast blink rate (~7.5 Hz) and a slow blink rate (~5 Hz).  This exactly follows the white-white blink rate of the left movie.  So, my brainwaves successfully entrained to the movie that I was watching.  Most importantly, there seems to be no signature in my EEG data from the blinking of the right movie.  This is success!

Spectrogram of my EEG data from the left-back of my head (O1) while watching my dual-rate
blinking movie.  I was focused just on the left movie.  Because of this focus, my brainwaves
appear to have entrained only with the left movie's blink rate.
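To sanity-check what that toggling should look like in a spectrogram, here is a Python sketch that synthesizes a toy "entrained" signal (the 2:1 signal-to-noise amplitudes are made up for illustration) and tracks the dominant 3-12 Hz peak in 4-second FFT windows, which is the same basic computation as the spectrogram:

```python
import numpy as np

fs = 250.0                       # OpenBCI sample rate
seg = 20.0                       # seconds per attention segment
rates = [7.5, 5.0, 7.5, 5.0]     # apparent W-W rates while watching the left movie

# Toy "entrained EEG": a sinusoid at the attended rate plus noise
rng = np.random.default_rng(0)
sig = np.concatenate([
    2.0 * np.sin(2 * np.pi * f * np.arange(int(seg * fs)) / fs)
    + rng.normal(0.0, 1.0, int(seg * fs))
    for f in rates
])

# Crude spectrogram: FFT of non-overlapping 4 s windows, peak in 3-12 Hz
win = int(4 * fs)
freqs = np.fft.rfftfreq(win, 1 / fs)
band = (freqs >= 3) & (freqs <= 12)
peaks = [freqs[band][np.argmax(np.abs(np.fft.rfft(sig[i:i + win]))[band])]
         for i in range(0, len(sig) - win + 1, win)]
# peaks toggles 7.5 -> 5.0 -> 7.5 -> 5.0, five windows at a time
```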

Purposely Shifting My Attention:  OK, so I've demonstrated that my mind can successfully ignore one of the movies.  That's really good.  But, maybe I'm just biased to looking left.  To really make this work for a BCI, I need to be able to shift my attention to either movie and have my brainwaves follow.  So, for my 2nd test, I played the same movie again.  But, this time, when the movies swapped sides every 20 seconds, I switched my attention to follow the movie that blinked faster.  This means that I started by watching the left movie, then I watched the right, then left, then right.  My EEG response is shown below.  Note that I showed strong entrainment and, most importantly, that my brainwaves only show the fast blink rate (7.5 Hz).  So, by shifting my attention to follow the faster movie, I successfully rejected the effect of the slower blinking movie.  Success again!

Spectrogram of my EEG data from the left-back of my head (O1) while watching my dual-rate blinking
movie.  While watching the movie, I switched my attention between left and right to follow the movie
 that blinked faster. Because of this focus, my brainwaves remained entrained only at the faster rate.

All the Elements are In Place:  It looks like I now have the elements in place for a 3-state BCI.  If I don't look at the movie at all, I get State 1: "Nothing". If I watch the blinking of the left movie, I get State 2: "Left".  If I watch the blinking of the right movie, I get State 3: "Right".  It may be possible to further divide my screen to get more blinking regions to add more BCI states. Maybe that's a good experiment for the future. Right now, though, I think that I'm going to turn my attention to a little robot that I got (thanks for the pointer Conor!) to see if I can control it with visual entrainment.  This is gonna be fun!
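A minimal sketch of how those three states might be detected from the O1 signal, assuming the 7.5 Hz and 5 Hz white-white rates used above (the SNR measure and the threshold of 4 are illustrative guesses on my part, not a tuned classifier):

```python
import numpy as np

def classify_state(window, fs=250.0, f_left=7.5, f_right=5.0, thresh=4.0):
    """Toy 3-state SSVEP detector: compare the FFT amplitude at each
    stimulus frequency against the nearby background, then pick
    "Nothing", "Left", or "Right"."""
    spec = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), 1 / fs)

    def snr(f):
        peak = spec[np.argmin(np.abs(freqs - f))]
        nearby = (np.abs(freqs - f) > 0.5) & (np.abs(freqs - f) < 3.0)
        return peak / np.mean(spec[nearby])

    s_left, s_right = snr(f_left), snr(f_right)
    if max(s_left, s_right) < thresh:
        return "Nothing"
    return "Left" if s_left > s_right else "Right"
```

Feeding this a few seconds of EEG at a time would give one Nothing/Left/Right decision per chunk, which is the basic loop of the 3-state BCI.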

Follow-Up:  Interested in getting the EEG data from this post?  Try downloading it from my github!

Follow-Up:  I successfully used visual entrainment to control a six-legged robot!

Wednesday, May 7, 2014

Measuring Video Blink Rate with an Arduino

In my previous post, I used blinking videos on my computer to entrain my brain waves.  A key question, though, was whether my computer could play those blinking videos steadily.  If the blinking isn't steady, it won't entrain brainwaves that are easily detected.  So, in this new post, I show how I hacked a photocell and my Arduino to measure the blink rate that my computer is actually producing.  It's a pretty simple (and cheap!) setup and, as you'll see below, its data explains some of the important findings in my EEG data!

Measuring the Blink Rate From the Movies Played Back by my Computer

Using a Photocell:  My approach to measuring the video blink rate is to quickly and continuously measure the light produced by my computer screen.  I chose to use a photocell, mostly because I had one that came with my very first Arduino.  To learn how to use a photocell, I followed the tutorial at Adafruit.  It explains what a photocell is and it explains exactly how to hook it up to an Arduino.  The key is that you connect a photocell and a 10K resistor in series.  Together, they form a voltage divider.  Then, you connect one end to +5V and the other end to ground, and you connect the junction between the photocell and the 10K resistor to one of the Arduino's analog input pins.  Pretty easy!
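As a rough illustration of the divider math (the bright/dark photocell resistances below are ballpark guesses; the exact values depend on the particular photocell):

```python
def divider_counts(r_photo_ohm, r_fixed_ohm=10_000, vcc=5.0, adc_max=1023):
    """Arduino ADC reading for a photocell (on the +5V side) in series
    with a fixed 10K resistor to ground, tapped at the junction.
    More light -> lower photocell resistance -> higher reading."""
    v_out = vcc * r_fixed_ohm / (r_photo_ohm + r_fixed_ohm)
    return round(v_out / vcc * adc_max)

bright = divider_counts(1_000)    # ~1K in bright light -> 930 counts
dark = divider_counts(100_000)    # ~100K in the dark   -> 93 counts
```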

Wiring It Up:  To connect all of the bits together, I needed to solder a few things.  First, I gathered my components -- the photocell, some wire, and some shrink tube (to keep the soldered wires from shorting to each other).

The Components: A Super-Cheap Photocell, some Wire,
and some Shrink Tube.  The 10K resistor is not shown.

Then, I soldered the wires to the legs of the photocell and insulated them with the shrink tube.  The photo below shows the components after this assembly.  Looks decent enough.

Fully Assembled.  This is the first version that I tried.  Notice that the back
and sides of the photocell are exposed.  This turned out to be bad.

Unfortunately, when I hooked it up to my Arduino, I found that I was not seeing any change in the light level from my computer screen.  After some playing around, I found that the photocell is sensitive to light from the back and sides, in addition to being sensitive to light from the front.  So, as seen in the photo below, I added another layer of shrink tube to block out the light entering from the back and sides.

Modified Assembly.  I added more shrink tube to wrap around the sides and
back of the photocell.  You have to keep out that light!

Mounting Everything:  Once I had my photocell on those long(ish) lead wires, I connected it to the Arduino as discussed on the Adafruit site.  I then needed a way to hold the photocell close to the computer screen so that I could measure its blink rate.  As shown in the photo below, I found that my adjustable soldering fixture (sometimes called a "3rd Hand" fixture) works really well.  It works best if you position the photocell to be VERY close to the computer screen.

I held the photocell to the computer screen using a "3rd Hand" soldering fixture.
To record the photocell signal, I used one of the Analog Inputs available on the
Arduino that is the host for my OpenBCI shield.

Position the photocell to be VERY close to the screen.

Arduino Software:  If I'm going to use my Arduino to read the photocell, I need some software for the Arduino.  My first step was to use the built-in Arduino example called "AnalogInOutSerial".  I then extended this program to report the actual resistance of the photocell under different lighting conditions ("ReadPhotocellResistance").  While either of these programs works fine to read the photocell, neither is clocked to read the values at a steady pace.  If the sampling isn't steady, there's no way to know if the video blinking itself is steady.  To fix this, you need to set up an Arduino timer.  Or, you could...

Integrate with OpenBCI:  The OpenBCI shield generates data packets at a very precise rate (I usually configure mine to sample at 250 Hz).  It could act as the clock to drive the Arduino to sample the photocell steadily.  So, I modified the OpenBCI Arduino sketch to read one of the analog input pins every time that it receives data from the OpenBCI shield.  It then appends this extra data value to the OpenBCI data packet and sends it to the PC.  Finally, I modified my OpenBCI GUI to receive the extra data and to include it in its log file.

Results:  I used this system to record the blinking produced by the blinking movies from my previous post.  Each movie was about 20 seconds long and each movie blinked at a different rate.  The digitized photocell values are shown in the figure below as raw counts from the Arduino.  Clearly, this graph is a bit too zoomed out to see much of interest (though you can see the unsteady amplitude at the fastest speed on the right).  We need to zoom in to see more detail...

Sample values recorded from the photocell by the Arduino's analog input pin. I played
my 10 whole-screen-blinking movies.  Each movie is about 20 seconds long.  Each movie
has a different blink rate -- from a 1 Hz white-to-white blink rate up to a 10 Hz w-w rate.

Zooming-In:  Excerpts from three of the movies are shown below.  In these plots, you can see that the light pulses recorded during the 3 Hz and 10 Hz movies look to be steadily paced, whereas the pulses in the 7 Hz movie look much more irregular.  Based on this qualitative view, I'd say that the irregularity of the 7 Hz movie might cause complications when used for EEG experiments.

Zoomed-In waveforms recorded from the photocell during my blinking movies.  Excerpts at three
different blink rates are shown.  The red and blue dots show features that I used to quantify each
movie's blink rate.  Note that the time scale is different for each of the movies so that you always
see 4 periods, despite their increasing speed.

Measuring the Blink Rate:  To better assess the steadiness of each movie, I set up a routine to quantify the blink rate on a blink-by-blink basis.  I did this by, first, computing the mean sensor value for the whole recording.  This is my threshold for deciding whether the screen is "white" or "black".  This threshold value is shown by the horizontal black line in the excerpts above.  Then, I detected when the signal crossed this threshold.  Each threshold crossing is shown as a blue dot in the figures above.  To compute the blink rate, I compute the difference between the dots.  The "white-to-white" blink rate is the difference between the red dots.  Alternatively, to get the rate at which the screen merely changed (either from white-to-black or black-to-white), I measured the difference between the blue dots.
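That threshold-crossing method can be sketched in Python (the 250 Hz sample rate matches my OpenBCI configuration; treating the rising crossings as the red "white-to-white" markers is an assumption on my part):

```python
import numpy as np

def blink_rates(samples, fs=250.0):
    """Blink rates from photocell samples: threshold at the mean,
    find the threshold crossings, and difference the crossing times."""
    x = np.asarray(samples, dtype=float)
    above = (x > x.mean()).astype(int)
    # All crossings (blue dots): any white/black or black/white change
    crossings = np.flatnonzero(np.diff(above) != 0) + 1
    any_rate = 1.0 / np.diff(crossings / fs)
    # Rising crossings only (red dots): the white-to-white rate
    rising = crossings[np.diff(above)[crossings - 1] > 0]
    ww_rate = 1.0 / np.diff(rising / fs)
    return any_rate, ww_rate
```

On a clean 3 Hz W-W square wave, any_rate comes out near 6 Hz and ww_rate near 3 Hz; on a real recording, the scatter in these values is a direct measure of the playback unsteadiness.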

Blink Rate Throughout the Test:  The plot below shows the results of quantifying the blink rate throughout the test.  In red, the plot shows the white-to-white blink rate.  In blue, I show the blink rate from both transitions.  As expected, the blink rate counting both transitions is twice as fast as the blink rate when counting just from white-to-white.

Blink Rate Measured for my 10 Movies.  The measured blink rate generally follows the expected blink
rate, though the measured blink rate exhibits unsteadiness at the faster speeds. The blink rate when
counting both transitions (white-to-black and black-to-white) is especially unsteady at the higher speeds.

Unsteadiness:  As can be seen in the plot above, the blink rate is pretty steady for the first four movies (ie, speeds of 1-4 Hz W-W).  For the 5th movie (5 Hz W-W), the plot above starts to look messier, especially the blue line.  This means that the system is not playing back the blinking movie smoothly.  As we get into the faster movies (6-9 Hz), the blue line gets extremely messy.  Clearly, the system is unable to keep a steady pace of white-to-black and black-to-white transitions (the blue line), though the white-to-white period (the red line) isn't as bad.  Funnily enough, at 10 Hz, the white-to-white blink rate gets very stable again.  It seems that at 10 Hz, the individual movie frames must be well-aligned with the natural update rate of the video system on my computer.

Relationship to EEG Data:  The whole purpose of this investigation was to see if my EEG results from my previous post (copied again below, for convenience) were reflecting properties of my brain, or if they were reflecting artifacts from imperfections in my computer's movie playback.  My main question with my EEG data is why I exhibited no video-entrained EEG signals above 10 Hz.  Well, looking at the graph of the computer's blink rate (above), we see that the video blink rate becomes extremely unstable for any frequency above 10 Hz.  My computer, in other words, was unable to generate steady visual stimulation above 10 Hz.  Without stable stimulation, my brain had nothing to entrain with.  Therefore, these limitations in my video system mean that I cannot declare either way whether my brain can entrain with visual stimuli at speeds greater than 10 Hz.  With a more stable video system, maybe I could entrain with the faster blinks.

EEG data shown in my previous post.  This is the signal recorded from the back of my head (reference
on left ear) when staring at my blinking movies.  The signals marked by the blue arrows seem to indicate
periods when my brain entrained with the video on every transition of white-to-black and black-to-white.
The periods marked by the red arrows seem to indicate periods when my brain entrained on just the
white-to-white blink rate.

Next Steps:  With this system, I have proven that I can assess the steadiness of my video playback system.  Steady playback is critical to inducing visual entrainment of brainwaves.  So, as I move forward with trying to create a BCI based on visual entrainment, I can use this synchronized photocell recording to confirm that the video stimulation is sufficient to (hopefully) induce EEG responses.  Let the development of the visual BCI begin!

Sunday, May 4, 2014

Inducing Brain Waves with Visual Entrainment

A while back, I had a friend come over and I measured his EEG in response to staring at a blinking light.  We saw (as we hoped) that his brainwaves oscillated in sync with the blinking of the light.  I thought that this visual entrainment (aka "steady-state visual evoked potential") was pretty cool.  Since then, I've learned that it can be used as the basis for a brain-computer interface (BCI).  Because I'm still searching for a good BCI paradigm, I decided to return to my exploration of visual entrainment.  Today, I'm going to show how I successfully used visual stimuli to induce brainwaves at different frequencies.  As a result, I can now see a good avenue for an EEG-based BCI.  Yes!  Let's go!

Inducing SSVEP Using a Toggling Checkerboard Pattern on my Computer Screen
Goal:  My goal today is to use visual stimuli to induce brainwaves across a range of frequencies.  Because I want to use this for a BCI, I'm trying to determine what kind of visual stimuli I should use and what EEG frequencies I can induce.  What does it take to make this work reliably?

Visual Setup:  In my previous post, my visual stimulation was simply a blinking head-lamp.  It was effective (and really bright!), but I had no control over its blink frequency.  As a result, I also had no control over the frequency of the brainwaves that it induced.  So, for today's test, I needed to get fancier.  I ditched the head-lamp and, instead, created a series of blinking movies that I could play back on my computer.  I controlled the "blinking rate" by saving my movies at different frame rates.  What exactly did the movie look like?  Well, at first, because of a paper that I read in the VEP literature, my movie used the checkerboard pattern shown in the picture above.  The movie toggled back-and-forth between this image and the inverse image (swap blacks and whites).  While this worked OK, I later switched to a simpler movie (code here) where the screen was simply all-white or all-black.  That seemed to work better.
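The linked code is Matlab; here is the frame-rate trick sketched in Python, assuming (my guess at the approach) that each movie is a repeating two-frame white/black pair whose playback frame rate sets the blink speed:

```python
def movie_params(ww_hz, duration_sec=20):
    """Frame rate and frame count for a two-frame (white, black) blink
    movie: playing the pair at fps = 2 * ww_hz gives one full
    white -> black -> white cycle every 1/ww_hz seconds."""
    fps = 2 * ww_hz
    n_frames = duration_sec * fps
    return fps, n_frames

# The ten test movies: 1 Hz through 10 Hz white-to-white blink rates
params = [movie_params(ww) for ww in range(1, 11)]
```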

EEG Setup:  Once I made my movies, I set myself up with my EEG system (OpenBCI).  I used my usual gold cup electrodes with Ten20 EEG paste.  I put one electrode on the back of my head (near O1) and I put another electrode on my forehead.  My reference electrode was on my left ear lobe and my bias electrode was on my right ear lobe.  Using the impedance-measuring feature of OpenBCI, the electrode on my forehead had an impedance of 24 kOhm and the one on the back of my head was about 65 kOhm.  I couldn't seem to get the back electrode to a lower value.

Channel 1 was on my Forehead, Channel 2 was on the back of my head.
My left ear lobe was my reference.  My right ear lobe was the bias.

I used my OpenBCI V1 board with Ten20 Paste.
I think that those owl napkins are fun!

Test Method:  Once I got everything set up, I launched the OpenBCI GUI in Processing and started an EEG recording.  To play back my homemade blinking movies, I opened up Windows Media Player and set it to full-screen mode.  I had ten movies, with each movie blinking at a different rate.  I had WMP play all 10 movies continuously in sequence.  Each movie was 20 seconds long, so the whole test took about 200 seconds.  It was nighttime when I did this test and my room was dimly lit.  I tried to stare at the screen and I tried to only blink my eyes at the transition between the different blinking rates.

Results, Checkerboard:  As usual, my preferred way to view the data is to make spectrograms.  In the figure below, the top plot is the data from my forehead and the bottom plot is the data from the back of my head.  From my forehead, there is nothing interesting except my eye blinks.  From the back of my head, we see several interesting features, which I've marked with blue and white arrows.  Note that these interesting features change every 20 seconds, which is the same as my 20-second movies.  It seems clear to me that these features are my brainwaves responding to changing of the blink rate in my movies.  Excellent!


Spectrogram of EEG Signal Recorded While Watching the Blinking Checkerboard Pattern.
The top plot is the signal from my forehead.  The bottom plot is from the back of my head.
"W-B" is the rate at which the movie switched from either white-to-black or black-to-white.
"W-W" is the rate if you measure just from white-to-white.

Entrained with the Blink Rate?  Looking at the three blue arrows, it appears that I have entrained brainwaves at 2 Hz, 4 Hz, and 6 Hz.  At these times, any given square in my movie was blinking at 1 Hz, 2 Hz, and then 3 Hz, if you count from white period to white period ("W-W").  Because I have entrained brainwaves at 2x the white-to-white frequency, it suggests that it is NOT white-to-white that matters, but that it is the transition between white/black or black/white that matters.  At least, that is what is implied for these three (out of 10) cases for the checkerboard stimuli.

Complications:  While that would be a fine conclusion, why does this rule not continue through the other 7 cases in this checkerboard test?  Why does it only work for the three cases with the blue arrows?  The cases with the white arrows do show some sort of EEG response, but not at any frequency that makes sense given the speed of my movies.  What is going on?  I've got two possible explanations: (1) either my movies are not playing back reliably during these other cases, or (2) the checkerboard pattern is too complicated to be a good starting point for learning about my brainwaves.

Modifying the Test:  Of these two possible explanations, it's easier for me to simplify the checkerboard than it is for me to fix the reliability of my movie playback.  So, I changed my movies so that the whole screen is either all black or all white.  Hopefully, this simpler visual stimulus will make my EEG response easier to understand.

Results, Whole-Screen Blinking:  After recording my EEG while staring at the new movies, the spectrograms of my data are shown below.  Again, all of the interesting action is in the back of my head.  The bottom plot shows that I got good entrainment of my brainwaves for nearly *all* of the new movies.  I'm very pleased.  I'm also very curious about the jump between the cases marked with blue arrows versus the cases marked with red arrows.  What is happening here?

Spectrogram of EEG Signal Recorded While Watching the Whole Screen Toggle White or Black.
The top plot is the signal from my forehead.  The bottom plot is from the back of my head.

What Blink Rate Matters?  Looking at the first half of this plot, the blue arrows indicate cases that have results similar to the checkerboard data shown earlier.  Here, my brain seems to respond to every transition from white to black and from black to white (aka, the "W-B-W" speed).  But, for the second half of this plot, when the blinking is faster, it looks like my brainwaves follow the slower rate resulting from just the white-to-white frequency ("W-W").  Based on this weird result, I'm thinking that my brain doesn't actually care so much about whether the stimulus is W-B-W or W-W...it is simply sensitive to rhythmic visual stimuli in a certain frequency range.  I'm thinking that, whatever rhythmic stimulus falls in this frequency range, my brain will become entrained with it.

Quantifying Entrainment vs Frequency:  If it's simply the frequency that matters, it would be good to see which frequencies yield the strongest entrainment.  Sure, the spectrograms above suggest which frequencies are best, but I took the next step and actually measured the EEG response at each of the stimulation frequencies.  The plot below shows the EEG amplitude that I measured for each of the visual blinking frequencies.  Note that there are two lines: one counting based on the white-to-white frequency (blue line) and one counting all the white/black and black/white transitions (red line).  This graph suggests that I yield decent responses in the 6-10 Hz frequency range.  So, if I'm looking to use visual entrainment for a BCI, I should focus on the 6-10 Hz band.

Amplitude of EEG Signals Induced by Visual Entrainment.
My best responses seem to be in the 6-10 Hz band.
That could be a good target frequency range for use in a BCI.
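The measurement behind that plot can be sketched as a single-bin FFT amplitude estimate.  This is my Python rendition of the idea (the Hann window and full-segment FFT are my choices; the post doesn't show the actual analysis code):

```python
import numpy as np

def eeg_amplitude_at(sig, f_target, fs=250.0):
    """Amplitude (in the signal's own units) of the sinusoidal
    component at f_target, from a Hann-windowed FFT of the segment."""
    w = np.hanning(len(sig))
    spec = np.abs(np.fft.rfft(sig * w)) * 2.0 / np.sum(w)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f_target))]

# Sanity check: an 8-unit sinusoid at 7 Hz should measure as ~8
fs = 250.0
t = np.arange(int(20 * fs)) / fs
amp = eeg_amplitude_at(8.0 * np.sin(2 * np.pi * 7.0 * t), 7.0, fs)
```

Running this at each movie's W-W and W-B-W frequency over the corresponding 20-second segment would produce the two lines in the plot.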

Computer Could be Limiting my Performance:  As mentioned earlier, all of these results could be confounded by the possibility that my computer cannot reliably and steadily refresh my screen.  Perhaps it can reliably handle the frequencies at 10 Hz and below, but is not steady above 10 Hz.  Perhaps that's why my apparent response above 10 Hz falls off.  Sure, my computer claims that the screen has a 60 Hz refresh rate, but that doesn't mean that Windows or that Windows Media Player can keep up.  So, any next steps should include some method of assessing whether the computer is actually displaying my movies smoothly at the rate that I expect.

Entrainment for BCI:  My overall goal is to make a cool brain-computer interface (BCI).  Because I am showing that I can successfully measure visual entrainment, I would like to further explore how visual entrainment could be exploited for a BCI.  One idea is that I could simultaneously show two movies side-by-side, each blinking at its own rate.  Perhaps, if I'm lucky, my brainwaves will only respond to the one movie that I'm actually watching.  If that's the case, then I would have conscious control over my brainwaves (and, therefore, the BCI) simply by selecting which of the two movies I watch.  That could be very cool.


Follow-Up:  I set up a photocell and my Arduino to measure the actual blink rate of the movies on my computer.  In my results, I found that I can't generate steady blinking faster than 10 Hz.  This is probably a strong reason why my EEG recordings exhibited no entrainment above 10 Hz...how can I entrain to signals that aren't there?!?

Follow-Up: I extended this work by having one movie blink at two different rates.  I found that I could control my entrained brainwaves by choosing which of the blink rates I focused on.  Pretty cool!  If you're interested, you can see the results in this post.

Follow-Up:  Interested in getting the EEG data from this post?  Try downloading it from my github!