Thursday, May 15, 2014

Arduino Control of a Hex Bug

Based on my previous success with visual entrainment, I'm moving forward with my plans for making a brain-computer interface (BCI) using my blinky lights.  To really kick this project into high gear, though, I need a good goal -- I need something that would be really fun to control with my brain.  Luckily, my OpenBCI friend Conor found these cool remote-controlled 6-legged robots that can walk around and fire a little gun.  Bingo!  As you can see below, today's post shows how to hack the robot's remote to make it controllable from an Arduino.  Once it is controllable from an Arduino, it's only one more step until it's controllable from my brain!



The Hex Bug Battle Spider:  The remote-controlled robot that Conor found is a Hex Bug Battle Spider.  They are available in two colors and you can have them do battle.  Hex Bug makes smaller and cheaper versions of this robot, but I believe that only the Battle Spider has the ability to do battle.  I'm looking forward to facing off cerebro-a-cerebro with Conor, so I'm sticking with the Battle Spider.

Hex Bug Battle Spider - My Hacking Target for Today

IR Remote Control:  The Hex Bug is commanded using an infrared (IR) remote control.  It is the remote control that I will hack so that the Hex Bug can be commanded from an Arduino.  As you can see in the picture below, I was so anxious to hack the remote that I never got a picture of it while it was still in one piece...I just couldn't wait to smell the solder!  Regardless, as you can see, the remote is simply a bunch of plastic pieces, a couple of coin cell batteries (not shown), and a printed circuit board (PCB).

The Hex Bug Infrared Remote Control (in Pieces)

Thank You, Test Points!  Flipping over the PCB, I was very pleased to see that this board has test points for everything.  Oh, the joy!  Because of these test points, it is much easier to probe the board with my multimeter or with an oscilloscope to figure out how this thing works.  The test points also make it much easier to attach wires to control this thing from the outside (like from my Arduino).  It turns out that there is a test point for each of the four user buttons on this board.  The relevant test points are shown in the picture below.  These test points will be the focus of my work.

All those test points enable easy hacking.  The test points
with the arrows are for the user buttons.

How the Buttons Work:  After a bit of probing of these test points, I learned that the buttons are used like most buttons in small devices like this (see the "Button" demo by the Arduino folks for more info).  The buttons on this remote control are simply switches that are normally open-circuit.  When you press a button, the switch becomes closed-circuit.  The "low" side of each button is tied to ground.  The "high" side is connected to 3.3V via a pull-up resistor (probably inside the microcontroller).  The microcontroller is continually sensing the voltage on the high side of the switch.  When the button is not pressed, no current flows through the button or through the pull-up resistor, so the voltage seen by the micro is high.  When the button is pressed, current flows through the button, which drops the voltage seen by the micro.  As a result, the micro knows that the button was pressed.  Easy!
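If you want to see this behavior for yourself, the Arduino's internal pull-up resistor lets you recreate the same circuit with nothing but a button wired between a pin and ground.  Here's a quick sketch of that sensing pattern (the pin number is just an example):

```cpp
// Sense a button the same way the remote's microcontroller does: the pin
// idles HIGH through a pull-up, and a press pulls it LOW through the switch.
const int BUTTON_PIN = 2;  // example pin; button wired from pin to ground

void setup() {
  Serial.begin(9600);
  pinMode(BUTTON_PIN, INPUT_PULLUP);  // enable the internal pull-up resistor
}

void loop() {
  // HIGH = not pressed (pull-up wins); LOW = pressed (switch shorts to ground)
  if (digitalRead(BUTTON_PIN) == LOW) {
    Serial.println("Button pressed!");
  }
  delay(50);  // crude debounce
}
```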

Hacking Approach:  Based on this discussion, it is clear that the microcontroller on the remote control knows nothing about the buttons...it only knows about the voltage being controlled by the buttons.  When one of those lines goes from high to low, the microcontroller thinks that a button has been pressed.  My hacking approach, therefore, is to wire the Arduino to the remote control so that the Arduino can pull the lines low for me.  This requires me to attach a wire to the high side of each button and to attach a wire to the remote control's ground.  Normally, I'll keep the Arduino's pins in a high-impedance state so that no current flows.  When I want it to "press" a button for me, I'll command the relevant pin to go into a low-impedance state to allow it to conduct current to ground.  Electrically, this will mimic the behavior of the buttons themselves.  No extra components will be necessary!
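In Arduino terms, a "press" is nothing more than a pin-mode change.  Here's a rough sketch of the trick (the pin assignment is just an example, and remember that the remote's ground must be wired to the Arduino's ground):

```cpp
// Mimic a button press: idle as a high-impedance input (no current flows),
// then briefly become a low-impedance output driven to ground.
const int FORWARD_PIN = A0;  // wired to the high side of the "forward" button

void setup() {
  pinMode(FORWARD_PIN, INPUT);  // high-impedance: remote sees its normal 3.3V
}

void pressButton(int pin, int durationMsec) {
  pinMode(pin, OUTPUT);
  digitalWrite(pin, LOW);  // pull the line to ground, just like a real press
  delay(durationMsec);
  pinMode(pin, INPUT);     // release: back to high-impedance
}

void loop() {
  pressButton(FORWARD_PIN, 500);  // "press" forward for half a second
  delay(2000);                    // wait, then do it again
}
```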

Connecting Ground:  OK, we're mostly done talking.  Now let's start soldering.  First, I connected a wire to the remote's ground.  After looking around the PCB, I decided that I liked the solder that was on the low side of the "fire" button.  So, as you can see below, I sneaked a black wire into that location and soldered it in place.

A blurry picture showing where I soldered a black wire to
attach to the remote control's ground.

Connecting Each Button:  Then, I flipped the board over and soldered a colored wire to each button's test point.

All of my wires are now soldered to the test points.

Snip a Pass-Through for the Wires:  While it is not necessary to do this, I like the idea of re-assembling the remote so that I can use it with my fingers (as if it were not modified) or so that I can use it with the Arduino.  To enable the reassembly of the remote, one simply has to cut a hole in the plastic housing to get the wires out.  I used a "nibbler" tool to cut a small hole.  As you can see below, it worked really well!

For extra credit, I used a nibbler to cut a hole in the plastic housing so that
I can get the wires out, even after I fully re-assemble the remote control.

Attach a Pin Header:  To ease the connection of these 5 wires to the Arduino, I decided to solder the free ends of the wires to a piece of basic pin header.  With these pins, I can easily insert the five wires as a single unit into the sockets on the Arduino board.

To make it easier to connect the wires to an Arduino, I attach the wires to
a basic pin header.

The Hacked Remote:  The picture below shows the hacked remote after I reassembled it.  I tested it by pressing the buttons with my fingers and the robot moved.  So far, so good!

My hacked remote control is now re-assembled and ready for testing.

Software:  If I'm going to command this robot from my Arduino, the Arduino needs software.  So, I plugged my remote into the Arduino (because the Arduino is really flexible, I used the Analog Input pins, even though these signals are neither analog nor inputs...that doesn't matter, because the analog pins can also be used as digital inputs and outputs) and then began coding.  My code is available on my GitHub as "TestHexBugController".  This code tells the Arduino to listen to commands coming over Serial from the PC.  I assign one key on the PC's keyboard to each of the Hex Bug's four functions: "forward", "turn left", "turn right", and "fire".  When the Arduino receives one of these commands, it toggles the relevant pin to pull it LOW for 500 msec.  That's all it takes!
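The heart of that scheme looks something like the sketch below.  The pin assignments, key letters, and baud rate here are just illustrative examples, not necessarily what's in my actual GitHub code:

```cpp
// Rough sketch: map single-character serial commands to Hex Bug buttons.
const int PIN_FORWARD = A0, PIN_LEFT = A1, PIN_RIGHT = A2, PIN_FIRE = A3;

void setup() {
  Serial.begin(115200);
  pinMode(PIN_FORWARD, INPUT);  // all pins idle in the high-impedance state
  pinMode(PIN_LEFT, INPUT);
  pinMode(PIN_RIGHT, INPUT);
  pinMode(PIN_FIRE, INPUT);
}

void pressButton(int pin) {
  pinMode(pin, OUTPUT);
  digitalWrite(pin, LOW);  // mimic a button press
  delay(500);              // hold for 500 msec
  pinMode(pin, INPUT);     // release
}

void loop() {
  if (Serial.available() > 0) {
    switch (Serial.read()) {
      case 'w': pressButton(PIN_FORWARD); break;
      case 'a': pressButton(PIN_LEFT);    break;
      case 'd': pressButton(PIN_RIGHT);   break;
      case 'f': pressButton(PIN_FIRE);    break;
    }
  }
}
```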

Using the hacked remote so that I can use an Arduino to control my Hex Bug via
commands entered from the PC.

Testing it Out:  As you can see in the video at the top of this post, this setup works pretty well for controlling the Hex Bug.  I can make it walk anywhere and shoot its gun on command.  It's pretty fun.  If I cared to, I could now script a whole series of maneuvers.  If I attached a pen to the Hex Bug, maybe I could make it walk around on a big piece of paper so that it would draw out a funny picture.  That could be fun.  Or, I could combine it with some computer vision on my PC and have it chase my cat around.  That would definitely be fun.  Sadly, I don't know anything about computer vision.

Brain Control:  Really, though, my next step is to control this with my brain.  So, I'll attach my OpenBCI shield to this same Arduino and I'll have the Arduino pipe my EEG signal to the PC.  On the PC, I'll process the EEG signal and, if it detects the right brainwave signatures, I'll have the PC send robot commands back to the Arduino.  The Arduino will then convey those commands to the robot via the IR remote.  All of the hardware pieces are in place...now it's time to put it all together!

Follow-Up:  I finally did put all the pieces together.  I can now control the Hex Bug with my brain waves!

Sunday, May 11, 2014

EEG as WAV Files, Go Spectrograms!

OK, let's say that I just finished some cool new EEG experiment where I recorded my EEG response to watching cat videos while listening to the Pink Panther at half speed.  My next step would be to take a quick look at the data to get the overall big picture.  My favorite way of getting that overall view is to make a spectrogram (see example below).  My love for these oh-so-colorful plots runs deep.  The question is, how does one make spectrograms?  Well, in my opinion, if you don't have Matlab (and are afraid of Python), the next best way to make spectrograms is to use one of the multitude of audio editing software packages out there.  Many audio editing programs provide a spectrogram view.  This post is about getting EEG data into an audio program so that you can see your data.

A Spectrogram of EEG data that was Made in Matlab.  This shows data from my previous post,
where I was watching a movie with two different blink rates.  You can see how my brainwaves
entrained with the changing blink rate in the movie.

Converting to a WAV File:  The first step in using an audio program for EEG analysis is to convert one's EEG data into an audio file.  Since I usually work in Windows, I tend to convert all of my EEG data into WAV files.  I choose WAV because it is uncompressed.  I never choose MP3 because it is very unclear what its "perceptual coding" would do to my precious brainwave data.  So, a WAV file is what I would recommend.  But how do you get EEG data into a WAV format?  If your EEG data is in text format (such as is logged by the OpenBCI GUI), you could use my Processing sketch "ConvertToWAV".  This sketch will read in an OpenBCI log file and write each EEG channel out as its own WAV file. You can get the sketch on my GitHub.
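If you'd rather roll your own converter, the core of the job is just writing a standard 44-byte WAV header followed by the samples.  Here's a rough C++ sketch of the idea (this is not my ConvertToWAV code; it assumes mono 16-bit output and a little-endian machine, which is what you'd have on typical Windows hardware):

```cpp
#include <cstdint>
#include <cstdio>
#include <cmath>
#include <vector>

// Rough sketch: write one EEG channel as a mono, 16-bit, uncompressed PCM WAV.
void writeWav(const char* fname, const std::vector<float>& eeg, uint32_t fsHz) {
  std::FILE* f = std::fopen(fname, "wb");
  if (!f) return;

  uint32_t dataBytes  = (uint32_t)eeg.size() * 2;  // 2 bytes per sample
  uint32_t riffSize   = 36 + dataBytes;
  uint32_t fmtSize    = 16;
  uint16_t audioFmt   = 1;                         // 1 = uncompressed PCM
  uint16_t numChans   = 1;
  uint16_t bitsPerSmp = 16;
  uint32_t byteRate   = fsHz * numChans * bitsPerSmp / 8;
  uint16_t blockAlign = numChans * bitsPerSmp / 8;

  std::fwrite("RIFF", 1, 4, f);  std::fwrite(&riffSize, 4, 1, f);
  std::fwrite("WAVE", 1, 4, f);
  std::fwrite("fmt ", 1, 4, f);  std::fwrite(&fmtSize, 4, 1, f);
  std::fwrite(&audioFmt, 2, 1, f);    std::fwrite(&numChans, 2, 1, f);
  std::fwrite(&fsHz, 4, 1, f);        std::fwrite(&byteRate, 4, 1, f);
  std::fwrite(&blockAlign, 2, 1, f);  std::fwrite(&bitsPerSmp, 2, 1, f);
  std::fwrite("data", 1, 4, f);  std::fwrite(&dataBytes, 4, 1, f);

  // EEG values are tiny (microvolts), so scale the biggest excursion
  // to the full 16-bit range before writing.
  float maxAbs = 1e-9f;
  for (float v : eeg) maxAbs = std::fmax(maxAbs, std::fabs(v));
  for (float v : eeg) {
    int16_t s = (int16_t)(32767.0f * v / maxAbs);
    std::fwrite(&s, 2, 1, f);
  }
  std::fclose(f);
}
```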

Audacity:  Once the data is in WAV format, you can open it in any audio program.  A popular (and free!) audio editing program is Audacity.  While it is not my favorite audio editing program, it is perfectly sufficient for working with EEG data.  After opening your EEG data, the trick is to figure out how to switch the display from waveform to spectrogram.  The screen shot below shows how to do it.

Changing to Spectrogram View in Audacity

Once Audacity is in spectrogram mode, you need to zoom in on the vertical axis in order to see the interesting EEG features, which are usually focused in the lower frequencies.  In Audacity, you zoom simply with a click-and-drag on the vertical axis.  Then, after manipulating the spectrogram settings under the "Preferences" menu, you can get a spectrogram like the one shown below.  While the color scheme hurts the eyes a bit, this spectrogram is good enough to see the same kind of EEG entrainment as seen in my original Matlab plot.  Furthermore, Audacity's tools let you further analyze the EEG data through zooming, filtering, and amplifying.  And, if you change the file's sample rate to increase the playback speed, you can even use Audacity to listen to your own brain waves!  Audacity is definitely a useful tool for working with EEG data.

In Audacity, a Spectrogram of my EEG Data

The spectrogram settings that I used are shown in the screen shot below.

My Display Settings for Making EEG Spectrograms in Audacity.  I changed
the Window Size, the Gain, and the Range.

Cool Edit Pro:  I first started getting into spectrograms in the late 90's because this is when I started working with audio and music on the computer.  What got me hooked on spectrograms was a piece of shareware called Cool Edit.  It was a stupid name for an otherwise outstanding program.  It was so useful that I spent the extra dollars and bought its upgrade -- Cool Edit Pro.  Cool Edit Pro has a *great* spectrogram display, as shown below.  Unlike Audacity, which requires lots of manipulation of the spectrogram settings to get a useful view, the Cool Edit Pro display always seems just right.  Unfortunately, Cool Edit Pro isn't available anymore -- it was bought by Adobe in the early 2000s and became Adobe Audition.  Audition is also fine for making spectrograms (I have only used up to Audition 3.0), but it is expensive.

An EEG Spectrogram in Cool Edit Pro V1.2a.  It's an old school program that totally rocks.

In Cool Edit Pro, the only display parameter that you need to change is the "Resolution" (ie, FFT size).  You do that under the "Settings" menu.

My settings for viewing EEG spectrograms in Cool Edit Pro.
I changed the Resolution value.

Raven Lite:  A third option for making spectrograms is a bit more obscure.  A bunch of years ago, I came across a program called "Raven Lite", which is produced by the Ornithology Lab (ie, bird science) at Cornell University.  The "Lite" version is free.  You can download it and immediately use it for spectrograms, though it is crippled in other ways until you email them for a free (non-commercial) key. What I really like about Raven is that, as shown in the screen shot below, its spectrogram controls are right on the main window for easy manipulation.  Also, I like its color map options way better than what is available in Audacity.  Finally, Raven is one of the few programs that let you see both the spectrogram view and the waveform view at the same time (not shown).  It is really nice to have that capability.

Raven Lite 1.0 from the Cornell Laboratory of Ornithology. It's a pretty good viewer.  The settings for
the display are right here in the main window.

Other Options:  Because I have Matlab and Cool Edit Pro (and Audacity and Raven) I haven't spent a lot of time looking at other options.  Does Garage Band offer a spectrogram view?  Is there a plug-in for iTunes or Windows Media Player that gives spectrograms?  I'm curious to hear what you folks use.  Drop a comment and let me know!

Saturday, May 10, 2014

Controlling Entrainment Through Attention

In a previous post, I showed that I could induce (entrain) brain waves at different frequencies simply by staring at blinking movies playing on my computer.  Having demonstrated this basic feasibility, my goal now is to exploit this phenomenon to make a brain-computer interface (BCI) to control future hacks.  My idea is to play two blinking movies simultaneously -- one at a slow speed and one at a fast speed.  I'm hoping that my brainwaves will only entrain with the blinking from the one movie that I choose to focus on.  Does my brain work this way?  Will my brain successfully reject the blinking from the movie that I'm ignoring?  Let's find out!



Simultaneous Blinking at Two Speeds:  Previously, I made some blinking movies where the whole screen would blink black or white at a given speed.  To make this idea work for a BCI, I want my screen to blink at two different rates at the same time.  So, as you can see in the video above, I made the left side of my screen blink at one rate while the right side of my screen blinks at a different rate.  I'm hoping that, if I focus my attention on the left side of my screen, my brainwaves will only become entrained at the left-side blink frequency, whereas if I were to focus on the right side of the screen, my brainwaves would follow the right-side blink frequency.

Swapping Sides:  To help with this test, I wanted to remove any effect of turning my head to change my gaze between the two sides of my screen.  So, in creating my dual-rate blinking movie, I had the movie automatically swap sides every 20 seconds.  As a result, it starts with fast blinking on the left and slow blinking on the right.  After 20 seconds, it swaps so that slow is on the left and fast is on the right.  It does this swap a few times.  The Matlab code that I used to make these movies is here.

I created a movie where the left and right sides blink independently -- left is fast and right is slow.
For this test, the two blink rates swapped sides every 20 seconds.

Choosing my Blink Rates:  Based on my previous results, it looks like my brain (coupled with my computer's limited ability to blink steadily) is most easily entrained in the 6-10 Hz frequency range.  So, for this dual-rate movie, I chose "slow" to toggle between black and white at 10 Hz (ie, a 5 Hz white-white rate) and "fast" to toggle at 15 Hz (ie, a 7.5 Hz white-white rate).  In truth, I made a bunch of movies at different rates, but the 10/15 Hz movie worked the best, so I'll only show its results.

EEG Setup:  With my movies prepared, I gathered up my EEG stuff.  Like usual, I used my OpenBCI board and a few cup electrodes with Ten20 paste.  I put one electrode on the left side of my forehead (Fp1), one on the left side of the back of my head (O1), and one on the right side of the back of my head (O2).  Using the impedance measuring feature, my impedances were 11 kOhm, 67 kOhm, and 28 kOhm (I seem to have an on-going problem getting a low impedance at O1).  My reference electrode was on my left ear lobe and my bias electrode was on my right ear lobe.  My OpenBCI board was connected to the PC via USB and I was logging data using my OpenBCI GUI in Processing.  For this test, I also used my photocell to confirm that my computer's blinking was sufficiently steady.

I used my OpenBCI V1 board to record my EEG into the computer.  I also attached a
photocell to confirm that the screen was blinking at the right rate.

Results:  After setting everything up, I started recording my EEG data and then I started playing the dual-rate blinking movie.  It was night time, so my room was pretty dark.  I focused my attention at the center of the left-hand movie. As described above, the left movie toggled fast-slow-fast-slow every 20 seconds, while the right movie played the opposite -- slow-fast-slow-fast.  Spectrograms of the EEG signals from my head are shown in the figure below.  As you can see, there was no entrainment seen in the signals from my forehead (as expected) but there was entrainment in the back of my head (also as expected).  The best entrainment was seen on the left side of my head.

Spectrograms of my EEG signals recorded while watching my dual-rate blinking movie.  The left-back
of my head exhibited the strongest entrainment to the blinking of my movie.  


Only Seeing the Left Blink Rate:  Because the left-back of my head (O1) gave the best entrainment, let's just focus on its results.  The figure below shows just the results for the left-back of my head.  Note how, once the movie starts playing, my EEG signals seem to toggle between a fast blink rate (~7.5 Hz) and a slow blink rate (~5 Hz).  This exactly follows the white-white blink rate of the left movie.  So, my brainwaves successfully entrained to the movie that I was watching.  Most importantly, there seems to be no signature in my EEG data from the blinking of the right movie.  This is success!

Spectrogram of my EEG data from the left-back of my head (O1) while watching my dual-rate
blinking movie.  I was focused just on the left movie.  Because of this focus, my brainwaves
appear to have entrained only with the left movie's blink rate.

Purposely Shifting My Attention:  OK, so I've demonstrated that my mind can successfully ignore one of the movies.  That's really good.  But, maybe I'm just biased toward looking left.  To really make this work for a BCI, I need to be able to shift my attention to either movie and have my brainwaves follow.  So, for my 2nd test, I started the same movie playing again.  But, this time, when the movies swapped sides every 20 seconds, I switched my attention to follow the movie that blinked faster.  This means that I started by watching the left movie, then I watched the right, then left, then right.  My EEG response is shown below.  Note that I showed strong entrainment and, most importantly, that my brainwaves only show the fast blink rate (7.5 Hz).  So, by shifting my attention to follow the faster movie, I successfully rejected the effect of the slower blinking movie.  Success again!

Spectrogram of my EEG data from the left-back of my head (O1) while watching my dual-rate blinking
movie.  While watching the movie, I switched my attention between left and right to follow the movie
 that blinked faster. Because of this focus, my brainwaves remained entrained only at the faster rate.

All the Elements are In Place:  It looks like I now have the elements in place for a 3-state BCI.  If I don't look at the movie at all, I get State 1: "Nothing". If I watch the blinking of the left movie, I get State 2: "Left".  If I watch the blinking of the right movie, I get State 3: "Right".  It may be possible to further divide my screen to get more blinking regions to add more BCI states. Maybe that's a good experiment for the future. Right now, though, I think that I'm going to turn my attention to a little robot that I got (thanks for the pointer Conor!) to see if I can control it with visual entrainment.  This is gonna be fun!

Follow-Up:  Interested in getting the EEG data from this post?  Try downloading it from my github!

Follow-Up:  I successfully used visual entrainment to control a six-legged robot!

Wednesday, May 7, 2014

Measuring Video Blink Rate with an Arduino

In my previous post, I used blinking videos on my computer to entrain my brain waves.  A key question, though, was whether my computer could play those blinking videos steadily.  If the blinking isn't steady, it won't entrain brainwaves that are easily detected.  So, in this new post, I show how I hacked a photocell and my Arduino to measure the blink rate that my computer is actually producing.  It's a pretty simple (and cheap!) setup and, as you'll see below, its data explains some of the important findings in my EEG data!

Measuring the Blink Rate From the Movies Played Back by my Computer

Using a Photocell:  My approach to measuring the video blink rate is to quickly and continuously measure the light produced by my computer screen.  I chose to use a photocell, mostly because I had one that came with my very first Arduino.  To learn how to use a photocell, I followed the tutorial at Adafruit.  It explains what a photocell is and it explains exactly how to hook it up to an Arduino.  The key is that you connect a photocell and a 10K resistor in series.  Together, they form a voltage divider.  Then, you connect one end to +5V and the other end to ground.  In the middle, at the junction between the photocell and the 10K resistor, you connect that point to the Arduino's analog input pin.  Pretty easy!
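In code, reading the divider and backing out the photocell's resistance looks something like the rough sketch below (the pin is just an example; the wiring orientation matches the description above, with the photocell on the +5V side):

```cpp
// Rough sketch: read the photocell/10K voltage divider on an analog pin.
// Wiring: +5V -> photocell -> A0 -> 10K resistor -> GND,
// so brighter light (lower photocell resistance) = higher voltage at A0.
const int SENSOR_PIN = A0;
const float R_FIXED = 10000.0;  // the 10K resistor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int counts = analogRead(SENSOR_PIN);  // 0..1023 for 0..5V
  float v = counts * (5.0 / 1023.0);    // voltage at the divider midpoint
  // Divider equation: v = 5V * R_fixed / (R_photo + R_fixed).
  // Solve for the photocell's resistance:
  float rPhoto = (v > 0.0) ? R_FIXED * (5.0 - v) / v : 1.0e9;
  Serial.print(counts);
  Serial.print(" counts, ");
  Serial.print(rPhoto);
  Serial.println(" ohms");
  delay(100);
}
```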

Wiring It Up:  To connect all of the bits together, I needed to solder a few things.  First, I gathered my components -- the photocell, some wire, and some shrink tube (to keep the soldered wires from shorting to each other).

The Components: A Super-Cheap Photocell, some Wire,
and some Shrink Tube.  The 10K resistor is not shown.

Then, I soldered the wires to the legs of the photocell and insulated them with the shrink tube.  The photo below shows the components after this assembly.  Looks decent enough.

Fully Assembled.  This is the first version that I tried.  Notice that the back
and sides of the photocell are exposed.  This turned out to be bad.

Unfortunately, when I hooked it up to my Arduino, I found that I was not seeing any change in the light level from my computer screen.  After some playing around, I found that the photocell is sensitive to light from the back and sides, in addition to being sensitive to light from the front.  So, as seen in the photo below, I added another layer of shrink tube to block out the light entering from the back and sides.

Modified Assembly.  I added more shrink tube to wrap around the sides
and back of the photocell.  You have to keep out that light!

Mounting Everything:  Once I had my photocell on those long(ish) lead wires, I connected it to the Arduino as discussed on the Adafruit site.  I then needed a way to hold the photocell close to the computer screen so that I could measure its blink rate.  As shown in the photo below, I found that my adjustable soldering fixture (sometimes called a "3rd Hand" fixture) works really well.  It works best if you position the photocell to be VERY close to the computer screen.

I held the photocell to the computer screen using a "3rd Hand" soldering fixture.
To record the photocell signal, I used one of the Analog Inputs available on the
Arduino that is the host for my OpenBCI shield.

Position the photocell to be VERY close to the screen.

Arduino Software:  If I'm going to use my Arduino to read the photocell, I need some software for the Arduino.  My first step was to use the built-in Arduino example called "AnalogInOutSerial".  I then extended this program to report the actual resistance of the photocell under different lighting conditions ("ReadPhotocellResistance").  While either of these programs works fine to read the photocell, neither is clocked to read the values at a steady pace.  If the sampling isn't steady, there's no way to know if the video blinking itself is steady.  To fix this, you need to set up an Arduino timer.
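Here's a rough sketch of one timer-based approach, pacing analogRead() with micros() at 250 Hz (chosen to match my usual OpenBCI sample rate; the pin is just an example):

```cpp
// Rough sketch: pace analogRead() at a steady 250 Hz using micros(),
// instead of free-running in loop().
const int SENSOR_PIN = A0;
const unsigned long SAMPLE_PERIOD_US = 4000;  // 4000 usec -> 250 Hz
unsigned long nextSampleTime;

void setup() {
  Serial.begin(115200);
  nextSampleTime = micros();
}

void loop() {
  if ((long)(micros() - nextSampleTime) >= 0) {
    // Schedule from the previous deadline (not from "now") so that
    // small delays in loop() don't accumulate into timing drift.
    nextSampleTime += SAMPLE_PERIOD_US;
    Serial.println(analogRead(SENSOR_PIN));
  }
}
```

Or, you could...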

Integrate with OpenBCI:  The OpenBCI shield generates data packets at a very precise rate (I usually configure mine to sample at 250 Hz).  It could act as the clock to drive the Arduino to sample the photocell steadily.  So, I modified the OpenBCI Arduino sketch to read one of the analog input pins every time that it receives data from the OpenBCI shield.  It then appends this extra data value to the OpenBCI data packet and sends it to the PC.  Finally, I modified my OpenBCI GUI to receive the extra data and to include it in its log file.

Results:  I used this system to record the blinking produced by the blinking movies from my previous post.  Each movie was about 20 seconds long and each movie blinked at a different rate.  The digitized photocell values are shown in the figure below as raw counts from the Arduino.  Clearly, this graph is a bit too zoomed out to see much of interest (though you can see the non-steady amplitude at the fastest speed on the right).  We need to zoom in to see more detail...

Sample values recorded from the photocell by the Arduino's analog input pin. I played
my 10 whole-screen-blinking movies.  Each movie is about 20 seconds long.  Each movie
has a different blink rate -- from a 1 Hz white-to-white blink rate up to a 10 Hz w-w rate.

Zooming-In:  Excerpts from three of the movies are shown below.  In these plots, you can see that the light pulses recorded during the 3 Hz and 10 Hz movies look to be steadily paced, whereas the pulses in the 7 Hz movie look much more irregular.  Based on this qualitative view, I'd say that the irregularity of the 7 Hz movie might cause complications when used for EEG experiments.

Zoomed-In waveforms recorded from the photocell during my blinking movies.  Excerpts at three
different blink rates are shown.  The red and blue dots show features that I used to quantify each
movie's blink rate.  Note that the time scale is different for each of the movies so that you always
see 4 periods, despite their increasing speed.

Measuring the Blink Rate:  To better assess the steadiness of each movie, I set up a routine to quantify the blink rate on a blink-by-blink basis.  First, I computed the mean sensor value for the whole recording.  This is my threshold for deciding whether the screen is "white" or "black".  This threshold value is shown by the horizontal black line in the excerpts above.  Then, I detected when the signal crossed this threshold.  Each threshold crossing is shown as a blue dot in the figures above.  To compute the blink rate, I compute the difference between the dots.  The "white-to-white" blink rate comes from the difference between the red dots.  Alternatively, to get the rate at which the screen merely changed (either from white-to-black or black-to-white), I measured the difference between the blue dots.
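For the curious, the core of that routine boils down to a few lines.  Here's a rough C++ sketch of the threshold-crossing logic (my actual analysis was done in Matlab):

```cpp
#include <numeric>
#include <vector>

// Rough sketch: threshold at the mean, find the rising-edge crossings
// (black-to-white transitions), and difference the crossing times to get
// the white-to-white period for every blink.
std::vector<double> blinkPeriods(const std::vector<double>& x, double fsHz) {
  double thresh = std::accumulate(x.begin(), x.end(), 0.0) / x.size();

  std::vector<double> crossings;  // rising-edge times, in seconds
  for (size_t i = 1; i < x.size(); ++i) {
    if (x[i - 1] < thresh && x[i] >= thresh) crossings.push_back(i / fsHz);
  }

  std::vector<double> periods;    // white-to-white periods, in seconds
  for (size_t i = 1; i < crossings.size(); ++i) {
    periods.push_back(crossings[i] - crossings[i - 1]);
  }
  return periods;  // blink rate in Hz = 1.0 / period
}
```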

Blink Rate Throughout the Test:  The plot below shows the results of quantifying the blink rate throughout the test.  In red, the plot shows the white-to-white blink rate.  In blue, I show the blink rate from both transitions.  As expected, the blink rate counting both transitions is twice as fast as the blink rate when counting just from white-to-white.

Blink Rate Measured for my 10 Movies.  The measured blink rate generally follows the expected blink
rate, though the measured blink rate exhibits unsteadiness at the faster speeds. The blink rate when
counting both transitions (white-to-black and black-to-white) is especially unsteady at the higher speeds.

Unsteadiness:  As can be seen in the plot above, the blink rate is pretty steady for the first four movies (ie, speeds of 1-4 Hz W-W).  For the 5th movie (5 Hz W-W), the plot above starts to look messier, especially the blue line.  This means that the system is not playing back the blinking movie smoothly.  As we get into the faster movies (6-9 Hz), the blue line gets extremely messy.  Clearly, the system is unable to keep a steady pace of white-to-black and black-to-white transitions (the blue line), though the white-to-white period (the red line) isn't as bad.  Funnily, at 10 Hz, note that the white-to-white blink rate gets very stable again.  It seems that, at 10 Hz, the individual movie frames must be well-aligned with the natural update rate of the video system on my computer.

Relationship to EEG Data:  The whole purpose of this investigation was to see if my EEG results from my previous post (copied again below, for convenience) were reflecting properties of my brain, or if they were reflecting artifacts from imperfections in my computer's movie playback.  My main question with my EEG data is why I exhibited no video-entrained EEG signals above 10 Hz.  Well, looking at the graph of the computer's blink rate (above), we see that the video blink rate becomes extremely unstable for any frequency above 10 Hz.  My computer, in other words, was unable to generate steady visual stimulation above 10 Hz.  Without stable stimulation, my brain had nothing to entrain with.  Therefore, these limitations in my video system mean that I cannot declare either way whether my brain can entrain with visual stimuli at speeds greater than 10 Hz.  With a more stable video system, maybe I could entrain with the faster blinks.

EEG data shown in my previous post.  This is the signal recorded from the back of my head (reference
on left ear) when staring at my blinking movies.  The signals marked by the blue arrows seem to indicate
periods when my brain entrained with the video on every transition of white-to-black and black-to-white.
The periods marked by the red arrows seem to indicate periods when my brain entrained on just the
white-to-white blink rate.

Next Steps:  With this system, I have proven that I can assess the steadiness of my video playback system.  Steady playback is critical to inducing visual entrainment of brainwaves.  So, as I move forward with trying to create a BCI based on visual entrainment, I can use this synchronized photocell recording to confirm that the video stimulation is sufficient to (hopefully) induce EEG responses.  Let the development of the visual BCI begin!

Sunday, May 4, 2014

Inducing Brain Waves with Visual Entrainment

A while back, I had a friend come over and I measured his EEG in response to staring at a blinking light.  We saw (as we hoped) that his brainwaves oscillated in sync with the blinking of the light.  I thought that this visual entrainment (aka "steady-state visual evoked potential") was pretty cool.  Since then, I've learned that it can be used as the basis for a brain-computer interface (BCI).  Because I'm still searching for a good BCI paradigm, I decided to return to my exploration of visual entrainment.  Today, I'm going to show how I successfully used visual stimuli to induce brainwaves at different frequencies.  As a result, I can now see a good avenue for an EEG-based BCI.  Yes!  Let's go!

Inducing SSVEP Using a Toggling Checkerboard Pattern on my Computer Screen

Goal:  My goal today is to use visual stimuli to induce brainwaves across a range of frequencies.  Because I want to use this for a BCI, I'm trying to determine what kind of visual stimuli I should use and what EEG frequencies I can induce.  What does it take to make this work reliably?

Visual Setup:  In my previous post, my visual stimulation was simply a blinking head-lamp.  It was effective (and really bright!), but I had no control over its blink frequency.  As a result, I also had no control over the frequency of the brainwaves that it induced.  So, for today's test, I needed to get fancier.  I ditched the head-lamp and, instead, created a series of blinking movies that I could play back on my computer.  I controlled the "blinking rate" by saving my movies at different frame rates.  What exactly did the movie look like?  Well, at first, because of a paper that I read in the VEP literature, my movie used the checkerboard pattern shown in the picture above.  The movie toggled back-and-forth between this image and the inverse image (swap blacks and whites).  While this worked OK, I later switched to a simpler movie (code here) where the screen was simply all-white or all-black.  That seemed to work better.

EEG Setup:  Once I made my movies, I set myself up with my EEG system (OpenBCI).  I used my usual gold cup electrodes with Ten20 EEG paste.  I put one electrode on the back of my head (near O1) and I put another electrode on my forehead.  My reference electrode was on my left ear lobe and my bias electrode was on my right ear lobe.  Using the impedance-measuring feature of OpenBCI, the electrode on my forehead had an impedance of 24 kOhm and the one on the back of my head was about 65 kOhm.  I couldn't seem to get the back electrode to a lower value.

Channel 1 was on my Forehead, Channel 2 was on the back of my head.
My left ear lobe was my reference.  My right ear lobe was the bias.

I used my OpenBCI V1 board with Ten20 Paste.
I think that those owl napkins are fun!

Test Method:  Once I got everything set up, I launched the OpenBCI GUI in Processing and started an EEG recording.  To play back my homemade blinking movies, I opened up Windows Media Player and set it to full-screen mode.  I had ten movies, with each movie blinking at a different rate.  I had WMP play all 10 movies continuously in sequence.  Each movie was 20 seconds long, so the whole test took about 200 seconds.  It was nighttime when I did this test and my room was dimly lit.  I tried to stare at the screen and I tried to only blink my eyes at the transition between the different blinking rates.

Results, Checkerboard:  As usual, my preferred way to view the data is to make spectrograms.  In the figure below, the top plot is the data from my forehead and the bottom plot is the data from the back of my head.  From my forehead, there is nothing interesting except my eye blinks.  From the back of my head, we see several interesting features, which I've marked with blue and white arrows.  Note that these interesting features change every 20 seconds, which matches the length of my movies.  It seems clear to me that these features are my brainwaves responding to the changing blink rates of my movies.  Excellent!


Spectrogram of EEG Signal Recorded While Watching the Blinking Checkerboard Pattern.
The top plot is the signal from my forehead.  The bottom plot is from the back of my head.
"W-B" is the rate at which the movie switched from either white-to-black or black-to-white.
"W-W" is the rate if you measure just from white-to-white.

Entrained with the Blink Rate?  Looking at the three blue arrows, it appears that I have entrained brainwaves at 2 Hz, 4 Hz, and 6 Hz.  At these times, any given square in my movie was blinking at 1 Hz, 2 Hz, and then 3 Hz, if you count from white period to white period ("W-W").  Because I have entrained brainwaves at 2x the white-to-white frequency, it suggests that it is NOT white-to-white that matters, but that it is the transition between white/black or black/white that matters.  At least, that is what is implied for these three (out of 10) cases for the checkerboard stimuli.

Complications:  While that would be a fine conclusion, why does this rule not continue through the other 7 cases in this checkerboard test?  Why does it only work for the three cases with the blue arrows?  The cases with the white arrows do show some sort of EEG response, but not at any frequency that makes sense given the speed of my movies.  What is going on?  I've got two possible explanations: (1) either my movies are not playing back reliably during these other cases, or (2) the checkerboard pattern is too complicated to be a good starting point for learning about my brainwaves.

Modifying the Test:  Of these two possible explanations, it's easier for me to simplify the checkerboard than it is for me to fix the reliability of my movie playback.  So, I changed my movies so that the whole screen is either all black or all white.  Hopefully, this simpler visual stimulus will make my EEG response easier to understand.

Results, Whole-Screen Blinking:  After recording my EEG while staring at the new movies, the spectrograms of my data are shown below.  Again, all of the interesting action is in the back of my head.  The bottom plot shows that I got good entrainment of my brainwaves for nearly *all* of the new movies.  I'm very pleased.  I'm also very curious about the jump between the cases marked with blue arrows versus the cases marked with red arrows.  What is happening here?

Spectrogram of EEG Signal Recorded While Watching the Whole Screen Toggle White or Black.
The top plot is the signal from my forehead.  The bottom plot is from the back of my head.

What Blink Rate Matters?  Looking at the first half of this plot, the blue arrows indicate cases that have results similar to the checkerboard data shown earlier.  Here, my brain seems to respond to every transition from white to black and from black to white (aka, the "W-B-W" speed).  But, for the second half of this plot, when the blinking is faster, it looks like my brainwaves follow the slower rate resulting from just the white-to-white frequency ("W-W").  Based on this weird result, I'm thinking that my brain doesn't actually care so much about whether the stimulus is W-B-W or W-W...it is simply sensitive to rhythmic visual stimuli in a certain frequency range.  I'm thinking that whatever rhythmic stimulus falls within this frequency range, my brain will become entrained with it.

Quantifying Entrainment vs Frequency:  If it's simply the frequency that matters, it would be good to see which frequencies yield the strongest entrainment.  Sure, the spectrograms above suggest which frequencies are best, but I took the next step and actually measured the EEG response at each of the stimulation frequencies.  The plot below shows the EEG amplitude that I measured for each of the visual blinking frequencies.  Note that there are two lines: one counts based on the white-to-white frequency (blue line), and the other counts based on all the white/black and black/white transitions (red line).  This graph suggests that I seem to yield decent responses in the 6-10 Hz frequency range.  So, if I'm looking to use visual entrainment for a BCI, I should focus on the 6-10 Hz band.
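If you want to make this kind of per-frequency measurement yourself, you don't need a full FFT; the Goertzel algorithm gives you the amplitude at one known frequency.  Here's a rough C++ sketch of that approach (it's not my actual analysis, which was done in Matlab):

```cpp
#include <cmath>
#include <vector>

// Rough sketch: estimate the amplitude of one known frequency in a signal
// using the Goertzel algorithm.  fsHz = sample rate, fHz = target frequency.
double toneAmplitude(const std::vector<double>& x, double fsHz, double fHz) {
  const double PI = 3.141592653589793;
  double coeff = 2.0 * std::cos(2.0 * PI * fHz / fsHz);

  double s1 = 0.0, s2 = 0.0;
  for (double v : x) {  // run the second-order Goertzel recursion
    double s0 = v + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }

  double power = s1 * s1 + s2 * s2 - coeff * s1 * s2;  // |DFT bin|^2
  return 2.0 * std::sqrt(power) / x.size();            // sine-wave amplitude
}
```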

Amplitude of EEG Signals Induced by Visual Entrainment.
My best responses seem to be in the 6-10 Hz band.
That could be a good target frequency range for use in a BCI.

Computer Could be Limiting my Performance:  As mentioned earlier, all of these results could be confounded by the possibility that my computer cannot reliably and steadily refresh my screen.  Perhaps it can reliably handle the frequencies at 10 Hz and below, but is not steady above 10 Hz.  Perhaps that's why my apparent response above 10 Hz falls off.  Sure, my computer claims that the screen has a 60 Hz refresh rate, but that doesn't mean that Windows or that Windows Media Player can keep up.  So, any next steps should include some method of assessing whether the computer is actually displaying my movies smoothly at the rate that I expect.

Entrainment for BCI:  My overall goal is to make a cool brain-computer interface (BCI).  Because I am showing that I can successfully measure visual entrainment, I would like to further explore how visual entrainment could be exploited for a BCI.  One idea is that I could simultaneously show two movies side-by-side, each blinking at its own rate.  Perhaps, if I'm lucky, my brainwaves will only respond to the one movie that I'm actually watching.  If that's the case, then I would have conscious control over my brainwaves (and, therefore, the BCI) simply by selecting which of the two movies that I watch.  That could be very cool.


Follow-Up:  I setup a photocell and my Arduino to measure the actual blink rate of the movies on my computer.  In my results, I found that I can't generate steady blinking faster than 10 Hz.  This is probably a strong reason why my EEG recordings exhibited no entrainment above 10 Hz...how can I entrain to signals that aren't there?!?

Follow-Up: I extended this work by having one movie blink at two different rates.  I found that I could control my entrained brainwaves by choosing which of the blink rates I focused on.  Pretty cool!  If you're interested, you can see the results in this post.

Follow-Up:  Interested in getting the EEG data from this post?  Try downloading it from my github!