Tuesday, March 3, 2015

Brain-Controlled Shark Attack!

Visiting my friends at OpenBCI HQ, we got together to do some hacking.  Since I'm always looking to control new things with my brain, I was really excited to see that someone had brought a remote-controlled shark-shaped balloon (an "Air Swimmers").  This is a very cool toy -- it swims through the air in a wondrous way.  But, I can't just leave a good thing alone.  So, after a few hours of hacking, Joel and I were able to turn this simple toy into a 5-person, brain-controlled, SHARK ATTACK!

Approach:  Our approach to this hack is extremely similar to the approach that we used for our multi-person control of a toy robot.  As shown in the figure below, the idea is that you get multiple players hooked up to a single EEG system (OpenBCI, in my case).  The computer processes the EEG data looking for each person's eyes-closed Alpha waves.  Depending upon which person's Alpha waves are detected, the computer sends commands to the shark.  The commands are conveyed to the shark via an Arduino, which is driving the shark's remote control.  The end result is that the shark swims because of one player's brain waves. I think that's pretty cool.

Two-Person Demo:  As Joel and I were pulling this hack together, we started to test it using just the two of us.  Being just two people, we could only do two shark commands, not all five.  It was still pretty fun, though.  I love the sense of excitement that happens when a hack first starts to work.

Hacking the Remote Control:  To make this shark controllable from the computer, we needed to hack into the shark's remote control.  Like when I hacked the remote for the toy robot, Joel found that the remote for the shark was simply a few push buttons that were wired to pull one side of the switch down to ground whenever the button was pushed.  So, to make this controllable from my computer, Joel soldered some wires to the circuit board (to the high side of each switch) to allow an Arduino to pull it down to ground instead of having to push it with your finger.  As a result, we can now send a command to the Arduino and cause the shark to move.  Our Arduino code for this hack is on GitHub here.

We modified the shark's remote control by adding a wire to the non-grounded side of each push button.
We brought the wires out and connected them to an Arduino.
An Arduino drives the shark's remote control.
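From the PC side, commanding the shark then reduces to writing a byte to the Arduino over its serial port.  Here's a minimal Python sketch of the idea; the one-character command codes and the port name are hypothetical placeholders, not the actual protocol from our Arduino code on GitHub:

```python
# Hypothetical one-character command codes; the real protocol is defined
# in our Arduino sketch on GitHub.
COMMANDS = {"left": b"L", "right": b"R", "climb": b"C", "dive": b"D", "stop": b"S"}

def frame_command(name):
    """Return the byte to send to the Arduino for one shark command."""
    if name not in COMMANDS:
        raise ValueError("unknown shark command: " + name)
    return COMMANDS[name]

# Usage with pyserial (the port name will vary):
#   import serial, time
#   with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as ser:
#       ser.write(frame_command("left"))  # Arduino pulls that button's line to ground
#       time.sleep(0.5)                   # hold the virtual "button" briefly
#       ser.write(frame_command("stop"))  # release
```

Each byte just tells the Arduino which remote-control button to "press" by pulling its line to ground.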

EEG Electrode Setup:  It's quite easy to record a person's eyes-closed Alpha waves.  You need three electrodes.  Put one electrode on the back of your head (O1 or O2, if you know the 10-20 system), put the EEG reference electrode onto your earlobe, and put the EEG bias electrode on your other earlobe.  You can see some examples in the photo below, where we had three people controlling the shark.  We used the gold cup electrodes and Ten20 electrode paste that came with the OpenBCI kit.

Three-Man "Team Alpha!" Controlling the Shark.  You can also see
where we put the electrodes -- back of head and both earlobes.

OpenBCI Setup:  We are going to wire up multiple people to control this shark.  And to be clear, it is not normal to hook multiple people to one EEG system.  But that is what we are going to do.  This is definitely using EEG in a non-traditional way.  That's why this is called "hacking".  The trick to making it work is to tell the EEG system (in this case, OpenBCI) that each person has his own EEG reference electrode.  OpenBCI enables this by allowing each EEG channel to be run in "differential mode", where you use each channel's "P" and "N" inputs as a differential pair.  This is in contrast to the more-usual "common reference mode", where we use one of the SRB inputs as a common EEG reference for all EEG channels.  To change OpenBCI to differential mode, you use the OpenBCI GUI, via the "Chan Set" tab, to change each channel's "SRB1" and "SRB2" setting to "off".  Then, in the main window, turn off all of the channels that you are not using.

Screenshot Showing How to Configure for Five Channels in
Differential Mode...simply Turn Off SRB2.

Plugging Into OpenBCI:  Once you've got the EEG electrodes on the individual players, you've got to hook them into the OpenBCI board.  Because we're in "differential mode", each player will get one "P" input and one "N" input.  For this hack, we put the electrode from the back of the head into the "N" input.  We then put the left earlobe into the corresponding "P" input.  Finally, the right earlobe was connected to a bias pin.  Because we had five players, there weren't enough bias pins available on the OpenBCI board.  I used a 16-channel OpenBCI board because it has 4 bias pins, so that covered four players.  The fifth player simply plugged into the analog ground pin ("AGND"), which is not as good as using a bias pin, but it worked well enough.

Wiring Electrodes to the OpenBCI Board for Five Players.  Note that the board only
has 4 bias connections, so one player is attached to AGND instead.

EEG Processing Algorithms:  As mentioned earlier, the PC does all of the EEG processing...no processing occurs on the OpenBCI board itself.  For the software on the PC, we started with the stock OpenBCI GUI.  Then, I extended it by (1) adding the Alpha detection algorithms and by (2) adding code to send shark commands to the Arduino.  This variant of the OpenBCI GUI is currently saved here on GitHub as a branch of the main repository.  As you can tell from the class names shown in the code, the code is based heavily upon the previous work with the HexBug robot...which you may find confusing since we're controlling a shark and not a HexBug.  Sorry for the confusion!
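Our actual detection code lives in that Processing-based GUI, but the core idea is just a per-player Alpha band-power threshold.  Here's a Python sketch of the logic; the 7.5-12.5 Hz band matches eyes-closed Alpha, but the threshold value is illustrative, not the one we tuned on the day:

```python
import numpy as np

FS = 250.0  # OpenBCI sample rate (Hz)

def alpha_band_power_db(eeg_chunk, fs=FS, band=(7.5, 12.5)):
    """Estimate the Alpha-band power (dB) in one channel's chunk of EEG."""
    windowed = eeg_chunk * np.hanning(len(eeg_chunk))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2 / len(eeg_chunk)
    freqs = np.fft.rfftfreq(len(eeg_chunk), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return 10.0 * np.log10(np.mean(spectrum[in_band]))

def who_is_alpha(chunks_by_player, threshold_db=-5.0):
    """Return the player whose Alpha power exceeds the threshold, or None."""
    powers = {p: alpha_band_power_db(c) for p, c in chunks_by_player.items()}
    best = max(powers, key=powers.get)
    return best if powers[best] > threshold_db else None
```

Whichever player comes back from `who_is_alpha` determines which command gets sent to the shark.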

Three-Person Testing:  After Joel and I did our two-person testing, we roped in a couple of other players to join Joel.  That got us to a three-man shark attack!

Three-Person Brain-Controlled Shark

Shark Food:  While we were all hacking the shark to make it brain-controlled, Conor was busy doing his own hacking.  Once we finally got our brain-controlled shark into fighting condition, we couldn't resist swimming it over to harass Conor.  Conor was pretty sure that his teeth were sharper than the shark's, so he wasn't much afraid.

Conor Faces Off Against the Shark.

Four-Person Shark Control:  Having successfully used three people to control the shark, we wired up a fourth person.  Have you ever tried to get four people doing anything in a smooth and coordinated fashion?  It's hard!  But, we did have success...

Five-Person Shark Attack:  OK, if four people working together is hard, five people is just chaos. In case you can't see him, notice below that the fifth guy is in the center of the crowd, kneeling so that you just see his head popping above the bench.  The wiring on the OpenBCI electrodes seems generously long when you're just attaching one person.  With five people, though, you really need longer wires...or you simply need a little creativity on how you pack the people together.

Five-Person Brain-Control of the Swimming Shark.  The fifth person is kneeling
and you can only see his head.  We need longer wires!

Swimming Away:  By the time that we got this all working, it was really late at night.  The time stamps on the pictures show that it was about midnight, and we'd been at OpenBCI HQ since about 10AM.  So, between the fatigue and all the caffeine, I think that we were getting some funny brain wave behavior.  For example, one guy was making some weak Alpha even with his eyes open.  Wacky!  Regardless, we were able to get the shark to swim around...until we swam the shark too far to one end of the OpenBCI HQ...

Swimming the Shark Off Into the Sunset.

...at which point the shark's IR remote control could no longer communicate with the shark.  Stranded Shark!  And so our night of EEG hacking ended.  Still, it was mighty fine work.  Go Team Alpha!

Update 2015-03-09: I just saw that someone has already made a BCI for controlling this very same swimming shark (Chen et al.  "Recreational devices controlled using an SSVEP-based Brain Computer Interface (BCI)").  Note that they used one person to control the shark via SSVEP, which is exactly what I did with my brain-controlled Hex Bug!

Update 2015-09-24: Wow!  I was given an opportunity to write an article for IEEE Spectrum for their Oct 2015 issue.  How cool is that?  You can check it out here: "OpenBCI: Control An Air Shark With Your Mind".

Update 2015-11-08: I see that Wired (magazine) posted their nicely-done video on our shark hacking.  It's quite an enjoyable piece.  Good work, Wired!

Sunday, January 25, 2015

Brain Got Beats -- Not Yet

I like controlling things with my mind.  That's why I do this brain-computer interface (BCI) thing.  The tough part of BCIs, though, is finding brain signals that are simple enough for the computer to detect, yet are also something that I can consciously control.  So far, I can do eyes-closed Alpha waves, concentration-controlled Beta/Gamma, and steady-state visual evoked potential (SSVEP).  I need more options.  Today, I'm going to try to do auditory steady state response (ASSR).  Or, more colloquially, does my brain got beats?

Can I use beating tones to entrain brainwaves?

Auditory Steady-State Response (ASSR)

The idea with ASSR is that we are looking for EEG signals from my brain that are driven by sounds presented to my ears.  When doing an ASSR, you use an audio tone whose amplitude is varied ("modulated") at a fixed rate such as 40 Hz.  Then, when you play that sound in your ears, you look in the EEG signals for a strong 40 Hz component.  Easy, eh?

Note that this is very similar to the steady-state visual evoked potential (SSVEP) that I used previously, where I'd make my computer screen blink at 8 Hz and 8 Hz signals would appear in my EEG.

Attention-Based ASSR?

If I want to use ASSR for a brain-computer interface (ie, for controlling robots!), there needs to be some way to consciously control my response to the sound.  For the SSVEP, where the stimulation was my blinking computer screen, my response was much stronger if I consciously paid attention to the blinking screen.  This attention-based response was the key to being able to exploit it for a BCI.

Does ASSR have a similar attention-based component?  Until yesterday morning, I didn't know.  But then I came across this paper:  Do-Won Kim et al.  "Classification of selective attention to auditory stimuli: Toward vision-free brain–computer interfacing".  Journal of Neuroscience Methods 197 (2011) 180–185.  PDF here.

Kim's ASSR Setup

In the paper by Kim, they used two loudspeakers to present tones to the test subject.  The setup is shown below.  The subjects were sitting down in a comfy chair listening to the tones while wearing a small montage of EEG electrodes (Cz, Oz, T7, T8, ref at left mastoid, ground at right mastoid).

Test Setup as used by Kim (2011) for Evoking Auditory Steady-State Response (ASSR)

For the audio tones, they used a 2500 Hz tone from one speaker and a 1000 Hz tone from another speaker.  The key feature of ASSR, though, is the modulation of these tones.  For one of the tones, they varied the amplitude (ie, they alternately made it quiet and loud) at a rate of 37 Hz, while they modulated the other tone at a rate of 43 Hz.  These frequencies are the "beat rates" for the audio.  It is the 37 Hz or 43 Hz beat rate that they are looking for in the EEG (hence, "brain got beats?").

Below is what they saw in the EEG signals (Cz) for one of their subjects when the subject gave their attention to the 37 Hz modulated signal (red) or the 43 Hz modulated signal (blue).  There is clearly a difference.  This makes me happy.  This is what I want to recreate with my own testing.

Spectral Results for One Subject from Kim (2011) In Response to Steady-Pitch
Tones that were Amplitude Modulated at 37 Hz or 43 Hz.

My Test Setup

I want to recreate their results.  I'm going to create some audio files with the amplitude modulated signals, I'm going to play them into my ears via headphones, and I'm going to record my EEG signals (OpenBCI!) to look for my ASSR.

EEG Setup:  Reading more details from the paper, they said that they got the strongest response from the electrode at Cz, so I decided to start there.  I put one electrode at the top of my head (Cz) with the reference on my left ear lobe and the OpenBCI "bias" on my right ear lobe.  I used the gold electrodes and the Ten20 EEG paste that came with the OpenBCI kit.  Without really trying, I happened to get an electrode impedance of 20-30 kOhm at both Cz and at the reference, which are probably good enough.

My EEG Setup, Cz Only.  Also, unlike Kim, I used ear buds (headphones)
instead of loudspeakers to present my tones.

OpenBCI EEG System:  For this test, I happened to use my 16-channel OpenBCI system.  I'm only using one channel of EEG data, though, so I could have used the 8-channel systems (or even other systems, like OpenEEG) just as well.  I wired up my OpenBCI unit as shown below.  Starting from the left, the white wire is the "bias" (aka, driven ground) going to my right ear lobe, the brown wire is the electrode at the top of my head, and the black wire is the reference electrode on my left ear lobe.  Note that they are all plugged into the lower row of pins (the "N" inputs) on the lower board.  The system is being powered by four AA batteries and is sending its data wirelessly back to the PC.  I'm using the OpenBCI GUI in Processing.

Here's How I Plugged into the OpenBCI Board.

Audio Files:  I created my audio files in Audacity.  I created two sets of files, based on the frequencies used in the Kim paper: one set of files using a 1000 Hz tone and the other set using a 2500 Hz tone.  The Kim paper said that the strongest ASSR generally occurs for a beat frequency of 40 Hz.  I wanted to see my response at different beat frequencies, so for each tone I created three versions: one beating at 38 Hz, one at 40 Hz, and one at 42 Hz.  I made each version 20 seconds long.  I used a square wave (ie, on/off) amplitude modulation, though next time I might try sine wave modulation instead.

I Created My Amplitude-Modulated (AM) Test Tones in Audacity.  First, "generate" the
tone.  Then, to do the AM, go under "Effect" and select "Tremolo". 
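I made my tones in Audacity, but the same on/off amplitude modulation is easy to synthesize directly.  Here's a sketch using numpy and the standard-library wave module (file names are just examples):

```python
import wave

import numpy as np

def am_tone(carrier_hz, beat_hz, duration_s=20.0, fs=44100):
    """Square-wave (on/off) amplitude-modulated tone for the ASSR test."""
    t = np.arange(int(duration_s * fs)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    gate = (np.sin(2 * np.pi * beat_hz * t) > 0).astype(float)  # on/off at the beat rate
    return carrier * gate

def save_wav(path, signal, fs=44100):
    """Write a mono 16-bit WAV file using only the standard library."""
    samples = (np.clip(signal, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(fs)
        w.writeframes(samples.tobytes())

# e.g. the 1000 Hz tone beating at 40 Hz, 20 seconds long:
# save_wav("tone_1000Hz_beat40Hz.wav", am_tone(1000, 40))
```

Swapping the square-wave gate for `0.5 * (1 + np.sin(...))` would give the sine-wave modulation that I want to try next time.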

Data and Analysis Files:  My audio files, my data files, and my analysis files are all on my GitHub here.  Note that I did my analysis using an IPython Notebook (see it here).  My specific Python installation is described here.

My ASSR Response

My goal is to see if I exhibit the ASSR response with this test setup.  To do the test, I wired myself up as discussed above, I queued up all six audio files (the three at 1000 Hz followed by the three at 2500 Hz), put in my ear buds, and started recording.

Eyes Closed:  The spectrogram below shows my Cz EEG signal when I did this test with my eyes closed.  That strong red stripe at 10 Hz is my Alpha response simply due to having my eyes closed.  What I do not see here are horizontal stripes of energy at 38, 40, or 42 Hz.  In other words, I do not see any brain waves entraining with the audio stimulation.  This is disappointing.

Spectrogram of EEG Signal from Cz with AM Auditory Stimulation Near 40 Hz.
My eyes were closed, hence the strong response at 10 Hz.
There is no signature of the 38-42 Hz AM Audio Stimulation.

Eyes Open:  I also performed this test with my eyes open.  A spectrogram of my EEG signal at Cz is shown below.  I started and ended the test with my eyes closed for 10 seconds, which you can see as 10 Hz Alpha waves at the start and end.  What I really want to see, though, is something corresponding to the audio stimulation at 38 Hz, 40 Hz, or 42 Hz.  Again, I see nothing.

Spectrogram of EEG Signal from Cz with AM Auditory Stimulation Near 40 Hz.
My eyes were open, except at the beginning and end.
There is no signature of the 38-42 Hz AM Audio Stimulation.

Average Spectrum:  To most closely mimic the plot from the Kim paper (ie, the graph that I copied earlier), I plotted the average spectrum.  In the Kim plot, there were clear peaks at his two beat frequencies (37 and 43 Hz).  In my equivalent plot below, there are no peaks at the three beat frequencies that I studied (38, 40, and 42 Hz).

Mean Spectrum During the Test Period.  There is no evidence of my brain waves entraining
with the 38, 40, and 42 Hz AM auditory signals.  Bummer.
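For reference, the averaged spectrum in my notebook boils down to a chunk-averaged periodogram.  Here's a simplified numpy sketch (my actual notebook is on GitHub; this is not a line-for-line copy):

```python
import numpy as np

def mean_spectrum_db(eeg_uV, fs=250.0, nfft=256):
    """Chunk-averaged power spectrum of an EEG recording, in dB (re: 1 uV^2)."""
    n_chunks = len(eeg_uV) // nfft
    psd = np.zeros(nfft // 2 + 1)
    for i in range(n_chunks):
        chunk = eeg_uV[i * nfft:(i + 1) * nfft] * np.hanning(nfft)
        psd += np.abs(np.fft.rfft(chunk)) ** 2 / nfft
    psd /= n_chunks
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return freqs, 10.0 * np.log10(psd + 1e-12)  # tiny floor avoids log(0)
```

A real ASSR should show up as a peak at the beat frequency (38, 40, or 42 Hz) in the returned spectrum; in my recordings, no such peak appeared.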

Conclusion:  So, it is clear that I did not see any ASSR in my EEG recordings.  This is very disappointing to me.

Comparison to Kim

Why did Kim see ASSR and I did not?  I'm not sure.  Maybe my test setup or my audio files were sufficiently different from Kim's to prevent the response.  Or, maybe I'm reading too much into his results...

In looking back at his plot with the spectrum from one of his subjects (copied earlier in this post), I see that the y-axis is a linear axis, whereas I always do dB.  What might his values look like when converted to dB?

As an example, I see that his first peak is 0.40 uV^2, relative to a baseline of about 0.30 uV^2.  Converted to dB (re: 1 uV^2), this would be -4.0 dB and -5.2 dB.  Comparing to my own spectrum plot above, where my baseline is about -10 dB, any peak at -4.0 dB should be easily seen.  Therefore, if my own response were as strong as Kim's subject's response, I would think that I would see the response in my plots.  I don't see the peak, so I guess that I didn't have the response as strongly as Kim's subject.

Perhaps the "gotcha" here is that the difference in Kim's data between the peak (-4.0 dB) and the baseline (-5.2 dB) is only 1.2 dB.  That is a really small difference.  For reliable detection, I generally like to see 6-10 dB of difference.  It might be too much to hope to reliably see only a 1.2 dB difference.
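That unit conversion is easy to check numerically:

```python
import math

def power_to_db(p_uV2):
    """Convert EEG power (uV^2) to dB re: 1 uV^2."""
    return 10.0 * math.log10(p_uV2)

peak_db = power_to_db(0.40)  # Kim's peak:     about -4.0 dB
base_db = power_to_db(0.30)  # Kim's baseline: about -5.2 dB
print(round(peak_db, 1), round(base_db, 1), round(peak_db - base_db, 1))
# → -4.0 -5.2 1.2
```

So the peak-to-baseline margin really is only about 1.2 dB.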

Next Steps

I'm not going to give up yet.  I'm going to try again.  I'm going to try using the additional EEG electrodes as used by Kim and I'm going to try to use sine-wave modulation instead of square-wave modulation.  I want to see this response!

Sunday, January 11, 2015

Estimating OpenBCI Battery Life

In OpenBCI's twitter feed a few weeks ago, I saw that someone 3D printed a belt holster for their OpenBCI system, including the battery pack.  I thought that was pretty sweet.  It also put in my mind the question of "How long might the system run on its batteries?".  So, after assembling the 16-channel OpenBCI system, I decided to measure the power draw.

Current Draw from 16-Channel OpenBCI System During Operation (no SD)

As you can see above, I used my digital multi-meter (DMM) in its "mA" setting to measure the current flow.  It doesn't matter where you measure the current; you just have to break into the power circuit somewhere and bridge the gap with the DMM.  I did it at the battery pack, because you just have to pop out one battery.  Easy.

Touch the Red Lead (Positive) to the Battery

I measured the current draw of two of the three OpenBCI versions.  As shown in the picture at the top, the 32-bit board with daisy module (ie, the 16-channel version of OpenBCI) draws about 62 mA.  The picture below shows that the 8-bit board was drawing about 40 mA.  In both cases, the boards were actively streaming their EEG data via their RFDuino BT module.  Neither was saving data to their SD cards (SD writing can be *very* power hungry).

Measuring the Current Draw for the 8-Channel 8-Bit OpenBCI Board.

So, how long might the OpenBCI system run from a set of batteries?  Well, that can be a complicated question.  I started by looking at the datasheet for Energizer AA batteries (here).  The graph below is copied from near the end of the datasheet.  It shows how the battery voltage will change as a function of time for two different loads...one called "remote" and one called "radio".  Which might be similar to OpenBCI?  Well, if the 16-channel board is pulling 62 mA, and the nominal battery voltage is 1.5V, then the effective load is (1.5/0.062) = 24 ohms.  Hey, the graph below says that the "remote" is also a load of 24 ohms!  So we can read that line directly.

Discharge Curve for Energizer AA Batteries.  The 16-channel OpenBCI board might last 26 hours.

Looking at the graph, we need to know when the batteries will no longer be able to power the OpenBCI system...when can we call the batteries "dead"?  Often, AA cells are considered dead at 1.0 or even 0.8 V.  Unfortunately, I think (I'm not sure) that OpenBCI can't run that low.  I think that it needs a 5V supply to run (though I could totally be wrong, especially if it uses a buck-boost converter).  If we assume that it needs 5V, and if we've got 4 AA cells, then each cell needs to supply at least 1.25V.  That's our threshold.

Looking at the graph above, I focus on the blue line labeled "remote" and I see when it crosses our hypothetical 1.25V threshold.  It says that it could live for 26 hours.  Wow.  That's a pretty long time.  Cool.

Remember that this lifetime is for a 62 mA current draw (ie, for the 16-channel OpenBCI system).  Pulling 62 mA for 26 hours means that we are utilizing 62 mA * 26 hrs = 1612 mA-hours of battery capacity.  For the 8-channel board, which only pulls 40 mA, that same battery capacity might allow us to run for 1612 mA-hrs / 40 mA = 40 hours.  Not bad at all!
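As a sanity check, here's that arithmetic in a few lines of Python:

```python
# Battery-life arithmetic for the two OpenBCI boards ("equivalent resistance" method).
V_CELL = 1.5        # nominal AA cell voltage (V)
I_16CH = 0.062      # measured current draw, 16-channel board (A)
I_8CH = 0.040      # measured current draw, 8-channel board (A)
HOURS_16CH = 26.0   # lifetime read off the Energizer "remote" (24 ohm) curve

r_equiv = V_CELL / I_16CH                  # ~24 ohms, matching the "remote" load
capacity_mAh = I_16CH * 1000 * HOURS_16CH  # ~1612 mA-hours of usable capacity
hours_8ch = capacity_mAh / (I_8CH * 1000)  # ~40 hours for the 8-channel board

print(round(r_equiv), round(capacity_mAh), round(hours_8ch))
# → 24 1612 40
```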

Since the battery life looks pretty good, it means that we should be able to come up with some pretty good mobile EEG hacks.  No need to stay indoors, people!  Let's get outside and freak some people out with our silly EEG headgear!

UPDATE 2015-07-10:  In the comments section, there's been some discussion regarding my "equivalent resistance" approach to estimating battery life.  As an alternative, it might be better to assume that the OpenBCI board is actually a constant-current load, rather than a constant-resistance load.  So, let's estimate the battery life using that approach.  Below is the graph from the datasheet for battery life as a function of constant current draw.

Another Method of Estimating Battery Life for the 16-Channel OpenBCI Board.

To use this graph, I start with the knowledge that the OpenBCI board draws 62 mA.  This locates me on the x-axis.  I then read up to the line corresponding to the battery voltage where my device will die.  In this case, I think that OpenBCI will die at 1.25V.  There's a curve for 1.2V.  Let's use that.  From that point, I read off the service life from the y-axis.  Allowing for some uncertainty in reading a value from a logarithmic scale, it looks like the battery life would be about 23 hours.   This value agrees decently well with the 26 hour value that I found based on my "equivalent resistance" method.  Such agreement is always satisfying.  It doesn't always work out that way.  :)

Saturday, January 3, 2015

Soldering 16-chan OpenBCI

For a while now, I've been using the 8-channel version of OpenBCI.  You can see some of my EEG data here and some of my accelerometer data here.  Recently, I've been interested in getting more EEG channels, which means that I have turned my attention to the 16-channel version of OpenBCI.  The 16-channel version consists of a single 8-channel OpenBCI board with an additional "Daisy Module" to provide the additional 8 channels.  Today, I'm going to show a few pictures of the soldering necessary to assemble these two boards into a working unit.

OpenBCI Daisy Module (Left) with OpenBCI 32-Bit Board (Right) along
with their male and female headers (bottom).

I started the assembly process by reading through the assembly instructions (with its pictures) as provided on the OpenBCI website.  Those instructions were good, though I thought that some additional illustration would be helpful to others.  Hence, the reason for today's post.

Parts and Components:  As you can see in the picture above, the OpenBCI boards themselves are fully assembled.  But, like many Arduino-style kits, you do need to solder on some pin headers in order to connect the boards together.  To make this easy, I found that the OpenBCI kit comes with the correct male pin header for the Daisy module as well as the correct collection of female headers for the base OpenBCI board.  Great!

A Trick for Soldering Headers:  I started by looking to solder the female headers to the base OpenBCI board.  Based on my experience soldering headers to various Arduino kits, I know that soldering the female headers can be annoying because it is hard to hold the header in place while your two hands are already busy holding the soldering iron and the solder.  To overcome this problem, I used a trick that I saw a while ago where you use a solderless breadboard to hold your female headers vertically in place, hands-free.  It's a pretty sweet trick.  In addition to a solderless breadboard, you need some cheap double-ended pins (see below left).  I got mine from Adafruit, but they are standard items available from a number of vendors.  I only ever use these pins for this soldering trick, so I bought them once and they've lived in my toolbox ever since.

Use double-ended extra-long pin headers along with a solderless breadboard as a trick to hold
the female headers in place.  The long pins will go into the bread board.  The short stubs off
the top of the female header get soldered into the OpenBCI board.

As seen in the picture on the right, above, you stick the extra-long headers into the female header that you are looking to solder to the OpenBCI board.  Then, as shown in the picture below, left, you stick the extra-long headers into the solderless breadboard, which leaves the short pins (which are the solderable part of the female header) sticking up in the air.

Use the extra-long pin header to hold the female header onto the solderless breadboard.
Then flip the OpenBCI board upside-down and place the OpenBCI board onto the
solder pins of the female header.  You're ready to solder those pins!

Soldering the Female Headers:  Now, you can place the OpenBCI board over those solder pins (see the right picture, above).  Note that the OpenBCI board has been flipped over so that it is face-down.  It's important that you solder the header onto the correct side of the board!  Once you have confirmed that everything is sitting correctly, you can start soldering.

Soldering each one of the pins in this header.
Then repeat for all of the other headers.

After repeating this process for all of the other female headers, the base OpenBCI board is fully prepared.

The base OpenBCI 32-bit board is finished.

Preparing the Daisy Module:  With the base OpenBCI board finished, I turned to the Daisy module.  Here, you start by using your pliers to snap apart the single, long, male pin header into the smaller pieces needed to fit into the different spots of the Daisy module.

Prepare the male pin headers for the Daisy board.  Snap the long pin header
into the correct number of pieces.

Use the Base Board as Your Fixture:  Then, as before, it can be tricky to solder these headers when your hands are full with the soldering iron and solder.  The trick this time is to use the base OpenBCI board itself as your fixture.  This is a classic trick for soldering Arduino shields.  As shown in the picture below, left, insert the male pin headers into the base board's female headers.  Do this for all of the male headers that you will solder to the Daisy module.   Once they're in place, you can simply place the Daisy module onto the pins (see below, right) and everything will be nicely aligned and ready to solder.

(Left) Insert the male pin headers into the female headers that were just soldered into
the base OpenBCI board.  This holds them in the right place.  Then, place the Daisy board
on top so that you can solder the pins into the Daisy board.

Solder the Daisy Module:  With the pins all in place, solder the headers into place.  With everything so nicely held, this part is fast!  For me, it went so quickly that I forgot to solder one of the headers into place.  Ooops!  So, I went back and soldered the remaining pins.  No problem.

Soldering the pins to the Daisy board.  Be sure to solder all of the pins on all
of the new headers (I forgot one header when I did it).

Ready for EEG:  With the last soldering complete, the two boards are mated and I'm ready to collect 16 channels of EEG.  This is gonna be fun!

I'm finished!

Follow-Up:  I measured the power draw of the system here.

Tuesday, December 2, 2014

OpenBCI Accelerometer Data

The OpenBCI V3 board does more than just EEG.  Yes, I've already shown examples of doing ECG and EOG with my old V1 and V2 boards, but the new V3 board includes an accelerometer, which the old boards did not have.  How could an accelerometer be useful?  Well, you could use it to sense orientation (or change in orientation) of the head as part of your BCI.  Or, you could use it to sense rough motion, which might suggest that you'll have motion artifacts in your EEG data.  Or, you could sense yourself tapping on the board as a way to introduce markers during your data collection.  There are many possibilities!  Today, I'm going to look at the accelerometer data for the first time.

OpenBCI V3 Board with Batteries

Goal: My goal is to record accelerometer data during known motions so that I can confirm that the data matches the motions.

Setup:  I used my OpenBCI V3 Board (see picture above) as delivered from OpenBCI.  On the OpenBCI board, I was running the same software as was shipped by OpenBCI in November, 2014.  The OpenBCI board used its wireless link to the PC.  On the PC, I ran the OpenBCI GUI in Processing.  The GUI logged the data to a file.

Procedure:  I inserted the batteries to my OpenBCI board to give it power. I started the OpenBCI GUI to begin recording data. Holding the board in my hand, I completed the following maneuvers:
  1. Start with board flat and level (z-axis points up, like in the picture at the top)
  2. Roll it 90 deg to the right (x-axis points down) and 90 deg left (x-axis points up)
  3. Tip it nose down (y-axis points down) and nose up (y-axis points up)
  4. Flip it upside down (z-axis points down)

Notice the markings on the OpenBCI board (zoomed picture below) that indicate the direction of the accelerometer's axes.

The accelerometer is the small black square towards the bottom.
Note that "X" points right, "Y" points forward, and "Z" comes up out of the board.

Data Files:  The 3-axis accelerometer data was saved to a text file by the Processing GUI.  I analyzed the data using Python. The data and analysis files are available on my EEGHacker repo on GitHub.  If you use this data, be sure to unzip the ZIP file in the SavedData directory!

Analysis:  The specific goals of this analysis are to confirm that the data is well behaved, that the correct axes are responding to the known motions, that the units are correct, and that the scale factors are correct.  I used an IPython Notebook to step through each one of these analyses.  You can see the IPython Notebook here.

Results, Data Continuity: The first thing I did was to look at the data to make sure that it was well behaved.  The most important part of being well behaved is that the data is continuous.  Looking at the packet counter in the data file (a counter which is transmitted by the OpenBCI board), there were no missing data packets.  Excellent.  I did see, however, that accelerometer data is only included in every 10th or 11th data packet.  Why?  Well, looking at the code on the OpenBCI board, it has configured the accelerometer to only produce data at 25 Hz.  So, compared to the 250 Hz sample rate for the EEG data (which then drives a 250 Hz rate for data packets), we see why we only get acceleration values every 10th or 11th packet.  It makes sense.  Good.

Results, Individual Axes: After ensuring that the data was continuous, I looked at the data values themselves.  I plotted the acceleration values as a function of time.  The plots below show the values recorded from each of the accelerometer's three axes.  As can be seen, the signals clearly reflect the maneuvers defined in my procedure.  Additionally, from these plots, we learn that negative acceleration values result when the accelerometer's axis is pointing down (relative to gravity) and positive values result when the axis is pointing up.  This polarity information is important if you wish to use the accelerometer data to estimate the orientation of the OpenBCI board.

Acceleration Values for the Accelerometer's Three Axes.
The three channels correspond to the X, Y, and Z axes.
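To show why that polarity information matters, here's a minimal sketch of estimating the board's roll and pitch from a single static accelerometer reading.  The sign conventions are my assumptions based on the polarity described above (axis pointing up reads positive), not anything verified against the board:

```python
import math

def roll_pitch_deg(ax, ay, az):
    """Estimate roll and pitch (in degrees) from one static 3-axis
    accelerometer reading, in G.  ASSUMED convention: an axis
    pointing up (away from gravity) reads positive."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

# Board lying flat, Z axis pointing up: roll and pitch are both ~0 deg.
print(roll_pitch_deg(0.0, 0.0, 1.0))
# Board upside down, Z axis pointing down: roll is ~180 deg.
print(roll_pitch_deg(0.0, 0.0, -1.0))
```

Note that a static accelerometer can only recover two of the three orientation angles; yaw (rotation about gravity) is invisible to it.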

Results, Scale Factor:  With the behavior of the 3 axes shown to be reasonable, I then wanted to confirm that the magnitude of the values was correct -- in other words, that the scale factor used for interpreting the raw values was correct.  The quickest way for me to confirm the scale factor was to compute the magnitude of the 3-axis acceleration vector.  When the device is at rest, the magnitude of the measured acceleration should equal gravity, which is 1.0 G.  As you can see below, the magnitude of our acceleration was generally close to 1.0 G (though often a little high), except when the device was moving during its transitions between positions.  This is good.

The magnitude of the 3-axis acceleration vector should equal 1.0 G when at rest.
Ours equals about 1.044 G, which is within the known offset error bounds of the device.

When I look very closely at the values, it appears that the typical reading is actually 1.044 G instead of 1.000 G.  There is a 44 mG difference.  Is this unexpected?  Well, yes, it was unexpected at first.  And then I read the datasheet.  Always look at the datasheet.  In this case, it reports that the accelerometer should have a typical offset error of 40 mG per axis.  For a 3-axis device, this could result in sqrt(40^2 + 40^2 + 40^2) = 69 mG of error on my magnitude value.  As a result, my 44 mG value appears to be in line with the device's advertised performance.  That's satisfying.
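The magnitude and worst-case offset calculations are easy to sketch in Python (the specific at-rest reading below is made up for illustration):

```python
import math

def accel_magnitude(ax, ay, az):
    """Magnitude of the 3-axis acceleration vector, in G."""
    return math.sqrt(ax**2 + ay**2 + az**2)

# Worst-case magnitude error if every axis has the datasheet's
# typical 40 mG offset (the offsets add in quadrature):
offset_per_axis_mG = 40.0
worst_case_mG = math.sqrt(3 * offset_per_axis_mG**2)
print(round(worst_case_mG, 1))  # 69.3

# An illustrative (made-up) at-rest reading, which should be near 1.0 G:
print(accel_magnitude(0.02, -0.03, 1.04))
```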

Conclusion:  With this test, I confirmed that my accelerometer is sending well-behaved data, with all three axes responding appropriately to known motions, with all axes having the correct scale factor.  These are good results and I'm pleased. Now it's time to figure out something fun to do with the accelerometer!

Sunday, November 16, 2014

My Kickstarter OpenBCI Arrived!

It's arrived!  It's arrived!  My OpenBCI Kickstarter award has arrived!  And now, the guilty pleasure of unpacking a new piece of tech...

Sure, Joel and Conor did send me an early unit to test for them, but yesterday I received my actual purchased unit.  For their Kickstarter back in January, I chose the "OpenBCI Board -- Early Bird Special".  Based on their description, I thought that I'd get just the OpenBCI board.  It turns out that I got quite a bit more!

As you can see in the center of the picture above, I got the OpenBCI board (8-bit version) as well as the USB Bluetooth dongle.  That was expected.  Looping around the outside are all the extra pieces that I didn't expect.  Starting from the left side of the picture, I got: a 4xAA battery holder, an OpenBCI sticker and sew-on patch, an OpenBCI T-Shirt, a set of electrode adapters, and two little bags of solderable female headers (for expanding the functionality of the OpenBCI board and of the dongle).  That's some good stuff!

Thanks, OpenBCI!

Sunday, November 2, 2014

Two Brains - One Robot

After my success with sharing the brain-controlled hex bug with Conor and Joel, we brainstormed on how we could make this hack even more fun.  We decided that the main problem with this hack is that only one person gets to participate -- the person driving the robot.  The solution?  Let's hook up multiple people at the same time to control the one robot.  It'll be like a three-legged race, where you tie your leg to the leg of another person, and then you stumble together in slapstick hilarity until you both get to the finish line.  We are going to do the same thing, but with brain-controlled robots.  Here's how far we've gotten...

The Plan:  Our goal is to have multiple people control one robot via their brain waves.  To do this, we aimed to connect multiple people to a single OpenBCI board.  I have never connected multiple people to one EEG system before, so this was pretty exciting for me.  As shown in the figure below, the idea is that each player is responsible for just one of the robot's actions -- one player is responsible for "Turn Left", another for "Turn Right", etc.  Since the robot has four actions (Left, Right, Forward, Fire), we can have up to four players.

The Hexbug robot has four commands (Left, Right, Forward, Fire), so for multi-player fun,
connect four people to one OpenBCI board and work cooperatively!

Commanding the Robot:  In setting up this hack, I wanted to make it as easy as possible for the players to command the robot with their brain waves.  The easiest brain waves to generate and the easiest brain waves to detect are Alpha rhythms (i.e., 10 Hz oscillations), specifically the Alpha rhythm that naturally occurs when you close your eyes.  So, with the setup above, we have the computer looking for Alpha waves in each person's EEG signal.  If the computer sees Alpha waves from Player 1, the computer issues a "Turn Left" command to the robot.  If the computer sees Alpha waves from Player 2, it issues a "Forward" command.  And so on...
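The detection loop can be sketched like this toy Python example.  The band edges, threshold, and player-to-command mapping below are illustrative assumptions (the actual detection ran in my modified Processing GUI):

```python
import numpy as np

def band_power(x, fs, f_lo=7.5, f_hi=12.5):
    """Mean power in the Alpha band via a simple FFT periodogram.
    ASSUMPTION: 7.5-12.5 Hz band edges; the real GUI may differ."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Illustrative mapping of player (channel index) to robot command.
COMMANDS = {0: "TURN LEFT", 1: "FORWARD", 2: "TURN RIGHT", 3: "FIRE"}

def detect_commands(eeg, fs, threshold):
    """eeg: array of shape (n_players, n_samples).  Return the command
    of every player whose Alpha power exceeds the threshold."""
    return [COMMANDS[ch] for ch in range(eeg.shape[0])
            if band_power(eeg[ch], fs) > threshold]

# Simulate one second of 4-player EEG at 250 Hz: Player 2 (index 1)
# has eyes closed, producing a strong 10 Hz Alpha rhythm; the other
# three channels are just low-level noise.
np.random.seed(0)
fs = 250
t = np.arange(fs) / fs
eeg = 0.1 * np.random.randn(4, fs)
eeg[1] += np.sin(2 * np.pi * 10 * t)
print(detect_commands(eeg, fs, threshold=1.0))  # ['FORWARD']
```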

EEG Setup:  To detect these eyes-closed Alpha waves, we put one electrode on the back of a player's head over the visual cortex (position "O1" in the 10-20 system).  We put another electrode on one ear lobe to act as the EEG reference.  Finally, we put a third electrode on the other ear lobe to act as the EEG Bias.

Individual Reference:  To allow each person to use their own reference electrode, we configured the software on the OpenBCI board to put the ADS1299 EEG chip into per-channel differential mode.  Unlike our normal mode of operation, which uses a common reference electrode via SRB1 or SRB2, this differential mode allows each channel (i.e., each player) to have its own reference.  This is what we want!  We simply plug the O1 electrode into the channel's "P" input and the ear lobe reference into the channel's "N" input.

Common Bias:  The only tricky part is that we want all four players to be connected to the OpenBCI Bias.  This is tricky because the OpenBCI board does not have four Bias pins.  Well, as you can see below, all it takes is a soldering iron and you can connect a piece of pin header to turn the single Bias pin into four Bias pins.  Now we're hacking!

OpenBCI V3 Board With Extra Pins Soldered to the Bias Output

Connecting the Pieces:   The picture below shows all the connections to the OpenBCI board assuming three players.  On the lower left, we've got three pairs of wires (one pair for each player) plugged into the "P" and "N" inputs of three different channels.  Then, in the upper-left, you see three wires plugged into three of the four new Bias pins.  Finally, in the upper-right, you see five wires that go off to command the hacked Hexbug remote control.

OpenBCI Board with Connections Ready for Three Players

Making It Happen:  Since it's a rare thing that Joel, Conor, and I are all together, it was really fun that we could work together to make this hack happen.  Joel worked the soldering iron to attach the pins and he modified the Arduino code running on the OpenBCI board to enable the per-channel differential mode.  Conor further modified the Arduino code as well as the Processing GUI to enable slower turning of the robot (originally, it was turning WAY too fast).  Then, I modified the Processing GUI to enable Alpha detection on the four individual players.  We did all this in parallel.  I'd never really done group-hacking before.  It was definitely fun.

Conor and Joel working through the details of connecting the Hexbug remote control.

Testing It:  Once we pulled together all of the pieces, Conor and I began to test the complete setup (see pic below).  After a little tweaking, we got the whole system working, as shown in the video at the top of this post.  It was a group effort that worked out.  Pretty sweet.

Conor and Chip -- Two Brains, One Robot.

Breaking Robots:  So our original vision was to get this hack working so that we could have *two* 4-person teams, with each team controlling their own robot.  Luckily, we had multiple robots and multiple OpenBCI boards, so we thought that we could make it happen.  Unfortunately, as soon as Conor and I made our video, the robots started to break.  They don't like being stuffed in suitcases, I guess.  So, we were left with just one working robot.  Bummer.

Recruiting a Team:  At the AF LabHack, there were lots of folks doing their own hacking.  By the time we got our system working (with the one healthy robot), the other teams were scrambling to get their last results prior to presenting to the group...so we had a tough time recruiting volunteers to be part of a robot-control team.  In the short time we had left, we did get three enthusiastic folks to step up.  We got them all equipped with EEG electrodes, tuned the system a bit, and let them play!

Our Fine Volunteers.  Three Brains, One Robot.  

No Video:  At this point, we should be presenting a triumphant video.  Unfortunately, we don't have one.  If we did, what you'd see is that two of the three players could easily and repeatably use their eyes-closed Alpha waves to command the robot.  It was cool to see.

No Alpha:  The third player, though, did not have much luck controlling his part of the robot.  At first, I assumed that it was a problem with our system, but after a little debugging, I came to the conclusion that his brain simply wasn't generating eyes-closed Alpha.  He could have been trying too hard (you must be relaxed, without concentrating or being overly focused), or he could have been part of the 11% of the normal, healthy population that simply does not generate Alpha upon closing their eyes [Ref 1].  For these folks, I've got to come up with an alternate robot-control methodology...perhaps by the concentration signature of counting-backwards-by-three.

Next Steps:  The next steps are clear -- I have to get a bunch of people together, hook them up, and enjoy the shenanigans of many brains trying to control a single robot.  Should be fun!

Ref [1]: Gibbs FA, Gibbs EL, Lennox WG. Electroencephalographic classification of epileptic patients and control subjects. Arch Neurol Psychiatry. 1943;50:111–28, as referenced by http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3927247/

Follow-Up:  We used a similar approach to get a 5-person team to brain-control a swimming shark balloon.  It's cool.  Check it out here.