Sunday, January 25, 2015

Brain Got Beats -- Not Yet

I like controlling things with my mind.  That's why I do this brain-computer interface (BCI) thing.  The tough part of BCIs, though, is finding brain signals that are simple enough for the computer to detect, yet are also something that I can consciously control.  So far, I can do eyes-closed Alpha waves, concentration-controlled Beta/Gamma, and steady-state visual evoked potential (SSVEP).  I need more options.  Today, I'm going to try to do auditory steady-state response (ASSR).  Or, more colloquially, does my brain got beats?

Can I use beating tones to entrain brainwaves?

Auditory Steady-State Response (ASSR)


The idea with ASSR is that we are looking for EEG signals from my brain that are driven by sounds presented to my ears.  When doing an ASSR, you use an audio tone whose amplitude is varied ("modulated") at a fixed rate such as 40 Hz.  Then, when you play that sound in your ears, you look in the EEG signals for a strong 40 Hz component.  Easy, eh?
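To make that concrete, here's a minimal sketch (not from my actual analysis; synthetic data stands in for real EEG) of how a 40 Hz driven component would pop out of a spectrum:

```python
import numpy as np

fs = 250                          # sample rate in Hz (OpenBCI's default)
t = np.arange(0, 20, 1.0 / fs)    # 20 seconds of samples

# Synthetic "EEG": background noise plus a small 40 Hz component,
# standing in for an auditory steady-state response.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Amplitude spectrum via FFT
spec = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

# Find the strongest component above the low-frequency (Alpha) region;
# the 40 Hz "beat" should stand well clear of the noise floor.
mask = freqs > 15
f_peak = freqs[mask][np.argmax(spec[mask])]
print(f_peak)
```

With 20 seconds of data, the FFT bins are only 0.05 Hz apart, so the detected peak lands right on the stimulation rate.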

Note that this is very similar to the steady-state visual evoked potential (SSVEP) that I used previously, where I'd make my computer screen blink at 8 Hz and 8 Hz signals would appear in my EEG.

Attention-Based ASSR?


If I want to use ASSR for a brain-computer interface (ie, for controlling robots!), there needs to be some way to consciously control my response to the sound.  For the SSVEP, where stimulation was my blinking computer screen, my response was much stronger if I consciously paid attention to the blinking screen.  This attention-based response was the key to being able to exploit it for a BCI.

Does ASSR have a similar attention-based component?  Until yesterday morning, I didn't know.  But then I came across this paper:  Do-Won Kim et al.  "Classification of selective attention to auditory stimuli: Toward vision-free brain–computer interfacing".  Journal of Neuroscience Methods 197 (2011) 180–185.  PDF here.

Kim's ASSR Setup


In the paper by Kim, they used two loudspeakers to present tones to the test subject.  The setup is shown below.  The subjects were sitting down in a comfy chair listening to the tones while wearing a small montage of EEG electrodes (Cz, Oz, T7, T8, ref at left mastoid, ground at right mastoid).

Test Setup as used by Kim (2011) for Evoking Auditory Steady-State Response (ASSR)

For the audio tones, they used a 2500 Hz tone from one speaker and a 1000 Hz tone from another speaker.  The key feature of ASSR, though, is the modulation of these tones.  For one of the tones, they varied the amplitude of the tone (ie, they alternately made it quiet and loud) at a rate of 37 Hz, while the other tone they modulated at a rate of 43 Hz.  These frequencies are the "beat rates" for the audio.  It is the 37 Hz or 43 Hz beat rate that they are looking for in the EEG (hence, "brain got beats?").

Below is what they saw in the EEG signals (Cz) for one of their subjects when the subject gave their attention to the 37 Hz modulated signal (red) or the 43 Hz modulated signal (blue).  There is clearly a difference.  This makes me happy.  This is what I want to recreate with my own testing.

Spectral Results for One Subject from Kim (2011) In Response to Steady-Pitch
Tones that were Amplitude Modulated at 37 Hz or 43 Hz.

My Test Setup


I want to recreate their results.  I'm going to create some audio files with the amplitude modulated signals, I'm going to play them into my ears via headphones, and I'm going to record my EEG signals (OpenBCI!) to look for my ASSR.

EEG Setup:  Reading more details from the paper, they said that they got the strongest response from the electrode at Cz, so I decided to start there.  I put one electrode at the top of my head (Cz) with the reference on my left ear lobe and the OpenBCI "bias" on my right ear lobe.  I used the gold electrodes and the Ten20 EEG paste that came with the OpenBCI kit.  Without really trying, I happened to get an electrode impedance of 20-30 kOhm at both Cz and at the reference, which are probably good enough.

My EEG Setup, Cz Only.  Also, unlike Kim, I used ear buds (headphones)
instead of loudspeakers to present my tones.

OpenBCI EEG System:  For this test, I happened to use my 16-channel OpenBCI system.  I'm only using one channel of EEG data, though, so I could have used the 8-channel systems (or even other systems, like OpenEEG) just as well.  I wired up my OpenBCI unit as shown below.  Starting from the left, the white wire is the "bias" (aka, driven ground) going to my right ear lobe, the brown wire is the electrode at the top of my head, and the black wire is the reference electrode on my left ear lobe.  Note that they are all plugged into the lower row of pins (the "N" inputs) on the lower board.  The system is being powered by four AA batteries and is sending its data wirelessly back to the PC.  I'm using the OpenBCI GUI in Processing.

Here's How I Plugged into the OpenBCI Board.

Audio Files:  I created my audio files in Audacity.  I created two sets of files, based on the frequencies used in the Kim paper: one set of files using a 1000 Hz tone and the other set using a 2500 Hz tone.  The Kim paper said that the strongest ASSR generally occurs for a beat frequency of 40 Hz.  I wanted to see my response at different beat frequencies, so for each tone I created three versions: one beating at 38 Hz, one at 40 Hz, and one at 42 Hz.  I made each version 20 seconds long.  I used a square wave (ie, on/off) amplitude modulation, though next time I might try sine wave modulation instead.

I Created My Amplitude-Modulated (AM) Test Tones in Audacity.  First, "generate" the
tone.  Then, to do the AM, go under "Effect" and select "Tremolo". 
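If you'd rather script the tones than click through Audacity, here's a rough Python equivalent (my sketch, not what I actually ran; the file names are made up, and I'm approximating Audacity's square-wave Tremolo as full-depth on/off gating):

```python
import numpy as np
from scipy.io import wavfile

fs = 44100                            # audio sample rate (Hz)
dur = 20.0                            # seconds, matching my Audacity files
t = np.arange(int(fs * dur)) / fs

def am_tone(carrier_hz, beat_hz):
    """A pure tone with square-wave (on/off) amplitude modulation."""
    tone = np.sin(2 * np.pi * carrier_hz * t)
    gate = (np.sin(2 * np.pi * beat_hz * t) > 0).astype(float)  # on/off at the beat rate
    return 0.8 * tone * gate          # leave a little headroom

# One file per carrier/beat combination, as in my test
for carrier_hz in (1000, 2500):
    for beat_hz in (38, 40, 42):
        x = am_tone(carrier_hz, beat_hz)
        wavfile.write("tone_%dHz_beat%dHz.wav" % (carrier_hz, beat_hz),
                      fs, (x * 32767).astype(np.int16))
```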

Data and Analysis Files:  My audio files, my data files, and my analysis files are all on my GitHub here.  Note that I did my analysis using an IPython Notebook (see it here).  My specific Python installation is described here.
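The heart of the notebook is just a spectrogram of the single Cz channel.  A stripped-down sketch (random numbers stand in for the recorded data; loading the actual OpenBCI log file is left out):

```python
import numpy as np
from scipy import signal

fs = 250                              # OpenBCI sample rate (Hz)
# Stand-in data; in the real notebook this is the recorded Cz channel in uV.
eeg = np.random.randn(fs * 120)       # two minutes of fake "EEG"

# Spectrogram: 1-second FFT windows with 50% overlap
f, t_spec, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=fs, noverlap=fs // 2)
Sxx_dB = 10.0 * np.log10(Sxx)         # power in dB, as in my plots

# An ASSR would show up as horizontal stripes of elevated power
# in the rows nearest 38, 40, and 42 Hz.
row_40 = Sxx_dB[np.argmin(np.abs(f - 40.0)), :]
```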

My ASSR Response


My goal is to see if I exhibit the ASSR response with this test setup.  To do the test, I wired myself up as discussed above, I queued up all six audio files (the three at 1000 Hz followed by the three at 2500 Hz), put in my ear buds, and started recording.

Eyes Closed:  The spectrogram below shows my Cz EEG signal when I did this test with my eyes closed.  That strong red stripe at 10 Hz is my Alpha response simply due to having my eyes closed.  What I do not see here are horizontal stripes of energy at 38, 40, or 42 Hz.  In other words, I do not see any brain waves entraining with the audio stimulation.  This is disappointing.

Spectrogram of EEG Signal from Cz with AM Auditory Stimulation Near 40 Hz.
My eyes were closed, hence the strong response at 10 Hz.
There is no signature of the 38-42 Hz AM Audio Stimulation.

Eyes Open:  I also performed this test with my eyes open.  A spectrogram of my EEG signal at Cz is shown below.  I started and ended the test with my eyes closed for 10 seconds, which you can see as 10 Hz Alpha waves at the start and end.  What I really want to see, though, is something corresponding to the audio stimulation at 38 Hz, 40 Hz, or 42 Hz.  Again, I see nothing.

Spectrogram of EEG Signal from Cz with AM Auditory Stimulation Near 40 Hz.
My eyes were open, except at the beginning and end.
There is no signature of the 38-42 Hz AM Audio Stimulation.

Average Spectrum:  To most closely mimic the plot from the Kim paper (ie, the graph that I copied earlier), I plotted the average spectrum.  In the Kim plot, there were clear peaks at his two beat frequencies (37 and 43 Hz).  In my equivalent plot below, there are no peaks at the three beat frequencies that I studied (38, 40, and 42 Hz).

Mean Spectrum During the Test Period.  There is no evidence of my brain waves entraining
with the 38, 40, and 42 Hz AM auditory signals.  Bummer.
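For reference, that mean-spectrum plot is essentially Welch's method (average the spectra of many short windows).  A sketch with stand-in data rather than my recording:

```python
import numpy as np
from scipy import signal

fs = 250                              # OpenBCI sample rate (Hz)
eeg = np.random.randn(fs * 60)        # stand-in for the recorded Cz signal (uV)

# Welch's method: mean of the spectra of 1-second windows
f, psd = signal.welch(eeg, fs=fs, nperseg=fs)
psd_dB = 10.0 * np.log10(psd)         # dB re: 1 uV^2/Hz

# With a real ASSR, these values would stand above the nearby baseline:
for beat_hz in (38, 40, 42):
    idx = np.argmin(np.abs(f - beat_hz))
    print("%d Hz: %.1f dB" % (beat_hz, psd_dB[idx]))
```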

Conclusion:  So, it is clear that I did not see any ASSR in my EEG recordings.  This is very disappointing to me.

Comparison to Kim


Why did Kim see ASSR and I did not?  I'm not sure.  Maybe my test setup or my audio files were sufficiently different to prevent the response.  Or, maybe I'm reading too much into his results...

In looking back at his plot with the spectrum from one of his subjects (copied earlier in this post), I see that the y-axis is a linear axis, whereas I always do dB.  What might his values look like when converted to dB?

As an example, I see that his first peak is 0.40 uV^2, relative to a baseline of about 0.30 uV^2.  Converted to dB (re: 1 uV^2), this would be -4.0 dB and -5.2 dB.  Comparing to my own spectrum plot above, where my baseline is about -10 dB, any peak at -4.0 dB should be easily seen.  Therefore, if my own response were as strong as Kim's subject's response, I would think that I would see the response in my plots.  I don't see the peak, so I guess that I didn't have the response as strongly as Kim's subject.

Perhaps the "gotcha" here is that the difference in Kim's data between the peak (-4.0 dB) and the baseline (-5.2 dB) is only 1.2 dB.  That is a really small difference.  For reliable detection, I generally like to see 6-10 dB of difference.  It might be too much to hope to reliably see only a 1.2 dB difference.
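Checking that arithmetic:

```python
import math

peak_uV2 = 0.40        # Kim's ~37 Hz peak, eyeballed from his figure
baseline_uV2 = 0.30    # the nearby baseline

peak_dB = 10.0 * math.log10(peak_uV2)          # about -4.0 dB re: 1 uV^2
baseline_dB = 10.0 * math.log10(baseline_uV2)  # about -5.2 dB
contrast_dB = peak_dB - baseline_dB            # only about 1.2 dB of contrast
print(round(peak_dB, 1), round(baseline_dB, 1), round(contrast_dB, 1))
```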

Next Steps


I'm not going to give up yet.  I'm going to try again.  I'm going to try using the additional EEG electrodes as used by Kim and I'm going to try to use sine-wave modulation instead of square-wave modulation.  I want to see this response!

11 comments:

  2. Hi Chip. I am a computer technician by trade and just beginning to delve into BCI computing. I'm really interested in the implications there may be regarding Cymatics. I haven't studied Kim's paper independently, but I noticed that the results graph for his experiment shows a spike at 37Hz and 43Hz for *both* signals (although lower output at the higher frequency). It makes me wonder if both signals were being played simultaneously while merely asking the subject to *focus* on one signal over the other? With both signals playing at once, perhaps the brain coalesced the two signals, hence reinforcing the signal output?

    Just a thought.

    1. Thanks for checking out my work!

      Yes, in the Kim paper, they were playing both tones simultaneously. Both would appear in the subject's EEG. When the test subject focused on one of the two tones, the strength of that frequency would increase in the EEG.

      With the experiment above, I was merely trying to see the presence of either tone in my EEG...without even taking the next step of seeing if my attention would strengthen one of them. I saw nothing in my EEG.

      I later repeated this experiment using simultaneous tones, like in the Kim paper. There was still no sign of the audio stimulation in my EEG signals. Bummer.

      Chip

  3. Hey Chip! Stumbled upon your blog and now have a new obsession. I've been messing around with Arduino for a few months now, have years of programming experience and am absolutely fascinated by biofeedback technology. How do you recommend I get started? The openBCI is soldout and I couldn't find older versions for sale anywhere. I was considering the Olimex OpenEEG but don't know if it will be useless to do anything fun and from what I understand Neurosky is even less precise. Are there alternatives? Know where I could find an old openbci like v2 or v1 even? Thanks for any response.

    1. Bummer that they are sold out. You should go to the OpenBCI forum and post this question. There's a guy there, wjcroft, who'll probably have some excellent recommendations.

  4. Hi Chip, great post. One thing I wonder about Kim's experiment and yours: perhaps Kim had the subjects listen to the sounds for a long duration? Like 15-30 minutes? I've read that for the brain to "respond" to these outside frequencies it takes some time. Happy brain-hacking!

  5. hey Chip! I really enjoy reading your posts
    and I was wondering: did you try using binaural beats to produce e.g. a 5Hz component (400Hz left ear, 405Hz right ear) ?
    Because, if you Can measure a reliable binaural-beats response, then maybe you can try the following to make a BCI:
    Play 400Hz + 800Hz in left ear
    Play 405Hz + 808Hz in right ear
    So you can then first try focus on the lower tone (400/405)
    and then on the higher tone (800/808) and see if you can alter the spectrogram, just as you did with SSVEP :)
    Cheers!

    1. Correction: I wrote a relatively high frequency of 800Hz in above post, but it might be a better idea to go with 500 (or something), since one source says that "binaural beats are best perceived at lower pitches and are best observed at about 440 Hz. (Binaural beats using carrier frequencies above 900 Hz are typically not noticeable.) " (source: http://mindalive.com/index.cfm/technology/entraining-tones-and-binaural-beats/ )

    2. Also, I wonder if frequencies of ~40Hz used in your experiment were too high? Maybe it would work for lower frequencies like <10Hz ?

      bko

  6. Hey Chip, how are you? What happened with the blog? How is your research going now? I'm Brazilian and I'm working on a homemade, low-cost EEG. Your post on homemade electrodes has helped me so much. Thanks for sharing your knowledge. Keep working and posting here for us. Your work helps a lot of people. Thank you!

  7. I have used the Emotiv Epoc, as I own one, and what you're trying to do is honestly simple with that device. Perhaps you're training wrong with your device. I learned many ways to train myself. Contact me at overgrouth@hotmail.com; I would love to help you with your project.
