Open Access

Automatic single-trial discrimination of mental arithmetic, mental singing and the no-control state from prefrontal activity: toward a three-state NIRS-BCI

BMC Research Notes 2012, 5:141

Received: 7 October 2011

Accepted: 13 March 2012

Published: 13 March 2012



Near-infrared spectroscopy (NIRS) is an optical imaging technology that has recently been investigated for use in a safe, non-invasive brain-computer interface (BCI) for individuals with severe motor impairments. To date, most NIRS-BCI studies have attempted to discriminate two mental states (e.g., a mental task and rest), which could potentially lead to a two-choice BCI system. In this study, we attempted to automatically differentiate three mental states - specifically, intentional activity due to 1) a mental arithmetic (MA) task and 2) a mental singing (MS) task, and 3) an unconstrained, "no-control (NC)" state - to investigate the feasibility of a three-choice system-paced NIRS-BCI.


Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations while 7 able-bodied adults performed mental arithmetic and mental singing to answer multiple-choice questions within a system-paced paradigm. With a linear classifier trained on a ten-dimensional feature set, an overall classification accuracy of 56.2% was achieved for the MA vs. MS vs. NC classification problem and all individual participant accuracies significantly exceeded chance (i.e., 33%). However, as anticipated based on results of previous work, the three-class discrimination was unsuccessful for three participants due to the ineffectiveness of the mental singing task. Excluding these three participants increases the accuracy rate to 62.5%. Even without training, three of the remaining four participants achieved accuracies approaching 70%, the value often cited as being necessary for effective BCI communication.


These results are encouraging and demonstrate the potential of a three-state system-paced NIRS-BCI with two intentional control states corresponding to mental arithmetic and mental singing.


Many individuals with severe and multiple motor disabilities cannot communicate through the conventional avenues of speech and gesture. Many such individuals may also lack sufficient motor control to operate common movement-based access devices (e.g., mechanical switches, eye-trackers) [1]. Brain-computer interface (BCI) technologies are controlled through brain activity alone, and may provide these individuals with an alternative, movement-free means of access [2]. Near-infrared spectroscopy (NIRS) is an optical imaging technology that has recently been investigated as a safe, non-invasive brain response measurement technology for potential use in BCI applications [3-6]. NIRS can be used to assess functional activity in the cerebral cortex via measurement of the haemodynamic response (see [7] for description of fundamental principles). NIRS offers a number of advantages for BCI applications compared to the more frequently studied electroencephalography (EEG), such as insensitivity to electrophysiological artefacts (e.g., EMG, EOG) and, when monitoring regions not covered by hair (e.g. prefrontal cortex), much faster and easier sensor placement (e.g., no need for electrode gel [8]). Note that NIRS optode placement in areas covered by hair is much more difficult, and may take considerably longer to achieve adequate signal-to-noise ratio.

Generally, a user controls a BCI output by consciously eliciting distinct, reproducible patterns of activation in a particular brain region. This is usually done by performing different mental tasks, such as motor imagery [3, 4], mental arithmetic [5, 6, 9-12], mental singing [5, 6] and verbal tasks [10, 11]. The system then detects and interprets these patterns of activity, and produces the appropriate command signal to control a connected external device (e.g., computer cursor) in the way the user intended.

For the most part, previous research has investigated the development of NIRS-BCIs operating under synchronous control paradigms. Under synchronous control, the system evaluates the user's brain activity for control (i.e., is vigilant) only during certain periods defined by the system, and users must exert intentional control over their brain activity (i.e., generate what we refer to as an "intentional control (IC) state") during each and every one of these "system-vigilant" periods. Though functional, the need for such frequent mental state control is mentally demanding for the user. An attractive alternative to the synchronous control paradigm is "system-paced" control, a paradigm proposed by Mason et al. [13], in which users are required to intentionally control their brain activity only during the system-vigilant periods in which they actually wish to affect the BCI output, and can remain in a more natural, "no-control (NC)" state at all other times. This paradigm can be considered an intermediate step between synchronous control and the "ideal" asynchronous, or self-paced, paradigm. See [14] for a full discussion of the different BCI control paradigms and their implications for NIRS-BCI.

In a previous study, we investigated the feasibility of a system-paced NIRS-BCI with one IC state corresponding to the performance of either mental arithmetic (MA) or mental singing (MS) [14]. These tasks were chosen as they had both been previously shown to elicit activation in the prefrontal cortex [5, 6, 9, 10, 12, 15-19]. The precise conditions that induce prefrontal activation during mental arithmetic are not well understood [20], but could be associated with working memory [21, 22], mental stress [23, 24], or other general cognitive operations that are instrumental, but not specific, to mental arithmetic [21, 25]. Music is known to elicit intense emotional responses [26, 27] that activate brain regions believed to be associated with emotional behaviors, including the prefrontal cortex [28] and specifically, the orbitofrontal and frontopolar areas [29, 30]. We found that both mental arithmetic and music imagery could be automatically distinguished from the NC state with average accuracies of 71.9% and 63.1%, respectively, across participants. Though the overall classification result achieved for the MS vs. NC classification problem was lower than that achieved for MA vs. NC, it is important to note that large inter-participant variation was observed for the former task. High accuracies for the MS vs. NC problem, close to or even exceeding the corresponding MA vs. NC results, were achieved for three of seven participants (> 70%). For one other participant, the maximum accuracy achieved for MS vs. NC (63%) was lower than for MA vs. NC, but still significantly above chance. The results for the remaining three participants, however, were all below chance for the MS vs. NC problem.
We believe that the greater inter-participant variability was due to the more subjective nature of the mental singing task, in which participants mentally rehearsed self-selected musical pieces and were instructed to try to feel the emotion elicited by the song (it has been suggested that incorporating this self-monitoring element in an emotional induction task can result in an increase in the prefrontal hemodynamic response as compared to more passive emotional tasks [31]). Some individuals may have been able to do this more consistently/effectively than others. It is possible that, if given neurofeedback of their response, even those participants for whom mental singing was ineffective in this study could learn to evoke a detectable response. Collectively, these results are encouraging, and demonstrate the potential of a system-paced NIRS-BCI with one IC state corresponding to either mental arithmetic or mental singing, but suggest that mental singing may not represent a suitable IC state for all users.

In the present study, we wish to expand on our previous results and investigate the feasibility of a system-paced NIRS-BCI with two IC states corresponding to mental arithmetic and mental singing. This is desirable because increasing the number of states recognized by the system increases the functionality/information transfer rate of the BCI. For example, a system-paced BCI with one IC state recognizes two different states (i.e., one IC state and the NC state) and thus allows for a two-choice system (e.g., IC = "yes" and NC = "no"). The addition of a second IC state increases the number of recognized states to three (i.e., the two IC states and the NC state). This in turn allows for a three-choice system (e.g., ICa = "yes", ICb = "no" and NC = "choosing not to respond"). By increasing the number of recognized states, one increases the number of distinct messages the user can convey.

This is the first NIRS-BCI study to attempt single-trial classification of more than two intentionally- and autonomously-generated mental states (i.e., not dependent on external prompting, and thus suitable for active BCI control). More specifically, this is the first attempt at distinguishing two IC states corresponding to two different cognitive tasks - mental arithmetic and mental singing - and an explicit NC state. We expect to achieve promising classification results in the cases for which both MA and MS are individually distinguishable from the NC state [14].



Seven able-bodied adults (two male, mean age = 25.7 ± 3.1 years) were recruited from the students and staff at Holland Bloorview Kids Rehabilitation Hospital (Toronto, Canada). Individuals were excluded from participation if they had any condition that could adversely affect either the measurements or their ability to follow the experimental protocol. Ethical approval was obtained from Holland Bloorview Kids Rehabilitation Hospital and the University of Toronto. All participants provided signed consent.


Signals were acquired using a multichannel frequency-domain NIRS instrument (Imagent Functional Brain Imaging System from ISS Inc., Champaign, IL). Ten NIR sources and three photomultiplier tube detectors were secured against the participant's forehead using a flexible headband, as shown in Figure 1. The ten sources were grouped into five pairs, each containing one 690 nm and one 830 nm source, so that each location could be probed by the two wavelengths concurrently. The headband was placed on the participant's forehead such that the bottom row of optodes sat just above the eyebrows, and the center column of optodes was in line with the nose. Nine locations within a 27 cm2 trapezoidal area were probed, as shown in Figure 1. In the given configuration, we considered only signals arising from source-detector pairs (henceforth referred to as "channels") with a separation of 3 cm, which is generally accepted to be the ideal source-detector separation for measuring cortical haemodynamics [32]. This yielded a total of 18 channels (i.e., 3 detectors × 3 source-pairs per detector × 2 wavelengths per source-pair). Data were sampled at 31.25 Hz.
Figure 1

Source-detector configuration. Each open circle represents a source-pair comprising one 690 nm and one 830 nm source fibre, while each solid circle represents a detector. Only the source-pair/detector combinations with a separation of 3 cm were considered. "X" denotes a point of interrogation. "*" denotes the approximate FP1 and FP2 positions of the International 10-20 System.

Positioning the headband to achieve adequate coupling of the optodes to a participant's forehead generally took approximately 5-10 min.

Intentional control states - mental arithmetic and music imagery

For the mental singing task, participants silently rehearsed self-selected musical pieces that they felt would elicit within them a strong, positive emotional response. They were instructed to make an effort to feel the emotion of the song, rather than just passively recite the lyrics or tune.

For the mental arithmetic task, participants performed a sequence of simple mathematical calculations beginning with the subtraction of a small number (between four and thirteen) from a three digit number, and continued throughout the task interval with successive subtractions of the small number from the result of the previous subtraction (e.g., 753-13 = 740, 740-13 = 727, 727-13 = 714, etc.). The calculation the participant was to perform during a given system-vigilant period was displayed on the screen. A different calculation was given for each system-vigilant period of a given session.

Experimental protocol

Each participant completed three experimental sessions which were conducted on different days. During each session, participants performed a total of 32 trials. In each trial participants were visually presented with a question and three possible responses. The three choices were highlighted in sequence for periods of 20 s each. These 20 s periods constituted the system-vigilant periods of the system-paced paradigm, and were separated by 12 s intervals (to allow hemodynamics to return to the no-control/baseline state after activation). The timing of an example trial is shown in Figure 2. Within a given session, no question was repeated. The same 32 questions were used in the three different sessions, but the order was randomized for each.
Figure 2

Example trial stimulus sequence and timing diagram. In this example, the participant would enter the IC state during intervals 2) and 3) - to select responses A and B - and would remain in the NC state at all other times. The task cue at the bottom of the display indicates that this is a mental arithmetic trial. Note that at the end of each trial, when participants were asked to explicitly verify which answer(s) they selected, they also gave a rating, on a scale of 1-5, of how engaged they felt they were in the task during the trial. These data were used for verification purposes only and were not used in the quantitative analysis.

Participants were instructed to answer the questions by eliciting the indicated IC state (i.e., MA or MS) throughout the intervals in which their desired response(s) were highlighted. There was not necessarily a single correct answer; there could be one, two, three or no correct answers. During the intervals in which they did not wish to make a selection, participants were not required to control their mental activity in any particular way, but rather were told to allow natural thought patterns to occur without restriction. This represented the "no-control" state. To ensure that the data during the system-vigilant periods could be properly labeled as MA, MS or NC, we included only questions with obvious answers (see example in Figure 2) and participants were asked to explicitly verify their selection(s) at the end of each trial. NIRS data were not recorded during this verification period. The protocol was designed such that 72 MA, 72 MS, and 144 NC periods were recorded across the three sessions.

Note that this multiple choice question paradigm is not the ideal application for a system-paced NIRS-BCI with two IC states (i.e., a three-state system). This protocol was designed primarily to facilitate investigation of a system-paced BCI with one IC state (i.e., a two-state system) [14], thus the selected application reflects this. A more suitable application for a three-state system would be one in which the participant could select one of three (rather than one of two) different options during each system-vigilant period; for example, yes/no questions with the possible choices of "yes", "no" and "choosing not to respond". However, since the objective in this work is simply to determine if the three mental states can be automatically differentiated from one another on a single-trial basis, the use of these data is justified.

NIRS data pre-processing

For each trial, each of the 18 signals (i.e., 2 wavelengths at 9 interrogation locations) was first normalized by its own mean and standard deviation in order to account for inter-trial differences in sensor coupling due to removal of the headband between and (at the participant's request) within sessions. The signals were also linearly detrended to mitigate any effects of instrumentation-related drift. The 20 s system-vigilant periods, of which there were three per trial, were then extracted and grouped into MA, MS and NC samples.
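As a concrete sketch of this pre-processing (assuming trials are stored as samples × channels arrays; the 18-channel layout matches the instrumentation described above, but the function names and the onset times used for slicing are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.signal import detrend

FS = 31.25  # sampling rate (Hz)

def preprocess_trial(trial):
    """Normalize each channel by its own mean and standard deviation,
    then linearly detrend to mitigate instrumentation-related drift.

    trial: array of shape (n_samples, 18) -- raw light intensities,
    2 wavelengths x 9 interrogation locations.
    """
    z = (trial - trial.mean(axis=0)) / trial.std(axis=0)
    return detrend(z, axis=0)

def extract_vigilant_periods(trial, onsets_s=(0.0, 32.0, 64.0), duration_s=20.0):
    """Slice the three 20 s system-vigilant periods out of one trial.

    The onset times here are illustrative: three 20 s periods separated
    by 12 s rest intervals, per the protocol; the actual offsets within
    a recorded trial may differ.
    """
    n = int(round(duration_s * FS))
    starts = [int(round(t * FS)) for t in onsets_s]
    return [trial[s:s + n] for s in starts]
```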

The raw normalised light intensity signals for each system-vigilant period were low-pass filtered in order to mitigate physiological noise due, primarily, to respiration (0.2-0.3 Hz) [33], cardiac activity (0.8-1.2 Hz) and the Mayer wave (approximately 0.1 Hz) [34]. A 3rd-order Chebyshev type II filter was designed with cut-off frequency at 0.1 Hz, stop frequency at 0.5 Hz, pass-band loss of no more than 6 dB, and at least 50 dB of attenuation in the stop-band.

Feature extraction

Consistent with what we know about the hemodynamic response, we expected to see a change in the amplitude of the light intensity signals after the commencement of the mental task, as the concentrations of oxygenated and deoxygenated haemoglobin change (a result of the haemodynamic response) and in turn alter the absorption properties of the cortical tissue [35]. We found in an earlier study that the slope of the linear regression line fit to the signal within the system-vigilant period was effective for discriminating the intentional control states individually from the no-control state [14]. This result corroborated the findings of other studies which had success classifying mental activity from a controlled rest state using similar amplitude-based features [4, 5, 36].

As in the earlier study, to capture the unique temporal response for each individual (there could be intersubject variability in time for hemodynamic response to peak, number of peaks, etc.), as well as any temporal differences between the activities, we considered as features the slope of the regression line fit to the signal over multiple time windows within the 20 s response period. Each time interval was defined by a start time and an end time, where start times ranged from 0 to 15 s, and end times ranged from 5 to 20 s, both in 5 s increments. All possible combinations of start and end times, where the latter exceeded the former, were considered as valid time intervals for feature calculation. In total, ten different time windows were considered. Thus the resultant feature pool consisted of 180 candidate features comprising the slope of the regression line fit to each of the 18 channels over each of the 10 time windows.
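A sketch of this feature extraction (function and constant names are ours): regression slopes over all ten valid windows, for every channel, yielding the 180-feature candidate pool described above.

```python
import numpy as np

FS = 31.25  # sampling rate (Hz)

def slope_features(period):
    """Slope of the least-squares regression line per channel and window.

    period: (n_samples, n_channels) filtered signals for one 20 s
    system-vigilant period.  Start times {0, 5, 10, 15} s and end times
    {5, 10, 15, 20} s with end > start give 10 windows; with 18 channels
    this yields the 180-feature candidate pool.
    """
    feats = []
    for s in range(0, 20, 5):
        for e in range(5, 25, 5):
            if e <= s:
                continue
            seg = period[int(s * FS):int(e * FS)]
            t = np.arange(seg.shape[0]) / FS
            feats.append(np.polyfit(t, seg, 1)[0])  # row 0 = slope per channel
    return np.concatenate(feats)
```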

Feature selection and classification

In this study, the classification problem of interest is mental arithmetic vs. mental singing vs. the no-control state. A linear discriminant analysis (LDA) classifier was trained on optimal feature subsets selected using a standard genetic algorithm (GA). Such random search algorithms can allow for the evaluation of a search space more efficiently than most other heuristic search methods [37]. The GA parameter values used are listed in Table 1. Feature selection was based on the wrapper method - that is, candidate feature subsets were evaluated for their predictive performance using the learning algorithm of interest [38]. To reduce search time, and to avoid the "curse of dimensionality" (i.e., to maintain an adequate ratio of training sample size to feature subset dimensionality), we explicitly prescribed the subset dimensionality of interest. Based on preliminary analyses, we chose to consider feature subsets with dim = 8, 9, 10, 11 and 12.
Table 1

GA parameters

Population size
Search space dimensionality
Elite count
Parent selection: roulette-wheel
Crossover function
Crossover rate
Mutation function
Mutation rate
Max generations
Fitness function: LDA
Fitness value: mean probability of error

The classification strategy used in this study is depicted in Figure 3. A six-fold cross-validation was used to estimate the classification accuracy. For each fold of this external cross-validation, five independent runs of the genetic algorithm were performed on the training data. Within the genetic algorithm, LDA served as the fitness function, and the mean probability of error, as estimated by the training set, was selected as the fitness value. Of the five feature sets selected over the five runs of the GA, the set yielding the lowest mean probability of error was used with the training set to train the classifier in the given fold of the external cross-validation. Classification accuracy was then determined for the test set (note that for each fold of the cross-validation, the test set was not involved in either the feature selection or the training of the classifier). A total of five runs of the six-fold cross-validation were performed. Thirty accuracy measures were thus obtained, from which a mean classification accuracy was calculated. Note that adjusted accuracies were used, rather than the standard accuracy measure, to account for bias due to the imbalanced classes (recall there were 144 NC, 72 MA, and 72 MS samples). For a two-class problem, adjusted accuracy (AA) is calculated as [39]
Figure 3

Classification procedure. This procedure was performed on a per-participant basis for each feature subset dimensionality under investigation, specifically dim = 8, 9, 10, 11 and 12.

AA = (sensitivity + specificity) / 2
Writing this in general terms for a k-class problem gives
AA = (1/k) Σ_{j=1}^{k} PC_j

where PCj is the percentage of correctly classified samples from class j. This expression was used to calculate the adjusted accuracies for the MA vs. MS vs. NC classification problem.
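In code, the k-class adjusted accuracy is simply the mean per-class recall (equivalent to scikit-learn's balanced accuracy); a minimal version:

```python
import numpy as np

def adjusted_accuracy(y_true, y_pred, classes=('MA', 'MS', 'NC')):
    """AA = (1/k) * sum_j PC_j, the mean percentage of correctly
    classified samples per class; compensates for the 2:1:1 imbalance
    (144 NC vs. 72 MA and 72 MS samples)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class))
```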

To confirm the validity of our classification accuracy estimates, we repeated the classification procedure described above with randomized class labels. If our classification algorithm was properly configured, these results should be at approximately chance level (i.e., 33% for a three-class problem).
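The overall procedure can be sketched as follows. For brevity the genetic algorithm is replaced here by a random-subset wrapper with the same evaluate-subsets-by-classifier logic; the names, the internal scoring scheme, and the use of scikit-learn are our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

def select_subset(X, y, dim=10, n_candidates=100, seed=0):
    """Wrapper feature selection: score random fixed-size feature subsets
    with LDA on the training data only, and keep the best.  (A stand-in
    for the paper's GA; the paper's fitness was the LDA mean probability
    of error estimated on the training set.)"""
    rng = np.random.default_rng(seed)
    best_idx, best_score = None, -np.inf
    for _ in range(n_candidates):
        idx = rng.choice(X.shape[1], size=dim, replace=False)
        score = cross_val_score(LinearDiscriminantAnalysis(), X[:, idx], y,
                                cv=3, scoring='balanced_accuracy').mean()
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

def nested_cv_accuracy(X, y, dim=10, seed=0):
    """Outer six-fold CV; feature selection and classifier training use
    only the training folds, so each test fold stays untouched."""
    accs = []
    for tr, te in StratifiedKFold(6, shuffle=True, random_state=seed).split(X, y):
        idx = select_subset(X[tr], y[tr], dim=dim, seed=seed)
        clf = LinearDiscriminantAnalysis().fit(X[tr][:, idx], y[tr])
        accs.append(balanced_accuracy_score(y[te], clf.predict(X[te][:, idx])))
    return float(np.mean(accs))
```

Running the same pipeline with randomly permuted labels (e.g., `rng.permutation(y)`) should return accuracies near 1/k, which is the sanity check described above.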


An LDA classifier trained on a 10-dimensional feature set allowed for the most accurate classification of MA vs. MS vs. NC across participants, yielding an average accuracy of 56.2%. The overall average classification accuracy, as well as all individual participant accuracies, significantly exceeded chance at α = 0.01 (note that the upper confidence limit of chance for a three-class problem, n = 288 trials and α = 0.01 is 40.4% [40]). However, as expected, for three participants (P2, P3 and P4) MS was classified near chance levels, and thus even though the overall accuracy exceeds chance, the MA vs. MS vs. NC classification cannot be considered successful for these participants. Across the four candidate participants (i.e., those participants for whom MA and MS were previously found to be individually differentiable from NC, specifically P1, P5, P6 and P7), an average accuracy of 62.5% was achieved. Further, each of the three classes (i.e., MA, MS and NC) was classified well in excess of chance for all four participants. Figure 4 shows, for one of these participants (P7), the average hemodynamic response for each class over the 20 s system-vigilant period at each of the nine interrogation locations. Note the distinct differences in the response among the three tasks.
Figure 4

Example hemodynamic signals for mental arithmetic, music imagery and no-control. Normalized light intensity versus time plots showing the hemodynamic response for mental arithmetic (red), music imagery (blue) and no-control (black) over the 20 s system-vigilant period. Only the signals from the 830 nm sources are shown. For each task, the signals shown are the average over all samples for one of the participants for whom the three-class classification was successful (P7). Dashed lines indicate standard error.

Table 2 reports the results for the MA vs. MS vs. NC classification problem (LDA, dim = 10). Along with the overall classification accuracies, it includes the per-class classification accuracies, and the overall classification accuracies for the same data but with randomized class labels. As expected, these values are all approximately 33%.
Table 2

MA vs. MS vs. NC classification results: LDA trained on 10-dimensional feature set

Participant                NC correct (%)   MA correct (%)   MS correct (%)   Adjusted Accuracy (%) 1   Randomized Labels (%)
1                          -                -                -                64.5 ± 6.9                32.7 ± 8.1
2 2                        -                -                -                49.0 ± 7.8                33.1 ± 6.7
3 2                        -                -                -                46.5 ± 8.0                31.8 ± 6.7
4 2                        -                -                -                47.9 ± 5.5                32.7 ± 5.2
5                          -                -                -                63.6 ± 6.5                33.4 ± 6.5
6                          -                -                -                55.0 ± 6.0                32.4 ± 5.6
7                          -                -                -                66.8 ± 5.8                33.4 ± 6.0
Mean (all participants)                                                      56.2 ± 8.7                32.8 ± 0.58
Mean (P1, P5-P7) 3                                                           62.5 ± 5.1                33.0 ± 0.54

1The overall average classification accuracy, as well as all individual classification accuracies, are significantly greater than chance (α = 0.01)

2As expected for this participant, classification of MS is very near chance level.

3Participants for whom both MA vs NC and MS vs NC could be classified with accuracy significantly exceeding chance [16]. These participants were considered candidates for the MA vs MS vs NC classification problem.


The results achieved for the four candidate participants are very encouraging. Accuracies for three of these participants approached 70%, the level cited by some as being necessary for effective BCI communication [41]. With training/practice, these individuals could potentially meet and exceed this level [42]. A three-state system offers a significant increase in functionality/information transfer rate over a two-state system. The findings support the potential of a three-state system-paced NIRS-BCI with intentional control states corresponding to mental arithmetic and mental singing. For three other participants, however, mental singing was ineffective, which suggests that a three-state system based on this task may not be suitable for all users.

A potential source of inter-participant variation in the reported classification accuracies is the inter-individual difference in scalp-cortex distance over the sinus frontalis. Specifically, as sinus volume increases, NIR light traverses a decreasing volume of grey matter, resulting in diminished sensitivity of the measurement to cortical activity [43].

To the best of our knowledge, only two other NIRS-BCI studies have attempted single-trial classification of more than two mental states [44, 45]. However, these studies focused on passive BCI applications [46] (specifically, for enhancing human-computer interaction in gaming systems) and used complex tasks that depend on external cues/stimuli and are therefore unsuitable for an active BCI in which the user should be able to autonomously and spontaneously perform the task (note that though we used visual cues in the MA task in order to keep the experiment controlled, the cues would not be necessary in a practical system; the user could easily select the initial calculation to perform independently). Specifically, they classified rest and two different difficulty levels of externally-cued spatial tasks (76.7% accuracy) [45] or computer game play (54% accuracy) [44]. The higher accuracies reported in [45] as compared to our results could be attributed to the following two differences in the studies: 1) the spatial task used in [45] was more complex than either the mental arithmetic or mental singing task, and could have resulted in greater activation that was more clearly distinguishable from rest; and 2) in our study we differentiated the two tasks and a no-control state, where the participant's brain activity was unconstrained. In the spatial task study, they distinguished tasks of different difficulties from a "controlled rest" exercise, though they do not explicitly define this rest state. It is possible that brain activity was constrained during this period, allowing for greater discriminability compared to the diverse no-control state used in our study. Also, the spatial task study does not report per-class classification rates, thus it is not clear if all three classes were classified successfully.


This is the first NIRS study to explicitly investigate the automatic discrimination of three intentionally- and autonomously-generated mental states suitable for active BCI control. Specifically, we classified intentional activity due to the performance of two different cognitive tasks - mental arithmetic and mental singing - and the no-control state, where the user's mental activity is unconstrained. With a ten-dimensional feature set and a linear classifier, an overall classification accuracy of 62.5% was achieved across four candidate participants for the MA vs. MS vs. NC classification problem. All participants attained accuracies well in excess of chance, three of which approached 70%, the level cited by some as being necessary for effective communication [41]. Overall, these results are encouraging and demonstrate the potential of a three-state system-paced NIRS-BCI with two IC states corresponding to mental arithmetic and mental singing.


aThe term "no-control" state refers to the natural state existing when the user is not consciously modulating his/her brain activity for the purpose of controlling the BCI output, e.g. during periods of thinking, composing, monitoring or daydreaming [13, 47, 48].

bThe upper confidence limit around the theoretical chance level of p = 50% for a two-class problem, given n = 144 trials and α = 0.01, is 60.6% [40]; thus any classification accuracy above this value can be said to be significantly greater than chance at a confidence level of 99%.
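For reference, one common construction of this limit is an adjusted Wald interval around the theoretical chance level; the exact figures quoted from [40] may come from a slightly different interval, so treat the values produced below as approximate.

```python
import numpy as np
from scipy.stats import norm

def chance_upper_limit(n, k, alpha=0.01):
    """Approximate upper confidence limit of chance-level accuracy for a
    k-class problem with n trials (adjusted Wald interval; one plausible
    construction, not necessarily the one used in [40])."""
    p = 1.0 / k
    z = norm.ppf(1 - alpha / 2)
    n_adj = n + z ** 2
    p_adj = (n * p + z ** 2 / 2) / n_adj
    return p_adj + z * np.sqrt(p_adj * (1 - p_adj) / n_adj)
```

With n = 144 and k = 2 this gives roughly 60%, and with n = 288 and k = 3 roughly 41%, in line with the limits quoted in the text.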

cDuring each session, four different questions appeared for each of the eight possible combinations of the three choices (i.e., neither A, B nor C; A only; B only; C only; A and B; A and C; etc.). One set of the eight possible response combinations yields 12 IC and 12 NC periods. Therefore, (4 sets of eight possible response combinations) × (12 IC periods and 12 NC periods) × (3 sessions) = 144 IC periods and 144 NC periods. The 144 IC periods were split evenly between MA and MS, yielding 72 MA and 72 MS periods.
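The bookkeeping in note c can be checked mechanically (a tiny sketch; variable names are ours):

```python
from itertools import chain, combinations

choices = ('A', 'B', 'C')
# all eight response combinations: none, each single, each pair, all three
combos = list(chain.from_iterable(combinations(choices, r) for r in range(4)))
assert len(combos) == 8

ic_per_set = sum(len(c) for c in combos)   # IC periods in one set of 8 trials
nc_per_set = 8 * 3 - ic_per_set            # the remaining periods are NC
sets_per_session, sessions = 4, 3
ic_total = ic_per_set * sets_per_session * sessions
nc_total = nc_per_set * sets_per_session * sessions
# ic_per_set == nc_per_set == 12, so ic_total == nc_total == 144
```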



This study was made possible by the Canada Research Chairs program, the Ontario Centres of Excellence, and the Natural Sciences and Engineering Research Council of Canada. The authors would like to acknowledge Ms. Yael Pomerantz for her assistance with the data collection for this study.

Authors’ Affiliations

Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital
Institute of Biomaterials and Biomedical Engineering, University of Toronto




© Power et al; licensee BioMed Central Ltd. 2012

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.