NIPS 2004 workshop on

Towards Brain-Computer Interfacing


Invitation:

We hereby invite you to participate in our workshop on Brain-Computer Interfacing. Should you be interested, we would like to add your name to a workshop mailing list on which we will announce further details about the workshop schedule and events in a timely manner. We also encourage you to submit any ideas related to the organization of the meeting so that we can have a fun-filled, fruitful meeting. If you are interested, please contact one of the organizers.

Time and place:

Friday, December 17, 2004
Workshop Sessions: 7:30am - 10:30am and 4:00pm - 7:00pm
Whistler, Canada

Planned format:

A one-day workshop with short presentations and plenty of time for discussion. In addition, BCI Competition III will be introduced.

Description:

In recent years, research interest has grown in developing so-called 'Brain-Computer Interfaces' (BCIs), which allow one-way communication from humans to machines such as computers, wheelchairs, or prostheses using brain signals alone. On the one hand, such an interface provides a new (and possibly the only) communication channel for people who suffer from severe physical disabilities while having intact cognitive function (e.g., ALS patients). For healthy subjects, on the other hand, such an interface can enhance and facilitate human-machine interaction by providing additional control options. Several results over the last decades have proven that it is possible to implement a BCI system, and some locked-in patients have even used BCI systems to express their thoughts and wishes to the outside world. So far, however, the applicability of BCI systems is limited by factors such as high error rates, long training and preparation times, and high inter-subject variability.

Several issues will be discussed, e.g.:
- invasive vs. non-invasive measurements of brain signals
- subject training (operant conditioning) vs. machine training
- evoked potentials vs. spontaneous brain responses
- signal processing and machine learning techniques for BCI

During the workshop, we will examine these issues, their success and applicability in BCI, and possible enhancements of existing BCI systems.

Tentative speakers:

Chuck Anderson, Colorado State University
Richard Andersen, California Institute of Technology, Pasadena
Jessica D. Bayliss, Rochester Institute of Technology
Michael J. Black, Brown University
Guido Dornhege, Fraunhofer FIRST.IDA
Jeffrey Fessler, University of Michigan
Thilo Hinterberger, University of Tübingen
Thomas Navin Lal, Max Planck Institute for Biological Cybernetics, Tübingen
Steve G. Mason, Neil Squire Foundation
Dennis McFarland, Wadsworth Center, Albany
José del R. Millán, IDIAP Research Institute
Klaus-Robert Müller, Fraunhofer FIRST.IDA, University of Potsdam
Paul Sajda, Department of Biomedical Engineering, Columbia University
Alois Schlögl, Graz University of Technology
Andrew Schwartz, University of Pittsburgh
Lavi Shpigelman, The Hebrew University of Jerusalem

Schedule:

Friday morning session (7:30am - 10:30am):

7:30am Klaus-Robert Müller
INTRODUCTION
7:35am Andrew Schwartz
ADVANCES IN INVASIVE BCI
7:50am Richard Andersen
COGNITIVE CONTROL SIGNALS FOR NEURAL PROSTHETICS
8:05am Michael J. Black
PROBABILISTIC DECODING FOR A NEURAL MOTOR PROSTHESIS
8:20am Lavi Shpigelman
A TEMPORAL KERNEL-BASED MODEL FOR TRACKING HAND-MOVEMENTS FROM NEURAL ACTIVITIES
8:35am COFFEE BREAK
9:00am Jeffrey Fessler
MODEL-BASED DETECTION OF EVENT-RELATED SIGNALS IN ELECTROCORTICOGRAM
9:15am Thomas Navin Lal
ON BRAIN COMPUTER INTERFACES BASED ON ELECTROCORTICOGRAPHY RECORDINGS
9:30am Jessica D. Bayliss
P300 BRAIN-COMPUTER INTERFACE CONSIDERATIONS
9:45am Steve G. Mason
CONCEPTUAL MODELS FOR BCI DEVELOPMENT AND TESTING
10:00am Paul Sajda
SINGLE-TRIAL DETECTION OF VISUAL RECOGNITION AND DISCRIMINATION EVENTS IN EEG
10:15am Alois Schlögl
BIOSIG - A SOFTWARE LIBRARY FOR BCI RESEARCH
10:30am MORNING SESSION ENDS

Friday afternoon session (4:00pm - 7:00pm):

4:00pm Guido Dornhege
THE BERLIN-BRAIN COMPUTER INTERFACE
4:15pm Chuck Anderson
PATTERNS IN EEG FOR DISCRIMINATION BETWEEN MENTAL TASKS
4:30pm José del R. Millán
NON-INVASIVE LOCAL FIELD POTENTIALS FOR BCI
4:45pm Thilo Hinterberger
AUDITORY FEEDBACK OF HUMAN EEG FOR DIRECT BRAIN-COMPUTER COMMUNICATION
5:00pm COFFEE BREAK
5:30pm Dennis McFarland
CONTROL OF TWO-DIMENSIONAL CURSOR MOVEMENT BY A NON-INVASIVE BRAIN-COMPUTER INTERFACE IN HUMANS
5:45pm Klaus-Robert Müller
INTRODUCTION OF BCI COMPETITION III
6:00pm DISCUSSION
7:00pm AFTERNOON SESSION ENDS

Abstracts:

PATTERNS IN EEG FOR DISCRIMINATION BETWEEN MENTAL TASKS
Chuck Anderson (Colorado State University)

Linear transformations of lagged, multi-channel, spontaneous EEG recorded from subjects performing different mental tasks reveal spatial and temporal patterns that are similar across tasks and other patterns that are dissimilar. The similar patterns may be due to noise in the recording process and to mental activity common to the tasks. The dissimilar patterns form a basis for identifying the mental tasks that underlie the recorded EEG. Results are described for transforms based on singular value decomposition, maximum signal fraction, canonical correlation analysis, and independent components analysis.
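
For illustration, a minimal Python sketch of the kind of lagged, multi-channel decomposition described above follows. The array shapes, lag count, and use of scikit-learn's FastICA are our own assumptions, not the author's implementation.

    # Sketch: linear decompositions of lagged, multi-channel EEG.
    import numpy as np
    from sklearn.decomposition import FastICA

    def lag_embed(eeg, n_lags):
        """Stack time-lagged copies of each channel: (channels*n_lags, samples)."""
        channels, samples = eeg.shape
        lagged = [eeg[:, lag:samples - n_lags + lag] for lag in range(n_lags)]
        return np.vstack(lagged)

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((6, 1000))           # stand-in for recorded EEG
    X = lag_embed(eeg, n_lags=3)

    # SVD: spatial/temporal patterns ordered by the variance they capture.
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    # ICA: statistically independent component estimates of the same data.
    components = FastICA(n_components=6, random_state=0).fit_transform(X.T)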


COGNITIVE CONTROL SIGNALS FOR NEURAL PROSTHETICS
Richard Andersen (California Institute of Technology, Pasadena)

High-level cognitive signals including the goals of movements and the expected value of rewards for performing them can be decoded from cortical neural activity. For neural prosthetics applications to assist paralyzed patients, the goal signals can be used to operate external devices such as computers, robots, and vehicles, and the expected value signals can be used to continuously monitor the preferences and motivation of the patients.


P300 BRAIN-COMPUTER INTERFACE CONSIDERATIONS
Jessica D. Bayliss (Rochester Institute of Technology), Samuel A. Inverso (Media Lab Europe)

The P300 component of the evoked potential has proven useful as a control signal for brain-computer interfaces. Individuals do not need to be trained to produce the signal, the signal occurs in response to auditory as well as visual stimuli, and it is a fairly stable and large evoked potential. Even with recent signal classification advances, on-line experiments with P300-based BCIs remain far from perfect. We present some potential methods for improving control accuracy. From experimental results in an evoked potential BCI used to control items in a virtual apartment, we show that a reduced response exists when items are accidentally controlled. The presence of a P300-like signal in responses to goal items means that it can be used for automatic error correction. The P300 response may also be affected by the design of the application interface, and we show some simple examples of this with a yes/no application. A T9 ambiguous keyboard algorithm combined with context-sensitive word selection can reduce the number of decisions required to 'type' a word. This approach can benefit P300 spelling applications, which are traditionally slow and prone to error. We show results from a prototype context-sensitive predictive text system for this purpose.
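
To make the T9 idea concrete: each selection picks a group of letters rather than a single letter, and a dictionary disambiguates the sequence. A toy Python sketch follows; the key grouping and word list are hypothetical examples, not the authors' system.

    # T9-style disambiguation: each P300 selection picks one ambiguous
    # key (a group of letters); a dictionary resolves the word.
    T9_KEYS = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
               '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
    LETTER_TO_KEY = {ch: k for k, letters in T9_KEYS.items() for ch in letters}

    def word_to_keys(word):
        return ''.join(LETTER_TO_KEY[ch] for ch in word)

    def candidates(key_sequence, dictionary):
        """All dictionary words matching the ambiguous key sequence."""
        return [w for w in dictionary if word_to_keys(w) == key_sequence]

    words = ['good', 'home', 'gone', 'hood']   # toy dictionary
    print(candidates('4663', words))           # all four words collide here;
                                               # context-sensitive selection picks one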


PROBABILISTIC DECODING FOR A NEURAL MOTOR PROSTHESIS
Michael J. Black, John Donoghue (Department of Computer Science, Brown University)

The rapid development of neural motor prostheses has been enabled by new devices for recording the activity of populations of neurons and new mathematical tools for decoding this activity. This talk will introduce and summarize the work at Brown University on real-time neural control of cursor motion. We view this problem as one of probabilistic inference from uncertain data over time. Even at the lowest level of signal detection there is inherent uncertainty about the neural activity and to address this we present a probabilistic spike sorting algorithm that outperforms human experts. At the level of prosthetic control we show that Bayesian methods for decoding neural signals enable accurate real-time neural control of 2D cursor motion. In on-line cursor control experiments these probabilistic methods outperform previous approaches. The talk will also examine new machine learning methods for probabilistically modeling the neural code.
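
As a generic illustration of the Bayesian decoding framework mentioned above, here is a standard Kalman-filter sketch in Python for estimating cursor kinematics from binned firing rates. The matrices A, C, Q, R below are random placeholders; in practice they are fit to training data, and the group's actual models (including the spike-sorting stage) are more elaborate.

    # Kalman-filter decoding of cursor state from neural observations.
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_cells = 4, 20                    # state [x, y, vx, vy]; 20 units
    A = np.eye(n_state)
    A[0, 2] = A[1, 3] = 0.05                    # constant-velocity state model
    C = rng.standard_normal((n_cells, n_state)) # linear tuning (observation) model
    Q = 0.01 * np.eye(n_state)                  # state noise covariance
    R = np.eye(n_cells)                         # observation noise covariance

    x, P = np.zeros(n_state), np.eye(n_state)
    for _ in range(100):
        z = rng.standard_normal(n_cells)        # stand-in for a bin of firing rates
        x, P = A @ x, A @ P @ A.T + Q           # predict
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        x = x + K @ (z - C @ x)                 # update with neural observation
        P = (np.eye(n_state) - K @ C) @ P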


THE BERLIN BRAIN-COMPUTER INTERFACE
Guido Dornhege (Fraunhofer FIRST.IDA), Benjamin Blankertz (Fraunhofer FIRST.IDA), Gabriel Curio (Charité - University Medicine Berlin), Klaus-Robert Müller (Fraunhofer FIRST.IDA)

The Berlin BCI approach is driven by the motto 'Let the machines learn'. Subjects can use this interface right away after a short training time, since the computer adapts to the subject's behaviour and not vice versa. Based on neurophysiological knowledge, the computer is programmed to extract suitable features individually for each subject after observing brain signals in a controlled scenario. In this manner, our system uses slow potentials for the prediction of upcoming movements, and a combination of slow potentials and oscillatory features for imagined movements, to control continuous feedback scenarios. In the talk I will focus on the latter, namely continuous feedback based on both slow potentials and oscillatory features. With this system it was possible to achieve bitrates of up to 37 bits/min in recent BCI feedback experiments with untrained subjects.
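
The abstract does not state how the bitrate figure is computed; for reference, the measure most commonly used in the BCI literature is Wolpaw's information transfer rate per selection, for N equally likely targets and classification accuracy P:

    % Bits per selection; multiply by selections per minute for bits/min.
    B = \log_2 N + P \log_2 P + (1 - P)\,\log_2\frac{1 - P}{N - 1}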


MODEL-BASED DETECTION OF EVENT-RELATED SIGNALS IN ELECTROCORTICOGRAM
Jeffrey Fessler (University of Michigan), Simon P. Levine (University of Michigan)

Our group is investigating the potential for a direct brain interface based on signals from electrocorticogram (ECoG). To date, our signal detection method has been cross-correlation template matching (CCTM), which is optimal for detecting deterministic event-related potentials in additive white Gaussian noise. However, the CCTM method disregards the spectral changes of ECoG signals that result from event-related desynchronization (ERD) and event-related synchronization (ERS). Schloegl et al. modeled these changes using adaptive auto-regressive (AAR) methods, using the AR coefficients as features for subsequent classification. Rather than using a feature-based classifier, we derive the most powerful likelihood ratio test (MP-LRT) for detecting Gaussian signals with different spectral properties, with AR spectral models. The resulting detector simply compares the variances of the innovation sequences at the output of two FIR filters (the inverse AR filters). We describe the training procedure for our unlabeled, self-paced training data. Experimental results show that the AR/LRT method has better signal detection performance than the CCTM method when detection delay is constrained appropriately.
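
A simplified Python sketch of this detector follows: whiten the ECoG segment with the inverse AR (FIR) filters of a "rest" and an "event" model and compare innovation variances. The AR(1) coefficients are hypothetical placeholders, and the paper's exact MP-LRT statistic and threshold come from its derivation and training procedure, not from this sketch.

    # Compare innovation variances under two AR spectral models.
    import numpy as np
    from scipy.signal import lfilter

    def innovation_variance(x, ar_coeffs):
        """Residual variance after inverse-AR (whitening FIR) filtering.

        ar_coeffs holds a_1..a_p of x[t] = sum_k a_k * x[t-k] + e[t]."""
        b = np.concatenate(([1.0], -np.asarray(ar_coeffs)))
        return np.var(lfilter(b, [1.0], x))

    ar_rest = [0.9]            # hypothetical AR(1) for background ECoG
    ar_event = [0.3]           # hypothetical AR(1) during ERD/ERS
    segment = np.random.default_rng(2).standard_normal(256)

    # Declare an event when the event model whitens the segment better.
    llr = np.log(innovation_variance(segment, ar_rest) /
                 innovation_variance(segment, ar_event))
    detected = llr > 0.0       # threshold would be chosen from training data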


AUDITORY FEEDBACK OF HUMAN EEG FOR DIRECT BRAIN-COMPUTER COMMUNICATION
Thilo Hinterberger (Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen)

Communication with severely paralyzed patients using brain signals only is an important application of brain-computer interfaces (BCIs). Persons diagnosed with amyotrophic lateral sclerosis are able to communicate using self-regulation of slow potential changes of their EEG (Birbaumer et al., 1999). In patients in an advanced stage of the disease, focusing gaze to sufficiently process the visual feedback or read the letters in the verbal communication paradigm is no longer possible. In this case, a non-visual feedback modality such as auditory or tactile feedback has to be implemented.
Here it is demonstrated how the BCI "Thought-Translation-Device" (TTD) can be operated entirely auditorily by combined listening and mental activity. Auditory feedback of EEG parameters as well as sonified EEG rhythms allow for the acquisition of self-regulation skills. The extension POSER allows for sonified orchestral real-time feedback of multiple EEG parameters for the training of self-regulation. The properties of the system are reported and the results of some studies and experiments with auditory feedback are presented. Further, simultaneous feedback of multiple EEG parameters may offer a fast way of finding controllable EEG parameters.
Finally, a fully auditorily driven verbal spelling interface is presented, and further improvements of this system will be discussed.


ON BRAIN COMPUTER INTERFACES BASED ON ELECTROCORTICOGRAPHY RECORDINGS
Thomas Navin Lal (Max Planck Institute for Biological Cybernetics, Tübingen), Thilo Hinterberger (University of Tübingen), Bernhard Schoelkopf (Max Planck Institute for Biological Cybernetics, Tübingen), Niels Birbaumer (University of Tübingen)

During the last ten years there has been growing interest in the development of Brain Computer Interfaces (BCIs). The field has mainly been driven by the needs of completely paralyzed patients to communicate. With a few exceptions, most human BCIs are based on extracranial electroencephalography (EEG). However, reported bit rates are still low. One reason for this is the low signal-to-noise ratio of the EEG. We are currently investigating if BCIs based on electrocorticography (ECoG) are a viable alternative. In this talk I would like to present first results on offline analysis of ECoG data recorded during a motor imagery paradigm.


CONCEPTUAL MODELS FOR BCI DEVELOPMENT AND TESTING
Steve G. Mason (Neil Squire Foundation)

The development of brain-computer interface (BCI) technology (also referred to as brain-machine interface (BMI) technology) continues to attract researchers with a wide range of backgrounds and expertise. However, this diversity is accompanied by inconsistent terminology and methods, which often hinder understanding of reported designs and findings and impair technology cross-fertilization and cross-group validation of results. Underlying this dilemma is a lack of common perspective and language. In an attempt to foster such a common perspective and language, researchers at the Neil Squire Foundation (Vancouver, Canada) have developed several theoretical models to represent the key aspects of BCI technology design and testing. In this talk, these models will be presented using examples from existing BCI research. The presentation will conclude with a general discussion of terminology and related issues.


CONTROL OF TWO-DIMENSIONAL CURSOR MOVEMENT BY A NON-INVASIVE BRAIN-COMPUTER INTERFACE IN HUMANS
Dennis McFarland (Wadsworth Center, Albany)

We trained four individuals to use scalp-recorded EEG activity (i.e., sensorimotor rhythms) to move a cursor in two dimensions toward targets on a computer screen. They can readily use this control to reach novel targets not encountered during training. We model this problem as one of regression rather than classification. Performance is optimized on-line by an adaptive algorithm. The results suggest that people with severe motor disabilities could use non-invasive brain signals to operate a robotic arm or a neuroprosthesis.
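
The regression view with on-line optimization can be illustrated with a generic least-mean-squares update in Python. This is our sketch only; the feature set, learning rule, and parameters used at the Wadsworth Center are not specified here.

    # Regression-based cursor control with on-line weight adaptation (LMS).
    import numpy as np

    rng = np.random.default_rng(3)
    n_features = 8                       # e.g., band powers from several channels
    W = np.zeros((2, n_features))        # weights mapping features to (vx, vy)
    b = np.zeros(2)
    lr = 0.01                            # adaptation rate

    for _ in range(500):
        feats = rng.standard_normal(n_features)  # stand-in for rhythm amplitudes
        target_v = rng.standard_normal(2)        # desired velocity toward target
        v = W @ feats + b                        # predicted cursor velocity
        err = target_v - v
        W += lr * np.outer(err, feats)           # on-line LMS update
        b += lr * err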


NON-INVASIVE LOCAL FIELD POTENTIALS FOR BCI
J. del R. Millán, S. Gonzalez Andino, L. Perez, P. Ferrez, R. Grave de Peralta (IDIAP Research Institute)

Recent experiments have shown the possibility of using brain electrical activity to directly control the movement of robots or prosthetic devices in real time. Such neuroprostheses can be invasive or non-invasive, depending on how the brain signals are recorded. In principle, invasive approaches will provide a more natural and flexible control of neuroprostheses, but their use in humans is debatable given the inherent medical risks. Non-invasive approaches mainly use scalp electroencephalogram (EEG) signals, and their main disadvantage is that those signals represent the noisy spatiotemporal overlapping of activity arising from very diverse brain regions; i.e., a single scalp electrode picks up and mixes the temporal activity of myriads of neurons in very different brain areas. In order to combine the benefits of both approaches, we propose to rely on the non-invasive estimation of local field potentials in the whole human brain from scalp-measured EEG data using a recently developed inverse solution (ELECTRA) to the EEG inverse problem. The goal of a linear inverse procedure is to de-convolve or un-mix the scalp signals, attributing to each brain area its own temporal activity. To illustrate the advantage of this approach we compare, using an identical set of spectral features, the classification of voluntary self-paced finger tapping with the left and right hands based on scalp EEG and on non-invasively estimated LFPs.
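
A generic regularized minimum-norm inverse can stand in for the linear un-mixing step in a Python sketch; ELECTRA's exact formulation differs, and the lead field and dimensions below are placeholders (a real lead-field matrix comes from a head model).

    # Linear inverse: estimate source activity from scalp EEG.
    import numpy as np

    rng = np.random.default_rng(4)
    n_electrodes, n_sources = 32, 500
    L = rng.standard_normal((n_electrodes, n_sources))  # lead-field matrix
    eeg = rng.standard_normal(n_electrodes)             # one sample of scalp EEG

    lam = 0.1                                           # Tikhonov regularization
    T = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_electrodes))
    lfp_estimate = T @ eeg       # estimated activity, one value per source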


SINGLE-TRIAL DETECTION OF VISUAL RECOGNITION AND DISCRIMINATION EVENTS IN EEG
Paul Sajda (Department of Biomedical Engineering, Columbia University)

The performance of a human subject executing a task while interacting with a computer can be highly variable, depending upon such individual factors as level of alertness, reaction speed, working memory capacity, and capacity to perform parallel tasks. Most current human-computer interfaces (HCI) do not adapt to the physiological or psychological state of the user. The goal of an adaptive interface is to estimate variables correlated with human performance and adapt the HCI accordingly (e.g. adjust speed of display, provide appropriate cues, automatically correct errors, automatically detect targets etc.). In this talk I will describe our work on single-trial detection and analysis of EEG signatures for visual target recognition and discrimination, as well as our efforts to use such signatures to enable adaptive and "cognitive" interfaces for improving user performance.


BIOSIG - A SOFTWARE LIBRARY FOR BCI RESEARCH
Alois Schlögl, C. Brunner, B. Graimann, R. Leeb, G. Müller-Putz, R. Scherer, G. Townsend, C. Vidaurre, G. Pfurtscheller (Graz University of Technology)

In the field of BCI research, many groups "reinvent" or need to re-implement the same methods. This hinders faster progress in the field. In order to overcome this problem, the open source project BIOSIG (http://biosig.sf.net/) has been started. BIOSIG provides a platform for biomedical signal processing in general. It already contains a comprehensive set of algorithms for the offline analysis of brain-computer interface (BCI) recordings. We'll show some results from various BCI data sets, including four-class BCI data, adaptive classifiers, navigation through virtual reality, and asynchronous BCI detection.


ADVANCES IN INVASIVE BCI
Andrew Schwartz (University of Pittsburgh)

Over the years, we have shown that detailed predictive information of the arm's trajectory can be extracted from populations composed of single unit recordings from motor cortex. By developing techniques to record these populations and process the signal in real-time, we have been successful in demonstrating the efficacy of these recordings as a control signal for intended movements in 3D space. Having shown that closed-loop control of a cortical prosthesis can produce very good brain-controlled movements in virtual reality, we have been extending this work to robot control. By introducing an anthropomorphic robot arm into our closed-loop system, we have shown that a monkey can easily control the robot's movement with direct brain-control while watching the movement in virtual-reality. The animal learned this rapidly and produced good movements in 3D space. The next step was to have the animal visualize and move the arm directly without the VR display. This was much more difficult for the animal to learn, as it seemed to have difficulty understanding that the robot was to act as a tool. After the animal was trained, it was able to use the robot to reach for hand-held (by the investigator) targets. We are now training monkeys and developing hardware and software to demonstrate a prosthetic device that can be used to reach out for food targets at different locations in space, and to retrieve them so they can be eaten.


A TEMPORAL KERNEL-BASED MODEL FOR TRACKING HAND-MOVEMENTS FROM NEURAL ACTIVITIES
Lavi Shpigelman, Koby Crammer, Rony Paz, Eilon Vaadia, Yoram Singer (The Hebrew University of Jerusalem)

We devise and experiment with a dynamical kernel-based system for tracking hand movements from neural activity. The state of the system corresponds to the hand location, velocity, and acceleration, while the system's inputs are the instantaneous spike rates. The system's state dynamics are defined as a combination of a linear mapping from the previous estimated state and a kernel-based mapping tailored for modeling neural activities. In contrast to generative models, the activity-to-state mapping is learned using discriminative methods by minimizing a noise-robust version of the L1 norm. We use this approach to predict hand trajectories on the basis of neuronal activity in the motor cortex of behaving monkeys and find that the proposed approach is more accurate than both a static approach based on support vector regression and the Kalman filter.
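
In our notation (assumed here, not necessarily the authors'), the state update described above can be written compactly as:

    % x_t: hand state (location, velocity, acceleration); o_t: spike rates.
    % A is a learned linear map; f is a kernel expansion over training
    % activities o_i with learned vector coefficients alpha_i.
    \hat{x}_t = A\,\hat{x}_{t-1} + f(o_t), \qquad
    f(o) = \sum_i \alpha_i\, k(o, o_i)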


Organizers: