BCI Competition III, data set I

Paul Hammon
University of California, San Diego

For this dataset we tried a variety of feature types and used SVMs [1] and L1-regularized logistic regression [2] as classification algorithms.

After inspecting the training and test sets, we decided to discard all channels that appeared to have wandering voltages in the test set: channels [1-7, 9-11, 14-18, 32, 33, 36, 44-46, 53-54, 57-64]. To reduce the dimensionality somewhat, we subsampled the data by a factor of 4. We also subtracted the mean from each channel in each trial. Because of the increased variance of the test data, we scaled each channel in each trial to have a standard deviation of one.

We devised a set of pre-processing and feature-extraction steps that we thought might correlate well with the class labels. The pre-processing choices were:
(1) selection of a set of 8 channels [29-32, 37-40] that had noticeably different average spectral power (0-45 Hz) between the two conditions on the test data;
(2) the ICA components of the data [3];
(3) low-pass filtering (0-50 Hz).

The features we chose were:
(1) autoregressive coefficients (order 2, re-estimated every second);
(2) a level-2 Haar wavelet decomposition;
(3) spectral power estimates from 0 to 45 Hz, estimated every second.

As post-processing, we tried creating histograms of each feature.

Different combinations of these pre-processing, feature-extraction, and post-processing steps were tested using 10-fold cross-validation on the training set. The top 5 feature sets were then tested against each other, individually and in combination, either by concatenating them into a single vector or by building meta-classifiers. The best tested scheme was to combine the top 5 feature sets by concatenation, scale each feature individually, and train with regularized logistic regression.

References

[1] T. Joachims. Learning to Classify Text Using Support Vector Machines. Dissertation, Kluwer, 2002.
[2] A. Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[3] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
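A minimal NumPy sketch of two of the steps described above: the per-trial channel pre-processing (channel removal, factor-4 subsampling, mean subtraction, unit-variance scaling) and a least-squares estimate of the order-2 autoregressive features. Function names, the least-squares AR estimator, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch, not the authors' code.
import numpy as np

def preprocess(trial, bad_channels, subsample=4):
    """Drop noisy channels, subsample in time by `subsample`, then
    zero-mean and unit-variance scale each remaining channel."""
    keep = [c for c in range(trial.shape[0]) if c not in bad_channels]
    x = trial[keep, ::subsample].astype(float)
    x = x - x.mean(axis=1, keepdims=True)        # per-channel mean removal
    return x / x.std(axis=1, keepdims=True)      # per-channel unit variance

def ar_coefficients(x, order=2):
    """Least-squares AR coefficient estimate for one channel segment
    (a Yule-Walker or Burg estimator would also work here)."""
    # Row t of X holds [x[t-1], ..., x[t-order]]; predict x[t].
    X = np.column_stack([x[order - 1 - k : -1 - k] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

Applied per channel to each one-second window, `ar_coefficients` would yield the order-2 AR features; `preprocess` encodes the per-trial normalization described in the text.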