With the development of human–computer interaction technology, brain–computer interfaces (BCIs) have been widely applied in medicine, entertainment, the military, and other fields. Imagined speech is a recent BCI paradigm in which a subject mentally imagines speaking a word without producing sound or overt facial movements. It allows patients with physical disabilities to communicate with the outside world and operate smart devices through imagination alone, and its intuitiveness also suits more complex control tasks. This study proposes a method for classifying imagined speech electroencephalogram (EEG) signals using the discrete wavelet transform (DWT) and a support vector machine (SVM). An open dataset in which 15 subjects imagine speaking six different words, namely, up, down, left, right, backward, and forward, is used. The objective is to improve the classification accuracy of the imagined speech BCI system. Features are first extracted from the EEG signals by DWT, and the imagined words are then classified by an SVM using these features. Experimental results show that the proposed method achieves an average accuracy of 61.69%, which is better than that of existing methods for classifying imagined speech tasks.
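The DWT-plus-SVM pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the wavelet family (`db4`), decomposition level, per-sub-band statistics, SVM kernel, and the synthetic data shapes are all assumptions chosen for the sketch, using the `pywt` and `scikit-learn` libraries.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: 60 trials x 6 channels x 256 samples
# (the real dataset's channel count and epoch length may differ).
X_raw = rng.standard_normal((60, 6, 256))
y = rng.integers(0, 6, size=60)  # labels for the six imagined words

def dwt_features(epoch, wavelet="db4", level=4):
    """Per-channel DWT, then simple statistics of each sub-band.

    For each channel, wavedec returns level+1 coefficient arrays
    (one approximation plus `level` detail sub-bands); three
    statistics per sub-band serve as illustrative features.
    """
    feats = []
    for channel in epoch:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        for c in coeffs:
            feats.extend([np.mean(np.abs(c)),  # mean absolute value
                          np.std(c),           # standard deviation
                          np.sum(c ** 2)])     # sub-band energy
    return np.array(feats)

# Build the feature matrix: 6 channels x 5 sub-bands x 3 stats = 90 features.
X = np.array([dwt_features(ep) for ep in X_raw])

# RBF-kernel SVM classifier over the DWT features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
pred = clf.predict(X)
print(X.shape, pred.shape)
```

In practice the features would be computed per EEG channel on band-pass-filtered epochs, and accuracy would be estimated with cross-validation rather than on the training data as in this toy run.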