A Review on EEG-Based Brain–Computer Interfaces for Imagined Speech Classification
Abstract
In neuroscience, and in medical science more broadly, recent studies have focused heavily on combining artificial intelligence with electroencephalography (EEG) for brain–computer interfaces (BCIs). This area has become a crucial research domain because of its potential to interpret brain activity and to develop technologies that translate brain signals into practical applications. A central goal of BCIs is to help people with speech disabilities communicate: by connecting the brain to a computer, a BCI allows individuals to control devices or communicate directly from their thoughts. Brain signals are usually measured with EEG; EEG signals change constantly, vary in frequency, and are unique to each person. This study explores the potential of EEG signals for classifying imagined speech, with a focus on understanding the brain's response to speech-related activity and its application in BCIs. It highlights the importance of feature extraction techniques, including time-domain, frequency-domain, and time-frequency analyses, in improving the classification of EEG-based imagined speech, and it covers the full EEG processing chain: data acquisition, pre-processing, feature extraction, and classification. Linear classifiers, such as support vector machines and logistic regression, are employed alongside neural networks, particularly convolutional neural networks (CNNs) and artificial neural networks (ANNs), to classify EEG data associated with imagined speech. The literature indicates that EEG data used for analyzing brain activity are complex and can be acquired with different techniques and devices; depending on the acquisition method and the study objectives, multiple steps such as preprocessing, feature extraction, and classification may be required.
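The pipeline summarized above (trial data, frequency-domain feature extraction, a linear classifier) can be sketched in a minimal way. The example below is illustrative only: it uses synthetic signals rather than real EEG, an assumed sampling rate of 256 Hz, and standard EEG band definitions; band power from an FFT stands in for the frequency-domain features, and a linear support vector machine for the classifier.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
fs = 256                      # assumed sampling rate (Hz)
n_trials, n_samples = 200, fs # 1-second trials

def bandpower_features(trials, fs,
                       bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """Frequency-domain features: mean FFT power in theta/alpha/beta/gamma bands."""
    freqs = np.fft.rfftfreq(trials.shape[1], d=1 / fs)
    psd = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    return np.column_stack([
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands
    ])

# Synthetic two-class data: class 1 carries extra beta-band (20 Hz) power,
# a stand-in for a spectral difference between two imagined words.
t = np.arange(n_samples) / fs
labels = rng.integers(0, 2, n_trials)
trials = rng.normal(size=(n_trials, n_samples))
trials += labels[:, None] * 0.8 * np.sin(2 * np.pi * 20 * t)

X = np.log(bandpower_features(trials, fs))  # log-power is a common transform
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

On real EEG, the same structure applies, but the raw recordings would first be band-pass filtered and epoched, and the feature set (time-domain statistics, wavelet coefficients, or learned CNN features) chosen to match the study design.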