
dc.contributor.author: Moctezuma, Luis Alfredo
dc.contributor.author: Abe, Takashi
dc.contributor.author: Molinas Cabrera, Maria Marta
dc.date.accessioned: 2022-03-24T12:28:45Z
dc.date.available: 2022-03-24T12:28:45Z
dc.date.issued: 2022
dc.description.abstract: In this study, we explore how different levels of emotional intensity (Arousal) and pleasantness (Valence) are reflected in electroencephalographic (EEG) signals. We performed the experiments on EEG data from 32 subjects in the public DEAP dataset, where the subjects were stimulated with 60-second videos to elicit different levels of Arousal/Valence and then self-reported ratings from 1 to 9 using the Self-Assessment Manikin (SAM). The EEG data were pre-processed and used as input to a Convolutional Neural Network (CNN). First, all 32 EEG channels were used to compute the maximum obtainable accuracy for each subject, as well as to create a single model using data from all subjects. The experiment was then repeated using one channel at a time, to determine whether specific channels contain more information for discriminating between Low and High Arousal/Valence. The results indicate that the accuracy obtained with a single channel is lower than that obtained with all 32 channels. An optimization process for EEG channel selection was then designed with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), with the aim of obtaining optimal channel combinations with high recognition accuracy. The genetic algorithm searches the space of possible combinations using a chromosome representation over all 32 channels, and the EEG data corresponding to each chromosome in the successive populations are evaluated iteratively against two unconstrained objectives: maximizing classification accuracy and reducing the number of EEG channels required for classification. The best combinations obtained from the Pareto front suggest that as few as 8-10 channels can fulfill both objectives and provide the basis for a lighter design of EEG systems for emotion recognition. In the best cases, the results show accuracies of up to 1.00 for Low vs High Arousal using 8 EEG channels, and 1.00 for Low vs High Valence using only 2 EEG channels. These results are encouraging for research and healthcare applications that require automatic emotion recognition with wearable EEG.
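
The abstract outlines the channel-selection search: a binary chromosome over the 32 DEAP channels, evolved by NSGA-II against two objectives (maximize classification accuracy, minimize channel count), with the resulting Pareto front exposing the accuracy/channel trade-off. The sketch below illustrates one way such a setup could look. It is a minimal illustration, not the authors' code: the pymoo library, the ChannelSelection problem class, and the evaluate_accuracy() placeholder (standing in for training and scoring the 2D CNN on the selected channels) are assumptions introduced here.

# Minimal sketch of NSGA-II EEG channel selection, assuming the pymoo library.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.sampling.rnd import BinaryRandomSampling
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.bitflip import BitflipMutation
from pymoo.optimize import minimize

N_CHANNELS = 32  # DEAP recordings use 32 EEG channels


def evaluate_accuracy(mask: np.ndarray) -> float:
    """Placeholder: in the study this step would train and evaluate the 2D CNN
    on the channels selected by the chromosome and return its accuracy
    (Low vs High Arousal or Valence). Here it is a deterministic toy proxy."""
    rng = np.random.default_rng(int(mask.sum()))
    return float(rng.uniform(0.5, 1.0))


class ChannelSelection(ElementwiseProblem):
    """Binary chromosome of length 32; each gene toggles one EEG channel."""

    def __init__(self):
        super().__init__(n_var=N_CHANNELS, n_obj=2, xl=0, xu=1, vtype=bool)

    def _evaluate(self, x, out, *args, **kwargs):
        mask = np.asarray(x, dtype=bool)
        n_selected = int(mask.sum())
        if n_selected == 0:  # an empty channel set cannot be classified
            out["F"] = [1.0, N_CHANNELS]
            return
        acc = evaluate_accuracy(mask)
        # pymoo minimizes, so the two objectives become:
        #   1) 1 - accuracy            (maximize classification accuracy)
        #   2) number of channels used (minimize required EEG channels)
        out["F"] = [1.0 - acc, n_selected]


algorithm = NSGA2(
    pop_size=40,
    sampling=BinaryRandomSampling(),
    crossover=TwoPointCrossover(),
    mutation=BitflipMutation(),
    eliminate_duplicates=True,
)
res = minimize(ChannelSelection(), algorithm, ("n_gen", 50), seed=1, verbose=False)
# res.X holds the Pareto-optimal channel masks, res.F the (1 - accuracy, #channels) pairs.
print(res.F)

Running such a sketch yields res.X (Pareto-optimal channel masks) and res.F (pairs of 1 - accuracy and channel count); inspecting a front of this kind is how the abstract's 8-10 channel combinations, and the 8-channel and 2-channel best cases, would be read off.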
dc.identifier.citation: Moctezuma LA, Abe T, Molinas Cabrera MM. Two-dimensional CNN-based distinction of human emotions from EEG channels selected by Multi-Objective evolutionary algorithm. Scientific Reports. 2022;12
dc.identifier.cristinID: FRIDAID 2007351
dc.identifier.doi: 10.1038/s41598-022-07517-5
dc.identifier.issn: 2045-2322
dc.identifier.uri: https://hdl.handle.net/10037/24545
dc.language.iso: eng
dc.publisher: Nature Research
dc.relation.journal: Scientific Reports
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2022 The Author(s)
dc.title: Two-dimensional CNN-based distinction of human emotions from EEG channels selected by Multi-Objective evolutionary algorithm
dc.type.version: publishedVersion
dc.type: Journal article
dc.type: Tidsskriftartikkel (journal article)
dc.type: Peer reviewed

