No full text
Poster (Scientific congresses, symposiums and conference proceedings)
A comparative study of automatic classifiers to recognize speakers based on fricatives
Hosseini Kivanani, Nina; Asadi, Homa; Schommer, Christoph et al.
2022, 1st Interdisciplinary Conference on Voice Identity (VoiceID): Perception, Production, and Computational Approaches
 

Abstract :
[en] Speakers’ voices are highly individual, which is why speakers can be identified by their voice. Nevertheless, voices are often more variable within the same speaker than between speakers, which makes it difficult for both humans and machines to differentiate between speakers (Hansen & Hasan, 2015). To date, various machine learning methods have been developed to recognize speakers from the acoustic characteristics of their speech; however, not all of them are equally effective for speaker identification, and results vary with the technique used. Here, different machine learning classifiers (Naïve Bayes (NB), support vector machines (SVM), random forests (RF), and k-nearest neighbors (KNN)) were applied to identify the best classification model for recognizing speakers from voiceless fricative segments across 4 speaking styles, using the acoustic features center of gravity, standard deviation, and skewness. We used a dataset consisting of speech samples from 7 native Persian subjects recorded in 4 speaking styles: read, spontaneous, clear, and child-directed speech. The results revealed that the best-performing model for predicting speakers based on segment type was the RF model, with an accuracy of 81.3%, followed by SVM (76.3%), NB (75.4%), and KNN (74%) (Table 1). The RF performed best for the voiceless fricatives /f/, /s/, and /ʃ/, which may indicate that these segments are more speaker-specific than others (Gordon et al., 2002), while performance was low for the voiceless fricatives /h/ and /x/. This is also visible in the confusion matrix (Figure 1), with high precision and recall values (above 80%) for /f/, /s/, and /ʃ/ (Table 2). Model performance improved for data from the clear speaking style, suggesting that speaker-specific information in voiceless fricatives is more distinguishable in clear speech than in the other styles (Table 1).
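The poster does not include analysis code; the following is a minimal sketch of how such a classifier comparison could be run with scikit-learn, assuming the spectral moments (center of gravity, standard deviation, skewness) have already been measured per fricative token (e.g. in Praat) and exported to a table. The file name, column names, and hyperparameters are hypothetical and not the authors' actual pipeline.

    # Hypothetical sketch of the classifier comparison (not the authors' code).
    # Assumes spectral moments per fricative token were measured beforehand
    # (e.g. in Praat) and exported to a CSV with illustrative column names.
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier

    # One row per fricative token: cog (center of gravity), sd, skewness,
    # plus the segment label, speaking style, and speaker identity.
    df = pd.read_csv("fricative_features.csv")
    X = df[["cog", "sd", "skewness"]].values
    y = df["speaker"].values  # classification target: the speaker

    models = {
        "NB":  GaussianNB(),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "RF":  RandomForestClassifier(n_estimators=500, random_state=0),
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    }

    # 5-fold cross-validated accuracy for each classifier.
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name}: mean accuracy = {scores.mean():.3f}")

Feature scaling sits inside a pipeline for the SVM and KNN models so it is refit within each cross-validation fold; the same comparison could be repeated per segment or per speaking style by filtering the table first.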
Disciplines :
Computer science
Author, co-author :
Hosseini Kivanani, Nina  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Asadi, Homa;  University of Isfahan, University of Zurich
Schommer, Christoph  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Dellwo, Volker;  University of Zurich
External co-authors :
yes
Language :
English
Title :
A comparative study of automatic classifiers to recognize speakers based on fricatives
Publication date :
July 2022
Event name :
1st Interdisciplinary Conference on Voice Identity (VoiceID): Perception, Production, and Computational Approaches
Event place :
Zurich, Switzerland
Event date :
from 04-07-2022 to 06-07-2022
Audience :
International
Available on ORBilu :
since 19 July 2022
