Author
Listed:
- Ji Chaoqun
- Chen Wei
- Ye Peng
- Wang Zhou
- Zhou Shuhang
Abstract
In speaker verification, Softmax can serve as a back-end for multi-class classification, but the traditional Softmax has several limitations that constrain performance. During training, Softmax performs multi-class classification, while the verification stage is a binary decision, creating a mismatch between the multi-class training objective and the binary verification task. A second issue is the imbalance between positive and negative samples when sampling for a binary classification problem: the surplus of negative samples can cause their gradients to dominate training, degrading the performance of the speaker verification system. Third, when computing similarity scores for positive and negative samples, the two score distributions may overlap; if the overlap is large, the discriminability between positive and negative samples drops, weakening the system's ability to separate them. Conversely, a compact distribution of the positive and negative sample spaces benefits system performance, and concentrating learning on hard samples improves the network's convergence and generalization. This paper therefore introduces an adaptive objective function, SphereSpeaker, that addresses these issues. SphereSpeaker adds several types of hyperparameters to Softmax, making it better suited to speaker verification, and introduces three distinct angular margins into the network update, further improving the stability and generalization ability of the model.
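The abstract does not give SphereSpeaker's exact formulation, but a Softmax objective with three angular margins can be sketched in the style of combined-margin losses. In this sketch the scale `s` and the margins `m1` (multiplicative), `m2` (additive angular), and `m3` (additive cosine) are illustrative placeholders, not the paper's actual hyperparameters.

```python
import math

def angular_margin_logit(cos_theta, s=30.0, m1=1.0, m2=0.2, m3=0.1):
    """Target-class logit with three angular margins (illustrative values):
    s * (cos(m1 * theta + m2) - m3), where theta = arccos(cos_theta).
    Each margin shrinks the target score, forcing a tighter, more
    compact positive-class region on the hypersphere."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return s * (math.cos(m1 * theta + m2) - m3)

def margin_softmax_loss(target_cos, nontarget_cos, s=30.0):
    """Cross-entropy over one target cosine score and a list of
    non-target cosine scores; only the target logit is penalized."""
    logits = [angular_margin_logit(target_cos, s)] + [s * c for c in nontarget_cos]
    peak = max(logits)                      # stabilize the exponentials
    exps = [math.exp(z - peak) for z in logits]
    return -math.log(exps[0] / sum(exps))
```

Because the margins only reduce the target logit, the margined loss is always at least as large as the plain Softmax loss on the same scores, which is what drives the network toward more discriminative embeddings.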
Meanwhile, to address the gradient vanishing, gradient explosion, and model degradation that can occur in deep neural networks, this paper also introduces a deep neural network named Residual Network PReLU (ResNet-P). Experimental results indicate that, compared with other deep neural network methods, the proposed method achieves the lowest equal error rate, significantly improving the performance of the speaker verification system.
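The architectural idea behind ResNet-P can be illustrated with a minimal scalar residual block that swaps ReLU for PReLU; the weights and the negative slope `a` here are hypothetical, since the paper's layer configuration is not given in the abstract.

```python
def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, learnable slope a for
    negative inputs, so gradients never vanish entirely on the
    negative side (unlike plain ReLU)."""
    return x if x >= 0 else a * x

def residual_block(x, w1, w2, a=0.25):
    """Scalar sketch of a residual block: out = PReLU(x + F(x)),
    where F(x) = w2 * PReLU(w1 * x). The identity skip connection
    lets gradients bypass F, which is what counters vanishing
    gradients and degradation in deep stacks."""
    h = prelu(w1 * x, a)
    return prelu(x + w2 * h, a)
```

Even when the learned transform F contributes nothing (both weights zero), the block still passes its input through the skip path unchanged, which is why deeper residual stacks do not degrade the way plain stacks can.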
Suggested Citation
Ji Chaoqun & Chen Wei & Ye Peng & Wang Zhou & Zhou Shuhang, 2025.
"Target sample mining with modified activation residual network for speaker verification,"
PLOS ONE, Public Library of Science, vol. 20(4), pages 1-14, April.
Handle:
RePEc:plo:pone00:0320256
DOI: 10.1371/journal.pone.0320256