End-to-end losses based on speaker basis vectors and all-speaker hard negative mining for speaker verification

Hee Soo Heo, Jee Weon Jung, IL Ho Yang, Sung Hyun Yoon, Hye Jin Shim, Ha Jin Yu

Research output: Contribution to journal › Conference article › peer-review


Abstract

In recent years, speaker verification has primarily been performed using deep neural networks trained to output embeddings from input features such as spectrograms or Mel-filterbank energies. Studies designing various loss functions, including metric-learning objectives, have been widely explored. In this study, we propose two end-to-end loss functions for speaker verification based on the concept of speaker bases, which are trainable parameters. One loss function is designed to further increase the inter-speaker variation; the other applies the same concept through hard negative mining. Each speaker basis is trained to represent the corresponding speaker during the training of the deep neural networks. In contrast to conventional loss functions, which can consider only the limited number of speakers included in a mini-batch, the proposed loss functions consider all speakers in the training set regardless of the mini-batch composition. In particular, they enable hard negative mining and the calculation of between-speaker variations with all speakers taken into account. Through experiments on the VoxCeleb1 and VoxCeleb2 datasets, we confirmed that the proposed loss functions can supplement conventional softmax and center loss functions.
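To make the abstract's core idea concrete, the sketch below illustrates one plausible form of the two losses: a between-speaker term that pushes all speaker basis vectors apart, and a hard-negative term that, for each embedding, compares its own speaker's basis against the hardest negative basis drawn from all training speakers rather than only those in the mini-batch. This is a simplified numpy illustration under assumed cosine-similarity and hinge-margin formulations; the function name, margin value, and exact loss forms are hypothetical and not taken from the paper.

```python
import numpy as np

def speaker_basis_losses(embeddings, labels, bases, margin=0.2):
    """Simplified sketch of speaker-basis losses (hypothetical formulation).

    embeddings: (B, D) utterance embeddings in a mini-batch
    labels:     (B,)   speaker indices into `bases`
    bases:      (S, D) one trainable basis vector per training speaker
    margin:     illustrative hinge margin, not from the paper
    """
    # cosine similarity of each embedding to every speaker basis (all S speakers)
    bases_n = bases / np.linalg.norm(bases, axis=1, keepdims=True)
    emb_n = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = emb_n @ bases_n.T                        # (B, S)

    # similarity of each embedding to its own speaker's basis
    idx = np.arange(len(labels))
    pos = sims[idx, labels]

    # hardest negative: highest similarity to any OTHER speaker's basis,
    # searched over the full training set, not just the mini-batch speakers
    neg_sims = sims.copy()
    neg_sims[idx, labels] = -np.inf
    hard_neg = neg_sims.max(axis=1)

    # (1) between-speaker loss: penalise similarity between distinct bases
    gram = bases_n @ bases_n.T
    off_diag = gram[~np.eye(len(bases), dtype=bool)]
    between_loss = off_diag.mean()

    # (2) hard-negative loss: hinge margin between own and hardest other basis
    hard_neg_loss = np.maximum(0.0, margin - (pos - hard_neg)).mean()
    return between_loss, hard_neg_loss
```

Because the hardest negative is taken over the full `(B, S)` similarity matrix, every training speaker participates in each update, which is the property the abstract contrasts with mini-batch-limited losses.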

