Unsupervised Representation Learning

Unsupervised representation learning has recently become a popular approach for pretraining deep neural network models. In image classification, contrastive methods like SimCLR and MoCo learn strong representations from unlabeled data, in some cases approaching the accuracy of fully supervised training. Similar ideas have been applied to speech with models like Wav2Vec 2.0.
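
To make the contrastive idea concrete, here is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch. It is illustrative only: the function name and shapes are assumptions, not code from SimCLR, MoCo, or our own work. The loss treats two augmented views of the same input as a positive pair and all other examples in the batch as negatives.

```python
# Minimal sketch of a SimCLR-style contrastive (NT-Xent) loss.
# Illustrative only; not the reference implementation of any paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, D), unit-norm rows
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    # Mask the diagonal so an example cannot match itself.
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))
    # The positive for row i is its other view: i + B for the first half, i - B for the second.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

With only this objective and a stream of augmented pairs, the encoder is pushed to map different views of the same input close together and everything else apart, which is what makes the learned representations useful for downstream classifiers.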

We are also exploring similar ideas for speech and other time-domain signals. Our first success adapted the HuBERT approach to unsupervised domain adaptation.
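
For context, HuBERT trains by masking spans of input frames and predicting discrete pseudo-labels obtained from offline clustering (e.g., k-means over acoustic features). The sketch below shows only that masked-prediction objective, under assumed shapes and names; it is not our domain-adaptation method.

```python
# Minimal sketch of a HuBERT-style masked-prediction objective,
# assuming precomputed cluster labels. Names and shapes are illustrative.
import torch
import torch.nn.functional as F

def masked_prediction_loss(logits, cluster_labels, mask):
    """
    logits:         (batch, time, num_clusters) frame-level encoder predictions
    cluster_labels: (batch, time) discrete pseudo-labels from offline clustering
    mask:           (batch, time) boolean, True where input frames were masked
    """
    # The loss is computed only over masked frames, so the model must infer
    # the hidden units of a frame from its surrounding, unmasked context.
    return F.cross_entropy(logits[mask], cluster_labels[mask])
```

Because the targets come from clustering rather than transcripts, the same objective can be run on unlabeled audio from a new domain, which is what makes it a natural starting point for unsupervised domain adaptation.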

Comments? Send me an email.