arxiv:2406.12209

Interface Design for Self-Supervised Speech Models

Published on Jun 18
· Submitted by atosystem on Jun 20

Abstract

Self-supervised speech (SSL) models have recently become widely adopted for many downstream speech processing tasks. The common usage pattern is to employ an SSL model as a feature extractor and then train a downstream prediction head to solve a specific task. However, different layers of SSL models have been shown to capture different types of information, and methods for combining them are not well studied. To this end, we extend the general framework for SSL model utilization by proposing an interface that connects the upstream and downstream models. Under this view, the dominant technique of combining features via a layerwise weighted sum can be regarded as one specific interface. We propose several alternative interface designs and demonstrate that the weighted-sum interface is suboptimal for many tasks. In particular, we show that a convolutional interface whose depth scales logarithmically with the depth of the upstream model consistently outperforms many other interface designs.
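To make the baseline concrete, here is a minimal sketch of the layerwise weighted-sum interface the abstract describes: one learned scalar per upstream layer, softmax-normalized, used to mix all layer outputs into a single feature sequence. The class name, tensor layout (batch, time, dim), and softmax normalization are illustrative assumptions, not the paper's exact code.

```python
import torch
import torch.nn as nn


class WeightedSumInterface(nn.Module):
    """Combine all L upstream layer outputs with learned scalar weights.

    This is the dominant interface the paper argues is suboptimal:
    a softmax over L learnable scalars mixes the layers into one feature.
    """

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable scalar per upstream layer, initialized uniformly.
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: list of L tensors, each of shape (batch, time, dim)
        stacked = torch.stack(hidden_states, dim=0)          # (L, B, T, D)
        norm_w = torch.softmax(self.weights, dim=0)          # (L,)
        # Broadcast the weights over batch, time, and feature dims.
        return (norm_w.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (B, T, D)
```

A downstream head would then consume the mixed `(batch, time, dim)` features exactly as if they came from a single layer.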

Community

Paper author and submitter · edited Jun 20


Weighted Sum is suboptimal!
This paper highlights the importance of how we utilize self-supervised speech models and proposes "Hierarchical Convolution," an interface that generally improves the utilization of these models across multiple tasks, even under end-to-end fine-tuning scenarios.

Paper: https://arxiv.org/abs/2406.12209
Code: https://github.com/atosystem/SSL_Interface
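The abstract's winning design is a convolutional interface whose depth scales logarithmically with the number of upstream layers. One plausible reading, sketched below, convolves over the layer axis with stride 2, so each stage halves the number of layer slots and roughly log2(L) stages collapse them to one. All names, the ReLU, the padding scheme, and the final mean are assumptions; the paper's exact architecture is in the linked repository.

```python
import math

import torch
import torch.nn as nn


class HierarchicalConvInterface(nn.Module):
    """Merge L layer outputs pairwise with stride-2 convolutions.

    Depth is ceil(log2(L)), matching the abstract's claim that the
    convolutional interface's depth scales logarithmically with the
    upstream model's depth.
    """

    def __init__(self, num_layers: int, dim: int):
        super().__init__()
        depth = max(1, math.ceil(math.log2(num_layers)))
        # Each stage halves the layer axis: kernel 2, stride 2 over layers.
        self.stages = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=2, stride=2) for _ in range(depth)
        )

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: list of L tensors, each of shape (batch, time, dim)
        x = torch.stack(hidden_states, dim=1)                # (B, L, T, D)
        B, L, T, D = x.shape
        # Treat features as channels and layers as the conv's length axis.
        x = x.permute(0, 2, 3, 1).reshape(B * T, D, L)       # (B*T, D, L)
        for conv in self.stages:
            if x.shape[-1] % 2 == 1:                         # pad odd layer counts
                x = nn.functional.pad(x, (0, 1))
            x = torch.relu(conv(x))
        x = x.mean(dim=-1)                                   # collapse remaining slots
        return x.reshape(B, T, D)
```

For a 13-layer upstream (e.g. a base-sized transformer plus its embedding layer), this builds ceil(log2 13) = 4 stages, reducing the layer axis 13 → 7 → 4 → 2 → 1.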


