
Style Modeling for Multi-Speaker Articulation-to-Speech

Authors:
Miseul Kim, Zhenyu Piao, Jihyun Lee, Hong-Goo Kang
Keyword:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Sound (cs.SD)
journal:
--
date:
2023-12-21
Abstract
In this paper, we propose a neural articulation-to-speech (ATS) framework that synthesizes high-quality speech from articulatory signals in a multi-speaker setting. Most conventional ATS approaches focus only on modeling the contextual information of speech from a single speaker's articulatory features. To explicitly represent each speaker's speaking style as well as the contextual information, our proposed model estimates style embeddings, guided by essential speech style attributes such as pitch and energy. We adopt convolutional layers and transformer-based attention layers so that our model fully exploits both the local and global information of articulatory signals, measured by electromagnetic articulography (EMA). Our model significantly improves the quality of synthesized speech compared to the baseline in terms of objective and subjective measurements on the Haskins dataset.
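The abstract mentions guiding style embeddings with pitch and energy attributes. As a rough illustration of that idea (not the authors' code), the sketch below derives a simple per-utterance style summary from a 1-D signal using frame log-energy and a zero-crossing-rate proxy for pitch; all function names and frame parameters here are hypothetical choices for the example.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping frames (shape: n_frames x frame_len)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def style_summary(x, frame_len=256, hop=128):
    """Toy style vector: mean/std of frame log-energy and of a pitch proxy (ZCR)."""
    frames = frame_signal(x, frame_len, hop)
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-8)  # frame log-energy
    # Zero-crossing rate as a crude stand-in for pitch-related information
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

# Example: a synthetic 1-second "utterance" at 16 kHz
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 120 * np.arange(16000) / 16000) \
      + 0.01 * rng.standard_normal(16000)
emb = style_summary(sig)
print(emb.shape)  # (4,)
```

In the paper's model such attribute statistics would instead guide learned style embeddings inside a neural network; this sketch only shows that pitch- and energy-like features are cheap to summarize per utterance.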
PDF: Style Modeling for Multi-Speaker Articulation-to-Speech.pdf