
ANIM-400K: A Large-Scale Dataset for Automated End-To-End Dubbing of Video

Authors:
Kevin Cai, Chonghua Liu, David M. Chan
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Computer Vision and Pattern Recognition (cs.CV), Sound (cs.SD)
Journal:
--
Date:
2024-01-10
Abstract
The Internet's wealth of content, with up to 60% published in English, starkly contrasts the global population, of which only 18.8% are English speakers and just 5.1% consider it their native language, leading to disparities in online information access. Unfortunately, automated dubbing of video - replacing the audio track of a video with a translated alternative - remains a complex and challenging task: existing pipelines necessitate precise timing, facial movement synchronization, and prosody matching. While end-to-end dubbing offers a solution, data scarcity continues to impede the progress of both end-to-end and pipeline-based methods. In this work, we introduce Anim-400K, a comprehensive dataset of over 425K aligned animated video segments in Japanese and English supporting various video-related tasks, including automated dubbing, simultaneous translation, guided video summarization, and genre/theme/style classification. Our dataset is made publicly available for research purposes at https://github.com/davidmchan/Anim400K.
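To illustrate how aligned Japanese/English segments of this kind might be consumed for dubbing research, the sketch below iterates over a hypothetical JSON manifest of segment pairs. The manifest filename, field names, and directory layout are illustrative assumptions, not the dataset's published schema; consult the GitHub repository for the actual format.

```python
import json
from pathlib import Path


def load_aligned_segments(manifest_path: str):
    """Yield (japanese_clip, english_clip, metadata) triples from a
    hypothetical JSON manifest. All field names here are assumptions
    for illustration, not the repository's actual schema."""
    with open(manifest_path, encoding="utf-8") as f:
        manifest = json.load(f)
    for seg in manifest["segments"]:
        ja = Path(seg["ja_audio"])  # source-language (Japanese) clip
        en = Path(seg["en_audio"])  # aligned English dub of the same segment
        meta = {
            "show_id": seg.get("show_id"),
            "genre": seg.get("genre"),        # e.g. for genre/theme/style classification
            "duration": seg.get("duration"),  # seconds; dubbing requires matched timing
        }
        yield ja, en, meta


if __name__ == "__main__":
    # Assumes a local manifest file with the hypothetical layout above.
    for ja, en, meta in load_aligned_segments("anim400k_manifest.json"):
        print(f"{ja} -> {en} ({meta['duration']}s, genre={meta['genre']})")
```

Pairing each source clip with a duration-matched target clip is what makes such data usable for end-to-end dubbing, where the model must preserve timing rather than translate text alone.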