
Real-time Low-latency Music Source Separation using Hybrid Spectrogram-TasNet

Authors:
Satvik Venkatesh, Arthur Benilov, Philip Coleman, Frederic Roskam
Keywords:
Electrical Engineering and Systems Science, Audio and Speech Processing (eess.AS), Machine Learning (cs.LG), Sound (cs.SD)
Journal:
--
Date:
2024-02-27
Abstract
There have been significant advances in deep learning for music demixing in recent years. However, there has been little attention given to how these neural networks can be adapted for real-time low-latency applications, which could be helpful for hearing aids, remixing audio streams and live shows. In this paper, we investigate the various challenges involved in adapting current demixing models in the literature for this use case. Subsequently, inspired by the Hybrid Demucs architecture, we propose the Hybrid Spectrogram Time-domain Audio Separation Network HS-TasNet, which utilises the advantages of spectral and waveform domains. For a latency of 23 ms, the HS-TasNet obtains an overall signal-to-distortion ratio (SDR) of 4.65 on the MusDB test set, and increases to 5.55 with additional training data. These results demonstrate the potential of efficient demixing for real-time low-latency music applications.
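As a rough illustration of the latency figure quoted in the abstract, the sketch below relates analysis-frame size to algorithmic latency in frame-based real-time processing. The 23 ms figure is consistent with a 1024-sample buffer at 44.1 kHz, but the exact frame size and sample rate here are assumptions, not parameters stated in this abstract.

```python
# Minimal sketch (not the paper's implementation): the latency contributed
# by buffering one analysis frame in frame-based real-time separation.
# frame_size=1024 and sample_rate=44100 are assumed values that happen to
# reproduce the ~23 ms latency mentioned in the abstract.

def algorithmic_latency_ms(frame_size: int, sample_rate: int) -> float:
    """Milliseconds of latency incurred by waiting for one full frame."""
    return 1000.0 * frame_size / sample_rate

latency = algorithmic_latency_ms(frame_size=1024, sample_rate=44100)
print(f"{latency:.1f} ms")  # ~23.2 ms
```

Total system latency would also include model inference time and any output buffering, which this back-of-the-envelope calculation ignores.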
PDF: Real-time Low-latency Music Source Separation using Hybrid Spectrogram-TasNet.pdf