SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation

Koichi Saito1, Dongjun Kim2, Takashi Shibuya1, Chieh-Hsin Lai1,
Zhi Zhong3, Yuhta Takida1, Yuki Mitsufuji1,3
1Sony AI, 2Stanford University, 3Sony Group Corporation

Abstract

Sound content is an indispensable element for multimedia works such as video games, music, and films. Recent high-quality diffusion-based sound generation models can serve as valuable tools for creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitions between high-quality 1-step sound generation and superior sound quality through multi-step generation, allowing creators to initially shape sounds with 1-step samples before refining them through multi-step generation. While CTM fundamentally achieves flexible 1-step and multi-step generation, its impressive performance heavily depends on an additional pretrained feature extractor and an adversarial loss, which are expensive to train and not always available in other domains. Thus, we reframe CTM's training framework and introduce a novel feature distance that uses the teacher network for the distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. We also propose training-free controllable frameworks for SoundCTM, leveraging its flexible sampling capability. SoundCTM achieves both promising 1-step and multi-step real-time sound generation without using any extra off-the-shelf networks. Furthermore, we demonstrate SoundCTM's capability for controllable sound generation in a training-free manner.
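As a concrete illustration of the inference-time interpolation between the conditional and unconditional student models, the following is a minimal sketch assuming a single distilled network `g_theta(x_t, t, s, emb)` that jumps from time t to time s; the function name, signature, and blending weight `nu` are illustrative assumptions, not the released implementation.

```python
import torch

def interpolated_student_output(g_theta, x_t, t, s, cond_emb, uncond_emb, nu=1.0):
    """Blend conditional and unconditional student predictions at inference.

    `g_theta` is assumed to map (x_t, t, s, emb) to a sample jumped from
    time t to time s; this sketch only illustrates the interpolation idea.
    """
    out_cond = g_theta(x_t, t, s, cond_emb)      # text-conditional prediction
    out_uncond = g_theta(x_t, t, s, uncond_emb)  # unconditional prediction
    # nu = 1 keeps only the conditional branch; 0 < nu < 1 interpolates,
    # and nu > 1 extrapolates in the spirit of classifier-free guidance.
    return (1.0 - nu) * out_uncond + nu * out_cond
```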

Text-to-Sound Generation (various sampling steps)

Audio examples compare SoundCTM (1 step), SoundCTM (4 steps), SoundCTM (16 steps), and the teacher model TANGO on the following text prompts; a minimal multi-step sampling sketch follows the examples.
Loud explosions and bangs followed by men speaking.
An adult male speaks while birds chirp, slight rustling and brief buzzing occur, and emergency vehicle sirens are blaring in the distance.
Thunder claps, and hard rain falls and splashes on surfaces.
Birds chirping and water dripping with some banging in the background.
A man speaks and then an audience claps.
Frogs croaking and a humming with insects vocalizing.
Ocean waves crashing as water trickles as gusts of wind blow and seagulls squawk in the distance.
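
To make the 1-step versus multi-step trade-off above concrete, here is a minimal sampling loop under assumed conventions: a Karras-style sigma schedule, a student `g_theta(x_t, t, s, emb)` that jumps between arbitrary times, and `n_steps` controlling the number of network evaluations. All names and schedule parameters are illustrative, not the released configuration.

```python
import torch

@torch.no_grad()
def sample_soundctm(g_theta, cond_emb, shape, sigma_max=80.0, sigma_min=0.002,
                    n_steps=1, device="cpu"):
    """Multi-step sampling sketch: the same student is queried along a
    decreasing time schedule, so n_steps=1 gives single-jump generation and
    larger n_steps trades speed for quality."""
    # Karras-style decreasing time grid from sigma_max to sigma_min, then 0.
    rho = 7.0
    ramp = torch.linspace(0, 1, n_steps, device=device)
    sigmas = (sigma_max ** (1 / rho)
              + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    sigmas = torch.cat([sigmas, torch.zeros(1, device=device)])

    x = torch.randn(shape, device=device) * sigma_max  # start from pure noise
    for t, s in zip(sigmas[:-1], sigmas[1:]):
        x = g_theta(x, t, s, cond_emb)  # jump from time t to time s in one call
    return x
```

With `n_steps=1` this reduces to the single-jump generation in the first column, while larger values reuse the same network for progressively finer refinement.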

Text-to-Sound Generation (1-step generation, NFE=1)

Audio examples compare SoundCTM (ours), ConsistencyTTA, and ConsistencyTTA-CLAP-FT on the following text prompts:
People are speaking as a vehicle goes by.
An adult male speaks while birds chirp, slight rustling and brief buzzing occur, and emergency vehicle sirens are blaring in the distance.
Wind is blowing along with some engines.
Thunder claps, and hard rain falls and splashes on surfaces.
A man is speaking with crowd noise in the background.
A duck quacking.
Birds cooing and rustling.

SoundCTM's Training-free Controllable Generation (Sound Intensity Control)

Given a text prompt and a target intensity, audio examples compare default text-to-sound generation (16 steps), z_T optimization followed by 16-step generation, and loss-based guidance (16 steps); a minimal loss-based guidance sketch follows the examples.

A child crying and a car door closing.
Loud wind noise followed by a car accelerating fast.
Ducks quacking and man speaking.
A bell ringing repeatedly.
Rain falls and a man speaks with distant thunder.
Speech and then a pop and laughter.
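
As a rough illustration of the loss-based guidance setting above, the sketch below nudges each sampling jump with the gradient of an intensity-matching loss on the predicted clean sample. The helper `intensity_fn`, the MSE objective, and `guidance_scale` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def loss_guided_jump(g_theta, x_t, t, s, cond_emb, target_intensity,
                     intensity_fn, guidance_scale=1.0):
    """One guided sampling jump from time t to time s (hypothetical interface).

    The student's clean-sample prediction is scored against a target intensity
    curve, and the jump is corrected along the negative loss gradient.
    """
    x_t = x_t.detach().requires_grad_(True)
    # Predict the clean sample (jump all the way to time 0) for the loss.
    x0_hat = g_theta(x_t, t, torch.zeros_like(t), cond_emb)
    loss = F.mse_loss(intensity_fn(x0_hat), target_intensity)
    grad = torch.autograd.grad(loss, x_t)[0]

    # Take the regular jump to time s, then steer it against the intensity loss.
    x_s = g_theta(x_t, t, s, cond_emb)
    return (x_s - guidance_scale * grad).detach()
```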

BibTeX

@article{saito2024soundctm,
  title={SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation},
  author={Koichi Saito and Dongjun Kim and Takashi Shibuya and Chieh-Hsin Lai and Zhi Zhong and Yuhta Takida and Yuki Mitsufuji},
  journal={arXiv preprint arXiv:2405.18503},
  year={2024}
}