Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation

Preprint: arXiv:1904.04472, Published version: ISCA Archive Interspeech 2019

Authors

  • Ryuichi Yamamoto (LINE Corp.)
  • Eunwoo Song (NAVER Corp.)
  • Jae-Min Kim (NAVER Corp.)

Abstract

This paper proposes an effective probability density distillation (PDD) algorithm for WaveNet-based parallel waveform generation (PWG) systems. Recently proposed teacher-student frameworks in the PWG system have successfully achieved real-time generation of speech signals. However, the difficulty of optimizing the PDD criterion without auxiliary losses results in quality degradation of the synthesized speech. To generate more natural speech signals within the teacher-student framework, we propose a novel optimization criterion based on generative adversarial networks (GANs). In the proposed method, the inverse autoregressive flow-based student model is incorporated as a generator in the GAN framework and jointly optimized by the PDD mechanism together with the proposed adversarial learning method. As this process encourages the student to model the distribution of realistic speech waveforms, the perceptual quality of the synthesized speech becomes much more natural. Our experimental results verify that PWG systems with the proposed method outperform both those using conventional approaches and autoregressive generation systems with a well-trained teacher WaveNet.
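As a rough illustration of the training objective described in the abstract, the sketch below combines the three loss terms used in the proposed student (KLD-based distillation from the teacher, an STFT auxiliary loss, and an LSGAN-style adversarial loss). This is a minimal sketch, not the authors' implementation: the model interfaces (`student(z, mel)`, `teacher.log_prob`, `discriminator`), the sampled KLD estimate, and the weights `lambda_aux` / `lambda_adv` are assumptions for illustration only.

```python
# Minimal sketch of a combined student loss: KLD distillation + STFT auxiliary
# + adversarial term.  Model interfaces and weights are hypothetical.
import torch
import torch.nn.functional as F


def stft_magnitude(x, fft_size=1024, hop=256, win=1024):
    """Magnitude spectrogram used for the STFT auxiliary loss."""
    window = torch.hann_window(win, device=x.device)
    spec = torch.stft(x, fft_size, hop, win, window=window, return_complex=True)
    return spec.abs()


def student_loss(student, teacher, discriminator, mel, target_wave,
                 lambda_aux=1.0, lambda_adv=4.0):
    # 1) Draw a waveform from the IAF-based student conditioned on the
    #    mel-spectrogram (hypothetical interface).
    z = torch.randn_like(target_wave)
    fake_wave, student_logp = student(z, mel)

    # 2) Probability density distillation: KL divergence between student and
    #    teacher output distributions, approximated here with a single-sample
    #    log-likelihood gap (the paper uses a closed-form Gaussian KLD).
    with torch.no_grad():
        teacher_logp = teacher.log_prob(fake_wave, mel)
    loss_kld = (student_logp - teacher_logp).mean()

    # 3) STFT auxiliary loss between generated and recorded waveforms.
    loss_aux = F.l1_loss(stft_magnitude(fake_wave), stft_magnitude(target_wave))

    # 4) Adversarial loss: push the discriminator to judge the sample as real
    #    (LSGAN-style formulation, assumed here).
    loss_adv = torch.mean((1.0 - discriminator(fake_wave)) ** 2)

    return loss_kld + lambda_aux * loss_aux + lambda_adv * loss_adv
```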

Audio samples

There are eight systems in total, including six parallel waveform generation systems (Student-*) trained with different optimization criteria (see the loss-configuration sketch after the list):

  1. Ground truth: Recorded speech.
  2. Teacher: Teacher Gaussian WaveNet [1].
  3. Student-AX: STFT auxiliary loss.
  4. Student-AXAD: STFT and adversarial losses.
  5. Student-KL: KLD loss (Ablation study; not used for subjective evaluations).
  6. Student-KLAX: KLD and STFT auxiliary losses.
  7. Student-KLAXAD: KLD, STFT, and adversarial losses (proposed).
  8. Student-KLAXAD*: Weight-optimized version of the above (proposed).
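For reference, the naming scheme maps to loss configurations roughly as follows. The flags simply restate the list above; the numeric weights are placeholders, not the values tuned in the paper.

```python
# Illustrative mapping from system name to enabled loss terms (weights are
# placeholders, not the paper's tuned values).
LOSS_CONFIGS = {
    "Student-AX":      {"kld": 0.0, "stft_aux": 1.0, "adversarial": 0.0},
    "Student-AXAD":    {"kld": 0.0, "stft_aux": 1.0, "adversarial": 1.0},
    "Student-KL":      {"kld": 1.0, "stft_aux": 0.0, "adversarial": 0.0},
    "Student-KLAX":    {"kld": 1.0, "stft_aux": 1.0, "adversarial": 0.0},
    "Student-KLAXAD":  {"kld": 1.0, "stft_aux": 1.0, "adversarial": 1.0},
    "Student-KLAXAD*": {"kld": 1.0, "stft_aux": 1.0, "adversarial": 1.0},  # re-tuned weights
}
```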

Copy-synthesis

Japanese female speaker

Sample 1

Audio players: Ground truth, Teacher, Student-AX, Student-AXAD, Student-KL, Student-KLAX, Student-KLAXAD, Student-KLAXAD*

Sample 2

Audio players: Ground truth, Teacher, Student-AX, Student-AXAD, Student-KL, Student-KLAX, Student-KLAXAD, Student-KLAXAD*

Sample 3

Audio players: Ground truth, Teacher, Student-AX, Student-AXAD, Student-KL, Student-KLAX, Student-KLAXAD, Student-KLAXAD*

Sample 4

Audio players: Ground truth, Teacher, Student-AX, Student-AXAD, Student-KL, Student-KLAX, Student-KLAXAD, Student-KLAXAD*

Sample 5

Audio players: Ground truth, Teacher, Student-AX, Student-AXAD, Student-KL, Student-KLAX, Student-KLAXAD, Student-KLAXAD*

References

  • [1]: W. Ping, K. Peng, and J. Chen, “ClariNet: Parallel wave generation in end-to-end text-to-speech,” in Proc. ICLR, 2019 (arXiv).

Acknowledgements

Work performed with nVoice, Clova Voice, NAVER Corp.

Citation

@inproceedings{Yamamoto2019,
  author={Ryuichi Yamamoto and Eunwoo Song and Jae-Min Kim},
  title={{Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={699--703},
  doi={10.21437/Interspeech.2019-1965},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1965}
}
Ryuichi Yamamoto
Engineer/Researcher

I am an engineer/researcher passionate about speech synthesis. I love to write code and enjoy open-source collaboration on GitHub. Please feel free to reach out on Twitter and GitHub.
