DRSpeech: Degradation-Robust Text-to-Speech Synthesis with Frame-Level and Utterance-Level Acoustic Representation Learning

Takaaki Saeki, Kentaro Tachibana, Ryuichi Yamamoto
Apr 4, 2022

Deep Learning · TTS · Interspeech

Ryuichi Yamamoto
Engineer/Researcher
I am an engineer/researcher passionate about speech synthesis. I love to write code and enjoy open-source collaboration on GitHub. Please feel free to reach out on Twitter and GitHub.

Related
- TTS-by-TTS 2: Data-selective Augmentation for Neural Speech Synthesis Using Ranking Support Vector Machine with Variational Autoencoder
- Cross-Speaker Emotion Transfer for Low-Resource Text-to-Speech Using Non-Parallel Voice Conversion with Pitch-Shift Data Augmentation
- Language Model-Based Emotion Prediction Methods for Emotional Speech Synthesis Systems
- High-fidelity Parallel WaveGAN with Multi-band Harmonic-plus-Noise Model
- Neural text-to-speech with a modeling-by-generation excitation vocoder