Repository

NVIDIA/tacotron2

Tacotron 2 - PyTorch implementation with faster-than-realtime inference
I am training a Chinese TTS baseline on the Biaobei dataset with PyTorch, modeling pinyin (letters encoded into sequences), with a batch size of 32 and a sample rate of 48000 Hz. After 57k training steps the loss has stopped decreasing, but there is no alignment at all and the inference output is wrong. What could be the cause, and how should I adjust the setup?
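One thing worth checking for the question above: the repo's default hparams assume 22050 Hz audio (`sampling_rate=22050`, with STFT window and hop sizes tuned for that rate), so feeding 48 kHz Biaobei recordings without adjusting those values can stall alignment. A minimal sketch of resampling the data to match, using SciPy's polyphase resampler (the choice of SciPy here is an assumption; any resampler works):

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def resample_to(wav: np.ndarray, orig_sr: int, target_sr: int) -> np.ndarray:
    """Resample a mono waveform with a polyphase filter."""
    g = gcd(orig_sr, target_sr)
    return resample_poly(wav, target_sr // g, orig_sr // g)

# One second of a 440 Hz tone at 48 kHz, resampled to the 22050 Hz
# rate that the default hparams expect.
t = np.arange(48000) / 48000.0
wav_48k = np.sin(2 * np.pi * 440.0 * t).astype(np.float32)
wav_22k = resample_to(wav_48k, 48000, 22050)
print(len(wav_22k))  # 22050 samples for one second of audio
```

The alternative is to keep the 48 kHz audio and retune `sampling_rate`, `filter_length`, `hop_length`, and `win_length` consistently, but resampling the corpus is the simpler first experiment.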
I am trying to train a model on a data set that contains a significant number of domain-specific acronyms. After digging through the issues here, it is still unclear to me what the best way to do this is. Should I be pre-processing strings in the training .txt file to replace them with ARPAbet characters, jus...
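For the acronym question, the repo's text frontend (inherited from keithito/tacotron) parses content inside curly braces as ARPAbet phonemes, so one approach is to expand acronyms in the transcript file before training. A small sketch, assuming a hand-written acronym table (the entries and their pronunciations below are hypothetical examples, not part of the repo):

```python
import re

# Hypothetical acronym table; the ARPAbet strings use the curly-brace
# convention that the repo's text/ frontend understands.
ACRONYMS = {
    "GPU": "{JH IY1 P IY1 Y UW1}",
    "API": "{EY1 P IY1 AY1}",
}

def expand_acronyms(line: str) -> str:
    """Replace whole-word acronyms with curly-brace ARPAbet sequences."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, ACRONYMS)) + r")\b")
    return pattern.sub(lambda m: ACRONYMS[m.group(1)], line)

print(expand_acronyms("Reboot the GPU node."))
# Reboot the {JH IY1 P IY1 Y UW1} node.
```

Run once over the transcript column of the training .txt file, this keeps the rest of the text going through the normal grapheme path while pinning the acronyms' pronunciations.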