Thanks for your work first. I want to know how much data I should use to train with your repo. I want to use the CMU ARCTIC dataset to train an English TTS model, but it only has about one hour of audio per speaker. Can that work in your repo? I ask because I trained with the NVIDIA repo and the result was bad. Also, the result varies a lot with different batch sizes. I also trained on some of the LibriTTS data with the NVIDIA repo (about 3 hours across 10 speakers), but the result was quite bad too. Do you have any ideas about how to train on a small dataset?
Multi-speaker training is supported as well. For instance, you could collect 8 speakers with one hour of audio each and record each corpus directory in scripts/train_tacotron2.sh. The combined amount of data should help.
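As a rough sketch of what that could look like (the actual variable and flag names in scripts/train_tacotron2.sh are not shown in this thread, so treat everything below as hypothetical), combining several one-hour CMU ARCTIC corpora might be configured like this:

```shell
#!/bin/bash
# Hypothetical sketch: list one corpus directory per speaker so the
# training script sees the combined multi-speaker dataset.
# Variable names, paths, and flags are assumptions, not the repo's real API.
DATA_DIRS="
  data/cmu_arctic/bdl
  data/cmu_arctic/slt
  data/cmu_arctic/rms
  data/cmu_arctic/clb
  data/cmu_arctic/awb
  data/cmu_arctic/jmk
  data/cmu_arctic/ksp
  data/cmu_arctic/aew
"

# Entry point and flag names assumed for illustration only.
python train.py --data-dirs $DATA_DIRS --batch-size 32
```

The idea is simply that eight speakers at one hour each yields roughly eight hours of speech in total, which is closer to the amount usually needed for a Tacotron 2-style model than a single one-hour speaker.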