In this repository, I will be posting code that I think can be used to fine-tune various parts of XTTS.
I'm an enthusiast and just testing and trying, so don't expect everything to work perfectly. 😄
Development of the DVAE fine-tuning scripts is complete. You can find details and instructions in the `dvae-finetune` directory.
The basic idea for DVAE fine-tuning is taken from this GitHub issue.
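To give a feel for what DVAE fine-tuning optimizes, here is a toy sketch of the vector-quantization step at the heart of a discrete VAE: each mel frame is snapped to its nearest codebook entry, and training drives down the reconstruction error on the target speaker's audio. The codebook size, frame dimension, and function names here are arbitrary illustrations, not XTTS's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook: 8 code vectors of dimension 4.
# (The real XTTS DVAE uses a much larger codebook over mel-spectrogram frames.)
codebook = rng.normal(size=(8, 4))

def quantize(frames, codebook):
    """Map each frame to its nearest codebook entry (L2 distance)."""
    # Pairwise squared distances: (num_frames, codebook_size)
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)
    return indices, codebook[indices]

frames = rng.normal(size=(16, 4))  # stand-in for mel frames
indices, reconstructed = quantize(frames, codebook)

# Reconstruction error is the quantity DVAE fine-tuning drives down
# on new speakers' data.
recon_loss = ((frames - reconstructed) ** 2).mean()
```

In the real model both the encoder and the codebook are learned jointly; this sketch only shows why fine-tuning the codebook on in-domain audio can tighten reconstruction for a new speaker.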
The original training recipe for GPT-2 fine-tuning can be found in the xtts-finetune-webui repository.
Currently, I'm working on incorporating the following suggestions:
- Improving speaker conditioning for out-of-training data samples/speakers.
- Modifying the training recipe to make the model more robust.
- Exploring the use of different spoken content while keeping the speaker characteristics the same during training.
The goal is to enhance the model's ability to capture speaker style and improve performance on out-of-distribution samples.
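One common trick along these lines, shown here purely as an illustrative sketch (not the repository's actual implementation), is to build the speaker-conditioning vector by pooling over several reference clips instead of a single one, which tends to be more robust for unseen speakers. All shapes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def speaker_embedding(clips):
    """Mean-pool per-clip embeddings into one conditioning vector.

    Averaging over multiple reference clips smooths out clip-specific
    artifacts (noise, prosody quirks), a common robustness technique
    for conditioning on out-of-distribution speakers.
    """
    per_clip = np.stack([clip.mean(axis=0) for clip in clips])  # (n_clips, dim)
    emb = per_clip.mean(axis=0)
    return emb / np.linalg.norm(emb)  # unit-normalize

# Three reference clips of varying length, 80 mel bins each (toy data).
clips = [rng.normal(size=(n, 80)) for n in (120, 95, 200)]
emb = speaker_embedding(clips)
```

Keeping the speaker vector fixed while varying the spoken content during training, as suggested above, encourages the model to separate "who is speaking" from "what is said".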
At the moment, there is no specific information or code available for fine-tuning the HiFi-GAN vocoder component of XTTS.
To get started with XTTS fine-tuning, please refer to the individual directories for each component. Each directory contains its own README with specific instructions, requirements, and examples for fine-tuning that component.
Feel free to explore, experiment, and contribute to this repository. If you have any questions or suggestions, don't hesitate to reach out.
Happy fine-tuning! 😊