
Change the code to use multi-gpu, but can not speed up the training. #65

Open
Naminwang opened this issue Dec 4, 2019 · 2 comments

@Naminwang

I changed the code to use a DataParallel model for multi-GPU training and adjusted the batch size in config.yaml, but the training does not speed up.
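For context, the kind of DataParallel wrapping described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the tiny model stands in for the real SpeechEmbedder (an LSTM), and the shapes are made up.

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Stand-in for the speech embedder; shapes are illustrative."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(40, 256)  # 40 mel features -> 256-dim embedding

    def forward(self, x):
        return self.fc(x)


model = TinyNet()
if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on each visible GPU and splits
    # the input batch along dim 0, so each GPU sees batch_size / num_gpus.
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

batch = torch.randn(8, 40).to(device)
out = model(batch)  # shape: (8, 256)
```

Note that nn.DataParallel only parallelizes the wrapped module's forward/backward; anything computed outside the wrapped module (such as a loss applied to the gathered outputs) still runs on a single device, which is one reason a heavy loss can erase the multi-GPU speedup.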


mindmapper15 commented Dec 19, 2019

I've also tried multiple GPUs and ran into the same issue.
I then found that the biggest overhead is in the GE2ELoss part, specifically building the cosine-similarity matrix and computing the loss.

https://github.com/HarryVolek/PyTorch_Speaker_Verification/blob/11b1d1932b0a226de9cabd8652c0c2ea1446611f/utils.py

Just copy and paste this code into your utils.py.
I don't know why the author hasn't merged this code yet, but it is much faster than the original.
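For readers who can't open the link: the speedup in faster GE2E implementations typically comes from replacing per-utterance Python loops with batched tensor ops. The sketch below is my own hedged reconstruction of a vectorized cosine-similarity matrix in that spirit, not the linked utils.py verbatim; the function name and shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def get_cossim(embeddings):
    """Vectorized cosine-similarity matrix for a GE2E-style loss.

    embeddings: (N, M, D) tensor -- N speakers, M utterances each (M > 1).
    Returns a (N, M, N) tensor: similarity of every utterance embedding
    to every speaker centroid. Per the GE2E paper, an utterance's own
    speaker centroid excludes that utterance (leave-one-out).
    """
    N, M, D = embeddings.shape

    # Full centroids (all M utterances per speaker): (N, D)
    centroids = embeddings.mean(dim=1)

    # One matmul instead of N*M*N loop iterations:
    # (N*M, D) @ (D, N) -> reshape to (N, M, N)
    emb_flat = F.normalize(embeddings.reshape(N * M, D), dim=1)
    cent_norm = F.normalize(centroids, dim=1)
    cossim = (emb_flat @ cent_norm.t()).reshape(N, M, N)

    # Overwrite the same-speaker entries with leave-one-out centroids:
    # centroid of speaker j without utterance i = (sum_j - e_ji) / (M - 1)
    sums = embeddings.sum(dim=1, keepdim=True)                 # (N, 1, D)
    excl = F.normalize((sums - embeddings) / (M - 1), dim=2)   # (N, M, D)
    own = (F.normalize(embeddings, dim=2) * excl).sum(dim=2)   # (N, M)
    idx = torch.arange(N)
    cossim[idx, :, idx] = own
    return cossim
```

The original loop-based version computes each similarity scalar separately on the GPU, which launches thousands of tiny kernels per step; a single batched matmul keeps the GPU busy and removes that overhead.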

@Naminwang
Author


Thank you very much, I will try the code in the link.


2 participants