Latest NVIDIA drivers installed (Oct 19th 2023) slows down ComfyUI #1793

Open
Marventus opened this issue Oct 19, 2023 · 6 comments
Labels
User Support: A user needs help with something, probably not a bug.

Comments

@Marventus

I installed the latest NVIDIA drivers and the KSampler is now taking 4-5 times longer to do the same work as before.

@bobpuffer1

I'm not seeing this. I installed the drivers that dropped about two days ago and I still see the same performance on my production workflow.

@Marventus
Author

I cannot tell exactly what has happened. It now works fine as long as I only generate a few images, but if I queue a batch of 15 or more it suddenly slows to a crawl and then crashes (1024x1024 images). I will run a few more tests over the next few days. This was with SDXL only; 1.5 is working fine.

@bobpuffer1

I'm not using SDXL.

@elv56

elv56 commented Oct 21, 2023

Noticed that as well since the last NVIDIA update, on SDXL too: KSampler and VAE decode are becoming 1.5-2x slower than usual.

@Marventus
Author

> noticed that as well since last nvidia update, on sdxl as well, KSampler and vae decode are becoming 1.5-2x slower than usual

Yeah, it's definitely happening. I ran out of VRAM today; that has never happened before.

@wheels213

Same issue, using the same workflow I've been building for the last week or so. I updated yesterday, managed to get a couple of exports out, and since then I just keep hitting CUDA out-of-memory errors. A couple of models will still output a result, but models that were working previously now raise OOM errors.

One thing I did notice, after adjusting my page file and keeping an eye on my resources, was that my RAM was filling up alongside my VRAM and then spiking before Comfy errored out.

I'm new to this, so I'm not sure whether any of this is related or whether it just means I need more RAM.

At the moment I'm running an RTX 3080 with 32 GB of DDR4, generating at 512x512 and 512x768.
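One way to confirm the RAM-alongside-VRAM pattern described above is to poll `nvidia-smi` (shipped with the NVIDIA driver) while a generation runs. The sketch below is a minimal illustration, not part of ComfyUI; `gpu_memory_snapshot` is a hypothetical helper name, and it assumes `nvidia-smi` is on PATH (it returns None otherwise, so it degrades gracefully on machines without an NVIDIA driver).

```python
# Hedged sketch: read the GPU's used/total memory via nvidia-smi's
# documented --query-gpu / --format flags. Running this in a loop while
# ComfyUI generates makes it easy to see VRAM climbing before an OOM.
import shutil
import subprocess


def gpu_memory_snapshot():
    """Return 'used, total' GPU memory as a CSV string (e.g. '8123 MiB, 10240 MiB'),
    or None if nvidia-smi is not on PATH (no NVIDIA driver installed)."""
    if shutil.which("nvidia-smi") is None:
        return None
    result = subprocess.run(
        [
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    snap = gpu_memory_snapshot()
    print(snap if snap is not None else "nvidia-smi not found")
```

Comparing these readings against Task Manager's system RAM graph would show whether allocations are spilling out of VRAM into shared system memory before the crash.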

@mcmonkey4eva added the User Support label on Sep 15, 2024