Unable to generate images using Flux Models - OOM #4685
Comments
Same problem here. I have the exact same CPU and 32 GB of RAM, but a 4090. The error message is almost identical to yours. Latest ComfyUI version, Flux Dev workflow taken from https://comfyanonymous.github.io/ComfyUI_examples/flux/ using the fp16 model. (This should be doable on a 4090, shouldn't it?) None of the tips found in other issues (e.g. the
Have you tried loading the models with the diffusion model loader instead of the unet loader? They recently changed the path for the models from unet to diffusion_model (city96/ComfyUI-GGUF#39). I don't know if it affects anything, but it might be worth a try. I run flux1-dev on my 3060 Ti with 8 GB of VRAM and 128 GB of system RAM. I have learned that prompt size has some effect: if I use a very lengthy prompt on my system, I get OOM errors too.
So I got it to work. My setup is Windows, with Docker running on WSL; ComfyUI runs in a Docker container. This setup worked fine in the past, before Flux.

It turns out that loading the Flux model first loads the whole thing into regular RAM (not VRAM), and it needs about 32 GB of it. I only have 32 GB of RAM, with about 31 GB left after Windows itself, WSL, Docker, and Firefox. But by default, Docker only assigns 2 GB of RAM to a container. Increasing this to 30 GB (plus the default 25% swap space) makes it work. However, it takes forever to load, partly due to the swapping, though I suppose it would take quite some time even without swapping. The good news is that once the model is in VRAM, you don't have to load it again.

Generating images with the fp16 model also takes fairly long (about 5 minutes per image; one image took 13 minutes!), even on a 4090. Other people report much faster generation times, even on smaller cards, so I'm not sure whether I'm doing something wrong here.

PS: I'm using the default Flux workflow listed on the ComfyUI GitHub page.
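The Docker/WSL memory limit described above can be raised in Docker Desktop's resource settings or, when using the WSL2 backend, via a `.wslconfig` file in your Windows user profile. A minimal sketch (the key names come from Microsoft's WSL documentation; the values are illustrative and should be sized to your hardware):

```ini
; %USERPROFILE%\.wslconfig
[wsl2]
; RAM available to the WSL2 VM (and thus to Docker containers running in it)
memory=30GB
; swap space on top of that (defaults to 25% of memory if omitted)
swap=8GB
```

After editing the file, run `wsl --shutdown` from Windows so the new limits take effect on the next WSL start.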
After spending some more time hunting down the slow-generation issue, I found out it was because I had the
Thank you for all the comments! I'll try giving ComfyUI as much RAM and swap space as possible, but I have to reinstall the whole system first because my SSD died.
Sorry I clicked the wrong button |
Might be a good idea anyway, because when I upgraded my Nvidia drivers, it all stopped working again. After (again) spending some time googling the issue and trying different things, it turned out I also had to upgrade Docker Desktop to make it work. I have to say it's all a big mess... If the payoff weren't so great, I wouldn't bother spending hours and hours on solving configuration issues. No tutorial ever mentions this, either; they make it seem so easy.
Your question
Flux models simply will not work when I try to generate images. I get an error that looks like the following:
Error occurred when executing UNETLoader:

unable to mmap 23782506688 bytes from file : Cannot allocate memory (12)

File "/home/keks-sub-2/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

Many more traceback lines follow, but I don't want to clutter this report.

*I also tried putting the Flux models in the checkpoint folder.
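For scale, the size in that mmap error corresponds to roughly 22 GiB of system RAM, which a container or VM capped at only a few GB can never satisfy. A quick sanity check:

```python
# Byte count taken verbatim from the mmap error message
size_bytes = 23782506688

# Convert to GiB (2**30 bytes)
size_gib = size_bytes / 2**30
print(f"{size_gib:.1f} GiB")  # ~22.1 GiB
```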
Stable Diffusion models work perfectly fine, so the system itself is sufficient.
I am running on a:
Ryzen 7 7800X3D
32 GB RAM
RX 7900 XT
2 TB NVMe SSD
Since Windows support for AI on AMD GPUs is mediocre at best, I installed Ubuntu 22.04 on the Windows Subsystem for Linux (WSL) and run ComfyUI with ROCm 6.1.
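Since this runs under WSL, it is worth checking how much memory the Ubuntu VM actually sees before blaming the GPU stack; the mmap failure above is a system-RAM allocation, not a VRAM one. A quick check from inside WSL:

```shell
# Total and currently available RAM as seen by the WSL VM.
# If MemTotal is far below the ~22 GiB the model needs,
# raise the WSL (and Docker, if used) memory limits.
grep -E 'MemTotal|MemAvailable' /proc/meminfo
free -h
```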
I followed the AMD tutorial on this GitHub page and installed the correct versions accordingly.
This issue might not be related to Flux, but I don't see where else I could start.
I hope someone can help me with this.
Thank you for your time!
Logs
No response
Other
No response