
Unable to generate images using Flux Models - OOM #4685

Open
Scherzkeks123 opened this issue Aug 29, 2024 · 7 comments
Labels
User Support A user needs help with something, probably not a bug.

Comments

Scherzkeks123 commented Aug 29, 2024

Your question

Flux models simply will not work when I try to generate images. I get an error that looks like the following (I also tried putting the Flux models in the checkpoints folder):
Error occurred when executing UNETLoader:

unable to mmap 23782506688 bytes from file : Cannot allocate memory (12)

File "/home/keks-sub-2/ComfyUI/execution.py", line 317, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

Many more traceback lines follow, but I don't want to clutter this report.
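For scale: the byte count in the mmap error matches the size of the flux1-dev fp16 checkpoint that is being mapped in one piece. A quick conversion (plain arithmetic) shows why a 32 GB machine gets tight once the OS and other processes take their share:

```python
# Size reported by the failing mmap call, converted to GiB.
size_bytes = 23_782_506_688
size_gib = size_bytes / 2**30
print(f"{size_gib:.1f} GiB")  # prints: 22.1 GiB
```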

Stable Diffusion models work perfectly fine, and the system should be sufficient.
I am running on a
Ryzen 7 7800X3D
32 GB RAM
RX 7900XT
2TB NVME SSD

Since Windows support for AI on AMD GPUs is mediocre at best, I installed Ubuntu 22.04 on the Windows Subsystem for Linux (WSL) and run ComfyUI with ROCm 6.1.
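One thing worth checking on a WSL setup: WSL2 caps the Linux VM's memory (by default a fraction of host RAM), which can leave too little to stage a ~22 GiB checkpoint. A possible `.wslconfig` (placed in your Windows user profile folder) might look like this; the exact values are assumptions for a 32 GB machine, not tested recommendations:

```ini
[wsl2]
memory=28GB   ; leave some RAM for Windows itself
swap=16GB     ; extra headroom for the initial model load
```

Run `wsl --shutdown` afterwards so the new limits take effect.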

I followed the AMD tutorial on this GitHub page and installed the versions it specifies.

This issue might not be related to Flux, but I don't see where else I could start.
I hope someone can help me with this.
Thank you for your time!

Logs

No response

Other

No response

@Scherzkeks123 Scherzkeks123 added the User Support A user needs help with something, probably not a bug. label Aug 29, 2024

rivo commented Sep 1, 2024

Same problem here. I have the exact same CPU, 32GB RAM but a 4090. The error message is almost identical to yours. Latest ComfyUI version, Flux Dev workflow taken from https://comfyanonymous.github.io/ComfyUI_examples/flux/ using the fp16 model. (This should be doable on a 4090, shouldn't it?)

None of the tips found in other issues (e.g. the --reserve-vram option or reverting to a previous ComfyUI version) have helped so far.


jmorbit commented Sep 3, 2024

Have you tried loading the models with the diffusion model loader instead of the UNET loader? They recently changed the model path from unet to diffusion_models (city96/ComfyUI-GGUF#39). I don't know if it affects anything, but it might be worth a try. I run flux1-dev on my 3060 Ti with 8 GB VRAM and 128 GB system RAM. I have also learned that prompt size has some effect: if I use a very lengthy prompt on my system, I get OOM errors too.
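If you want to try the path change jmorbit mentions, a small sketch for moving checkpoints into the newer folder could look like this. The install path and the `flux1-*` filename pattern are assumptions; adjust them to your setup:

```python
# Sketch: move Flux checkpoints from the legacy models/unet folder
# to models/diffusion_models, where newer ComfyUI loaders look for them.
from pathlib import Path
import shutil

def migrate_flux_checkpoints(comfy_root: Path) -> list[str]:
    """Move flux1-*.safetensors from models/unet to models/diffusion_models."""
    src = comfy_root / "models" / "unet"
    dst = comfy_root / "models" / "diffusion_models"
    dst.mkdir(parents=True, exist_ok=True)
    moved = []
    for ckpt in sorted(src.glob("flux1-*.safetensors")):
        shutil.move(str(ckpt), dst / ckpt.name)
        moved.append(ckpt.name)
    return moved

# Point this at your ComfyUI install before running, e.g.:
# migrate_flux_checkpoints(Path.home() / "ComfyUI")
```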


rivo commented Sep 8, 2024

So I got it to work. My setup is Windows, with Docker running on WSL. ComfyUI is running in a Docker image. This setup worked fine in the past, before Flux.

It turns out that to load the Flux model, ComfyUI first loads the whole thing into RAM (your regular RAM, not VRAM), and it needs about 32GB of it. I only have 32GB of RAM, with about 31GB left after Windows itself, WSL, Docker, and Firefox. But Docker only assigns 2GB of RAM to a container by default. Increasing this to 30GB (plus the default 25% swap space) makes it work.

However, it takes forever to load, partly due to the swapping I guess, though I suppose it would take quite some time even without swapping. The good news is that once it's loaded into VRAM, you don't have to do it again.
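This staging behavior (the whole checkpoint passes through system RAM before it reaches VRAM) suggests a simple preflight check. A sketch, assuming a Linux-style `/proc/meminfo`; the size constant is the one from the mmap error above:

```python
# Preflight sketch: will a checkpoint of this size fit in available RAM?
FLUX_FP16_BYTES = 23_782_506_688  # ~22.1 GiB, as reported by the failing mmap

def mem_available_bytes(meminfo_path: str = "/proc/meminfo") -> int:
    """Return MemAvailable (reported in kB by the kernel) as bytes."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemAvailable not found in meminfo")

def can_stage(model_bytes: int = FLUX_FP16_BYTES,
              meminfo_path: str = "/proc/meminfo") -> bool:
    """True if the whole checkpoint could be staged in RAM right now."""
    return mem_available_bytes(meminfo_path) >= model_bytes
```

Inside a container this reads the VM's limits, so it also makes the Docker memory cap visible from within.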

Generating images with the fp16 model also takes fairly long (about 5mins per image, one image took 13 minutes!), even on a 4090. Other people are reporting much faster generation times, even on smaller cards. So I'm not sure if I'm doing something wrong here.

P.S. I'm using the default workflow for Flux listed on the ComfyUI GitHub page.


rivo commented Sep 14, 2024

After spending some more time hunting down the slow generation issue, I found out it was because I had the --gpu-only flag set. Removing it brought generation down to a few seconds per image. (Also, it seems you should not use --highvram either.)
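For reference, the launch lines in question (these flags exist in ComfyUI's command-line options; whether they hurt depends on how much VRAM you have):

```shell
# Default: ComfyUI decides what stays in VRAM and offloads the rest.
python main.py

# Avoid these with Flux on consumer cards; they try to pin everything in VRAM:
#   python main.py --gpu-only
#   python main.py --highvram
```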

@Scherzkeks123 (Author)

Thank you for all the comments! I'll try giving ComfyUI as much RAM and swap space as possible, but I have to reinstall the whole system first because my SSD died.
Maybe it'll help, but I'll install a dual-boot Windows/Linux system.

@Scherzkeks123 (Author)

Sorry, I clicked the wrong button.


rivo commented Sep 15, 2024

> I have to reinstall the whole system first

Might be a good idea anyway, because when I upgraded my Nvidia drivers, it all stopped working again. After (again) spending some time googling the issue and trying different things, it turned out I also had to upgrade Docker Desktop to make it work again.

I have to say it's all a big mess... If the payoff wasn't so great, I would not bother spending hours and hours on solving configuration issues. No tutorial ever mentions this, either. They make it seem so easy.
