
After using the sd3_medium_incl_clips_t5xxlfp16.safetensors model, ComfyUI disconnects. #3911

Closed
Desperado1001 opened this issue Jun 30, 2024 · 4 comments
Labels
Stale This issue is stale and will be autoclosed soon. User Support A user needs help with something, probably not a bug.

Comments


Desperado1001 commented Jun 30, 2024

Your question

After using the sd3_medium_incl_clips_t5xxlfp16.safetensors model, ComfyUI disconnects. Other models, such as dreamshaperXL_v21TurboDPMSDE.safetensors, run fine.

Logs

G:\comfyUI\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 12288 MB, total RAM 32605 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
Using pytorch cross attention

Import times for custom nodes:
   0.0 seconds: G:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main
   0.0 seconds: G:\comfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

G:\comfyUI\ComfyUI_windows_portable>pause
请按任意键继续. . . (Press any key to continue . . .)

Other

No response

@Desperado1001 Desperado1001 added the User Support A user needs help with something, probably not a bug. label Jun 30, 2024
mcmonkey4eva (Collaborator) commented Jul 1, 2024

Does the same happen using a non-T5 model?

It's likely that you're already using enough of your RAM on other things that loading the full model, with the big T5 text encoder included, is pushing you over the limit, and the process is crashing from running out of memory.

Look at the resource-usage tab of Task Manager while loading to see if the charts spike to the top.
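A rough back-of-the-envelope estimate supports the out-of-memory theory. The parameter counts below are approximate public figures for the SD3-medium components (assumptions for illustration, not taken from this log), but they show why a single-file fp16 checkpoint with T5-XXL included is heavy for a 32 GB machine:

```python
def fp16_gib(params_billions: float) -> float:
    """GiB needed to hold a model's weights in fp16 (2 bytes per parameter)."""
    return params_billions * 1e9 * 2 / 2**30

# Approximate component sizes (assumed figures):
t5_xxl = fp16_gib(4.7)   # T5-XXL text encoder, ~4.7B parameters
mmdit  = fp16_gib(2.0)   # SD3-medium diffusion transformer, ~2B parameters
clips  = fp16_gib(1.0)   # CLIP-L + CLIP-G text encoders combined, rough
total  = t5_xxl + mmdit + clips
print(f"~{total:.0f} GiB of weights alone")  # on the order of 14 GiB
```

Roughly 14 GiB of weights have to pass through system RAM during loading, and peak usage can be transiently higher if tensors are briefly held twice while being moved to the GPU, so a 32 GB machine with other applications open can plausibly hit the limit.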

mo-bai commented Jul 1, 2024

Does the same happen using a non-T5 model?

It's likely that you're already using enough of your RAM on other things that loading the full model, with the big T5 text encoder included, is pushing you over the limit, and the process is crashing from running out of memory.

Look at the resource-usage tab of Task Manager while loading to see if the charts spike to the top.

It's happening to me, too.
I tried both the sd3_medium_incl_clips_t5xxlfp16.safetensors and sd3_medium_incl_clips_t5xxlfp8.safetensors models.

I checked the resource-usage tab of Task Manager while loading: peak CPU and memory usage stayed below 50%, and GPU usage barely moved at all.

Logs

E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 16380 MB, total RAM 32607 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Using pytorch cross attention

Import times for custom nodes:
   0.0 seconds: E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
   0.0 seconds: E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\AIGODLIKE-ComfyUI-Translation-main

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

E:\program_files\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable>pause

@huangkun1985

The same issue happened to me, and I cannot fix it. Here is the log:

D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Total VRAM 24564 MB, total RAM 32606 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE

D:\ComfyUI_windows_portable>pause
请按任意键继续. . . (Press any key to continue . . .)


github-actions bot commented Sep 5, 2024

This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.

@github-actions github-actions bot added the Stale This issue is stale and will be autoclosed soon. label Sep 5, 2024
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 12, 2024

4 participants