
[Bug]: RuntimeError: CUDA error: an illegal memory access was encountered #3602

Open
2 of 5 tasks
Abdelmoulak opened this issue Sep 7, 2024 · 2 comments
Labels
bug (Something isn't working), triage (This needs an (initial) review)

Comments


Abdelmoulak commented Sep 7, 2024

Checklist

  • The issue has not been resolved by following the troubleshooting guide
  • The issue exists on a clean installation of Fooocus
  • The issue exists in the current version of Fooocus
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Every time I try to generate images, it either produces a black screen or throws this error. It happens on every attempt, even though I have tried everything I could find to fix it.

Steps to reproduce the problem

Start any image generation; the error occurs every time.

What should have happened?

The image should have been generated normally.

What browsers do you use to access Fooocus?

No response

Where are you running Fooocus?

Locally

What operating system are you using?

Windows 11

Console logs

D:\Softwares\Foocus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.5
[Cleanup] Attempting to delete content of temp dir C:\Users\user\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
You do not have [juggernautXL_v8Rundiffusion.safetensors] but you have [juggernautXL_version6Rundiffusion.safetensors].
Fooocus will use [juggernautXL_version6Rundiffusion.safetensors] to avoid downloading new models, but you are not using the latest models.
Use --always-download-new-model to avoid fallback and always get new models.
Total VRAM 6144 MB, total RAM 16202 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL:  http://127.0.0.1:7866

To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: D:\Softwares\Foocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [] for model [D:\Softwares\Foocus\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.89 seconds
Started worker with PID 9460
App started successful. Use the app with http://127.0.0.1:7866/ or 127.0.0.1:7866
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 5451640945650293619
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] two random people, glowing, magic, winning, detailed, highly scientific, intricate, elegant, sharp focus, beautiful light, determined, colorful, artistic, fine detail, iconic, imposing, epic, clear, crisp, color, relaxed, attractive, complex, enhanced, loving, symmetry, novel, cinematic, dramatic, background, illuminated, amazing, gorgeous, flowing, elaborate
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] two random people, glowing, infinite, detailed, dramatic, vibrant colors, inspired, open artistic, creative, fair, adventurous, emotional, cinematic, cute, colorful, highly coherent, cool, trendy, iconic, awesome, surreal, best, winning, perfect composition, beautiful, epic, stunning, amazing detail, pretty background, very inspirational,, full color, professional
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.26 seconds
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 6.64 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/2 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 3120.7200269699097
[Fooocus Model Management] Moving model(s) has taken 4.84 seconds
  7%|█████▌                                                                             | 2/30 [00:07<01:48,  3.88s/it]
Traceback (most recent call last):
  File "D:\Softwares\Foocus\Fooocus\modules\async_worker.py", line 1471, in worker
    handler(task)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\async_worker.py", line 1286, in handler
    imgs, img_paths, current_progress = process_task(all_steps, async_task, callback, controlnet_canny_path,
  File "D:\Softwares\Foocus\Fooocus\modules\async_worker.py", line 295, in process_task
    imgs = pipeline.process_diffusion(
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\default_pipeline.py", line 379, in process_diffusion
    sampled_latent = core.ksampler(
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\core.py", line 310, in ksampler
    samples = ldm_patched.modules.sample.sample(model,
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\sample_hijack.py", line 158, in sample_hacked
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\samplers.py", line 557, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\k_diffusion\sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\k_diffusion\sampling.py", line 613, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\patch.py", line 321, in patched_KSamplerX0Inpaint_forward
    out = self.inner_model(x, sigma,
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\samplers.py", line 271, in forward
    return self.apply_model(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\samplers.py", line 268, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "D:\Softwares\Foocus\Fooocus\modules\patch.py", line 237, in patched_sampling_function
    positive_x0, negative_x0 = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\samplers.py", line 222, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\model_base.py", line 85, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\modules\patch.py", line 437, in patched_unet_forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 43, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\attention.py", line 613, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\attention.py", line 440, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\diffusionmodules\util.py", line 189, in checkpoint
    return func(*inputs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\attention.py", line 500, in _forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\ldm\modules\attention.py", line 395, in forward
    return self.to_out(out)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
    input = module(input)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\Softwares\Foocus\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\ops.py", line 25, in forward
    return self.forward_ldm_patched_cast_weights(*args, **kwargs)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\ops.py", line 20, in forward_ldm_patched_cast_weights
    weight, bias = cast_bias_weight(self, input)
  File "D:\Softwares\Foocus\Fooocus\ldm_patched\modules\ops.py", line 9, in cast_bias_weight
    weight = s.weight.to(device=input.device, dtype=input.dtype, non_blocking=non_blocking)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Total time: 34.98 seconds
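The error message in the log suggests a first debugging step: because CUDA kernel errors are reported asynchronously, the traceback above may not point at the op that actually faulted. A minimal sketch of forcing synchronous error reporting (the variable must be set before torch initializes its CUDA context, so in practice before importing torch or via the environment of the launching shell):

```python
import os

# CUDA kernel errors surface at a later, unrelated API call by default.
# With CUDA_LAUNCH_BLOCKING=1, every kernel launch is synchronous, so the
# Python traceback points at the op that really failed. This slows
# generation down and is for debugging only.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# ... then import torch and run the failing generation as usual.
```

On Windows the equivalent is `set CUDA_LAUNCH_BLOCKING=1` in the console before launching `entry_with_update.py`.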

Additional information

I don't think this happened before Fooocus 2.5.5.

@Abdelmoulak added the bug and triage labels Sep 7, 2024
@Abdelmoulak
Author

It works for now; I just needed to wait for a graphics driver update.


Abdelmoulak commented Sep 15, 2024

After a few days, the same issue came back. Any help on a permanent fix?
This time it came with this error:
RuntimeWarning: invalid value encountered in cast
  x_sample = x_sample.cpu().numpy().clip(0, 255).astype(np.uint8)
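That warning usually means the float array reaching the uint8 cast contains NaN or Inf (often produced downstream of a CUDA fault like the one above); casting non-finite floats to uint8 is undefined and is what NumPy is complaining about. A minimal sketch of a defensive cast, using a hypothetical helper rather than Fooocus's actual code:

```python
import numpy as np

def to_uint8_image(x_sample: np.ndarray) -> np.ndarray:
    # Replace non-finite values before casting: NaN -> 0, +Inf -> 255,
    # -Inf -> 0, so the cast to uint8 is always well defined.
    x_sample = np.nan_to_num(x_sample, nan=0.0, posinf=255.0, neginf=0.0)
    return x_sample.clip(0, 255).astype(np.uint8)

img = to_uint8_image(np.array([float("nan"), -10.0, 300.0, 128.0]))
# img -> array([0, 0, 255, 128], dtype=uint8)
```

This only silences the symptom; if NaNs are appearing in the sampled latents, the underlying CUDA/driver problem still needs fixing.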

@Abdelmoulak Abdelmoulak reopened this Sep 15, 2024