
Error 1006 when using image prompt #2239

Closed
cristicozea opened this issue Feb 12, 2024 · 16 comments
Labels: duplicate, question

Comments

@cristicozea

Read Troubleshoot

[x] I confirm that I have read the Troubleshoot guide before making this issue.

Describe the problem

I have downloaded Fooocus as instructed, clicked run.bat, and waited for the download to finish. However, nothing happens after the download completes, and each time I open the run file it starts downloading again.

Can you please help?

Full Console Log

C:\Focus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors" to C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors

100%|████████████████████████████████████████████████████████████████████████████▉| 6.62G/6.62G [09:33<00:00, 12.7MB/s]

@eddyizm (Contributor) commented Feb 12, 2024

Huggingface was likely having some issues yesterday.

It should only download the first time, so the file could be corrupt. Try shutting down the app, deleting that model (`juggernautXL_v8Rundiffusion.safetensors`), then starting it back up. It will hopefully download successfully and then continue.

If that doesn't work, try downloading it directly in the browser. Then you can move it to the proper folder and restart the app.
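
For reference, a minimal sketch of that manual re-download from the Windows command prompt, reusing the exact URL and checkpoint path from the log above (curl ships with Windows 10 and later; this is an illustration, not an official Fooocus procedure):

    :: delete the possibly corrupt download
    del "C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors"

    :: fetch it again straight from Hugging Face into the same folder (-L follows redirects)
    curl -L -o "C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors" "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_v8Rundiffusion.safetensors"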

@cristicozea (Author)

> Huggingface was likely having some issues yesterday.
>
> It should only download the first time, so the file could be corrupt. Try shutting down the app, deleting that model (`juggernautXL_v8Rundiffusion.safetensors`), then starting it back up. It will hopefully download successfully and then continue.
>
> If that doesn't work, try downloading it directly in the browser. Then you can move it to the proper folder and restart the app.

Thanks for getting back to me. I have tried deleting everything and starting from scratch; it did not work. I have also tried downloading it directly from the browser by clicking on the file, and it downloaded, but it still doesn't work.

@cristicozea (Author)

UPDATE: I have now managed to download the file directly from the browser and Fooocus does open. However, when I ask it to generate something I get 'ERROR'.

@cristicozea (Author)

UPDATE2: Apparently I had to update my driver and now it works. However, it took 347.50 seconds to generate an image :( Is it normal to take that long?

I have 16GB RAM and a GTX 1660 Ti with 6GB VRAM.

@cristicozea (Author)

UPDATE3: When I try image input I get 'Connection errored out'. I have checked the system swap and it looks good; Automatic is checked.

@mashb1t added the question label on Feb 12, 2024
@eddyizm (Contributor) commented Feb 12, 2024

Can you post your full log?

@cristicozea (Author)

> Can you post your full log?

The log only says '1006'

@eddyizm (Contributor) commented Feb 12, 2024

That is not your log; please post the full terminal output from the window.

@mashb1t (Collaborator) commented Feb 12, 2024

https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#error-1006 => system swap or memory issue. Please double-check and provide your full terminal log.

@cristicozea (Author)

> https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md#error-1006 => system swap or memory issue. Please double-check and provide your full terminal log.

Each time I input an image I get: 'Error: Connection errored out'.

I have followed the guide and checked the box 'Automatically manage paging file size for all drives'.
I also have 60GB of free space on my SSD.

Please find the log below (this is what I get in cmd):

C:\Focus>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 6144 MB, total RAM 16221 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce GTX 1660 Ti : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
1006
Base model loaded: C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\Focus\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Focus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
1006

@mashb1t (Collaborator) commented Feb 12, 2024

I don't know if this helps, but please check on which disk the pagefile has been created. You can see the location in the same dialog where you checked 'Automatically manage paging file size for all drives'.
Your 1660 should be able to run Fooocus: not very fast, but at least without errors like this one.
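
As an aside (not mentioned in the thread; standard Windows tooling), the active pagefile location can also be checked from the same cmd window:

    :: lists the pagefile path and allocated size in MB
    powershell -Command "Get-CimInstance Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize"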

@cristicozea (Author)

> I don't know if this helps, but please check on which disk the pagefile has been created. You can see the location in the same dialog where you checked 'Automatically manage paging file size for all drives'. Your 1660 should be able to run Fooocus: not very fast, but at least without errors like this one.

Thanks for getting back to me. I only have one partition on this PC, so it should be fine. When I installed Windows it created a secondary 'windriver' partition; hopefully it doesn't affect this. Anyway, I checked and it all looks OK.

Do you have any other ideas? I can generate an image; I only get this error when I input an image :(

@mashb1t (Collaborator) commented Feb 12, 2024

Check any issue in https://github.com/search?q=repo%3Alllyasviel%2FFooocus+1006&type=issues

This issue has turned into a duplicate of many 1006 issues.

On Colab this indicates too high RAM usage or resource limit peaks. Feel free to check with either one or a combination of the following args, which helped some users in #1710 to balance the load:

--attention-split --disable-offload-from-vram --always-high-vram

Maybe --attention-split helps, but the other args are not suitable for you, as VRAM is small compared to RAM on your system. It's 99% a swap issue.
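
For anyone finding this later: these are command-line flags for the Fooocus launcher, not code to paste into a source file. A minimal sketch, assuming the stock launch command shown in the logs above (append the flags to the same line inside run.bat):

    :: the launch line from run.bat, with one of the suggested flags appended
    .\python_embeded\python.exe -s Fooocus\entry_with_update.py --attention-split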

@mashb1t added the duplicate label on Feb 12, 2024
@mashb1t changed the title from 'First run endlessly downloading files' to 'Error 1006 when using image prompt' on Feb 12, 2024
@cristicozea (Author)

> Check any issue in https://github.com/search?q=repo%3Alllyasviel%2FFooocus+1006&type=issues
>
> This issue has turned into a duplicate of many 1006 issues.
>
> On Colab this indicates too high RAM usage or resource limit peaks. Feel free to check with either one or a combination of the following args, which helped some users in #1710 to balance the load:
>
> --attention-split --disable-offload-from-vram --always-high-vram
>
> Maybe --attention-split helps, but the other args are not suitable for you, as VRAM is small compared to RAM on your system. It's 99% a swap issue.

Thank you. I will try this. Can you tell me in which file I should add it?

@cristicozea (Author)

> --attention-split --disable-offload-from-vram --always-high-vram

I am not a programmer, so I'm not sure where exactly to put it; step-by-step instructions would help. I put it in networking.py, but now I get a syntax error.

@cristicozea (Author)

UPDATE: I have solved the issue by adding this code starting from line 162:

            ws_max_queue=64,
            ws_ping_interval=60.0,
            ws_ping_timeout=10.0,
            ws_per_message_deflate=False,
            reload=True,
            timeout_notify=120

Thank you all for your help!
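
For context: those are keyword arguments accepted by uvicorn, the ASGI server that Gradio (and therefore the Fooocus web UI) runs on, and they relax the WebSocket keep-alive behaviour behind 1006 disconnects. A minimal self-contained sketch of the same settings (the app path and values are illustrative, not Fooocus's actual code):

    # Sketch only: demonstrates uvicorn accepting the WebSocket kwargs above.
    # "demo_app:app" is a hypothetical ASGI application path.
    import uvicorn

    config = uvicorn.Config(
        "demo_app:app",
        host="127.0.0.1",
        port=7865,                     # the port Fooocus serves on (see log)
        ws_max_queue=64,               # allow more queued WebSocket frames
        ws_ping_interval=60.0,         # ping clients every 60s instead of the default 20s
        ws_ping_timeout=10.0,          # how long to wait for a pong before dropping the client
        ws_per_message_deflate=False,  # disable per-message compression
        timeout_notify=120,
    )
    uvicorn.Server(config).run()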

@mashb1t closed this as completed on Feb 13, 2024