
Add setup instructions for Windows (Intel GPUs) - ARC #2120

Open · wants to merge 1 commit into main

Conversation

cryscript

No description provided.

@mashb1t mashb1t linked an issue Jan 31, 2024 that may be closed by this pull request
@mashb1t mashb1t added the documentation label Jan 31, 2024
@mashb1t mashb1t changed the title Windows(Intel GPUs) Add setup instructions for Windows (Intel GPUs) - ARC Feb 1, 2024
@mashb1t mashb1t (Collaborator) left a comment

I personally don't have an Intel GPU, so I'm looking for people to confirm that the steps work.
The instructions themselves look good though 👍

To people with Intel GPUs: do these instructions enable you to successfully run Fooocus? Your feedback is appreciated!

@mashb1t mashb1t added the help wanted label Feb 1, 2024
@killacan

killacan commented Feb 3, 2024

I followed the install instructions on Windows 10 with Intel ARC A770 and it appears to work for me.

@MaciejDromin

I can also confirm that following these steps on Win10 with an Intel Arc 770 GPU works like a charm.
On Linux (Fedora 39), however, doing the same steps but installing the torch extension according to this guide: Intel Extension for PyTorch fails with the following exception: AssertionError("Torch not compiled with CUDA enabled"). I'm trying to find a solution because I would love to get it working on Linux.

@mashb1t mashb1t removed the help wanted label Feb 8, 2024
@cryscript (Author)

@MaciejDromin if you're receiving the error AssertionError("Torch not compiled with CUDA enabled"), it means you have the wrong torch version, or it was replaced when you installed dependencies on first run. Try reinstalling torch and the other wheel files from Intel:

python -m pip install --upgrade torch==2.1.0a0 torchvision==0.16.0a0 torchaudio==2.1.0a0 intel-extension-for-pytorch==2.1.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

I also got this error when intel_extension_for_pytorch was not imported, but if you have an updated Fooocus version all should be OK.

You also need the Intel oneAPI packages: intel-oneapi-dpcpp-cpp and intel-oneapi-mkl-devel.
Before running, add the oneAPI env vars to your Fooocus startup script or terminal session (source {DPCPPROOT}/env/vars.sh and source {MKLROOT}/env/vars.sh), or you can try adding the MKL libs to the LD_LIBRARY_PATH env var, e.g. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/mkl/2024.0/lib; your_start_command.sh
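
A minimal sanity check, assuming the torch==2.1.0a0+cxx11.abi and intel-extension-for-pytorch==2.1.10+xpu wheels above installed cleanly (torch.xpu is only registered after intel_extension_for_pytorch imports, and exact helper names can vary between ipex releases):

import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with torch

print("torch:", torch.__version__)                    # expect something like 2.1.0a0+cxx11.abi
print("ipex:", ipex.__version__)                      # expect 2.1.10+xpu
print("CUDA available:", torch.cuda.is_available())   # False is normal on Intel GPUs
print("XPU available:", torch.xpu.is_available())     # should be True once drivers/oneAPI are set up
if torch.xpu.is_available():
    print("XPU device:", torch.xpu.get_device_name(0))

If "XPU available" prints False, the AssertionError above is then expected, since there is no GPU backend left for Fooocus to use except CUDA.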

@mashb1t mashb1t added the Size S label Feb 8, 2024
@MaciejDromin

I tried what you suggested, reinstalled torch, verified that I have the intel-oneapi-dpcpp-cpp and intel-oneapi-mkl-devel packages, and exported the path, but I'm still getting the same error :/

@midnitedestiny

> I followed the install instructions on Windows 10 with Intel ARC A770 and it appears to work for me.

Where are the install instructions?

@mashb1t (Collaborator)

mashb1t commented Feb 16, 2024

@midnitedestiny in this PR, in the Files tab: https://github.com/lllyasviel/Fooocus/pull/2120/files

@midnitedestiny

> @midnitedestiny in this PR, in the Files tab: https://github.com/lllyasviel/Fooocus/pull/2120/files

Just found them, will give an update if it runs smoothly :)

@charliekayaker

> @midnitedestiny in this PR, in the Files tab: https://github.com/lllyasviel/Fooocus/pull/2120/files
>
> Just found them, will give an update if it runs smoothly :)

Did it work for you?

@midnitedestiny

midnitedestiny commented Feb 16, 2024 via email

@lgwjames

I could try on my AMD Mac if anyone wants?

@charliekayaker

charliekayaker commented Feb 17, 2024

Hi guys! I tried the instructions in this PR.
I ran installer.bat successfully!
[image]
[image]

I ran run.bat and I got the following message:
[image]

Do I need to change the versions?

I am using Windows 10 with an Intel GPU; my hardware is:

[image]

Could you help me, please? I've been trying for several days in different ways.

Thanks in advance.

@charliekayaker

charliekayaker commented Feb 17, 2024

> @MaciejDromin if you're receiving the error AssertionError("Torch not compiled with CUDA enabled"), it means you have the wrong torch version [...] Try reinstalling torch and the other wheel files from Intel [...] You also need the Intel oneAPI packages: intel-oneapi-dpcpp-cpp and intel-oneapi-mkl-devel. [...]

Hi @cryscript, is this version

C:\Users\charlie\Documents\Fooocus_2120pr>.\python_embeded\python.exe -s
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
print(torch.__version__)
2.1.0a0+cxx11.abi

the proper one for Fooocus with an Intel GPU?

In the second part, when you mention oneAPI, is that for my case too? And you mention two bash scripts, but I am on Windows; what about that?

Intel UHD Graphics, 10 GB memory

Thanks in advance!

@charliekayaker

> Yes but when I go and do inpaint to add a necklace it shuts my monitor off but the PC keeps running, so trying to figure it out

Could I ask you how?

I created the install.bat and modified my run.bat, however I can't get it to work.

@midnitedestiny

> Hi guys! I tried the instructions in this PR. I ran installer.bat successfully! [...] I ran run.bat and I got the following message: [image] Do I need to change the versions? [...]

Your install.bat is missing these; they're different from the one you have:

"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"

"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"

"https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"

@charliekayaker

> https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl

My photo is bad; it's not complete.

This is my installer.bat
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"
.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"

pause

I put .\python_embeded\python.exe -m pip install on every line just to be safe.

My run.bat is
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --unet-in-bf16 --vae-in-bf16 --clip-in-fp16
pause

My torch version:

C:\Users\charlie\Documents\Fooocus_win64_2-1-831\python_embeded>.\python.exe
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> print(torch.__version__)
2.1.0a0+cxx11.abi

I couldn't run it.

Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 1, in <module>
    from modules.patch import patch_all
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\modules\patch.py", line 5, in <module>
    import ldm_patched.modules.model_base
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_base.py", line 2, in <module>
    from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 15, in <module>
    from ..attention import SpatialTransformer, SpatialVideoTransformer, default
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\attention.py", line 9, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
    from ldm_patched.modules import model_management
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 783, in current_device
    _lazy_init()
  File "C:\Users\charlie\Documents\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The version installed by installer.bat doesn't use CUDA, right?

Thanks in advance.
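
The traceback shows get_torch_device() ending up in torch.cuda.current_device(), which only works on a CUDA build, so the assertion fires whenever the XPU backend is not picked up (for example when intel_extension_for_pytorch fails to import). A rough, hypothetical sketch of XPU-aware device selection, just to illustrate the idea rather than Fooocus's actual code:

import torch

try:
    # Importing ipex registers the "xpu" backend on torch; if this import fails,
    # only CUDA/CPU devices remain visible.
    import intel_extension_for_pytorch as ipex  # noqa: F401
    HAS_IPEX = True
except ImportError:
    HAS_IPEX = False

def pick_device() -> torch.device:
    # Prefer the Intel XPU when ipex is present and a GPU is visible,
    # then CUDA, then CPU. Purely illustrative.
    if HAS_IPEX and torch.xpu.is_available():
        return torch.device("xpu")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

print(pick_device())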

@mashb1t mashb1t mentioned this pull request Feb 28, 2024
@wange13

wange13 commented Feb 29, 2024

I have an Intel(R) Iris(R) Xe Graphics Family graphics card, and the operating system is Windows 11. How can I run Fooocus on a laptop with this kind of graphics card, and if I have to download a graphics driver, which one should I download?

@Tigwin

Tigwin commented Mar 2, 2024

Will Fooocus work on an Intel Arc A750?

@823863429

Thank you for your outstanding work. I have installed the environment, but I keep getting stuck on loading models to the GPU. Is there any solution?
[image]
[image]

@Aoni0

Aoni0 commented Mar 23, 2024

IMG-20240322-WA0017.jpg

Alhamdulillah, my Fooocus is running smoothly, but I'm having trouble when I select FaceSwap and PyraCanny together: it just stops working. Any suggestions on how to fix this?

I'm using Windows 11 with an Intel Arc A750 GPU. Thanks!

@bilalbinkhan1122

> Thank you for your outstanding work. I have installed the environment, but I keep getting stuck on loading models to the GPU. Is there any solution? [image] [image]

I am getting the same error. Can anyone here help, please?

@soujeshpj

> Thank you for your outstanding work. I have installed the environment, but I keep getting stuck on loading models to the GPU. Is there any solution? [image] [image]

I am also facing a similar error. Please find the error logs below.

Preparation time: 27.41 seconds
[Sampler] refiner_swap_method = joint
2024-05-16 13:45:24,429 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
2024-05-16 13:45:24,455 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
Requested to load SDXL
Loading 1 new model
C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\intel_extension_for_pytorch\frontend.py:465: UserWarning: Conv BatchNorm folding failed during the optimize process.
  warnings.warn(
C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\intel_extension_for_pytorch\frontend.py:472: UserWarning: Linear BatchNorm folding failed during the optimize process.
  warnings.warn(
[Fooocus Model Management] Moving model(s) has taken 108.89 seconds
Traceback (most recent call last):
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 913, in worker
    handler(task)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\async_worker.py", line 816, in handler
    imgs = pipeline.process_diffusion(
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\default_pipeline.py", line 362, in process_diffusion
    sampled_latent = core.ksampler(
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\core.py", line 308, in ksampler
    samples = ldm_patched.modules.sample.sample(model,
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 712, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\modules\sample_hijack.py", line 157, in sample_hacked
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback_wrap, noise, latent_image, denoise_mask, disable_pbar)
  File "C:\AI Lab\Fooocus_win64_2-1-831\Fooocus\ldm_patched\modules\samplers.py", line 545, in sample
    noise = noise * torch.sqrt(1.0 + sigmas[0] ** 2.0)
RuntimeError: Native API failed. Native API returns: -999 (Unknown PI error) -999 (Unknown PI error)
2024-05-16 13:47:33,856 - httpx - INFO - HTTP Request: POST http://127.0.0.1:7865/api/predict "HTTP/1.1 200 OK"
Total time: 192.54 seconds
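
A -999 (Unknown PI error) right after "Moving model(s)" is a generic SYCL runtime failure and can come from the driver or from the device running out of resources, so it is worth checking how much memory the XPU actually reports before loading an SDXL checkpoint. A minimal diagnostic sketch, assuming the same torch 2.1.0a0+xpu / ipex 2.1.10+xpu stack as above (the exact property names may differ between ipex releases):

import torch
import intel_extension_for_pytorch as ipex  # registers the xpu backend

if torch.xpu.is_available():
    props = torch.xpu.get_device_properties(0)
    print("device:", props.name)
    print("total memory (GiB):", round(props.total_memory / 1024 ** 3, 2))
    # SDXL needs several GiB for the weights alone plus working memory for
    # sampling, so a device reporting only a few GiB will struggle.
else:
    print("No XPU device visible - check the GPU driver and oneAPI runtime.")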

@soujeshpj

> Alhamdulillah, my Fooocus is running smoothly, but I'm having trouble when I select FaceSwap and PyraCanny together: it just stops working. [...] I'm using Windows 11 with an Intel Arc A750 GPU.

Please share the installation steps that you did for the Intel GPU.

@MessiahNadir

MessiahNadir commented Aug 6, 2024

Hello. I am a complete newbie at installing anything the way Fooocus should be installed, and my GPU is Intel(R) HD Graphics. I've tried to follow the instructions several times (created install.bat, changed run.bat and other file contents), but when I run all of these in cmd, I get some errors:

install.bat:

D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.1.0a0+cxx11.abi
Uninstalling torch-2.1.0a0+cxx11.abi:
  Successfully uninstalled torch-2.1.0a0+cxx11.abi
Found existing installation: torchvision 0.16.0a0+cxx11.abi
Uninstalling torchvision-0.16.0a0+cxx11.abi:
  Successfully uninstalled torchvision-0.16.0a0+cxx11.abi
Found existing installation: torchaudio 2.1.0a0+cxx11.abi
Uninstalling torchaudio-2.1.0a0+cxx11.abi:
  Successfully uninstalled torchaudio-2.1.0a0+cxx11.abi
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.

D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -m pip install "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"  "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"  "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl"  "https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl"
Collecting torch==2.1.0a0+cxx11.abi
  Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torch-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (217.6 MB)
     ---------------------------------------- 217.6/217.6 MB 1.6 MB/s eta 0:00:00
Collecting torchaudio==2.1.0a0+cxx11.abi
  Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchaudio-2.1.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (2.3 MB)
     ---------------------------------------- 2.3/2.3 MB 8.2 MB/s eta 0:00:00
Collecting torchvision==0.16.0a0+cxx11.abi
  Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/torchvision-0.16.0a0+cxx11.abi-cp310-cp310-win_amd64.whl (790 kB)
     ---------------------------------------- 790.5/790.5 kB 4.5 MB/s eta 0:00:00
Collecting intel-extension-for-pytorch==2.1.10+xpu
  Downloading https://github.com/Nuullll/intel-extension-for-pytorch/releases/download/v2.1.10+xpu/intel_extension_for_pytorch-2.1.10+xpu-cp310-cp310-win_amd64.whl (367.2 MB)
     ---------------------------------------- 367.2/367.2 MB 1.1 MB/s eta 0:00:00
Requirement already satisfied: filelock in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.12.2)
Requirement already satisfied: typing-extensions in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (4.7.1)
Requirement already satisfied: sympy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (1.12)
Requirement already satisfied: networkx in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.1)
Requirement already satisfied: jinja2 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (3.1.2)
Requirement already satisfied: fsspec in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torch==2.1.0a0+cxx11.abi) (2023.6.0)
Requirement already satisfied: numpy in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (1.26.4)
Requirement already satisfied: requests in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from torchvision==0.16.0a0+cxx11.abi) (10.4.0)
Requirement already satisfied: psutil in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (6.0.0)
Requirement already satisfied: packaging in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (24.1)
Requirement already satisfied: pydantic in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from intel-extension-for-pytorch==2.1.10+xpu) (2.1.1)
Requirement already satisfied: MarkupSafe>=2.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from jinja2->torch==2.1.0a0+cxx11.abi) (2.1.3)
Requirement already satisfied: annotated-types>=0.4.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from pydantic->intel-extension-for-pytorch==2.1.10+xpu) (0.5.0)
Requirement already satisfied: pydantic-core==2.4.0 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from pydantic->intel-extension-for-pytorch==2.1.10+xpu) (2.4.0)
Requirement already satisfied: charset-normalizer<4,>=2 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from requests->torchvision==0.16.0a0+cxx11.abi) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in d:\fooocus_win64_2-5-0\python_embeded\lib\site-packages (from sympy->torch==2.1.0a0+cxx11.abi) (1.3.0)
Installing collected packages: torch, torchvision, torchaudio
  WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'D:\Fooocus_win64_2-5-0\python_embeded\Scripts' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.

And run_realistic.bat:

D:\Fooocus_win64_2-5-0>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --unet-in-bf16 --vae-in-bf16 --clip-in-fp16
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--unet-in-bf16', '--vae-in-bf16', '--clip-in-fp16']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.3
[Cleanup] Attempting to delete content of temp dir C:\Users\Gigabyte\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Traceback (most recent call last):
  File "D:\Fooocus_win64_2-5-0\Fooocus\entry_with_update.py", line 46, in <module>
    from launch import *
  File "D:\Fooocus_win64_2-5-0\Fooocus\launch.py", line 147, in <module>
    from webui import *
  File "D:\Fooocus_win64_2-5-0\Fooocus\webui.py", line 10, in <module>
    import modules.async_worker as worker
  File "D:\Fooocus_win64_2-5-0\Fooocus\modules\async_worker.py", line 3, in <module>
    from extras.inpaint_mask import generate_mask_from_image, SAMOptions
  File "D:\Fooocus_win64_2-5-0\Fooocus\extras\inpaint_mask.py", line 6, in <module>
    from extras.GroundingDINO.util.inference import default_groundingdino
  File "D:\Fooocus_win64_2-5-0\Fooocus\extras\GroundingDINO\util\inference.py", line 3, in <module>
    import ldm_patched.modules.model_management as model_management
  File "D:\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 121, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "D:\Fooocus_win64_2-5-0\Fooocus\ldm_patched\modules\model_management.py", line 90, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 783, in current_device
    _lazy_init()
  File "D:\Fooocus_win64_2-5-0\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

D:\Fooocus_win64_2-5-0>pause
Press any key to continue . . .

Please help me with further advice or instructions. What should I do to successfully install and use Fooocus on my computer? (Windows 10)

Many thanks for your attention and help!

Labels
documentation, Size S

Development
Successfully merging this pull request may close these issues:
Support for Intel ARC GPUs