
Unsupported type byte size: ComplexFloat #109

Open
cczw2010 opened this issue Oct 10, 2023 · 12 comments

@cczw2010

System: Apple (Mac Pro M2), using 12 CPU cores for computation.

When I use "Infer from prompt", it shows this error:

To create a public link, set `share=True` in `launch()`.
2023-10-10 17:09:35,552 INFO [launch-ui.py:335] synthesize text: [ZH]请注意,默认情况下,只处理语法转译,且 不包含任何 polyfill[ZH]
Building prefix dict from the default dictionary ...
2023-10-10 17:09:35,557 DEBUG [__init__.py:113] Building prefix dict from the default dictionary ...
Loading model from cache /var/folders/br/tr56lxdj3q3d8d1lkqp_xy680000gn/T/jieba.cache
2023-10-10 17:09:35,557 DEBUG [__init__.py:132] Loading model from cache /var/folders/br/tr56lxdj3q3d8d1lkqp_xy680000gn/T/jieba.cache
Loading model cost 0.351 seconds.
2023-10-10 17:09:35,908 DEBUG [__init__.py:164] Loading model cost 0.351 seconds.
Prefix dict has been built successfully.
2023-10-10 17:09:35,908 DEBUG [__init__.py:166] Prefix dict has been built successfully.


VALL-E EOS [493 -> 585]
libc++abi: terminating due to uncaught exception of type c10::Error: Unsupported type byte size: ComplexFloat
Exception raised from getGatherScatterScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/View.mm:758 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) + 92 (0x13755d2b8 in libc10.dylib)
frame #1: at::native::mps::getGatherScatterScalarType(at::Tensor const&) + 304 (0x2be92313c in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 128 (0x2be924c8c in libtorch_cpu.dylib)
frame #3: _dispatch_client_callout + 20 (0x19cdf0400 in libdispatch.dylib)
frame #4: _dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x19cdff97c in libdispatch.dylib)
frame #5: at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 888 (0x2be923824 in libtorch_cpu.dylib)
frame #6: at::native::mps::mps_copy_(at::Tensor&, at::Tensor const&, bool) + 3096 (0x2be87ab44 in libtorch_cpu.dylib)
frame #7: at::native::copy_impl(at::Tensor&, at::Tensor const&, bool) + 1944 (0x2ba5f75e4 in libtorch_cpu.dylib)
frame #8: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 100 (0x2ba5f6d8c in libtorch_cpu.dylib)
frame #9: at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) + 288 (0x2bb32d6f8 in libtorch_cpu.dylib)
frame #10: at::native::clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 444 (0x2ba981f64 in libtorch_cpu.dylib)
frame #11: at::_ops::clone::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 276 (0x2bb03b0a4 in libtorch_cpu.dylib)
frame #12: at::_ops::contiguous::call(at::Tensor const&, c10::MemoryFormat) + 272 (0x2bb45fa40 in libtorch_cpu.dylib)
frame #13: at::TensorBase::__dispatch_contiguous(c10::MemoryFormat) const + 40 (0x2ba447110 in libtorch_cpu.dylib)
frame #14: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 968 (0x2be86331c in libtorch_cpu.dylib)
frame #15: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 128 (0x2be8673dc in libtorch_cpu.dylib)
frame #16: at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&) + 140 (0x2bc003e88 in libtorch_cpu.dylib)
frame #17: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 284 (0x2bae41878 in libtorch_cpu.dylib)
frame #18: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 396 (0x16e69c9e8 in libtorch_python.dylib)
frame #19: _object* torch::autograd::TypeError_to_NotImplemented_<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x16e5f8a3c in libtorch_python.dylib)
frame #20: method_vectorcall_VARARGS_KEYWORDS + 488 (0x1025cbb3c in python3.10)
frame #21: vectorcall_maybe + 260 (0x102639650 in python3.10)
frame #22: slot_nb_multiply + 160 (0x102635d74 in python3.10)
frame #23: binary_op1 + 316 (0x1025a05a0 in python3.10)
frame #24: PyNumber_Multiply + 36 (0x1025a0b24 in python3.10)
frame #25: _PyEval_EvalFrameDefault + 3920 (0x1026a3318 in python3.10)
frame #26: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #27: method_vectorcall + 516 (0x1025c15f8 in python3.10)
frame #28: _PyEval_EvalFrameDefault + 27276 (0x1026a8e54 in python3.10)
frame #29: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #30: _PyObject_FastCallDictTstate + 320 (0x1025be094 in python3.10)
frame #31: _PyObject_Call_Prepend + 164 (0x1025bec8c in python3.10)
frame #32: slot_tp_call + 116 (0x102633ce8 in python3.10)
frame #33: _PyObject_MakeTpCall + 612 (0x1025bdde0 in python3.10)
frame #34: call_function + 676 (0x1026acde4 in python3.10)
frame #35: _PyEval_EvalFrameDefault + 26388 (0x1026a8adc in python3.10)
frame #36: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #37: PyVectorcall_Call + 156 (0x1025be624 in python3.10)
frame #38: _PyEval_EvalFrameDefault + 27276 (0x1026a8e54 in python3.10)
frame #39: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #40: method_vectorcall + 164 (0x1025c1498 in python3.10)
frame #41: call_function + 524 (0x1026acd4c in python3.10)
frame #42: _PyEval_EvalFrameDefault + 26612 (0x1026a8bbc in python3.10)
frame #43: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #44: _PyEval_EvalFrameDefault + 27276 (0x1026a8e54 in python3.10)
frame #45: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #46: _PyEval_EvalFrameDefault + 27276 (0x1026a8e54 in python3.10)
frame #47: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #48: context_run + 348 (0x1026c587c in python3.10)
frame #49: cfunction_vectorcall_FASTCALL_KEYWORDS + 112 (0x1026114e8 in python3.10)
frame #50: _PyEval_EvalFrameDefault + 27276 (0x1026a8e54 in python3.10)
frame #51: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #52: call_function + 524 (0x1026acd4c in python3.10)
frame #53: _PyEval_EvalFrameDefault + 26348 (0x1026a8ab4 in python3.10)
frame #54: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #55: call_function + 524 (0x1026acd4c in python3.10)
frame #56: _PyEval_EvalFrameDefault + 26348 (0x1026a8ab4 in python3.10)
frame #57: _PyEval_Vector + 2056 (0x1026a1ad0 in python3.10)
frame #58: method_vectorcall + 336 (0x1025c1544 in python3.10)
frame #59: thread_run + 180 (0x102774d98 in python3.10)
frame #60: pythread_wrapper + 48 (0x102710144 in python3.10)
frame #61: _pthread_start + 148 (0x19cf9ffa8 in libsystem_pthread.dylib)
frame #62: thread_start + 8 (0x19cf9ada0 in libsystem_pthread.dylib)

run.sh: line 3: 22444 Abort trap: 6           ./venv/bin/python launch-ui.py
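For context (my own reading of the trace, not from the thread): frame #1 is `getGatherScatterScalarType` in PyTorch's MPS backend (`aten/src/ATen/native/mps/operations/View.mm`), which maps tensor element sizes to Metal scalar types and, in the affected builds, appears to handle only small element sizes. A ComplexFloat element is 8 bytes (two float32s), so the check aborts when a complex tensor hits the MPS copy/view path. A schematic Python mock of that dispatch (the names, the size table, and the exact set of handled sizes are illustrative, not the real C++):

```python
# Schematic mock of the MPS gather/scatter dtype dispatch that aborts.
# The real code is C++ in View.mm; this only illustrates the failure mode.
ELEMENT_SIZE = {"Char": 1, "Half": 2, "Float": 4, "ComplexFloat": 8}

def gather_scatter_scalar_type(dtype: str) -> str:
    # Sizes the (mocked) kernel knows how to map to a Metal scalar type:
    mapped = {1: "char", 2: "short", 4: "int"}
    size = ELEMENT_SIZE[dtype]
    if size not in mapped:
        # Mirrors: c10::Error: Unsupported type byte size: ComplexFloat
        raise RuntimeError(f"Unsupported type byte size: {dtype}")
    return mapped[size]

print(gather_scatter_scalar_type("Float"))  # a 4-byte float is handled
try:
    gather_scatter_scalar_type("ComplexFloat")
except RuntimeError as e:
    print(e)  # the 8-byte complex type falls through to the error branch
```

This is why the crash surfaces from an ordinary tensor multiply (frame #17, `mul_Tensor`): the multiply itself is fine, but it triggers a `contiguous()`/copy on a complex tensor, and that copy goes through the unsupported gather/scatter path.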
@cryptrr

cryptrr commented Oct 20, 2023

Got the same error, on a Mac mini M2.

@toby1991

+1 mbp m1 pro

@hidecloud

Same error on a MacBook M2 Max.

@graysonchen

Same error on mbp m1 pro

@graysonchen

https://github.com/Plachtaa/VALL-E-X/pull/102/files
I commented out the code; I'm using the CPU and it works now.

@cczw2010
Author

cczw2010 commented Nov 1, 2023

> https://github.com/Plachtaa/VALL-E-X/pull/102/files I commented out the code; I'm using the CPU and it works now.

I tried this, but I still get the same error. Does the calling code need any other modifications?

@alexivaner

> https://github.com/Plachtaa/VALL-E-X/pull/102/files I commented out the code; I'm using the CPU and it works now.

If I comment out the MPS support part, I get a "no hardware supported" error instead.

@alexivaner

> https://github.com/Plachtaa/VALL-E-X/pull/102/files I commented out the code; I'm using the CPU and it works now.

It works on the CPU now, but is it still not possible to use MPS with VALL-E today?

@liucr

liucr commented Dec 21, 2023

I ran into the same error. Here is the output from the command-line terminal.

/Users//work/study/exercises/VALL-E-X/venv/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
VALL-E EOS [0 -> 897]
libc++abi: terminating due to uncaught exception of type c10::Error: Unsupported type byte size: ComplexFloat
Exception raised from getGatherScatterScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/View.mm:744 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) + 92 (0x100dd78d0 in libc10.dylib)
frame #1: at::native::mps::getGatherScatterScalarType(at::Tensor const&) + 400 (0x113a2b2e0 in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 132 (0x113a2a220 in libtorch_cpu.dylib)
frame #3: _dispatch_client_callout + 20 (0x1873c0910 in libdispatch.dylib)
frame #4: _dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x1873cfcc4 in libdispatch.dylib)
frame #5: at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 896 (0x113a28c90 in libtorch_cpu.dylib)
frame #6: at::native::mps::mps_copy_(at::Tensor&, at::Tensor const&, bool) + 3896 (0x113952e58 in libtorch_cpu.dylib)
frame #7: at::native::copy_impl(at::Tensor&, at::Tensor const&, bool) + 2592 (0x10f1ebec0 in libtorch_cpu.dylib)
frame #8: at::native::copy_(at::Tensor&, at::Tensor const&, bool) + 100 (0x10f1eb3e0 in libtorch_cpu.dylib)
frame #9: at::_ops::copy_::call(at::Tensor&, at::Tensor const&, bool) + 292 (0x10ff71960 in libtorch_cpu.dylib)
frame #10: at::native::clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 456 (0x10f56f018 in libtorch_cpu.dylib)
frame #11: at::_ops::clone::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 280 (0x10fc35bd0 in libtorch_cpu.dylib)
frame #12: at::_ops::contiguous::call(at::Tensor const&, c10::MemoryFormat) + 280 (0x1100b8230 in libtorch_cpu.dylib)
frame #13: at::TensorBase::__dispatch_contiguous(c10::MemoryFormat) const + 40 (0x10f0321ac in libtorch_cpu.dylib)
frame #14: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 940 (0x1139386b8 in libtorch_cpu.dylib)
frame #15: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 112 (0x11393b3e4 in libtorch_cpu.dylib)
frame #16: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, at::Tensor const&), &at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, at::Tensor const&>>, at::Tensor (at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) + 136 (0x110d36ac8 in libtorch_cpu.dylib)
frame #17: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 288 (0x10fa3336c in libtorch_cpu.dylib)
frame #18: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 408 (0x102f5cc14 in libtorch_python.dylib)
frame #19: _object* torch::autograd::TypeError_to_NotImplemented_<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x102eb5384 in libtorch_python.dylib)
frame #20: method_vectorcall_VARARGS_KEYWORDS + 156 (0x10132b9c0 in Python)
frame #21: slot_nb_multiply + 236 (0x1013b3430 in Python)
frame #22: binary_op1 + 228 (0x1012f7ab0 in Python)
frame #23: PyNumber_Multiply + 36 (0x1012f80c0 in Python)
frame #24: _PyEval_EvalFrameDefault + 3844 (0x101446698 in Python)
frame #25: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #26: method_vectorcall + 288 (0x101320c64 in Python)
frame #27: _PyEval_EvalFrameDefault + 1472 (0x101445d54 in Python)
frame #28: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #29: method_vectorcall + 288 (0x101320c64 in Python)
frame #30: _PyEval_EvalFrameDefault + 1472 (0x101445d54 in Python)
frame #31: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #32: _PyObject_FastCallDictTstate + 96 (0x10131cfb0 in Python)
frame #33: slot_tp_call + 196 (0x1013afccc in Python)
frame #34: _PyObject_MakeTpCall + 136 (0x10131ccf8 in Python)
frame #35: call_function + 380 (0x101453238 in Python)
frame #36: _PyEval_EvalFrameDefault + 23772 (0x10144b470 in Python)
frame #37: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #38: PyVectorcall_Call + 140 (0x10131d800 in Python)
frame #39: _PyEval_EvalFrameDefault + 1472 (0x101445d54 in Python)
frame #40: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #41: method_vectorcall + 124 (0x101320bc0 in Python)
frame #42: call_function + 132 (0x101453140 in Python)
frame #43: _PyEval_EvalFrameDefault + 17484 (0x101449be0 in Python)
frame #44: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #45: _PyEval_EvalFrameDefault + 1472 (0x101445d54 in Python)
frame #46: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #47: call_function + 132 (0x101453140 in Python)
frame #48: _PyEval_EvalFrameDefault + 17352 (0x101449b5c in Python)
frame #49: _PyEval_Vector + 360 (0x101443f28 in Python)
frame #50: pyrun_file + 308 (0x1014aec54 in Python)
frame #51: _PyRun_SimpleFileObject + 336 (0x1014ae398 in Python)
frame #52: _PyRun_AnyFileObject + 216 (0x1014ad9e4 in Python)
frame #53: pymain_run_file_obj + 180 (0x1014d9dd0 in Python)
frame #54: pymain_run_file + 72 (0x1014d9470 in Python)
frame #55: pymain_run_python + 300 (0x1014d8a58 in Python)
frame #56: Py_RunMain + 24 (0x1014d88ec in Python)
frame #57: pymain_main + 56 (0x1014d9f78 in Python)
frame #58: Py_BytesMain + 40 (0x1014da23c in Python)
frame #59: start + 2360 (0x1871f10e0 in dyld)

zsh: abort      python test1.py

Here is the runtime environment information for reference.

Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.2 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.27.8
Libc version: N/A

Python version: 3.10.11 (v3.10.11:7d4cc5aa85, Apr  4 2023, 19:05:19) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M3 Max

Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] torch==2.1.2
[pip3] torchaudio==2.1.2
[pip3] torchvision==0.16.2
[conda] numpy                     1.24.3          py311hb57d4eb_0  
[conda] numpy-base                1.24.3          py311h1d85a46_0  
[conda] numpydoc                  1.5.0           py311hca03da5_0  
[conda] open-clip-torch           2.23.0                   pypi_0    pypi
[conda] pytorch                   2.2.0.dev20231206        py3.11_0    pytorch-nightly
[conda] pytorch-lightning         2.1.2                    pypi_0    pypi
[conda] pytorch-optimizer         2.12.0                   pypi_0    pypi
[conda] torch                     2.1.1                    pypi_0    pypi
[conda] torchaudio                2.1.1                    pypi_0    pypi
[conda] torchdiffeq               0.2.3                    pypi_0    pypi
[conda] torchmetrics              1.2.0                    pypi_0    pypi
[conda] torchsde                  0.2.6                    pypi_0    pypi
[conda] torchvision               0.16.1                   pypi_0    pypi

Here is the test1.py code that I ran:

from utils.generation import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
Hello, my name is Nose. And uh, and I like hamburger. Hahaha... But I also have other interests such as playing tactic toast.
"""
audio_array = generate_audio(text_prompt)

# save audio to disk
write_wav("vallex_generation.wav", SAMPLE_RATE, audio_array)

# play text in notebook
Audio(audio_array, rate=SAMPLE_RATE)

@KodeurKubik

Got the same error, +1.

@KodeurKubik

KodeurKubik commented Jan 12, 2024

Just found a workaround; it works for me now.
In utils/generation.py, around line 30, force the device to CPU by commenting out the MPS branch:

device = torch.device("cpu")
if torch.cuda.is_available():
    device = torch.device("cuda", 0)
# if torch.backends.mps.is_available():
#     device = torch.device("mps")
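A slightly more self-contained way to express this workaround is to centralize the device choice and gate MPS behind an explicit opt-in, since the complex-float ops VALL-E-X uses crash PyTorch's MPS backend. This is a hedged sketch; `pick_device` and `allow_mps` are my own names, not part of the repo:

```python
def pick_device(cuda_available: bool, mps_available: bool,
                allow_mps: bool = False) -> str:
    """Prefer CUDA; use MPS only when explicitly allowed (complex-float
    tensors crash the MPS backend); otherwise fall back to CPU."""
    if cuda_available:
        return "cuda"
    if mps_available and allow_mps:
        return "mps"
    return "cpu"

# On Apple Silicon without CUDA this reproduces the workaround above:
print(pick_device(cuda_available=False, mps_available=True))  # cpu
```

In utils/generation.py one would then do something like `device = torch.device(pick_device(torch.cuda.is_available(), torch.backends.mps.is_available()))`, flipping `allow_mps` to `True` once a PyTorch build supports complex dtypes on MPS.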

@graysonchen

@KodeurKubik Yep, but that doesn't comment out all of the relevant code. Please check this comment and PR: #109 (comment)
