
Help Needed: X-AnyLabeling Not Using GPU #786

Open
1 task done
shahzaibkhan9 opened this issue Jan 7, 2025 · 6 comments
Labels
Clarified: Tag for issues that are clearly agreed upon
question: Further information is requested

Comments

@shahzaibkhan9

Search before asking

  • I have searched the X-AnyLabeling Docs and issues and found no similar questions.

Question

System Information
App name: X-AnyLabeling
App version: 2.5.2
Device: GPU

Operating System: Windows-10-10.0.22631-SP0
CPU: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
GPU: 0, NVIDIA GeForce RTX 4090, 24564
CUDA: V12.6.77
Python Version: 3.10.16

PyQt5 Version: 5.15.7
ONNX Version: 1.17.0
ONNX Runtime Version: None
ONNX Runtime GPU Version: 1.20.1
OpenCV Contrib Python Headless Version: 4.10.0.84

Query:
Hi,

I'm currently working on annotating images using SAM2, but it's using the CPU for processing instead of the GPU. Even though I've updated the preferred_device variable to GPU in app_info.py, the application still defaults to using the CPU.

I’ve attached screenshots for reference. Could someone please help me figure out what might be causing this issue and guide me on how to ensure the GPU is being used?

Thanks in advance!

[Screenshot 2025-01-07 162423]
[Screenshot 2025-01-07 162506]

Additional

No response
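A quick way to verify whether the installed ONNX Runtime build can actually reach the GPU is to ask it which execution providers it offers. This is a minimal sketch using the standard `onnxruntime` API; the helper name `cuda_provider_available` is ours, not part of X-AnyLabeling:

```python
# Sketch: check whether ONNX Runtime was installed with CUDA support.
# If this prints False, X-AnyLabeling will fall back to the CPU provider
# regardless of the preferred_device setting.
def cuda_provider_available() -> bool:
    """Return True if onnxruntime reports the CUDA execution provider."""
    try:
        import onnxruntime as ort
    except ImportError:
        # Neither onnxruntime nor onnxruntime-gpu is installed.
        return False
    return "CUDAExecutionProvider" in ort.get_available_providers()

print(cuda_provider_available())
```

If this prints `False` even with `onnxruntime-gpu` installed, a common cause is having both `onnxruntime` and `onnxruntime-gpu` in the same environment, where the CPU-only package shadows the GPU one.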

@shahzaibkhan9 added the "question" (Further information is requested) label Jan 7, 2025
@shahzaibkhan9 shahzaibkhan9 reopened this Jan 7, 2025
@shahzaibkhan9 (Author) commented Jan 7, 2025

The device switched to GPU, but inference is still not working properly.

I tried to run this command: python setup.py build_ext --inplace

But I'm getting errors:

running build_ext
C:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\utils\cpp_extension.py:382: UserWarning: Error checking compiler version for cl.exe: [WinError 2] The system cannot find the file specified
  warnings.warn(f'Error checking compiler version for {compiler}: {error}')
C:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\utils\cpp_extension.py:416: UserWarning: The detected CUDA version (12.6) has a minor version mismatch with the version that was used to compile PyTorch (12.4). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'sam2._C' extension
C:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\utils\cpp_extension.py:1964: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. 
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Emitting ninja build file C:\Users\GPU\Downloads\x-anylabeling\segment-anything-2\build\temp.win-amd64-cpython-310\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin\nvcc --generate-dependencies-with-compile --dependency-output C:\Users\GPU\Downloads\x-anylabeling\segment-anything-2\build\temp.win-amd64-cpython-310\Release\sam2/csrc/connected_components.obj.d -std=c++17 --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /wd4624 -Xcompiler /wd4067 -Xcompiler /wd4068 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\include -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\include\TH -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\include" -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\include -IC:\ProgramData\miniconda3\envs\x-anylabeling-sam2\Include -c C:\Users\GPU\Downloads\x-anylabeling\segment-anything-2\sam2\csrc\connected_components.cu -o C:\Users\GPU\Downloads\x-anylabeling\segment-anything-2\build\temp.win-amd64-cpython-310\Release\sam2/csrc/connected_components.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_89,code=compute_89 -gencode=arch=compute_89,code=sm_89
FAILED: C:/Users/GPU/Downloads/x-anylabeling/segment-anything-2/build/temp.win-amd64-cpython-310/Release/sam2/csrc/connected_components.obj
nvcc fatal   : Cannot find compiler 'cl.exe' in PATH
ninja: build stopped: subcommand failed.
Error compiling objects for extension

Failed to build the SAM 2 CUDA extension due to the error above. You can still use SAM 2, and it's OK to ignore the error above, although some post-processing functionality may be limited (which doesn't affect the results in most cases; see https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).

@CVHub520 (Owner) commented Jan 7, 2025

Hey there! @shahzaibkhan9:

The immediate error `nvcc fatal : Cannot find compiler 'cl.exe' in PATH` indicates that Visual Studio's C++ compiler is not on your PATH. The usual fix on Windows is to install the "Desktop development with C++" workload for Visual Studio and then run the build from a Developer Command Prompt (or after calling the vcvars script), which puts cl.exe on the PATH for that session.

Also, for a more stable development environment, I recommend using Windows Subsystem for Linux (WSL2) with Ubuntu instead. This approach typically has fewer compatibility issues and is better supported for deep learning development.
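As a sketch of the Visual Studio setup on Windows (the install path below is the Visual Studio 2022 Community default and may differ for your edition or version):

```shell
:: Run inside cmd.exe. vcvars64.bat puts cl.exe on PATH for this session only.
:: Adjust the path for your VS edition (Community/Professional/BuildTools).
call "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"

:: cl.exe should now resolve; then retry the extension build.
where cl
python setup.py build_ext --inplace
```

This is environment setup rather than a permanent fix: the PATH change lasts only for that command-prompt session, so the build must be run from the same window.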

@CVHub520 added the "Clarified" (Tag for issues that are clearly agreed upon) label Jan 7, 2025
@shahzaibkhan9 (Author) commented Jan 9, 2025
Hi, I’ve set up the environment using WSL, but I’m encountering the following issue:

INFO:__main__:🚀 X-AnyLabeling v2.5.2 launched!
INFO:__main__:⭐ If you like it, give us a star: https://github.com/CVHub520/X-AnyLabeling
2025-01-09 11:56:29,221 | INFO    | config:get_config:83 - 🔧️ Initializing config from local file: /home/gpu/.xanylabelingrc
This plugin does not support propagateSizeHints()
This plugin does not support raise()

It appears to be related to PyQt5. I attempted to set the Qt platform plugin to xcb, but it didn't resolve the issue:

export QT_QPA_PLATFORM=xcb

Has anyone encountered this before? Any suggestions on how to fix it?

Thanks in advance!
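For what it's worth, the two "This plugin does not support ..." lines are warnings from the Qt platform plugin, and on WSL they often accompany missing X11/xcb client libraries. A hedged setup sketch (package names are the usual Ubuntu ones; the exact set needed can vary by Qt version):

```shell
# Install X11/xcb libraries that Qt's xcb platform plugin commonly needs (Ubuntu/WSL).
sudo apt-get update
sudo apt-get install -y libxcb-xinerama0 libxcb-cursor0 libxkbcommon-x11-0

# Force the xcb platform plugin; note the value is "xcb", not "qcb".
export QT_QPA_PLATFORM=xcb

# Optional: verbose plugin diagnostics if the app still fails to start.
export QT_DEBUG_PLUGINS=1
```

With `QT_DEBUG_PLUGINS=1`, Qt prints exactly which platform plugin it tried to load and which shared library it failed to find, which usually names the missing package.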

@CVHub520 (Owner) commented Jan 9, 2025

Perhaps you can refer to issue #761.

@shahzaibkhan9 (Author) commented Jan 9, 2025
It's still not working. Could you please take a look?

@CVHub520 (Owner) commented Jan 9, 2025

I'm sorry for the trouble. Unfortunately, I don't know how to solve this one; it looks like missing libraries or incorrect settings in your system environment. I recommend searching for the exact error messages on Google or Reddit. 😢
