See "No CUDA toolset found" log after "-- CUDA Toolkit found" log #1917

Open

TomaszDlubis opened this issue Feb 3, 2025 · 1 comment

TomaszDlubis commented Feb 3, 2025

I have installed CUDA 12.5, which is detected by CMake and visible in the terminal. CUDA_PATH is also set correctly.
While trying to install, I see these logs:

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu125
...
     -- Performing Test HAS_FMA_1 - Success
      -- Performing Test HAS_AVX512_1
      -- Performing Test HAS_AVX512_1 - Failed
      -- Performing Test HAS_AVX512_2
      -- Performing Test HAS_AVX512_2 - Failed
      -- Adding CPU backend variant ggml-cpu: /arch:AVX2 GGML_AVX2;GGML_FMA;GGML_F16C
      -- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.5/include (found version "12.5.40")
      -- CUDA Toolkit found
      -- Using CUDA architectures: native
      CMake Error at C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:563 (message):
        No CUDA toolset found.
      Call Stack (most recent call first):
        C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:8 (CMAKE_DETERMINE_COMPILER_ID_BUILD)
        C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/CMakeDetermineCompilerId.cmake:53 (__determine_compiler_id_test)
        C:/Program Files (x86)/Microsoft Visual Studio/2022/BuildTools/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/CMakeDetermineCUDACompiler.cmake:131 (CMAKE_DETERMINE_COMPILER_ID)
        vendor/llama.cpp/ggml/src/ggml-cuda/CMakeLists.txt:25 (enable_language)


      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)

My environment:
Windows 11,
CUDA 12.5,
Python 3.12.8,
ninja 1.11.1.3,
cmake 3.31.4
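
For context, the "No CUDA toolset found" error from CMakeDetermineCompilerId.cmake generally means the Visual Studio generator cannot find CUDA's MSBuild integration (the "CUDA 12.5" .props/.targets files) under the Build Tools. A quick check, as a sketch assuming the install locations shown in the log above (the MSBuild subpath may differ on your machine), is to list the BuildCustomizations directory:

dir "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.5.*"

If nothing is listed there, the CUDA toolset files were never registered with the Build Tools (this can happen, for example, if the Build Tools were installed after CUDA).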


la1ty commented Feb 9, 2025

Did you forget to copy the four files from the CUDA MSBuildExtensions directory to the VS BuildCustomizations directory after installation? The Everything search tool should be useful for locating them.
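
For example, a minimal sketch assuming default locations for CUDA 12.5 and the VS 2022 Build Tools (adjust both paths to match your installation), run from an elevated command prompt:

copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\extras\visual_studio_integration\MSBuildExtensions\*" ^
     "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations\"

After copying the files, re-run the pip install command.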
