
error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.8\\bin\\nvcc' failed with exit code 1 #470

Open
Zain-Razzaq opened this issue Sep 26, 2024 · 1 comment

Comments

@Zain-Razzaq

I am facing a problem while installing tiny-cuda-nn on Windows 10. It worked fine on Ubuntu, but now I need it on Windows.
I'm using CUDA 11.8 and have Visual Studio 2019 installed. The PyTorch version is 2.0.1 and the CMake version is 3.30.3. My GPU is an RTX 4090.

      C:\Users\IML\AppData\Local\Temp\pip-req-build-baggulcg\src\cutlass_mlp.cu(277): here
                  instantiation of "void tcnn::CutlassMLP<T>::backward_impl(cudaStream_t, const tcnn::Context &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, const tcnn::GPUMatrixDynamic<T> &, tcnn::GPUMatrixDynamic<T> *, __nv_bool, tcnn::GradientMode) [with T=tcnn::network_precision_t]"
      C:\Users\IML\AppData\Local\Temp\pip-req-build-baggulcg\src\cutlass_mlp.cu(374): here
     
      43 errors detected in the compilation of "../../src/cutlass_mlp.cu".
      cutlass_mlp.cu
      error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.8\\bin\\nvcc' failed with exit code 1
      [end of output]
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tinycudann
  Running setup.py clean for tinycudann
Failed to build tinycudann
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (tinycudann)
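
To help narrow this kind of failure down, the snippet below is a minimal diagnostic sketch (not taken from the issue itself). It only assumes PyTorch is installed and prints the versions that have to agree, since a mismatch between the CUDA toolkit that nvcc comes from and the CUDA version the PyTorch wheel was built against is a common cause of tiny-cuda-nn build errors on Windows. The exact printout and comments are assumptions about a typical setup like the one described above.

    # Hypothetical diagnostic sketch (not from the original report): print the versions
    # that need to line up before building tiny-cuda-nn against PyTorch.
    import torch

    print("PyTorch version:       ", torch.__version__)
    print("PyTorch built for CUDA:", torch.version.cuda)   # should match the installed toolkit (11.8 here)
    print("CUDA available:        ", torch.cuda.is_available())

    if torch.cuda.is_available():
        # An RTX 4090 reports compute capability 8.9, so the build must target that architecture (sm_89).
        major, minor = torch.cuda.get_device_capability(0)
        print(f"GPU: {torch.cuda.get_device_name(0)} (compute capability {major}.{minor})")
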
@starfind

Me too.
