Today I ran into a CUDA problem. The deep learning code I wanted to run includes CUDA extensions that have to be compiled, and as soon as I launched it, it failed with a long wall of errors:
(score-denoise) ubuntu@GPUA10002:~/wbd/score-denoise_Transformerdepth20$ python train.py
Detected CUDA files, patching ldflags
Emitting ninja build file /home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/build/build.ninja...
Building extension module cutils_...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=cutils_ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/TH -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/THC -isystem /data/miniconda3/envs/score-denoise/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -Xptxas -v --generate-code=arch=compute_80,code=sm_80 -std=c++14 -c /home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/srcs/half_aligned_knn_sub_maxpooling.cu -o half_aligned_knn_sub_maxpooling.cuda.o
FAILED: half_aligned_knn_sub_maxpooling.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=cutils_ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/TH -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/THC -isystem /data/miniconda3/envs/score-denoise/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -Xptxas -v --generate-code=arch=compute_80,code=sm_80 -std=c++14 -c /home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/srcs/half_aligned_knn_sub_maxpooling.cu -o half_aligned_knn_sub_maxpooling.cuda.o
nvcc fatal : Unsupported gpu architecture 'compute_80'
[2/3] /usr/bin/nvcc -DTORCH_EXTENSION_NAME=cutils_ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/TH -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/THC -isystem /data/miniconda3/envs/score-denoise/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -Xptxas -v --generate-code=arch=compute_80,code=sm_80 -std=c++14 -c /home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/srcs/aligned_knn_sub_maxpooling.cu -o aligned_knn_sub_maxpooling.cuda.o
FAILED: aligned_knn_sub_maxpooling.cuda.o
/usr/bin/nvcc -DTORCH_EXTENSION_NAME=cutils_ -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/TH -isystem /data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/include/THC -isystem /data/miniconda3/envs/score-denoise/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -Xptxas -v --generate-code=arch=compute_80,code=sm_80 -std=c++14 -c /home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/srcs/aligned_knn_sub_maxpooling.cu -o aligned_knn_sub_maxpooling.cuda.o
nvcc fatal : Unsupported gpu architecture 'compute_80'
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1666, in _run_ninja_build
    subprocess.run(
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "train.py", line 13, in <module>
    from models.denoise import *
  File "/home/ubuntu/wbd/score-denoise_Transformerdepth20/models/denoise.py", line 7, in <module>
    from .feature import FeatureExtractionWithResLFE
  File "/home/ubuntu/wbd/score-denoise_Transformerdepth20/models/feature.py", line 6, in <module>
    from .ResLFE_block import ResLFE_Block
  File "/home/ubuntu/wbd/score-denoise_Transformerdepth20/models/ResLFE_block.py", line 8, in <module>
    from utils.cutils import knn_edge_maxpooling
  File "/home/ubuntu/wbd/score-denoise_Transformerdepth20/utils/cutils/__init__.py", line 14, in <module>
    cutils = load("cutils_", sources=sources, extra_cflags=["-O3", "-mavx2", "-funroll-loops"], extra_cuda_cflags=["-Xptxas","-v", "--generate-code=arch=compute_80,code=sm_80"],
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1080, in load
    return _jit_compile(
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1293, in _jit_compile
    _write_ninja_file_and_build_library(
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1405, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "/data/miniconda3/envs/score-denoise/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1682, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'cutils_'
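The decisive line in all of this is `nvcc fatal : Unsupported gpu architecture 'compute_80'`. compute_80 is the Ampere architecture (A100-class GPUs, which matches this machine), and nvcc only learned about it starting with CUDA 11.0, so any older toolkit rejects the flag outright. As a minimal sketch (the `supports_sm80` helper is mine, not part of the project), you can gate on the major release parsed out of the `nvcc -V` banner:

```shell
# Returns success if an "nvcc -V" banner names a release new enough
# (>= 11) to know the compute_80 / sm_80 Ampere targets.
supports_sm80() {
    major=$(printf '%s\n' "$1" | sed -n 's/.*release \([0-9][0-9]*\)\..*/\1/p')
    [ "${major:-0}" -ge 11 ]
}

supports_sm80 "Cuda compilation tools, release 9.1, V9.1.85" \
    && echo "9.1: ok" || echo "9.1: too old for compute_80"
supports_sm80 "Cuda compilation tools, release 12.4, V12.4.99" \
    && echo "12.4: ok" || echo "12.4: too old for compute_80"
```

Feed it the two banners that show up later in this story and it flags 9.1 as too old while 12.4 passes.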
I asked an AI, and its suggestion was to check the nvcc version:
nvcc --version
It turned out nvcc was not installed at all, and the shell helpfully suggested:
sudo apt install nvidia-cuda-toolkit
So I installed it, and afterwards checked what I got:
(base) ubuntu@GPUA10002:~/wbd/score-denoise_Transformerdepth20$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:07:56_CDT_2017
Cuda compilation tools, release 9.1, V9.1.85
That is a very old version. A recent toolkit prints something like this instead:
(score-denoise) wu@wu:~/code/pointDenoise/score-denoise_Transformerdepth20$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:19:38_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
With that old toolkit, compilation naturally failed. I kept searching for related issues, and one thing kept nagging at me: before I ever installed nvcc through apt, other people could already run deep learning code on this machine, which seemed very strange.
So I eventually uninstalled it again:
sudo apt remove nvidia-cuda-toolkit
(This is safe to remove, don't worry.)
After that, the error changed into a complaint about the CUDA path. I went looking in ~/.bashrc and found that a CUDA path was indeed set there, so I tried entering the exports directly in the terminal:
export PATH=/usr/local/cuda-12.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.1/lib64:$LD_LIBRARY_PATH
Still no luck. At that point I was stuck.
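In hindsight, the reason those exports did nothing makes sense: `export` happily prepends a directory to PATH whether or not it exists, so pointing at a cuda-12.1 that was never installed silently changes nothing. A small sanity check before exporting would have caught it (the path below is just the one from my machine):

```shell
# export does not validate paths, so check the directory first
dir=/usr/local/cuda-12.1/bin
if [ -d "$dir" ]; then
    export PATH="$dir:$PATH"
else
    echo "missing: $dir (check what is really under /usr/local)"
fi
```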
Finally, I went into /usr/local to look around, and discovered that what was actually installed there was cuda-12.0, not 12.1. So in the terminal session where I wanted to run the code, I entered the two commands below, ran the code again, and this time the extension compiled:
export PATH=/usr/local/cuda-12.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.0/lib64:$LD_LIBRARY_PATH
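A faster way to discover this, instead of guessing version numbers, is to list what is actually under /usr/local; on many setups /usr/local/cuda is also a symlink to the real versioned directory. Here is a small simulation of my situation in a throwaway temp directory (not the real system paths):

```shell
# Simulate: only cuda-12.0 is installed, with the usual "cuda" symlink
root=$(mktemp -d)
mkdir -p "$root/cuda-12.0/bin"
ln -s "$root/cuda-12.0" "$root/cuda"

ls -d "$root"/cuda*                      # cuda and cuda-12.0; no cuda-12.1
basename "$(readlink -f "$root/cuda")"   # resolves to cuda-12.0
```

On the real machine the equivalent is `ls -d /usr/local/cuda*` plus `readlink -f /usr/local/cuda`.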
I still don't know why ~/.bashrc points at cuda-12.1 rather than cuda-12.0, so I left it untouched. Whenever I need to run this code, I simply enter those two lines first and then start the run. (Of course, if nothing needs to be compiled, running the code directly works fine.)
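If you would rather not hard-code a version at all, a hedged alternative (the `pick_cuda` helper is my own, not something from the repo, and `sort -V` assumes GNU coreutils) is to select the newest cuda-* directory automatically and export from that:

```shell
# Pick the highest-versioned cuda-* directory under the given prefix
pick_cuda() {
    ls -d "$1"/cuda-* 2>/dev/null | sort -V | tail -n 1
}

# Demonstrated on a throwaway prefix; on the real box use /usr/local
root=$(mktemp -d)
mkdir -p "$root/cuda-9.1" "$root/cuda-12.0"
CUDA_HOME=$(pick_cuda "$root")
echo "$CUDA_HOME"   # ends in cuda-12.0, not cuda-9.1
# Real usage would then be:
#   export PATH="$CUDA_HOME/bin:$PATH"
#   export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$LD_LIBRARY_PATH"
```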
This problem cost me a whole afternoon. If you run into something similar, be sure to look inside /usr/local/ and check which CUDA version is actually installed there.