My nvcc version:

```
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
```
My command:

```
BUILD_TYPE=cublas make libbinding.a
```
Output:

```
BUILD_TYPE=cublas make libbinding.a
I llama.cpp build info:
I UNAME_S: Linux
I UNAME_P: x86_64
I UNAME_M: x86_64
I CFLAGS: -I./llama.cpp -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -pthread -march=native -mtune=native
I CXXFLAGS: -I./llama.cpp -I. -I./llama.cpp/examples -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -pthread
I CGO_LDFLAGS:
I LDFLAGS:
I BUILD_TYPE: cublas
I CMAKE_ARGS: -DLLAMA_CUBLAS=ON
I EXTRA_TARGETS: llama.cpp/ggml-cuda.o
I CC: cc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
I CXX: g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
cd llama.cpp && patch -p1 < ../patches/1902-cuda.patch
patching file examples/common.cpp
Hunk #1 succeeded at 603 (offset 23 lines).
patching file examples/main/main.cpp
Hunk #1 succeeded at 124 (offset 8 lines).
patching file examples/quantize-stats/quantize-stats.cpp
patching file examples/save-load-state/save-load-state.cpp
patching file examples/train-text-from-scratch/train-text-from-scratch.cpp
patching file llama.cpp
Hunk #1 succeeded at 2698 (offset 31 lines).
Hunk #2 succeeded at 2723 (offset 32 lines).
Hunk #3 succeeded at 2845 (offset 32 lines).
patching file llama.h
Hunk #1 succeeded at 173 (offset 7 lines).
patching file tests/test-tokenizer-0.cpp
touch prepare
mkdir -p build
cd build && cmake ../llama.cpp -DLLAMA_CUBLAS=ON && VERBOSE=1 cmake --build . --config Release && cp -rf CMakeFiles/ggml.dir/ggml.c.o ../llama.cpp/ggml.o
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.25.1")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Found CUDAToolkit: /usr/include (found version "10.1.243")
-- cuBLAS found
-- The CUDA compiler identification is NVIDIA 10.1.243
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Using CUDA architectures: 52;61
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done
-- Generating done
-- Build files have been written to: /home/bruce/proj/go-llama.cpp/build
make[1]: Entering directory '/home/bruce/proj/go-llama.cpp/build'
/usr/local/bin/cmake -S/home/bruce/proj/go-llama.cpp/llama.cpp -B/home/bruce/proj/go-llama.cpp/build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/local/bin/cmake -E cmake_progress_start /home/bruce/proj/go-llama.cpp/build/CMakeFiles /home/bruce/proj/go-llama.cpp/build//CMakeFiles/progress.marks
/usr/bin/make -f CMakeFiles/Makefile2 all
make[2]: Entering directory '/home/bruce/proj/go-llama.cpp/build'
/usr/bin/make -f CMakeFiles/ggml.dir/build.make CMakeFiles/ggml.dir/depend
make[3]: Entering directory '/home/bruce/proj/go-llama.cpp/build'
cd /home/bruce/proj/go-llama.cpp/build && /usr/local/bin/cmake -E cmake_depends "Unix Makefiles" /home/bruce/proj/go-llama.cpp/llama.cpp /home/bruce/proj/go-llama.cpp/llama.cpp /home/bruce/proj/go-llama.cpp/build /home/bruce/proj/go-llama.cpp/build /home/bruce/proj/go-llama.cpp/build/CMakeFiles/ggml.dir/DependInfo.cmake --color=
make[3]: Leaving directory '/home/bruce/proj/go-llama.cpp/build'
/usr/bin/make -f CMakeFiles/ggml.dir/build.make CMakeFiles/ggml.dir/build
make[3]: Entering directory '/home/bruce/proj/go-llama.cpp/build'
[  2%] Building C object CMakeFiles/ggml.dir/ggml.c.o
/usr/bin/cc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -DK_QUANTS_PER_ITERATION=2 -I/home/bruce/proj/go-llama.cpp/llama.cpp/. -O3 -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -mf16c -mfma -mavx -mavx2 -pthread -std=gnu11 -MD -MT CMakeFiles/ggml.dir/ggml.c.o -MF CMakeFiles/ggml.dir/ggml.c.o.d -o CMakeFiles/ggml.dir/ggml.c.o -c /home/bruce/proj/go-llama.cpp/llama.cpp/ggml.c
[  4%] Building CUDA object CMakeFiles/ggml.dir/ggml-cuda.cu.o
/usr/bin/nvcc -DGGML_CUDA_DMMV_X=32 -DGGML_CUDA_MMV_Y=1 -DGGML_USE_CUBLAS -DGGML_USE_K_QUANTS -DK_QUANTS_PER_ITERATION=2 -I/home/bruce/proj/go-llama.cpp/llama.cpp/. -O3 -DNDEBUG --generate-code=arch=compute_52,code=[compute_52,sm_52] --generate-code=arch=compute_61,code=[compute_61,sm_61] -mf16c -mfma -mavx -mavx2 -Xcompiler -pthread -std=c++11 -x cu -c /home/bruce/proj/go-llama.cpp/llama.cpp/ggml-cuda.cu -o CMakeFiles/ggml.dir/ggml-cuda.cu.o
nvcc fatal   : 'f16c': expected a number
make[3]: *** [CMakeFiles/ggml.dir/build.make:90: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[3]: Leaving directory '/home/bruce/proj/go-llama.cpp/build'
make[2]: *** [CMakeFiles/Makefile2:880: CMakeFiles/ggml.dir/all] Error 2
make[2]: Leaving directory '/home/bruce/proj/go-llama.cpp/build'
make[1]: *** [Makefile:146: all] Error 2
make[1]: Leaving directory '/home/bruce/proj/go-llama.cpp/build'
make: *** [Makefile:181: llama.cpp/ggml.o] Error 2
```
Error:

```
nvcc fatal   : 'f16c': expected a number
```
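For context, the failure is most likely not in the CUDA code itself: the failing `nvcc` invocation above passes the host-compiler flags `-mf16c -mfma -mavx -mavx2` directly to `nvcc`, which does not understand bare x86 `-m` options (it parses `-mf16c` as `-m <value>` and complains that `f16c` is not a number). Such host-only flags have to be forwarded to the host compiler via `-Xcompiler`. A minimal sketch of the difference, assuming a working CUDA toolkit and an illustrative `ggml-cuda.cu` input:

```shell
# Broken: nvcc tries to parse -mf16c itself and fails with
#   nvcc fatal : 'f16c': expected a number
nvcc -O3 -mf16c -mfma -mavx -mavx2 -c ggml-cuda.cu -o ggml-cuda.o

# Working: wrap the host-only x86 flags in -Xcompiler so nvcc forwards
# them to the underlying host compiler (gcc/g++) unchanged.
nvcc -O3 -Xcompiler "-mf16c,-mfma,-mavx,-mavx2" -c ggml-cuda.cu -o ggml-cuda.o
```

Newer CMake/toolkit combinations handle this forwarding for you, which is consistent with the upgrade suggestion below in the thread.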
Try upgrading nvcc. The build works for me with CUDA 11.8:

```
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
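After installing a newer toolkit, it is worth confirming which `nvcc` the build picks up and starting from a clean tree, since the existing `build/` directory caches CMake's detection of the old 10.1 compiler. A sketch of the re-run (the paths and steps mirror the log above; the exact install method for CUDA 11.x is left out as it varies by distribution):

```shell
# Confirm the compiler on PATH is the upgraded one (should report 11.x).
which nvcc
nvcc --version

# Drop the stale CMake cache so CUDA detection runs again, then rebuild.
rm -rf build
BUILD_TYPE=cublas make libbinding.a
```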