llama.cpp/cmake
Latest commit: 4fea191c66 by Oliver Simons, 2025-11-26 15:30:37 +01:00
Use `FetchContent` over CPM as it's bundled with CMake. Thanks @ggerganov for the suggestion.
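The latest commit swaps the CPM package manager for CMake's built-in `FetchContent` module. Below is a minimal sketch of the `FetchContent` pattern, with a purely illustrative dependency name, URL, and tag (none of them taken from the repository):

```cmake
include(FetchContent)

# Declare the dependency; repository URL and tag are placeholders.
FetchContent_Declare(
    example_dep
    GIT_REPOSITORY https://example.com/example/example_dep.git
    GIT_TAG        v1.0.0
)

# Download the sources (if not already present) and add the dependency's
# CMakeLists.txt to this build, making its targets available for linking.
FetchContent_MakeAvailable(example_dep)
```

Because `FetchContent` ships with CMake (3.11+), this removes the need to vendor or bootstrap CPM before configuring the project.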
| File | Last commit | Date |
| --- | --- | --- |
| arm64-apple-clang.cmake | Add apple arm to presets (#10134) | 2024-11-02 15:35:31 -07:00 |
| arm64-windows-llvm.cmake | ggml : prevent builds with -ffinite-math-only (#7726) | 2024-06-04 17:01:09 +10:00 |
| build-info.cmake | build : fix build info on windows (#13239) | 2025-05-01 21:48:08 +02:00 |
| common.cmake | cmake : enable building llama.cpp using system libggml (#12321) | 2025-03-17 11:05:23 +02:00 |
| git-vars.cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| llama-config.cmake.in | cmake: add hints for locating ggml on Windows using Llama find-package (#11466) | 2025-01-28 19:22:06 -04:00 |
| llama.pc.in | build : fix llama.pc (#11658) | 2025-02-06 13:08:13 +02:00 |
| riscv64-spacemit-linux-gnu-gcc.cmake | ggml: riscv: add riscv spacemit backend (#15288) | 2025-09-29 17:50:44 +03:00 |
| x64-windows-llvm.cmake | llama : build windows releases with dl backends (#13220) | 2025-05-04 14:20:49 +02:00 |
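Among the entries above, `llama-config.cmake.in` is the template for the CMake package config file installed with llama.cpp, and `llama.pc.in` is its pkg-config counterpart. A minimal sketch of a downstream `CMakeLists.txt` that consumes an installed llama.cpp via `find_package` (the package and target name `llama` are inferred from the config file name and common convention, not verified here):

```cmake
cmake_minimum_required(VERSION 3.14)
project(my_app CXX)

# Locate an installed llama.cpp through the config file generated from
# llama-config.cmake.in (package/target name "llama" assumed).
find_package(llama REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE llama)
```

Non-CMake build systems would instead query the installed `llama.pc` file, e.g. via `pkg-config --cflags --libs llama`.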