llama.cpp/ggml
Latest commit: 6b8ed41f2b "fix cuda error" (zhang hui, 2025-12-13 17:52:50 +08:00)
Path            Last commit                                                                   Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)     2025-08-07 13:45:41 +02:00
include         ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (#17951)  2025-12-12 16:26:03 +02:00
src             fix cuda error                                                                2025-12-13 17:52:50 +08:00
.gitignore      vulkan : cmake integration (#8119)                                            2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784)         2025-12-08 10:41:34 +02:00
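
The cmake/ and CMakeLists.txt entries above reference the GGML_BACKEND_DL option, which builds the ggml backends as dynamically loadable libraries so the backend library linking step mentioned in the cmake/ commit can be skipped. A minimal configure-and-build sketch, assuming the standard llama.cpp CMake options (GGML_BACKEND_DL, GGML_CUDA) and an out-of-tree build directory named "build":

    # Configure with dynamically loadable backends; CUDA is shown only as an
    # example backend, echoing the "fix cuda error" commit on src/.
    cmake -B build -DGGML_BACKEND_DL=ON -DGGML_CUDA=ON
    # Build in Release mode.
    cmake --build build --config Release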