llama.cpp/ggml
Latest commit: 9b309bbc51 "fix amd workgroup size issue" by Ruben Ortlam, 2026-02-14 06:57:22 +01:00
Name              Last commit                                                                  Last updated
cmake/            ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)    2025-08-07 13:45:41 +02:00
include/          ggml-virtgpu: make the code thread safe (#19204)                             2026-02-04 10:46:18 +08:00
src/              fix amd workgroup size issue                                                 2026-02-14 06:57:22 +01:00
.gitignore
CMakeLists.txt    Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)   2026-02-01 14:13:38 -08:00