llama.cpp/ggml
Latest commit: Fix OS support for x86 simd architectures (giorgiopapini, 61b0baf45a, 2026-02-11 11:58:46 +01:00)
Name            Last commit message                                                         Last commit date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)   2025-08-07 13:45:41 +02:00
include         ggml-virtgpu: make the code thread safe (#19204)                            2026-02-04 10:46:18 +08:00
src             Fix OS support for x86 simd architectures                                   2026-02-11 11:58:46 +01:00
.gitignore      vulkan : cmake integration (#8119)                                          2024-07-13 18:12:39 +02:00
CMakeLists.txt  Bump cmake max version (needed for Windows on Snapdragon builds) (#19188)  2026-02-01 14:13:38 -08:00
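The GGML_BACKEND_DL option referenced in the cmake entry above builds ggml backends as dynamically loaded libraries rather than linking them into the main binary. A minimal configure sketch, assuming a standard llama.cpp checkout (the flag name is taken from the commit message; exact behavior depends on the revision you build):

    # Configure with backends as runtime-loadable modules instead of linked libraries
    cmake -B build -DGGML_BACKEND_DL=ON
    # Build; backend shared libraries are produced alongside the main binaries
    cmake --build build --config Release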