llama.cpp/ggml
Latest commit: fc46f5ccc5 by Antoine Viallon — "chore: remove obsolete #define GGML_CPU_CLANG_WORKAROUND" (2026-01-15 13:23:10 +01:00)
Name            Last commit message                                                        Last commit date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)  2025-08-07 13:45:41 +02:00
include         ggml-webgpu: Fix GGML_MEM_ALIGN to 8 for emscripten. (#18628)             2026-01-08 08:36:42 -08:00
src             chore: remove obsolete #define GGML_CPU_CLANG_WORKAROUND                  2026-01-15 13:23:10 +01:00
.gitignore      vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.9.5 (ggml/1410)                                  2025-12-31 18:54:43 +02:00