llama.cpp/ggml
Latest commit: cc689f9042 by Daniel Bevenius, "Merge 515bd7c9a5 into 88458164c7", 2026-04-01 07:54:55 +00:00
Name            Last commit message                                                 Date
cmake           cmake : respect GGML_LIB_INSTALL_DIR and LLAMA_LIB_INSTALL_DIR      2026-02-20 08:53:19 +01:00
include         llama: fix llama-model-saver (#20503)                               2026-03-25 12:53:16 +02:00
src             CUDA: Add Flash Attention Support for Head Dimension 512 (#20998)   2026-04-01 09:07:24 +02:00
.gitignore      vulkan : cmake integration (#8119)                                  2024-07-13 18:12:39 +02:00
CMakeLists.txt  Merge 515bd7c9a5 into 88458164c7                                    2026-04-01 07:54:55 +00:00