llama.cpp/ggml
Latest commit: 2c577269f6 by Neha Abbas — "Merge branch 'master' of https://github.com/reeselevine/llama.cpp into addition" (merging with addition), 2025-08-04 16:01:55 -05:00
Name            Last commit message                                                              Date
cmake           cmake : Fix BLAS link interface (ggml/1316)                                      2025-07-30 17:33:11 +03:00
include         ggml: Add initial WebGPU backend (#14521)                                        2025-07-16 18:18:51 +03:00
src             Merge branch 'master' of https://github.com/reeselevine/llama.cpp into addition  2025-08-04 16:01:55 -05:00
.gitignore      vulkan : cmake integration (#8119)                                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  HIP: add GGML_HIP_MMQ_MFMA option to allow disableing the MFMA path. (#14930)    2025-07-29 17:44:30 +02:00