llama.cpp/ggml
Progeny Alpha e22c2b2c85 vulkan: clean up chunked GDN shaders for PR review
Remove verbose algorithm comments, section dividers, stale inline
constant annotations, and unused extensions. Match llama.cpp codebase
style (minimal comments, no section decorators).

No functional changes. 16/16 tests pass.
2026-03-14 03:49:27 -04:00
cmake ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094) 2025-08-07 13:45:41 +02:00
include llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
src vulkan: clean up chunked GDN shaders for PR review 2026-03-14 03:49:27 -04:00
.gitignore vulkan : cmake integration (#8119) 2024-07-13 18:12:39 +02:00
CMakeLists.txt ggml : fix typo gmml (#20512) 2026-03-13 14:36:13 +01:00