llama.cpp/ggml/src

Latest commit: 75f3bc94e6 by Ruben Ortlam, 2026-04-13 14:21:31 +02:00
vulkan: Flash Attention DP4A shader for quantized KV cache (#20797)

* use integer dot product for quantized KV flash attention (see the C sketch below)
* small improvements
* fix SHMEM_STAGING indexing
* add missing KV type quants
* fixes
* add supported quants to FA tests
* re-add fast paths for <8-bit quants
* fix mmq gate and shmem checks
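The headline change is worth unpacking: instead of dequantizing K/V values to floats and doing float multiply-adds, the shader accumulates packed int8 products in 32-bit integers (the DP4A operation) and applies the block scales once per block at the end. Below is a minimal C sketch of that idea, assuming a q8_0-style block of 32 int8 values sharing one float scale; the struct and function names are illustrative, not the actual shader or ggml code.

```c
#include <stdint.h>

#define QK 32                 // values per quantized block (assumption)

typedef struct {
    float  d;                 // per-block scale
    int8_t qs[QK];            // quantized values
} block_q8;                   // illustrative, loosely modeled on ggml's q8_0

// Emulates one DP4A: dot product of four int8 lanes, accumulated in int32.
static inline int32_t dp4a(const int8_t *a, const int8_t *b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        acc += (int32_t) a[i] * (int32_t) b[i];
    }
    return acc;
}

// Dot product of two quantized blocks: pure integer accumulation in the
// inner loop, one float multiply by the combined scales at the end.
static float block_dot(const block_q8 *x, const block_q8 *y) {
    int32_t acc = 0;
    for (int i = 0; i < QK; i += 4) {
        acc = dp4a(&x->qs[i], &y->qs[i], acc);
    }
    return (float) acc * x->d * y->d;
}
```

On GPUs the dp4a loop maps to a single hardware instruction per four values, which is where the win over per-element float dequantization comes from; the "fast paths for <8-bit quants" bullet presumably keeps the cheaper existing route for types that unpack to fewer than 8 bits.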
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| ggml-blas | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-cann | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-cpu | ggml : fix a few instances of missing GGML_TYPE_Q1_0 cases (#21716) | 2026-04-11 09:45:00 +03:00 |
| ggml-cuda | CUDA: Limit DeviceSegmentedSort to immediate mode (#21718) | 2026-04-13 11:14:06 +02:00 |
| ggml-hexagon | hexagon: improved Op queuing, buffer and cache management (#21705) | 2026-04-10 15:47:43 -07:00 |
| ggml-hip | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-metal | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-musa | ggml-cuda: native bf16 flash attention for vec kernel (#20525) | 2026-03-22 11:05:51 +01:00 |
| ggml-opencl | opencl: add basic support for q5_k (#21593) | 2026-04-11 01:46:19 -07:00 |
| ggml-openvino | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-rpc | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-sycl | sycl: disable Q1_0 in backend and cleanup unused variables (#21807) | 2026-04-13 09:44:58 +08:00 |
| ggml-virtgpu | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-vulkan | vulkan: Flash Attention DP4A shader for quantized KV cache (#20797) | 2026-04-13 14:21:31 +02:00 |
| ggml-webgpu | Remove extra conditional check on debug mode. (#21798) | 2026-04-12 20:13:04 -07:00 |
| ggml-zdnn | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-zendnn | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| CMakeLists.txt | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-alloc.c | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-backend-dl.cpp | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-dl.h | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-impl.h | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-backend-meta.cpp | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-backend-reg.cpp | ggml : add OpenVINO backend (#15307) | 2026-03-14 07:56:55 +02:00 |
| ggml-backend.cpp | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-common.h | ggml: add Q1_0 1-bit quantization support (CPU) (#21273) | 2026-04-06 20:55:21 +02:00 |
| ggml-ext.h | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| ggml-impl.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| ggml-opt.cpp | fix: free ctx_copy in ggml_opt_free to plug per-training-session leak (#21592) | 2026-04-08 17:40:15 +02:00 |
| ggml-quants.c | ggml: add Q1_0 1-bit quantization support (CPU) (#21273) | 2026-04-06 20:55:21 +02:00 |
| ggml-quants.h | ggml: add Q1_0 1-bit quantization support (CPU) (#21273) | 2026-04-06 20:55:21 +02:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml: add Q1_0 1-bit quantization support (CPU) (#21273) | 2026-04-06 20:55:21 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
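Several rows above (ggml-common.h, ggml-quants.c, ggml-quants.h, ggml.c) trace back to the Q1_0 1-bit quantization PR (#21273). The listing does not show the actual Q1_0 block layout; the following is a generic, hypothetical sketch of how 1-bit quantization is commonly structured, with one sign bit per weight plus a shared scale.

```c
#include <stdint.h>

#define QK1 32                // weights per block (assumption)

// Hypothetical 1-bit block: 32 sign bits plus one float scale.
// This illustrates the general technique, not the actual Q1_0 layout.
typedef struct {
    float    d;               // per-block scale (mean absolute value)
    uint32_t bits;            // bit i set => weight +d, clear => weight -d
} block_b1;

// Quantize 32 floats to sign bits plus a scale.
static block_b1 quantize_b1(const float *x) {
    block_b1 b = { 0.0f, 0 };
    float sum_abs = 0.0f;
    for (int i = 0; i < QK1; ++i) {
        sum_abs += x[i] < 0.0f ? -x[i] : x[i];
        if (x[i] >= 0.0f) {
            b.bits |= 1u << i;
        }
    }
    b.d = sum_abs / QK1;
    return b;
}

// Dot product of a 1-bit block with float activations.
static float dot_b1(const block_b1 *w, const float *y) {
    float acc = 0.0f;
    for (int i = 0; i < QK1; ++i) {
        acc += (w->bits >> i & 1) ? y[i] : -y[i];
    }
    return acc * w->d;
}
```

When both operands are binary, implementations typically replace the per-bit loop with XOR plus popcount; the scalar form above is kept for readability.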