llama.cpp/ggml/src/ggml-hexagon
Max Krasnyansky f5d1c4179f
hexagon: dma optimizations (mostly fixing regressions) (#21137)
* hex-fa: add simple dma cache for Mask

I noticed that we were refetching the mask rows over and over.
This simple cache avoids that.

* hex-dma: unset in-order desc bit which caused significant perf regression

We don't rely on true in-order processing of the DMA descriptors anywhere.
It turns out this mode caused a significant regression of around 3-4 TPS during token generation.

* hex-rope: update comment to clarify that we don't need in-order DMA completions
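
The mask-cache idea above can be sketched as a single-entry row cache: before issuing a transfer, check whether the requested row is already resident and skip the fetch if so. This is only an illustrative sketch; the names (`MaskRowCache`, `get`) and the use of a plain `memcpy` in place of a real DMA transfer are assumptions, not the actual llama.cpp/HTP implementation.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical single-entry cache for mask rows. In the real backend the
// copy would be a DMA transfer into VTCM; here memcpy stands in for it.
struct MaskRowCache {
    std::vector<uint8_t> buf;   // local copy of the most recently fetched row
    int64_t cached_row  = -1;   // row index currently held, -1 = empty
    int     dma_fetches = 0;    // counts simulated DMA transfers

    // Return a pointer to `row` of `mask` (rows of `row_bytes` bytes each),
    // issuing a (simulated) fetch only when the row is not already cached.
    const uint8_t * get(const uint8_t * mask, size_t row_bytes, int64_t row) {
        if (row != cached_row) {
            buf.resize(row_bytes);
            std::memcpy(buf.data(), mask + row * row_bytes, row_bytes); // stand-in for DMA
            cached_row = row;
            dma_fetches++;
        }
        return buf.data();      // cache hit: no transfer issued
    }
};
```

With this shape, repeated accesses to the same mask row (as happens across flash-attention tiles) cost one transfer instead of one per access.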
2026-03-29 06:40:13 -07:00
htp hexagon: dma optimizations (mostly fixing regressions) (#21137) 2026-03-29 06:40:13 -07:00
CMakeLists.txt ggml-hexagon: flash-attention and reduce-sum optimizations (#19141) 2026-01-30 21:14:20 -08:00
ggml-hexagon.cpp hexagon: support for IQ4_NL and MXFP4 (#21018) 2026-03-27 09:22:41 -07:00
htp-drv.cpp chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
htp-drv.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
libdl.h hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
libggml-htp.inf hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) 2026-01-29 12:33:21 -08:00
op-desc.h ggml-hexagon: create generalized functions for cpu side op (#17500) 2025-12-22 23:13:24 -08:00