llama.cpp/tools
Tim Burke d8c9f9c7f6 ggml: MXFP flash attention with SoA layout (CPU scalar reference)
Add MXFP KV cache quantization for flash attention, using a Struct-of-Arrays
(SoA) memory layout exclusively. Three MX types are supported: MXFP4 (E2M1),
MXFP8 (E4M3), and MXFP6 (E2M3), implementing the OCP Microscaling v1.0 spec.

SoA layout stores [qs contiguous][e8m0 contiguous] per row, enabling
aligned memory access patterns for GPU backends. All functions in the
flash attention pipeline — set_rows quantization, Q preprocessing, K/V
dequantization — use SoA end-to-end. The existing AoS block layout used for
MUL_MAT weight quantization is left untouched.
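
Conceptually, SoA row addressing reduces to pointer offsets. A minimal
sketch for MXFP8, with hypothetical helper names (the actual layout macros
live in ggml-common.h):

  #include <stdint.h>

  #define MX_BLOCK 32  // elements per shared-exponent group

  static inline uint8_t * mxfp8_soa_qs(uint8_t * row) {
      return row;                 // [qs contiguous] starts at the row base
  }
  static inline uint8_t * mxfp8_soa_scales(uint8_t * row, int64_t n) {
      return row + n;             // [e8m0 contiguous] after the n qs bytes
  }
  static inline int64_t mxfp8_soa_row_size(int64_t n) {
      return n + n / MX_BLOCK;    // 33 bytes per 32 elements
  }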

Q preprocessing applies a Walsh-Hadamard rotation (block-32) before the
quantize/dequantize round-trip, distributing outlier energy across the
shared-exponent group. This is essential for preserving perplexity:
  MXFP8: +0.22 PPL without rotation
  MXFP6: +3.34 PPL without rotation
The Hadamard step is skipped for MLA models (DK != DV), where V is a view of K.
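
The rotation itself is the standard fast Walsh-Hadamard butterfly. A scalar
sketch of the 32-point transform (only the math, not the optimized kernel):

  #include <math.h>

  // In-place 32-point Walsh-Hadamard transform. With the 1/sqrt(32) scaling
  // it is orthonormal and its own inverse, so rotating Q and the cached K by
  // the same transform preserves their dot products while spreading outlier
  // energy across the 32-element shared-exponent group.
  static void hadamard32(float x[32]) {
      for (int len = 1; len < 32; len <<= 1) {        // 5 butterfly stages
          for (int i = 0; i < 32; i += 2 * len) {
              for (int j = i; j < i + len; ++j) {
                  const float a = x[j];
                  const float b = x[j + len];
                  x[j]       = a + b;
                  x[j + len] = a - b;
              }
          }
      }
      for (int j = 0; j < 32; ++j) {
          x[j] *= 1.0f / sqrtf(32.0f);                // orthonormal scaling
      }
  }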

Shared infrastructure in ggml-common.h:
- Block structures (block_mxfp8: 33B, block_mxfp6: 25B per 32 elements)
- E8M0 MSE-optimal scale search with ±1 range
- Canonical element converters (FP8 E4M3/E5M2, FP6 E2M3/E3M2)
- FP6 tight packing (4 six-bit values in 3 bytes, 25% savings; see the sketch after this list)
- IEEE-754 bit reconstruction constants for SIMD backends
- SoA layout macros, portable bit cast, type property queries
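
A sketch of the FP6 tight packing and the block shapes above; the struct
field order and all names here are assumptions (the canonical definitions
live in ggml-common.h):

  #include <stdint.h>

  typedef struct {
      uint8_t e8m0;      // shared power-of-two scale (biased exponent)
      uint8_t qs[32];    // 32 x FP8 E4M3            -> 33 B per 32 elements
  } block_mxfp8_sketch;

  typedef struct {
      uint8_t e8m0;      // shared power-of-two scale
      uint8_t qs[24];    // 32 x FP6, tightly packed -> 25 B per 32 elements
  } block_mxfp6_sketch;

  // Pack 4 six-bit codes into 3 bytes: 24 B per 32 elements instead of 32
  // byte-aligned ones, the 25% savings noted above.
  static void fp6_pack4(const uint8_t v[4], uint8_t out[3]) {
      out[0] = (uint8_t)( v[0]       | (v[1] << 6));
      out[1] = (uint8_t)((v[1] >> 2) | (v[2] << 4));
      out[2] = (uint8_t)((v[2] >> 4) | (v[3] << 2));
  }

  static void fp6_unpack4(const uint8_t in[3], uint8_t v[4]) {
      v[0] =   in[0]                       & 0x3F;
      v[1] = ((in[0] >> 6) | (in[1] << 2)) & 0x3F;
      v[2] = ((in[1] >> 4) | (in[2] << 4)) & 0x3F;
      v[3] =   in[2] >> 2;
  }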

CPU implementation:
- Scalar reference + ARM NEON + x86 AVX2 optimized paths
- Both FA paths supported: one_chunk (scalar) and tiled (SIMD GEMM)
- Split-KV path extended for single-query decode
- Generic vec_dot via dequant-to-float for MUL_MAT compatibility (sketched after this list)
- Arch fallbacks for loongarch, powerpc, riscv, s390, wasm
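
The dequant-to-float fallback has roughly this shape (names hypothetical;
ggml routes the real version through its type traits):

  #include <stdint.h>

  // Generic vec_dot: dequantize one MXFP row into a float scratch buffer,
  // then take a plain float dot product. Correct for any type with a row
  // dequantizer, at the cost of the extra pass over memory.
  typedef void (*dequant_row_fn)(const void * qrow, float * dst, int64_t n);

  static float vec_dot_dequant_f32(int64_t n, const void * qx,
                                   const float * y, dequant_row_fn dequant,
                                   float * scratch) {
      dequant(qx, scratch, n);
      float sum = 0.0f;
      for (int64_t i = 0; i < n; ++i) {
          sum += scratch[i] * y[i];
      }
      return sum;
  }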

KV cache integration:
- set_rows writes SoA with optional Hadamard (op_params[0] flag)
- K cache block-aligned to 16 for CUDA cp.async compatibility
- CLI: --cache-type-k/v with short aliases (mxfp4, mxfp6, mxfp8)
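
A hypothetical invocation using the new aliases (model path illustrative;
the mixed K/V pairing mirrors the tested mxfp8+mxfp4 combination):

  llama-cli -m model.gguf --cache-type-k mxfp8 --cache-type-v mxfp4

Since these types are implemented for the flash attention path, flash
attention (-fa/--flash-attn) should be enabled as well.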

Tests:
- Flash attention: all 3 types at D=64/128, mixed K/V (mxfp8+mxfp4)
- SET_ROWS: Hadamard rotation for all types
- SoA-aware test initialization and comparison for MXFP tensors
- Quantize functions coverage for all types

Rename GGML_TYPE_MXFP4 → GGML_TYPE_MXFP4_E2M1 across all backends
(CPU, OpenCL, SYCL) for consistency with the MX type family naming.
2026-03-15 17:33:19 -04:00
batched-bench Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
cli common/parser: handle reasoning budget (#20297) 2026-03-11 10:26:12 +01:00
completion chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
cvector-generator chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
export-lora Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
fit-params llama-fit-params: keep explicit --ctx-size 0 (#19070) 2026-01-24 22:13:08 +01:00
gguf-split Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
imatrix chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
llama-bench ggml: MXFP flash attention with SoA layout (CPU scalar reference) 2026-03-15 17:33:19 -04:00
mtmd mtmd: add llama-mtmd-debug binary (#20508) 2026-03-14 15:52:29 +01:00
parser Autoparser - complete refactoring of parser architecture (#18675) 2026-03-06 21:01:00 +01:00
perplexity tools : enable kvu in perplexity for hellaswag, winogrande, multiple-choice (#19954) 2026-03-13 21:25:57 +01:00
quantize llama-quant : fail early on missing imatrix, refactor type selection, code cleanup (#19770) 2026-03-10 08:16:05 +02:00
results llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00
rpc Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
server webui: restore code preview iframe origin isolation (#20477) 2026-03-14 11:28:28 +01:00
tokenize Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
tts Fix locale-dependent float printing in GGUF metadata (#17331) 2026-03-04 09:30:40 +01:00
CMakeLists.txt llama: end-to-end tests (#19802) 2026-03-08 12:30:21 +01:00