llama.cpp/ggml/src/ggml-cpu
Georgi Gerganov d28961d81e
llama : enable chunked fused GDN path (#20340)
* llama : enable chunked fused GDN path

* models : avoid Q and K repeats when using fused GDA

* cont : fix comment

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* cont : fix the fix

Co-authored-by: Aman Gupta <amangupta052@gmail.com>

* cont : fix

* metal : add GDN kernel (#20361)

* metal : add Metal backend for GGML_OP_GATED_DELTA_NET

Add a fused Metal kernel for the gated delta net recurrence op
(#19504), enabling GPU-accelerated inference for DeltaNet-based
models (Qwen3.5, etc.) on Apple Silicon.
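
For reference, the recurrence being fused (per the Gated DeltaNet
paper, up to convention; alpha_t is the decay gate, beta_t the write
strength) is, per head and per token:

  S_t = S_{t-1} * (alpha_t * (I - beta_t * k_t * k_t^T)) + beta_t * v_t * k_t^T
  o_t = S_t * q_t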

Supports both GDA (scalar gate) and KDA (per-row gate) modes
with head_size 64 and 128. Unsupported configurations (head_size
32, non-contiguous tensors) gracefully fall back to CPU.
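
A minimal sketch of the gating this implies (the predicate name and the
assumption that the head size sits in ne[0] of the first source are
hypothetical; ggml_is_contiguous() and GGML_MAX_SRC are real ggml API):

  #include "ggml.h"

  static bool gdn_supported(const struct ggml_tensor * op) {
      const int64_t head_size = op->src[0]->ne[0]; // assumed layout

      if (head_size != 64 && head_size != 128) {
          return false; // e.g. head_size 32 -> CPU fallback
      }

      for (int i = 0; i < GGML_MAX_SRC && op->src[i]; i++) {
          if (!ggml_is_contiguous(op->src[i])) {
              return false; // non-contiguous inputs -> CPU fallback
          }
      }

      return true;
  }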

Performance: Qwen3.5-0.8B Q4_K_M on M4 Max
  tg128: 170 -> 213 t/s (+25%)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* metal : validate contiguity of all input tensors in supports_op

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* metal : add algorithm equivalence comment for GDA decay path

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* cont : unslop + optimize

* cont : clean-up

---------

Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* CUDA: AR gated delta net improvements (#20391)

* Add FastDiv to gated_delta_net_cuda
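
Sketch of the FastDiv technique (GM94, figure 4.1; the helper names here
are illustrative and the actual ggml helper may differ): tensor
dimensions are fixed per launch, so n / d and n % d can be turned into a
precomputed multiply-high plus shift, avoiding slow per-element integer
division on the GPU.

  #include <cuda_runtime.h>

  // host side: precompute <mp, L, d> once for divisor d >= 1
  static uint3 init_fastdiv_values(uint32_t d) {
      uint32_t L = 0;
      while (L < 32 && (uint32_t(1) << L) < d) {
          L++; // L = ceil(log2(d))
      }
      const uint32_t mp = (uint32_t)
          (((uint64_t(1) << 32)*((uint64_t(1) << L) - d))/d + 1);
      return make_uint3(mp, L, d);
  }

  // device side: n / d as a multiply-high plus shift
  static __device__ uint32_t fastdiv(uint32_t n, uint3 fd) {
      const uint32_t hi = __umulhi(n, fd.x); // high 32 bits of n*mp
      return (hi + n) >> fd.y;               // assumes hi + n fits 32 bits
  }

  static __device__ uint32_t fastmodulo(uint32_t n, uint3 fd) {
      return n - fastdiv(n, fd)*fd.z;
  }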

* Shard columns across warps

This reduces register pressure (avoiding spills for S_v = 128) and gives
the warp scheduler more resident CTAs to choose from (thus hiding
data-access latencies), as sketched below.
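
A toy stand-in for the sharding (hypothetical kernel, not the actual
recurrence): each warp owns a disjoint shard of the S_v state columns,
so the per-thread register footprint shrinks by a factor of n_warps and
more CTAs fit per SM.

  constexpr int WARP_SIZE_NV = 32; // illustration; warp size is handled
                                   // flexibly in the later commits

  template <int S_v, int n_warps>
  __global__ void gdn_columns_sharded(const float * k, const float * v,
                                      float * S, int T) {
      static_assert(S_v % n_warps == 0, "columns must split evenly");
      constexpr int cols_per_warp = S_v / n_warps;

      const int warp = threadIdx.x / WARP_SIZE_NV;
      const int lane = threadIdx.x % WARP_SIZE_NV;

      // this warp only ever touches its own column shard
      for (int c = lane; c < cols_per_warp; c += WARP_SIZE_NV) {
          const int col = warp*cols_per_warp + c;
          float s = S[col]; // one register, instead of all S_v columns
          for (int t = 0; t < T; t++) {
              s += k[t]*v[t*S_v + col]; // toy update, not the GDN math
          }
          S[col] = s;
      }
  }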

* Remove unneeded include in gated_delta_net.cu

* Improve comments

* Apply code formatting

* Make sharding HIP-compatible

1. Use ggml_cuda_get_physical_warp_size() to determine the warp size flexibly
2. Add a test with a partial warp to exercise the sum reduction on CUDA (a width-flexible reduction is sketched below)
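
A width-flexible sum reduction of the kind being exercised (a sketch;
ggml's helper in common.cuh is equivalent in spirit):

  template <int width> // physical warp size: 32 on NVIDIA, 32/64 on AMD
  static __device__ float warp_reduce_sum(float x) {
  #pragma unroll
      for (int offset = width/2; offset > 0; offset >>= 1) {
          x += __shfl_xor_sync(0xffffffff, x, offset, width);
      }
      return x;
  }

With a partial warp, the inactive lanes must contribute zero to the
butterfly; that is the edge case the added test targets.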

* Remove fastdiv_s64, as we can treat neqk1 and rq3 as uint32_t

* Rename variables

* Enable GDN also for prefill, move TODO for chunked_GDN

* Actually remove the TODO from 2068908975

* Get warp size at runtime

warp_size is not known at compile time in HIP host code.
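
Host-side sketch (the wrapper name is hypothetical): query the warp size
from the device properties instead; in HIP builds, ggml's compatibility
layer maps cudaGetDeviceProperties to hipGetDeviceProperties.

  #include <cuda_runtime.h>

  static int get_warp_size_host(int device) {
      cudaDeviceProp prop;
      if (cudaGetDeviceProperties(&prop, device) != cudaSuccess) {
          return 32; // conservative fallback
      }
      return prop.warpSize; // 32 on NVIDIA; 32 or 64 on AMD
  }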

* Don't expose ggml_cuda_get_physical_warp_size on host

---------

Co-authored-by: uvos <devnull@uvos.xyz>

* llama : refactor llm_build_delta_net_base API

---------

Co-authored-by: Aman Gupta <amangupta052@gmail.com>
Co-authored-by: Paul Flynn <paul@arkavo.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: uvos <devnull@uvos.xyz>
2026-03-11 22:46:40 +02:00
amx chore : correct typos [no ci] (#20041) 2026-03-05 08:50:21 +01:00
arch ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
cmake ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
kleidiai kleidiai : support for concurrent sme and neon kernel execution (#20070) 2026-03-10 09:25:25 +02:00
llamafile ggml-cpu: Fix gcc 15 ICE on ppc64le (#20083) (#20130) 2026-03-06 23:22:39 +08:00
spacemit ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629) 2025-10-17 13:01:23 +03:00
CMakeLists.txt kleidiai : add sme fp16 compute path for q4_0 gemm on aarch64 (#20043) 2026-03-03 11:40:26 +02:00
arch-fallback.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
binary-ops.cpp ggml : extend bin bcast for permuted src1 (#19484) 2026-02-11 07:52:00 +02:00
binary-ops.h cpu: de-duplicate some of the operators and refactor (ggml/1144) 2025-03-30 08:33:31 +03:00
common.h ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ggml-cpu-impl.h ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
ggml-cpu.c ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
ggml-cpu.cpp ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
hbm.cpp ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
hbm.h ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
ops.cpp llama : enable chunked fused GDN path (#20340) 2026-03-11 22:46:40 +02:00
ops.h ggml: add GATED_DELTA_NET op (#19504) 2026-03-07 15:41:10 +08:00
quants.c ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
quants.h ggml : add NVFP4 quantization type support (#19769) 2026-03-11 21:02:54 +01:00
repack.cpp ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121) 2026-03-10 08:49:52 +02:00
repack.h ggml-cpu: add RVV repack GEMM and GEMV for quantization types (#19121) 2026-03-10 08:49:52 +02:00
simd-gemm.h ggml : avoid UB in gemm ukernel (#19642) 2026-02-15 14:56:35 +02:00
simd-mappings.h ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
traits.cpp ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
traits.h ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
unary-ops.cpp ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511) 2026-02-11 18:58:43 +02:00
unary-ops.h ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
vec.cpp ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
vec.h ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00