llama.cpp/src
ZeroV0LT f17b3be63f
llama : fix pooling assertion crash in chunked GDN detection path (#20468)
* llama : fix pooling assertion crash in chunked GDN detection path

The chunked fused Gated Delta Net (GDN) detection in sched_reserve() calls
graph_reserve(16*n_seqs, n_seqs, n_outputs, ...) with n_outputs = n_seqs.
For embedding models with mean/rank pooling this creates a dimension mismatch
in build_pooling(): build_inp_mean() creates a tensor of shape
[n_tokens=16*n_seqs, ...] while t_embd is reduced to [n_outputs=n_seqs, ...]
via out_ids, so ggml_mul_mat fails its ggml_can_mul_mat(a, b) assertion.

Fix: pass n_tokens as n_outputs in the chunked GDN graph reservation,
matching the pattern used by the prompt-processing (pp) and token-generation
(tg) worst-case reservations. A standalone sketch of the shape rule follows.
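
For reference, the shape rule behind the assertion can be illustrated without
ggml: ggml_mul_mat(a, b) requires the leading (row) dimension of both operands
to match. The plain C++ sketch below models the before/after reservation
dimensions; the concrete n_seqs/n_embd values are made up, and can_mul_mat()
is a simplification of the real ggml_can_mul_mat check, not the library code:

    // Simplified shape check: ggml_mul_mat(a, b) asserts a->ne[0] == b->ne[0].
    // n_seqs/n_embd values are illustrative, not taken from llama.cpp.
    #include <cstdint>
    #include <cstdio>

    struct shape2d { int64_t ne0, ne1; };

    // simplification of ggml_can_mul_mat: row dimensions must match
    static bool can_mul_mat(shape2d a, shape2d b) { return a.ne0 == b.ne0; }

    int main() {
        const int64_t n_embd   = 1024;
        const int64_t n_seqs   = 4;
        const int64_t n_tokens = 16*n_seqs; // chunked GDN reservation size

        // build_inp_mean() sizes its tensor from n_tokens ...
        shape2d inp_mean = {n_tokens, n_seqs};
        // ... while t_embd, transposed for the matmul, has n_outputs rows
        shape2d t_embd_t = {/*n_outputs=*/n_seqs, n_embd}; // reduced via out_ids

        // before the fix: n_outputs = n_seqs, 4 != 64, the assertion fires
        printf("n_outputs = n_seqs:   valid = %d\n", (int) can_mul_mat(t_embd_t, inp_mean));

        // after the fix: n_outputs = n_tokens, shapes agree
        t_embd_t.ne0 = n_tokens;
        printf("n_outputs = n_tokens: valid = %d\n", (int) can_mul_mat(t_embd_t, inp_mean));
        return 0;
    }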

Regression introduced by #20340 (d28961d).
Same class of bug as #12517, fixed by #12545.

* server : add mean pooling tests to embedding test suite

Add test_embedding_pooling_mean and test_embedding_pooling_mean_multiple
to cover the --pooling mean codepath, which was previously untested.

These tests would have caught the regression introduced by #20340 where
build_pooling() crashes with a ggml_mul_mat assertion due to mismatched
dimensions in the chunked GDN detection path.
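
At its core, what such a test asserts is the mean-pooling invariant itself:
the pooled vector is the per-dimension average of the token embeddings. The
actual tests are part of the Python server test suite; the plain C++ sketch
below, with made-up embedding values, just states that invariant:

    // Mean pooling invariant: pooled[d] = (1/n_tokens) * sum_t embd[t][d].
    // Embedding values are fabricated for illustration only.
    #include <cassert>
    #include <cmath>
    #include <vector>

    int main() {
        const int n_tokens = 4;
        const int n_embd   = 3;

        // fake per-token embeddings, embd[t][d]
        std::vector<std::vector<float>> embd = {
            {1, 2, 3}, {3, 2, 1}, {0, 4, 2}, {4, 0, 2},
        };

        std::vector<float> pooled(n_embd, 0.0f);
        for (int t = 0; t < n_tokens; ++t) {
            for (int d = 0; d < n_embd; ++d) {
                pooled[d] += embd[t][d]/n_tokens;
            }
        }

        // every dimension of this toy example averages to 2
        for (int d = 0; d < n_embd; ++d) {
            assert(std::fabs(pooled[d] - 2.0f) < 1e-6f);
        }
        return 0;
    }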

---------

Co-authored-by: Domenico Crupi <domenico@zerovolt.it>
2026-03-13 20:53:42 +02:00
Name | Last commit | Date
models | graph : add optional scale parameter to build_lora_mm [no ci] (#20427) | 2026-03-12 00:22:49 +01:00
CMakeLists.txt | model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826) | 2026-02-26 12:14:09 +01:00
llama-adapter.cpp | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00
llama-adapter.h | graph : fix KQ mask, lora, cvec reuse checks (#19644) | 2026-02-16 09:21:11 +02:00
llama-arch.cpp | llama : add support for Nemotron 3 Super (#20411) | 2026-03-11 19:27:53 +01:00
llama-arch.h | llama : add support for Nemotron 3 Super (#20411) | 2026-03-11 19:27:53 +01:00
llama-batch.cpp | kv-cache : fix M-RoPE checkpoints (#20132) | 2026-03-06 08:46:51 +02:00
llama-batch.h | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00
llama-chat.cpp | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00
llama-chat.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00
llama-context.cpp | llama : fix pooling assertion crash in chunked GDN detection path (#20468) | 2026-03-13 20:53:42 +02:00
llama-context.h | graph : fix KQ mask, lora, cvec reuse checks (#19644) | 2026-02-16 09:21:11 +02:00
llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00
llama-cparams.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00
llama-ext.h | test-backend-ops: allow loading tests from file and parsing model operators into file (#19896) | 2026-03-12 13:26:00 +01:00
llama-grammar.cpp | grammar: Fix grammar root symbol check (#19761) | 2026-03-12 12:04:56 +01:00
llama-grammar.h | common/grammar : replace problematic backtracking regex `[\s\S]*` (#18342) | 2026-01-03 16:02:43 -06:00
llama-graph.cpp | graph : add optional scale parameter to build_lora_mm [no ci] (#20427) | 2026-03-12 00:22:49 +01:00
llama-graph.h | graph : add optional scale parameter to build_lora_mm [no ci] (#20427) | 2026-03-12 00:22:49 +01:00
llama-hparams.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00
llama-hparams.h | llama : add support for Nemotron 3 Super (#20411) | 2026-03-11 19:27:53 +01:00
llama-impl.cpp | impl : use 6 digits for tensor dims (#20094) | 2026-03-04 09:53:38 +01:00
llama-impl.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00
llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-kv-cache-iswa.cpp | model : support Step3.5-Flash (#19283) | 2026-02-06 21:06:14 +01:00
llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00
llama-kv-cache.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00
llama-kv-cache.h | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00
llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00
llama-memory-hybrid-iswa.cpp | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00
llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00
llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00
llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00
llama-memory-recurrent.cpp | server : support multi-modal context checkpoints (#19849) | 2026-02-25 15:14:27 +02:00
llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00
llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00
llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00
llama-mmap.cpp | mmap: Fix Windows handle lifetime (#19598) | 2026-02-14 10:05:12 +02:00
llama-mmap.h | llama : add `use_direct_io` flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00
llama-model-loader.cpp | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00
llama-model-loader.h | llama: end-to-end tests (#19802) | 2026-03-08 12:30:21 +01:00
llama-model-saver.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00
llama-model-saver.h | llama: end-to-end tests (#19802) | 2026-03-08 12:30:21 +01:00
llama-model.cpp | ggml : add NVFP4 quantization type support (#19769) | 2026-03-11 21:02:54 +01:00
llama-model.h | llama : whitespace cleanup (#20422) | 2026-03-11 21:18:29 +01:00
llama-quant.cpp | llama-quant : correct `n_attention_wv` usage (#20357) | 2026-03-10 21:43:29 +02:00
llama-quant.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00
llama-sampler.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00
llama-sampler.h | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00
llama-vocab.cpp | llama: end-to-end tests (#19802) | 2026-03-08 12:30:21 +01:00
llama-vocab.h | model : add JAIS-2 architecture support (#19488) | 2026-02-19 13:30:17 +01:00
llama.cpp | llama: end-to-end tests (#19802) | 2026-03-08 12:30:21 +01:00
unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00
unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00
unicode.cpp | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00
unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00