Commit Graph

7661 Commits

Author SHA1 Message Date
Georgi Gerganov 56426673cb
scripts : add pr2wt.sh (#18644)
* scripts : add pr2wt.sh

* script : shebang

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-01-07 15:16:20 +02:00
Daniel Bevenius bb77764c2d
convert : clarify sentence-transformers-dense-modules help [no ci] (#18662)
* convert : clarify sentence-transformers-dense-modules help [no ci]

This commit updates this option's help message, which currently looks
like this:
```console
  --sentence-transformers-dense-modules
                        Whether to include sentence-transformers dense modules.It can be used for sentence-transformers models, like
                        google/embeddinggemma-300mDefault these modules are not included.
```
2026-01-07 13:18:53 +01:00
Sigbjørn Skjæret 9dfa8ee950
ci : run cann build unconditionally [no ci] (#18659) 2026-01-07 13:07:08 +01:00
Jeff Bolz ca4a8370bc
vulkan: reject ops when a tensor is too large to allocate (#18646) 2026-01-07 12:03:32 +01:00
virajwad 03023296cf
vulkan: Warptile tuning for Intel Xe2/Xe3 (#18178)
* modify warptile tuning for xe3

* intel vendor check w/ coopmat support

* fix back formatting

* fix formatting change 2

* move intel check to chip specific tuning part

* Change to support both windows and linux

* modify m_warptile to l_warptile for intel

* modify warptile tuning for bf16 matmuls to fix regression (m_warptile to l_warptile)

* Code style changes

* Code style changes (2)

* Code style changes (3)
2026-01-07 11:59:47 +01:00
Eve 8c77a04cc7
vulkan: more mul mat optimizations (#18533)
* q4_k

* q5_k

* q2_k

* q4_1

* q5_1

* better buf index
2026-01-07 11:13:17 +01:00
Daniel Bevenius ffba4f29e6
examples : add debug utility/example (#18464)
* examples : add debug utility/example

This commit introduces a new example named llama-debug, a utility
intended to assist with developing/debugging a converted model.

The motivation for this utility is to assist in model conversion work
by verifying that the model produces the expected outputs. It is intended
to replace logits.cpp in examples/model-conversion.

Example usage:
```console
./build/bin/llama-debug \
    -m models/Qwen2.5-0.5B-Instruct.gguf \
    --prompt "Hello, my name is" \
    --save-logits
...
Model add_bos: false
Input prompt: "Hello, my name is"
Token ids (5):
Hello(9707) ,(11)  my(847)  name(829)  is(374)
Data saved to data/llamacpp-Qwen2.5-0.5B-Instruct.bin
Data saved to data/llamacpp-Qwen2.5-0.5B-Instruct.txt
Prompt saved to data/llamacpp-Qwen2.5-0.5B-Instruct-prompt.txt
Tokens saved to data/llamacpp-Qwen2.5-0.5B-Instruct-tokens.bin
```

For more details about the options available for this example, please
refer to examples/debug/README.md.

* throw runtime error instead of logging error

* remove params.warmup and enable the warmup/nowarmup option

* model-conversion : remove logits.cpp

This commit removes logits.cpp in favor of using llama-debug for
generating logits and embeddings.

* examples : remove model-conversion directory

This was missed in the previous commit.

* model-conversion : add support for saving prompt and token ids

This commit adds support for storing the prompt and its token ids when
running the original models.

The motivation for this is that it will allow us to compare the prompt
and the tokens generated for it when verifying the converted model.
Currently, even if the same prompt is used, the generated tokens can
differ if the tokenization of the original and converted models differs,
which would go unnoticed (the verification will most likely fail but it
might not be obvious why).
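
As a rough illustration (not the project's actual comparison code, and assuming
the token ids are available as plain integer vectors), the kind of check this
enables looks like:
```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Illustrative only: report the first position where the original and
// converted tokenizations diverge, or confirm that they match.
static bool tokens_match(const std::vector<int> & original, const std::vector<int> & converted) {
    const size_t n = std::min(original.size(), converted.size());
    for (size_t i = 0; i < n; ++i) {
        if (original[i] != converted[i]) {
            std::printf("token mismatch at %zu: %d vs %d\n", i, original[i], converted[i]);
            return false;
        }
    }
    if (original.size() != converted.size()) {
        std::printf("token count mismatch: %zu vs %zu\n", original.size(), converted.size());
        return false;
    }
    return true;
}
```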

* squash! model-conversion : add support for saving prompt and token ids

fix pyright errors.

* model-conversion : add compare_tokens utility

This commit adds a script to compare token outputs between original and
converted models.

Example usage:
```console
(venv) $ ./scripts/utils/compare_tokens.py pytorch-gemma-3-270m-it llamacpp-gemma-3-270m-it-bf16

Comparing tokens between:
  Original : pytorch-gemma-3-270m-it (6 tokens)
  Converted: llamacpp-gemma-3-270m-it-bf16 (6 tokens)

 All 6 tokens match!
```
And there is a verbose flag that will also print out the prompts:
```console
(venv) $ ./scripts/utils/compare_tokens.py pytorch-gemma-3-270m-it llamacpp-gemma-3-270m-it-bf16 -v

Original model prompt (pytorch-gemma-3-270m-it):
  prompt: Hello, my name is
n_tokens: 6
token ids: 2, 9259, 236764, 1041, 1463, 563

Converted model prompt (llamacpp-gemma-3-270m-it-bf16):
  prompt: Hello, my name is
n_tokens: 6
token ids: 2, 9259, 236764, 1041, 1463, 563

Comparing tokens between:
  Original : pytorch-gemma-3-270m-it (6 tokens)
  Converted: llamacpp-gemma-3-270m-it-bf16 (6 tokens)

 All 6 tokens match!
```

* model-conversion : add token comparison to verification scripts

This commit adds calls to the compare_tokens function in
compare-logits.py and semantic_check.py to ensure that the token ids
produced by the tokenizers are the same before proceeding with
verifying the logits/embeddings.

Placing the calls in the existing scripts instead of invoking them
separately ensures that the token comparison is always done prior to the
logit/embedding verifications.

A follow-up commit/PR could refactor the causal logits verification into
a single script instead of the two that exist now. This would reduce the
code and make it consistent with the embeddings verification, which only
has a single script.

* debug : use llama_model_n_embd_out

This commit updates the debug example to use the new function
llama_model_n_embd_out instead of llama_model_n_embd.

The motivation for this change is to support late interaction retriever
models, like LFM2-ColBert-350M, where the output embeddings are
down-projected to a lower dimension.
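
A minimal sketch of the intended usage, assuming the llama.cpp C API and that
llama_model_n_embd_out() falls back to the regular embedding size when no
separate output size is set:
```cpp
#include "llama.h"

#include <cstddef>

// Sketch: size the embedding output buffer using the (possibly
// down-projected) output dimension rather than the hidden size.
static size_t embd_buffer_size(const llama_model * model, int n_tokens) {
    const int n_embd_out = llama_model_n_embd_out(model); // output embeddings (may be down-projected)
    return sizeof(float) * (size_t) n_embd_out * (size_t) n_tokens;
}
```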

* debug : add print_usage function

This commit adds a print_usage function that is passed to the
common_params_parse.

The motivation for this is that this enables a specific usage message
which will be printed after all the options, for example:
```console
example usage:

  Print tensors:

  ./build/bin/llama-debug -m model.gguf -p "Hello my name is" --verbose

  The tensors to be printed can be filtered with --tensor-filter option.

  Save logits/embeddings:

  ./build/bin/llama-debug -m model.gguf -p "Hello my name is" --save-logits

  Add --embedding to save embeddings
```
2026-01-07 10:42:19 +01:00
hipudding 3333951d86
CANN: Fix rename for get_env (#18652)
In #18624, get_env in ggml-cann was renamed to get_env_as_lowercase
to accurately reflect the function’s behavior and reduce the chance
of misuse. However, the update missed renaming call sites in other
files. This commit fixes that oversight.
2026-01-07 16:11:31 +08:00
Raul Torres 193ee38a1b
CANN: Rename `get_env` to `get_env_as_lowercase` (#18624) 2026-01-07 10:01:25 +08:00
Max Krasnyansky 95ea9e0861
Hexagon add support for f16/f32 flash attention, scale, set-rows and improve f16/32 matmul (#18611)
* hexagon: improve fp16 matmul and add fp32/fp16 flash-attention

* hexagon: add support for set-rows fp32 -> fp16 with i32/i64 row-idx

* hexagon: add support for SCALE fp32

* hexagon: replace scalar fp32 -> fp16 copy with HVX

* hexagon: optimize flash_atten_ext with aligned VTCM buffers and DMA

- Implements double-buffered DMA prefetching for K, V, and Mask tensors.
- Ensures K and V rows in VTCM are padded to 128 bytes to support aligned HVX operations.
- Correctly synchronizes DMA transfers to prevent race conditions.
- Uses `FLASH_ATTN_BLOCK_SIZE` of 128 for efficient chunking.
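
As a rough, backend-agnostic sketch of the double-buffering pattern (with
hypothetical dma_start/dma_wait/process_block helpers; this is not the actual
Hexagon code):
```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical primitives standing in for the backend's async-copy and compute calls.
void dma_start(uint8_t * dst, const uint8_t * src, size_t n);
void dma_wait(const uint8_t * dst);
void process_block(const uint8_t * data, size_t n);

// Double-buffered prefetch: while block i is processed out of buffer `cur`,
// block i + 1 is already in flight into buffer `nxt`.
void process_blocks(const uint8_t * src, uint8_t * buf[2], size_t block_size, int n_blocks) {
    int cur = 0;
    if (n_blocks > 0) {
        dma_start(buf[cur], src, block_size);
    }
    for (int i = 0; i < n_blocks; ++i) {
        const int nxt = cur ^ 1;
        if (i + 1 < n_blocks) {
            dma_start(buf[nxt], src + (size_t) (i + 1) * block_size, block_size);
        }
        dma_wait(buf[cur]);
        process_block(buf[cur], block_size);
        cur = nxt;
    }
}
```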

* hexagon: use aligned mad_f16

* hexagon: flash_atten more aligned ops

* hexagon: optimize scale_f32 hvx helpers

* hexagon: unroll fa loops

* hexagon: remove unused set-rows log

* hexagon: flash_attn_ext add support for DMAing Q

- Update `op_flash_attn_ext` to include Q row size in scratchpad allocation.
- Pad Q row size to 128 bytes for alignment.
- Implement DMA transfer for Q tensor in `flash_attn_ext_f16_thread`.
- Update dot product computations to use VTCM-buffered Q data.

* hexagon: fix handling of NANs hvx dotproducts

* hexagon: cleanup spad allocation in flash-atten

* hexagon: improve fp16/fp32 matmul

- Introduced `vec_dot_f16_f16` and `vec_dot_f16_f16_rx2` kernels using efficient HVX dot product intrinsics.
- Added `quantize_fp32_f16` to copy/convert weights from DDR to VTCM
- Updated `op_matmul` to use the optimized path when VTCM capacity allows and broadcasting requirements are compatible.
- Implemented fallback logic to the original implementation for complex broadcasting scenarios.

* hexagon: fix HVX_ARCH check

* hexagon: matmul cleanup and fp16 fixes

Use aligned vec_dot_f16 for 2d matmuls and unaligned version for 4d.

* hexagon: fix fp16 x fp16 matmuls and some minor refactoring

* hexagon: add support for GET_ROWS f32 -> f32

Also optimize SET_ROWS threading a bit when we have just a few rows to process.

* hexagon: optimize set-rows threading

* hexagon: update adb/run-bench.sh to properly support experimental and verbose options

* hexagon: flash_atten use aligned vectors for dot products
2026-01-06 17:38:29 -08:00
Tarek Dakhran ccbc84a537
mtmd: mtmd_audio_streaming_istft (#18645)
Change is decoupled from https://github.com/ggml-org/llama.cpp/pull/18641.

[LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B)
needs streaming istft for generating output audio.

* add streaming ISTFT class (`mtmd_audio_streaming_istft`) with overlap-add for audio reconstruction
* replace the global audio cache with a per-instance cache; the model
  requires two independent caches, one for preprocessing (audio input)
  and one for istft (audio output).
* unified templated FFT/IFFT implementation supporting both forward and inverse transforms
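
The overlap-add step at the heart of a streaming ISTFT can be sketched roughly
as follows (illustrative only; windowing and normalization are omitted, and
this is not the mtmd_audio_streaming_istft implementation):
```cpp
#include <cstddef>
#include <vector>

// Illustrative overlap-add for a streaming inverse STFT: each inverse-FFT
// frame is accumulated into a running buffer at `hop`-sample offsets, and
// only the fully overlapped prefix is emitted as finished output.
struct overlap_add_state {
    std::vector<float> tail; // samples still waiting for contributions from future frames
    size_t hop;
    size_t frame;

    overlap_add_state(size_t hop, size_t frame) : tail(frame, 0.0f), hop(hop), frame(frame) {}

    // Feed one time-domain frame (already windowed, size == frame),
    // get back `hop` finished samples.
    std::vector<float> push(const std::vector<float> & x) {
        for (size_t i = 0; i < frame; ++i) {
            tail[i] += x[i];
        }
        std::vector<float> out(tail.begin(), tail.begin() + hop); // fully summed samples
        tail.erase(tail.begin(), tail.begin() + hop);             // slide the window
        tail.resize(frame, 0.0f);                                 // make room for the next frame
        return out;
    }
};
```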
2026-01-06 21:00:29 +01:00
Johannes Gäßler 68b4d516c3
llama-params-fit: fix last devices with low VRAM (#18494) 2026-01-06 20:02:30 +01:00
Aadeshveer Singh 24af22fc36
ggml : optimize cuda ssm_scan using warp-level reduction (#18505)
* ggml : optimize cuda ssm_scan using warp-level reduction

* ggml : apply code review suggestions (style, const, constexpr)

* ggml : add TODO regarding stride consistency
2026-01-07 02:24:34 +08:00
Xuan-Son Nguyen 07fbe19f1f
arg: use CSV escape style for multiple-value args (#18643)
* arg: use CSV escape style for multiple-value args

* add test
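
For reference, CSV-style escaping of a single value generally looks like the
following sketch (illustrative only, not the repository's actual argument
handling code):
```cpp
#include <string>

// Values containing the separator, quotes, or newlines are wrapped in double
// quotes, and embedded quotes are doubled ("" stands for ").
static std::string csv_escape(const std::string & value, char sep = ',') {
    const bool needs_quotes = value.find(sep)  != std::string::npos ||
                              value.find('"')  != std::string::npos ||
                              value.find('\n') != std::string::npos;
    if (!needs_quotes) {
        return value;
    }
    std::string out = "\"";
    for (char c : value) {
        if (c == '"') {
            out += '"'; // double the embedded quote
        }
        out += c;
    }
    out += '"';
    return out;
}
```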
2026-01-06 17:51:08 +01:00
Jeff Bolz ea13cba850
vulkan: support buffer_from_host_ptr (#18467)
* vulkan: support buffer_from_host_ptr

* hacky use of buffer_from_host_ptr for directio

* disable buffer_from_host_ptr cap

* use external memory for ggml_vk_host_malloc, revert model loader changes

* disable external_memory_host for MoltenVK

* take buffer memory types into account

* don't use external_memory_host for ggml_vk_host_malloc
2026-01-06 17:37:07 +01:00
Aman Gupta 090b137e56
ggml-cuda: refactor cuda graph usage (#18637)
* ggml-cuda: refactor cuda graph usage

* use is_enabled() instead of enabled
2026-01-06 23:48:45 +08:00
Beinsezii 968929528c
mmq.cu: tune mmq/rocblas switching for RDNA (#18537)
* Patch perf regression for mmq kernels in ROCm

recover performance regression for https://github.com/ggml-org/llama.cpp/issues/17917

* add n_experts branch like the cdna path

* mmq.cu: tune mmq/wmma switching for RDNA

* mmq.cu: move amd wmma mmq/wmma switching behind IS_RDNA3

* Update ggml/src/ggml-cuda/mmq.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Jiacheng (Jason) Chen <76919340+jiachengjason@users.noreply.github.com>
Co-authored-by: jiachengjason <jasonchen.jiacheng@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-01-06 16:26:07 +01:00
R 3d26a09dc7
server : add thinking content blocks to Anthropic Messages API (#18551)
* server : add thinking content blocks to Anthropic Messages API

Add support for returning reasoning/thinking content in Anthropic API
responses when using models with --reasoning-format deepseek and the
thinking parameter enabled.

- Non-streaming: adds thinking block before text in content array
- Streaming: emits thinking_delta events with correct block indices
- Partial streaming: tracks reasoning state across chunks via
  anthropic_has_reasoning member variable

Tested with bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF model.

* server : fix Anthropic API streaming for thinking content blocks

Add signature field and fix duplicate content_block_start events in
Anthropic Messages API streaming responses for reasoning models.

* server: refactor Anthropic streaming state to avoid raw pointer

Replace raw pointer to task_result_state with direct field copies:
- Copy state fields in update() before processing chunk
- Use local copies in to_json_anthropic() instead of dereferencing
- Pre-compute state updates for next chunk in update()

This makes the data flow clearer and avoids unsafe pointer patterns.
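
For orientation, the non-streaming content array roughly takes the following
shape (sketched with nlohmann::json; the field set is assumed from the
Anthropic Messages API, not copied from the server implementation):
```cpp
#include <nlohmann/json.hpp>

using json = nlohmann::ordered_json;

// Rough shape of the response content: a thinking block (with signature)
// is placed before the text block.
static json example_content() {
    return json::array({
        json {{"type", "thinking"}, {"thinking", "reasoning trace ..."}, {"signature", "..."}},
        json {{"type", "text"},     {"text", "final answer ..."}}
    });
}
```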
2026-01-06 16:17:13 +01:00
Christian Kastner bd2a93d475
gguf-py : add requests to dependencies (#18629) 2026-01-06 08:56:38 +01:00
Adrien Gallouët e75ee11024
ggml : fix avx512bf16 build (#18623)
- include `immintrin.h` when required
- remove unused m512bh

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-01-06 08:54:10 +02:00
Raul Torres da9b8d3300
CANN: Make `valid_values` variable `static const` (#18627) 2026-01-06 11:53:28 +08:00
nwyin e443fbcfa5
ggml webgpu: add CEIL operation support (#18605)
* ggml-webgpu: add CEIL operation support

Add support for the CEIL unary operation in the WebGPU backend:
- Add CEIL_FUNC shader template in unary_op.wgsl
- Add 4 shader variants (f32, f16, inplace versions)
- Initialize CEIL pipelines in ggml-webgpu.cpp
- Register CEIL in supports_op function

* docs: update WebGPU ops support for CEIL
2026-01-05 11:38:57 -08:00
Tarek Dakhran 73d284a250
model : add LFM2-ColBert-350M (#18607)
* model : add LFM2-ColBert-350M

* llama_model_n_embd_out() - returns `hparams.n_embd_out` if set and falls back to `hparams.n_embd`
2026-01-05 19:52:56 +01:00
Johannes Gäßler df17a4c94f
CUDA: fix FA FP16 accumulator overflow for Granite (#18614) 2026-01-05 19:51:13 +01:00
tt 1871f0ba56
add YoutuVLForConditionalGeneration architectures (#18620)
* Support Youtu-VL Model
---------

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
2026-01-05 18:15:14 +01:00
Aman Gupta f47edb8c19
ggml-cuda: check for srcs outside the cgraph (#18583)
* ggml-cuda: check for srcs outside the cgraph

* review: use leafs instead
2026-01-05 22:46:36 +08:00
Vladislav Sayapin da143b9940
server : fix router child env in containerized environments (#18562) 2026-01-05 14:12:05 +01:00
Jeff Bolz f1768d8f03
vulkan: fix topk_moe_sigmoid_norm_bias failures in GLM-4.6 (#18582) 2026-01-05 11:51:39 +01:00
Georgi Gerganov 2da64a2f8a
models : fix backend assignment for Granite/Nemotron graphs (#18599)
* models : fix backend assignment for Granite/Nemotron graphs

* cont : add ref

* cont : move call to build_inp_embd()
2026-01-05 12:34:23 +02:00
Jeff Bolz b37124d2d2
vulkan: handle quantize_q8_1 overflowing the max workgroup count (#18515)
* vulkan: handle quantize_q8_1 overflowing the max workgroup count

* vulkan: Fix small tile size matmul on lavapipe

* fix mul_mat_id failures
2026-01-05 11:30:14 +01:00
Sigbjørn Skjæret eadc4184ca
llama : refactor rope_freq_base/scale_swa conversion and init (#18553)
* refactor rope_freq_base/scale_swa conversion and init

* safe defaults for unknowns

* update relevant models

* grammar

* add get_rope_freq_scale to modern-bert

* const

* const

* log swa info
2026-01-05 09:14:04 +01:00
Chenguang Li 67e3f6f601
CANN: add operator fusion support for ADD + RMS_NORM (#17512)
This commit implements operator fusion for ADD + RMS_NORM operations
in the CANN backend to reduce memory access overhead and improve
performance. The fusion is controlled by the GGML_CANN_OPERATOR_FUSION
environment variable (default: false).

Changes:
- Implement ggml_cann_op_add_rms_norm_fused() using ACLNN AddRmsNorm
- Add ggml_cann_can_fuse() to check fusion eligibility
- Integrate fusion logic into computation graph evaluation
- Add test cases for ADD + RMS_NORM fusion
- Update documentation with new environment variable

The fusion combines ADD and RMS_NORM into a single kernel call,
which is more efficient than executing them separately.
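
A simplified sketch of the kind of eligibility check such a fusion needs
(illustrative only; the actual ggml_cann_can_fuse presumably also checks
shapes, types, and that the intermediate ADD result is not consumed elsewhere):
```cpp
#include "ggml.h"

// The RMS_NORM must consume the ADD result directly for the two nodes to be
// replaced by a single fused kernel call.
static bool can_fuse_add_rms_norm(const ggml_tensor * add, const ggml_tensor * rms_norm) {
    if (add == nullptr || rms_norm == nullptr) {
        return false;
    }
    if (add->op != GGML_OP_ADD || rms_norm->op != GGML_OP_RMS_NORM) {
        return false;
    }
    // the normalization must take the addition's output as its input
    return rms_norm->src[0] == add;
}
```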
2026-01-05 15:38:18 +08:00
Francisco Herrera 92ac1e016b
doc: clarify that steps also apply to linux for opencl (#18002)
* Clarify setup steps for Linux 

Added note that setup steps apply to Linux as well.

* Added note for backtick replacement

* clarify that backtick replacement only applies on linux

* clarified Linux specific steps

So actually some changes are needed for Linux but they are minor.

* clarify change execution

* clarify by placing info after steps

* clarify which steps

* Make instructions consistent across OSes

* Rm whitespace

* Update docs/backend/OPENCL.md

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* Update docs/backend/OPENCL.md

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

* Update docs/backend/OPENCL.md

Co-authored-by: Aaron Teo <taronaeo@gmail.com>

---------

Co-authored-by: Aaron Teo <taronaeo@gmail.com>
2026-01-04 20:39:25 -08:00
Ali Tariq 8e3a761189
ci : init git lfs in every build for RISC-V (#18590)
* Initialized git lfs in every test

* Added git-lfs to dependencies to install
2026-01-05 02:18:33 +01:00
Daniel Bevenius d3dce4e0a5
sampling : add support for backend sampling (#17004)
* sampling : add support for backend sampling

This commit adds support for performing sampling operations on the
backend (e.g. GPU) as part of the model computation graph.

The motivation for this feature is to enable sampling to be performed
directly on the backend as part of the computation graph being executed,
allowing for some or all of the sampling to be done on the backend.

For example, the backend sampler chain might select/sample a token
directly in which case only the sampled token needs to be transferred
from device memory to host memory.

It is also possible for the backend samplers to perform filtering of
the logits, or compute and filter the probability distribution, in
which case only the filtered logits or probabilities need to be
transferred back to system memory for further processing by CPU
samplers.

Currently, backend sampling works in a similar manner to pooling: it is
a function that is called by build_graph, and the sampler operations
become part of the model's computation graph.

* llama-cli : add backend sampler configuration

* server : add backend sampling options/configuration

* webui : add backend sampling options

* ggml : add initial cumsum implementation for CUDA

* sampling : enable all backend sampler tests

This commit enables all existing backend sampler tests in
test-backend-sampler. Previously, some tests were disabled because
some ggml operation implementations were missing.

* graph : do not include llama-model.h

* sampling : always expose sampled_ids

This commit precomputes and caches the full-vocab token id list in
llama_context's constructor, so llama_get_backend_sampled_token_ids_ith
always returns a valid pointer.

The motivation for this is that this enables both common/sampling.cpp
and src/llama-sampling.cpp to simplify their logic.

Not all backend samplers that process logits need to set the
sampled_tokens_id, as they may not change the order of the logits; for
example, the temperature sampler only scales the logits but does not
change their order. Similarly, the logit bias sampler only adds a bias
to specific token ids but does not change the order of the logits. In
these cases there will not be a device-to-host copy of the sampled
token ids, and this is the use case where having this precomputed
list is useful.

* sampling : ensure at most one output token per seq

This commit adds a check in the batch allocator to ensure that when
backend sampling is enabled, at most one output token is specified per
sequence.

* CUDA: Optimize argsort for gpu-based token sampling

Argsort is currently used for top-k. We optimize argsort in 2 ways:

1. Use `DeviceRadixSort` for single-row/sequence to parallelize it
   across our SMs
2. Use `DeviceSegmentedSort` for multi-row/sequence as this is the
   correct entrypoint (the function chooses different execution paths;
   it contains `DeviceSegmentedRadixSort` as one of them and will
   choose the best one according to heuristics).
   https://nvidia.github.io/cccl/cub/api/structcub_1_1DeviceSegmentedSort.html#overview

Some perf numbers for a RTX PRO 6000:

On the kernel level, tested with
`GGML_CUDA_DISABLE_GRAPHS=1 ./test-backend-ops -o ARGSORT perf`
Before:
```
  ARGSORT(type=f32,ne=[65000,16,1,1],order=0):                  4130 runs -   359.24 us/run
  ARGSORT(type=f32,ne=[200000,1,1,1],order=0):                  8192 runs -   861.34 us/run
  ARGSORT(type=f32,ne=[200000,16,1,1],order=0):                 1343 runs -  1020.01 us/run
```

After:
```
  ARGSORT(type=f32,ne=[65000,16,1,1],order=0):                  4130 runs -   312.41 us/run
  ARGSORT(type=f32,ne=[200000,1,1,1],order=0):                 16384 runs -    63.48 us/run
  ARGSORT(type=f32,ne=[200000,16,1,1],order=0):                 1343 runs -   874.36 us/run
```

---
On the model level, tested with
`llama-cli -m gpt-oss-20b-mxfp4.gguf -n 200 -p "What is
the Capital of Sweden?" -no-cnv -fa 1 --backend-sampling`

Before:
```
llama_perf_sampler_print:    sampling time =       0.25 ms /   207 runs   (    0.00 ms per token, 824701.20 tokens per second)
llama_perf_context_print:        load time =   18215.58 ms
llama_perf_context_print: prompt eval time =      28.20 ms /     7 tokens (    4.03 ms per token,   248.19 tokens per second)
llama_perf_context_print:        eval time =     714.79 ms /   199 runs   (    3.59 ms per token,   278.40 tokens per second)
llama_perf_context_print:       total time =     857.62 ms /   206 tokens
```

After
```
llama_perf_sampler_print:    sampling time =       0.25 ms /   207 runs   (    0.00 ms per token, 828000.00 tokens per second)
llama_perf_context_print:        load time =   18366.92 ms
llama_perf_context_print: prompt eval time =      35.92 ms /     7 tokens (    5.13 ms per token,   194.87 tokens per second)
llama_perf_context_print:        eval time =     532.79 ms /   199 runs   (    2.68 ms per token,   373.50 tokens per second)
llama_perf_context_print:       total time =     683.65 ms /   206 tokens
```

* sampling : remove version from sampler chain

This commit removes the version field from the sampler chain and instead
uses the sampler pointer itself for change detection.

* sampling : always populate logits for sampled probs

This commit updates common/sampler.cpp set_logits and
src/llama-sampling.cpp llama_sampler_sample to always populate the
logits field when backend sampled probabilities are available.

The motivation for this is that it ensures that CPU samplers always have
access to the logit values even when probabilities have been produced by
backend samplers.

* sampling : simplify backend sampling logic decode

This commit tries to simplify the backend sampling logic in
llama_context::decode.

* squash! sampling : simplify backend sampling logic decode

Fix condition to check if backend actually sampled tokens, not just that
backend samplers are available.

* common : fix regression caused by extra memory allocations during sampling

* squash! sampling : simplify backend sampling logic decode

The commit fixes a variable shadowing issue in the
`llama_context::decode` function which was introduced in a previous
refactoring.

* squash! common : fix regression caused by extra memory allocations during sampling

Apply the same changes to llama-sampling.cpp, llama_sampler_sample as
were applied in commit 38f408c25.

* sampling : introduce sampling_info struct

This commit introduces a sampling_info struct to encapsulate all
backend sampling related data within the llama_context class.

It also updates to use more descriptive names for sampled tokens and
candidates in the backend sampler ggml data structure.

* sampling : return early if backend sampling is disabled

* sampling : use pinned memory for backend sampling buffers

* common, tools : refactor model loading to support backend samplers

This commit refactors the model loading process in common/common.cpp
to enable backend samplers to be configured prior to the llama_context
creation.

The motivation for this change is that setting/resetting the backend
samplers after the llama_context has been created causes a resize to
occur in llama_context::output_reserve, which we want to avoid.

* sampling : add stride variable for clarity

* sampling: clarify candidate ids usage in comments

* sampling : fix copying both sampled tokens and logits/probs from backend

This commit fixes the issue where both sampled tokens and logits/probs
were not being copied correctly from the backend to the host when
multiple backend samplers were used.

A test for this scenario has also been added to ensure that both types
of data are copied correctly when different backend samplers are
employed.

* tests : cleanup test-backend-sampler.cpp

* common : remove build-info.cpp from commit [no ci]

This file was generated during the build process and should not be
included in previous commits.

* sampling : cleanup and clarify output_reserve

* sampling : remove redundant checks for stride and size [no ci]

* sampling : add debug log when backend sampler selects token

This commit adds a debug log statement in the llama_sampler_sample
to indicate when a backend sampler has selected a token for a given
index.

The modification helps in tracing the sampling process and understanding
the flow of control when backend samplers are used.

* examples : update batched to use backend sampling

This commit updates the batched example to demonstrate how to use
backend samplers.

* llama-cli : fix dangling reference to sampler config

* common : initialize backend samplers

* samplers : add missing cont

* sampling : add assertions for contiguous tensors in async copy functions

* examples : add info about hybrid sampling in batched [no ci]

* sampling : remove backend-dist option (wip)

This commit removes the `--backend-dist` option and instead uses the
configured --samplers chain to determine which samplers run on the
backend.

Backend sampling is still enabled with `--backend-sampling`, and
the sampler chain, either explicitly specified using `--samplers` or the
default, is automatically analyzed to determine which samplers can run
on the backend. The system finds the longest contiguous chain of
backend-supported samplers from the start of the sampler sequence.
For example:

* If the chain is `top-k -> temperature -> top-p`, and both `top-k` and
  `temperature` are backend-supported but `top-p` is not, then `top-k`
  and `temperature` will run on the backend, while `top-p` and
  subsequent samplers run on the CPU.

* If all configured samplers are supported, the final distribution
  sampling will also happen on the backend, transferring only the
  sampled token IDs back to the host.

* If the sampler chain starts with an unsupported sampler (e.g.,
  `penalties`), all sampling runs on the CPU. Note that this is
  currently the case with the default sampler chain, so to use backend
  sampling it is necessary to specify a sampler chain explicitly. See
  below for an example.

The following shows how llama-cli can be run with backend sampling:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
    --prompt 'What is the capital of Sweden?' \
    -n 20 \
    -no-cnv \
    --verbose-prompt \
    -ngl 40 \
    --backend-sampling \
    --samplers 'top_k;temperature'
```
In this case all sampling will happen on the backend, since both
`top_k` and `temperature` are supported backend samplers.

To enable partial backend sampling (hybrid sampling), for example
running `top_k` and `temperature` on the backend and `top_p` on the CPU,
the following sampler chain could be specified:
```console
$ llama-cli -m models/Qwen2.5-VL-3B-Instruct-Q8_0.gguf \
    --prompt 'What is the capital of Sweden?' \
    -n 20 \
    -no-cnv \
    --verbose-prompt \
    -ngl 40 \
    --backend-sampling \
    --samplers 'top_k;temperature;top_p'
```

If this looks good then I'll follow up with updates to the llama-cli and
llama-server documentation to reflect these changes.

* CUDA: Add top-k implementation

* sampling : add min-p backend sampler

* Use `FetchContent` over CPM as it's bundled with CMake

Thanks @ggerganov for the suggestion

* common : add get_active_samplers function to check enabled samplers

This commit adds a function to check if a sampler is actually enabled,
meaning that it does not have values that disable its effect. This is
then used by the backend sampler initialization to avoid considering
samplers that are not enabled when determining the split point between
them.

The motivation is that this allows the default sampler chain for
`--samplers` to be used: any sampler that is not enabled will not cause
the backend samplers to be skipped.
For example, before this change, if the penalties sampler was included
in the samplers list but had default values that disable it, it would
cause the backend samplers to be skipped entirely.

This commit also contains some refactoring to remove some code
duplication.
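
Conceptually, the split point described above can be sketched as follows
(hypothetical helper names; not the actual common/sampling.cpp code):
```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical predicate: whether a given sampler can run on the backend.
bool sampler_supported_on_backend(const std::string & name);

// The longest contiguous prefix of active samplers that the backend supports
// runs on the backend; the rest of the chain stays on the CPU.
static size_t backend_prefix_len(const std::vector<std::string> & active_samplers) {
    size_t n = 0;
    while (n < active_samplers.size() && sampler_supported_on_backend(active_samplers[n])) {
        ++n;
    }
    return n; // samplers [0, n) -> backend, [n, size) -> CPU
}
```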

* cuda : fix editorconfig-checker warning

* sampling : use argmax for min-p sampling

* sampling : fix temperature check to allow zero temperature

This commit modifies the temperature sampling check to allow a
temperature value of zero. Previously, the check only allowed
positive temperature values, which excluded the valid case of
zero temperature.

The motivation for this is to enable a zero temperature setting which is
also currently causing the following test to fail:
```console
(venv) $ cd tools/server/tests
(venv) $ ./tests.sh unit/test_basic.py::test_load_split_model
```

* cuda : fix top-k compilation when CUB is unavailable

This commit adds a macro guard around argsort_f32_i32_cuda_cub usage
in the top-k fallback path, falling back to bitonic sort when
GGML_CUDA_USE_CUB is not defined.

The motivation for this is that some environments like AMD HIP
do not have CUB available, causing compilation failure.

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/19728226426/job/56523606840#step:6:208

* sampling : add comments about backend sampler [no ci]

This commit adds a comment to llama_context's constructor explaining why
backend samplers are initialized early in the process.

* sampling : remove backend sampling chain from common_sampler

This commit removes the backend sampling chain from the common_sampler
structure and related functions.

The motivation for this change is that the backend samplers are not
currently set on the context, and if they were they would cause a
graph reallocation to occur. Instead, the initialization is handled,
as it currently is, by llama_context's constructor.

* Fix top-k comp & behavior for non-CUB path

Some changes were made in 5ea3be265b
which were incomplete. In the case of non-CUB, bitonic sort and its
limitations of ncols < 1024 have to apply, similar to argsort.cu

* sampling : support intermixed backend/cpu samplers

This commit updates the backend sampling implementation to support
intermixed usage of backend and CPU samplers within the same batch.

The initial implementation was developed as an all-or-nothing solution:
either perform backend sampling for the entire batch, or perform CPU
sampling for the entire batch.

The motivation for this change is to support batches with mixed
sequences. For example, we may have a backend sampler configured for
sequence 0, while sequence 1 in the same batch uses CPU sampling. This
was not supported in the initial implementation.

This issue manifested in llama-server with the webui: decoding with
backend samplers would work initially, but after changing to CPU
sampling, a slot (sequence) could still be using a backend sampler.
This meant that logits in output_reserve would not be allocated,
resulting in an error.

The solution in this commit inspects the batch to determine which
sampling modes are needed and allocates buffers accordingly. However,
there is a known inefficiency: when we have intermixed backend/CPU
samplers in the same batch, we currently copy all logits to the host,
even for sequences using backend samplers.

Added test_backend_cpu_mixed_batch to verify correct behavior with
mixed backend/CPU samplers in a single batch, including dynamic
sampler switching between decode calls.

* squash! sampling : support intermixed backend/cpu samplers

Add a check that logits is not null, which can happen for embeddings.

* squash! sampling : support intermixed backend/cpu samplers

Fix llama-save-load-state, which currently fails, by handling the case
when batch.logits is nullptr (as when loading state) and allocating
space for all outputs as CPU logits.

* refactor : simplify and improve memory management

* Add initial version for top-p sampling

As we only support static graphs for the time being and we don't know
the size of the top-p output, we have to do value-scaling, the same as
for the min-p operator.

Further improvements could be made to the unit test (i.e. checking for
equivalence of top_p on the backend with top_p on the CPU) and also by
constructing candidates and sorting those, as opposed to reversing the
sort of the logits (this would be arange + get_rows instead of
argsort + get_rows).

* sampling : use logits directly for min-p filtering

* sampling : simplify

* llama : simplify

* llama : cleanup + naming

* llama : call backend_init once

* llama : reserve graphs with samplers

* llama : naming

* cont : naming

* sampling : lower log level for output buffer reallocations [no ci]

This commit changes the logging level for output buffer reallocations
in the llama_context::output_reserve function from INFO to DEBUG.

The motivation for this is that it currently logs at the info level,
and when enabling verbose logging for llama-cli this gets mixed with
the output, for example:

```console
What is the capital of Sweden?output_reserve: reallocating output buffer from size 0.58 MiB to 1.74 MiB
 1. Stockholm
2\. Helsinki
Based are the options
1. Stockholm
Explanation: Stockholm is the capital of
...
```

* Fix backend_top_p_sampler

softmax(softmax) will return a uniform distribution, so we should not
return the softmax but the logits instead.

* Factor out `ggml_sort` into its own function

* Make backend's top_p sampler inclusive

In addition to matching the algorithm proposed in the original
[paper](https://arxiv.org/abs/1904.09751), this resolves the edge case
where `max_p` is > `top_p` for a single logit, where the mask would
otherwise be empty (and we would thus sample from the whole vocabulary
with equal likelihood).
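
A CPU-side reference of the inclusive behavior, assuming already-normalized
probabilities paired with token ids (illustrative only, not the backend
kernel):
```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Inclusive top-p: take tokens in descending order of probability and keep
// the token that crosses the top_p threshold as well, so the kept set is
// never empty.
static std::vector<int> top_p_inclusive(std::vector<std::pair<float, int>> probs, float top_p) {
    std::sort(probs.begin(), probs.end(),
              [](const auto & a, const auto & b) { return a.first > b.first; });
    std::vector<int> kept;
    float cum = 0.0f;
    for (const auto & [p, id] : probs) {
        kept.push_back(id);
        cum += p;
        if (cum >= top_p) {
            break; // include the crossing token, then stop
        }
    }
    return kept;
}
```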

* common : simplify sampler chain initialization

* sampling : do not create empty samplers

* sampling : fix top_p empty condition

* examples : remove outdated backend sampling section

This commit removes the outdated section about using backend samplers
from the README.md file in the examples/batched.

* sampling : fix backend temp sampler for zero temperature

This commit fixes the implementation of the temperature-based sampler
for the case when the temperature is set to zero. This now correctly
selects the most probable token by masking out all other tokens in the
logits.

* CUDA: Move cccl fetch to after cuda has been enabled in CMakeLists.txt

This will allow cccl to set build flags for the CUDA compiler, required
e.g. for MSVC compat, see also
https://github.com/NVIDIA/cccl/pull/6791

* CUDA: Use standard-compliant preprocessor for MSVC builds

Workarounds of https://github.com/NVIDIA/cccl/pull/6791 will not be
backported to CCCL 3.2, only the diagnostics/error messages will:
https://github.com/NVIDIA/cccl/pull/6827

* CUDA: Update CCCL's rc candidate

* squash! sampling : fix backend temp sampler for zero temperature

This modifies the parent commit to simply return the most probable token
instead of masking the logits.
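
For reference, returning the most probable token at zero temperature is just an
argmax over the logits (illustrative CPU sketch, not the backend graph code):
```cpp
#include <cstddef>
#include <vector>

// At temperature == 0 the sampler degenerates to greedy selection:
// pick the index of the largest logit.
static int greedy_token(const std::vector<float> & logits) {
    if (logits.empty()) {
        return -1;
    }
    size_t best = 0;
    for (size_t i = 1; i < logits.size(); ++i) {
        if (logits[i] > logits[best]) {
            best = i;
        }
    }
    return (int) best;
}
```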

* sampling : implement temp_ext_backend sampling

This commit implements the apply function for the extended temperature
sampling.

* sampling : minor cleanup

* sampling : stop short if backend sampler sampled a token

This commit modifies the graph building logic to immediately continue
when a token has already been sampled by the backend sampler.

It also updates the test for backend temperature sampling to include
top-k and distribution samplers in the chain to verify that they do not
produce any logits (they are not run).

* Revert "sampling : stop short if backend sampler sampled a token"

This reverts commit 87b2719eca.

* sampling : fix backend temp sampling to use logits masking

* sampling : simplify temp sampling

* sampling : remove redundant calls to ggml_build_forward_expand

* sampling : check backend support during init

* cont : keep backend sampling disabled for now

* sampling : fix outputs and device checks

* sampling : fix candidates logic

* Add perf-tests for CUMSUM

* Readd `cub::DeviceScan::InclusiveSum`-based CumSum

For single rows and large columns, doing a for-loop over the function
`cub::DeviceScan::InclusiveSum` offered by CUB outperforms the
`cumsum_cub_kernel`, where `cub::BlockScan` is used.

Numbers before this change

  Backend 1/3: CUDA0
  Device description: NVIDIA RTX 6000 Ada Generation
  Device memory: 48510 MB (48039 MB free)

  CUMSUM(type=f32,ne=[128,128,4,4]):                  311258 runs -     3.26 us/run -     2048 kB/run -  599.76 GB/s
  CUMSUM(type=f32,ne=[2048,16,5,4]):                  229390 runs -     4.40 us/run -     5120 kB/run - 1110.23 GB/s
  CUMSUM(type=f32,ne=[20000,10,4,1]):                  37583 runs -    29.63 us/run -     6250 kB/run -  201.18 GB/s
  CUMSUM(type=f32,ne=[128,1,1,1]):                    892819 runs -     1.12 us/run -        1 kB/run -    0.85 GB/s
  CUMSUM(type=f32,ne=[1024,1,1,1]):                   450505 runs -     2.25 us/run -        8 kB/run -    3.39 GB/s
  CUMSUM(type=f32,ne=[4096,1,1,1]):                   155629 runs -     6.61 us/run -       32 kB/run -    4.62 GB/s
  CUMSUM(type=f32,ne=[8192,1,1,1]):                    81910 runs -    12.60 us/run -       64 kB/run -    4.85 GB/s
  CUMSUM(type=f32,ne=[16384,1,1,1]):                   49146 runs -    23.99 us/run -      128 kB/run -    5.09 GB/s
  CUMSUM(type=f32,ne=[32768,1,1,1]):                   24573 runs -    47.10 us/run -      256 kB/run -    5.18 GB/s
  CUMSUM(type=f32,ne=[65536,1,1,1]):                   16382 runs -    93.57 us/run -      512 kB/run -    5.22 GB/s
  CUMSUM(type=f32,ne=[131072,1,1,1]):                   8191 runs -   184.79 us/run -     1024 kB/run -    5.29 GB/s
  CUMSUM(type=f32,ne=[200000,1,1,1]):                   8191 runs -   280.43 us/run -     1562 kB/run -    5.31 GB/s
  CUMSUM(type=f32,ne=[2000000,1,1,1]):                  2148 runs -  2771.23 us/run -    15625 kB/run -    5.38 GB/s
  CUMSUM(type=f32,ne=[128,4,1,1]):                    458696 runs -     2.21 us/run -        4 kB/run -    1.73 GB/s
  CUMSUM(type=f32,ne=[1024,4,1,1]):                   360404 runs -     2.82 us/run -       32 kB/run -   10.83 GB/s
  CUMSUM(type=f32,ne=[4096,4,1,1]):                   147438 runs -     7.12 us/run -      128 kB/run -   17.15 GB/s
  CUMSUM(type=f32,ne=[8192,4,1,1]):                    81910 runs -    12.90 us/run -      256 kB/run -   18.92 GB/s
  CUMSUM(type=f32,ne=[16384,4,1,1]):                   49146 runs -    24.32 us/run -      512 kB/run -   20.08 GB/s
  CUMSUM(type=f32,ne=[32768,4,1,1]):                   24573 runs -    47.28 us/run -     1024 kB/run -   20.66 GB/s
  CUMSUM(type=f32,ne=[65536,4,1,1]):                   16382 runs -    93.21 us/run -     2048 kB/run -   20.96 GB/s
  CUMSUM(type=f32,ne=[131072,4,1,1]):                   8191 runs -   185.04 us/run -     4096 kB/run -   21.11 GB/s
  CUMSUM(type=f32,ne=[200000,4,1,1]):                   5369 runs -   282.08 us/run -     6250 kB/run -   21.13 GB/s
  CUMSUM(type=f32,ne=[2000000,4,1,1]):                   537 runs -  2806.46 us/run -    62500 kB/run -   21.26 GB/s
  CUMSUM(type=f32,ne=[128,8,1,1]):                    458696 runs -     2.20 us/run -        8 kB/run -    3.47 GB/s
  CUMSUM(type=f32,ne=[1024,8,1,1]):                   360404 runs -     2.82 us/run -       64 kB/run -   21.66 GB/s
  CUMSUM(type=f32,ne=[4096,8,1,1]):                   147438 runs -     7.12 us/run -      256 kB/run -   34.28 GB/s
  CUMSUM(type=f32,ne=[8192,8,1,1]):                    81910 runs -    12.90 us/run -      512 kB/run -   37.84 GB/s
  CUMSUM(type=f32,ne=[16384,8,1,1]):                   49146 runs -    24.32 us/run -     1024 kB/run -   40.15 GB/s
  CUMSUM(type=f32,ne=[32768,8,1,1]):                   24573 runs -    47.28 us/run -     2048 kB/run -   41.31 GB/s
  CUMSUM(type=f32,ne=[65536,8,1,1]):                   16382 runs -    93.20 us/run -     4096 kB/run -   41.92 GB/s
  CUMSUM(type=f32,ne=[131072,8,1,1]):                   8194 runs -   185.05 us/run -     8192 kB/run -   42.22 GB/s
  CUMSUM(type=f32,ne=[200000,8,1,1]):                   5370 runs -   282.15 us/run -    12500 kB/run -   42.26 GB/s
  CUMSUM(type=f32,ne=[2000000,8,1,1]):                   269 runs -  4067.61 us/run -   125000 kB/run -   29.36 GB/s
  CUMSUM(type=f32,ne=[128,16,1,1]):                   303067 runs -     3.32 us/run -       16 kB/run -    4.60 GB/s
  CUMSUM(type=f32,ne=[1024,16,1,1]):                  303067 runs -     3.32 us/run -      128 kB/run -   36.76 GB/s
  CUMSUM(type=f32,ne=[4096,16,1,1]):                  147438 runs -     7.17 us/run -      512 kB/run -   68.13 GB/s
  CUMSUM(type=f32,ne=[8192,16,1,1]):                   81910 runs -    12.90 us/run -     1024 kB/run -   75.68 GB/s
  CUMSUM(type=f32,ne=[16384,16,1,1]):                  49146 runs -    24.33 us/run -     2048 kB/run -   80.28 GB/s
  CUMSUM(type=f32,ne=[32768,16,1,1]):                  24573 runs -    47.30 us/run -     4096 kB/run -   82.59 GB/s
  CUMSUM(type=f32,ne=[65536,16,1,1]):                  12291 runs -    93.24 us/run -     8192 kB/run -   83.80 GB/s
  CUMSUM(type=f32,ne=[131072,16,1,1]):                  6147 runs -   185.07 us/run -    16384 kB/run -   84.45 GB/s
  CUMSUM(type=f32,ne=[200000,16,1,1]):                  4029 runs -   282.40 us/run -    25000 kB/run -   84.46 GB/s
  CUMSUM(type=f32,ne=[2000000,16,1,1]):                  270 runs -  4118.40 us/run -   250000 kB/run -   58.11 GB/s
  Backend CUDA0: OK
Backend 2/3: CUDA1
  Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
  Device memory: 97250 MB (96677 MB free)

  CUMSUM(type=f32,ne=[128,128,4,4]):                  368595 runs -     2.73 us/run -     2048 kB/run -  715.83 GB/s
  CUMSUM(type=f32,ne=[2048,16,5,4]):                  216282 runs -     4.72 us/run -     5120 kB/run - 1035.32 GB/s
  CUMSUM(type=f32,ne=[20000,10,4,1]):                  32214 runs -    34.33 us/run -     6250 kB/run -  173.64 GB/s
  CUMSUM(type=f32,ne=[128,1,1,1]):                    810909 runs -     1.24 us/run -        1 kB/run -    0.77 GB/s
  CUMSUM(type=f32,ne=[1024,1,1,1]):                   401359 runs -     2.52 us/run -        8 kB/run -    3.03 GB/s
  CUMSUM(type=f32,ne=[4096,1,1,1]):                   139247 runs -     7.44 us/run -       32 kB/run -    4.10 GB/s
  CUMSUM(type=f32,ne=[8192,1,1,1]):                    73719 runs -    14.27 us/run -       64 kB/run -    4.28 GB/s
  CUMSUM(type=f32,ne=[16384,1,1,1]):                   40955 runs -    27.24 us/run -      128 kB/run -    4.48 GB/s
  CUMSUM(type=f32,ne=[32768,1,1,1]):                   24573 runs -    53.46 us/run -      256 kB/run -    4.57 GB/s
  CUMSUM(type=f32,ne=[65536,1,1,1]):                   16382 runs -   105.29 us/run -      512 kB/run -    4.64 GB/s
  CUMSUM(type=f32,ne=[131072,1,1,1]):                   8191 runs -   210.15 us/run -     1024 kB/run -    4.65 GB/s
  CUMSUM(type=f32,ne=[200000,1,1,1]):                   8191 runs -   318.22 us/run -     1562 kB/run -    4.68 GB/s
  CUMSUM(type=f32,ne=[2000000,1,1,1]):                  2148 runs -  3142.23 us/run -    15625 kB/run -    4.74 GB/s
  CUMSUM(type=f32,ne=[128,4,1,1]):                    303067 runs -     3.34 us/run -        4 kB/run -    1.14 GB/s
  CUMSUM(type=f32,ne=[1024,4,1,1]):                   253921 runs -     4.03 us/run -       32 kB/run -    7.58 GB/s
  CUMSUM(type=f32,ne=[4096,4,1,1]):                   122865 runs -     8.20 us/run -      128 kB/run -   14.89 GB/s
  CUMSUM(type=f32,ne=[8192,4,1,1]):                    73719 runs -    14.96 us/run -      256 kB/run -   16.32 GB/s
  CUMSUM(type=f32,ne=[16384,4,1,1]):                   40955 runs -    28.66 us/run -      512 kB/run -   17.04 GB/s
  CUMSUM(type=f32,ne=[32768,4,1,1]):                   24573 runs -    54.21 us/run -     1024 kB/run -   18.01 GB/s
  CUMSUM(type=f32,ne=[65536,4,1,1]):                   16382 runs -   106.49 us/run -     2048 kB/run -   18.34 GB/s
  CUMSUM(type=f32,ne=[131072,4,1,1]):                   8191 runs -   210.88 us/run -     4096 kB/run -   18.52 GB/s
  CUMSUM(type=f32,ne=[200000,4,1,1]):                   5369 runs -   321.77 us/run -     6250 kB/run -   18.53 GB/s
  CUMSUM(type=f32,ne=[2000000,4,1,1]):                   537 runs -  3191.79 us/run -    62500 kB/run -   18.69 GB/s
  CUMSUM(type=f32,ne=[128,8,1,1]):                    376786 runs -     2.67 us/run -        8 kB/run -    2.86 GB/s
  CUMSUM(type=f32,ne=[1024,8,1,1]):                   245730 runs -     4.10 us/run -       64 kB/run -   14.90 GB/s
  CUMSUM(type=f32,ne=[4096,8,1,1]):                   122865 runs -     8.20 us/run -      256 kB/run -   29.79 GB/s
  CUMSUM(type=f32,ne=[8192,8,1,1]):                    65528 runs -    16.38 us/run -      512 kB/run -   29.82 GB/s
  CUMSUM(type=f32,ne=[16384,8,1,1]):                   40955 runs -    28.69 us/run -     1024 kB/run -   34.04 GB/s
  CUMSUM(type=f32,ne=[32768,8,1,1]):                   24573 runs -    55.28 us/run -     2048 kB/run -   35.33 GB/s
  CUMSUM(type=f32,ne=[65536,8,1,1]):                   16382 runs -   108.50 us/run -     4096 kB/run -   36.00 GB/s
  CUMSUM(type=f32,ne=[131072,8,1,1]):                   8194 runs -   213.75 us/run -     8192 kB/run -   36.55 GB/s
  CUMSUM(type=f32,ne=[200000,8,1,1]):                   5370 runs -   326.31 us/run -    12500 kB/run -   36.54 GB/s
  CUMSUM(type=f32,ne=[2000000,8,1,1]):                   538 runs -  3252.68 us/run -   125000 kB/run -   36.72 GB/s
  CUMSUM(type=f32,ne=[128,16,1,1]):                   303067 runs -     3.32 us/run -       16 kB/run -    4.60 GB/s
  CUMSUM(type=f32,ne=[1024,16,1,1]):                  253921 runs -     4.06 us/run -      128 kB/run -   30.09 GB/s
  CUMSUM(type=f32,ne=[4096,16,1,1]):                  122865 runs -     8.20 us/run -      512 kB/run -   59.57 GB/s
  CUMSUM(type=f32,ne=[8192,16,1,1]):                   65528 runs -    16.38 us/run -     1024 kB/run -   59.63 GB/s
  CUMSUM(type=f32,ne=[16384,16,1,1]):                  40955 runs -    28.69 us/run -     2048 kB/run -   68.09 GB/s
  CUMSUM(type=f32,ne=[32768,16,1,1]):                  24573 runs -    55.28 us/run -     4096 kB/run -   70.67 GB/s
  CUMSUM(type=f32,ne=[65536,16,1,1]):                  12291 runs -   108.50 us/run -     8192 kB/run -   72.02 GB/s
  CUMSUM(type=f32,ne=[131072,16,1,1]):                  6147 runs -   213.60 us/run -    16384 kB/run -   73.17 GB/s
  CUMSUM(type=f32,ne=[200000,16,1,1]):                  4029 runs -   326.04 us/run -    25000 kB/run -   73.15 GB/s
  CUMSUM(type=f32,ne=[2000000,16,1,1]):                  270 runs -  5458.69 us/run -   250000 kB/run -   43.84 GB/s

----
Numbers after:

Backend 1/3: CUDA0
  Device description: NVIDIA RTX 6000 Ada Generation
  Device memory: 48510 MB (48039 MB free)

  CUMSUM(type=f32,ne=[128,128,4,4]):                  311258 runs -     3.25 us/run -     2048 kB/run -  601.62 GB/s
  CUMSUM(type=f32,ne=[2048,16,5,4]):                  229390 runs -     4.40 us/run -     5120 kB/run - 1110.14 GB/s
  CUMSUM(type=f32,ne=[20000,10,4,1]):                  37583 runs -    29.67 us/run -     6250 kB/run -  200.89 GB/s
  CUMSUM(type=f32,ne=[128,1,1,1]):                    892819 runs -     1.12 us/run -        1 kB/run -    0.85 GB/s
  CUMSUM(type=f32,ne=[1024,1,1,1]):                   458696 runs -     2.21 us/run -        8 kB/run -    3.45 GB/s
  CUMSUM(type=f32,ne=[4096,1,1,1]):                   376786 runs -     2.66 us/run -       32 kB/run -   11.46 GB/s
  CUMSUM(type=f32,ne=[8192,1,1,1]):                   393168 runs -     2.59 us/run -       64 kB/run -   23.57 GB/s
  CUMSUM(type=f32,ne=[16384,1,1,1]):                  393168 runs -     2.59 us/run -      128 kB/run -   47.15 GB/s
  CUMSUM(type=f32,ne=[32768,1,1,1]):                  376786 runs -     2.69 us/run -      256 kB/run -   90.69 GB/s
  CUMSUM(type=f32,ne=[65536,1,1,1]):                  327640 runs -     3.06 us/run -      512 kB/run -  159.65 GB/s
  CUMSUM(type=f32,ne=[131072,1,1,1]):                 311258 runs -     3.28 us/run -     1024 kB/run -  297.77 GB/s
  CUMSUM(type=f32,ne=[200000,1,1,1]):                 270303 runs -     3.74 us/run -     1562 kB/run -  398.14 GB/s
  CUMSUM(type=f32,ne=[2000000,1,1,1]):                137472 runs -     7.35 us/run -    15625 kB/run - 2026.94 GB/s
  CUMSUM(type=f32,ne=[128,4,1,1]):                    876437 runs -     1.14 us/run -        4 kB/run -    3.33 GB/s
  CUMSUM(type=f32,ne=[1024,4,1,1]):                   442314 runs -     2.28 us/run -       32 kB/run -   13.39 GB/s
  CUMSUM(type=f32,ne=[4096,4,1,1]):                   155629 runs -     6.69 us/run -      128 kB/run -   18.24 GB/s
  CUMSUM(type=f32,ne=[8192,4,1,1]):                    81910 runs -    12.53 us/run -      256 kB/run -   19.49 GB/s
  CUMSUM(type=f32,ne=[16384,4,1,1]):                   49146 runs -    24.18 us/run -      512 kB/run -   20.20 GB/s
  CUMSUM(type=f32,ne=[32768,4,1,1]):                   65528 runs -    15.34 us/run -     1024 kB/run -   63.66 GB/s
  CUMSUM(type=f32,ne=[65536,4,1,1]):                   73719 runs -    14.76 us/run -     2048 kB/run -  132.35 GB/s
  CUMSUM(type=f32,ne=[131072,4,1,1]):                  65528 runs -    16.01 us/run -     4096 kB/run -  244.07 GB/s
  CUMSUM(type=f32,ne=[200000,4,1,1]):                  64428 runs -    16.51 us/run -     6250 kB/run -  360.97 GB/s
  CUMSUM(type=f32,ne=[2000000,4,1,1]):                 33831 runs -    29.59 us/run -    62500 kB/run - 2016.08 GB/s
  CUMSUM(type=f32,ne=[128,8,1,1]):                    868246 runs -     1.16 us/run -        8 kB/run -    6.59 GB/s
  CUMSUM(type=f32,ne=[1024,8,1,1]):                   442314 runs -     2.28 us/run -       64 kB/run -   26.76 GB/s
  CUMSUM(type=f32,ne=[4096,8,1,1]):                   155629 runs -     6.69 us/run -      256 kB/run -   36.48 GB/s
  CUMSUM(type=f32,ne=[8192,8,1,1]):                    81910 runs -    12.53 us/run -      512 kB/run -   38.97 GB/s
  CUMSUM(type=f32,ne=[16384,8,1,1]):                   49146 runs -    24.17 us/run -     1024 kB/run -   40.41 GB/s
  CUMSUM(type=f32,ne=[32768,8,1,1]):                   24573 runs -    47.53 us/run -     2048 kB/run -   41.10 GB/s
  CUMSUM(type=f32,ne=[65536,8,1,1]):                   16382 runs -    61.25 us/run -     4096 kB/run -   63.77 GB/s
  CUMSUM(type=f32,ne=[131072,8,1,1]):                  32776 runs -    31.79 us/run -     8192 kB/run -  245.82 GB/s
  CUMSUM(type=f32,ne=[200000,8,1,1]):                  32220 runs -    32.90 us/run -    12500 kB/run -  362.35 GB/s
  CUMSUM(type=f32,ne=[2000000,8,1,1]):                  6725 runs -   151.99 us/run -   125000 kB/run -  785.77 GB/s
  CUMSUM(type=f32,ne=[128,16,1,1]):                   851864 runs -     1.18 us/run -       16 kB/run -   12.97 GB/s
  CUMSUM(type=f32,ne=[1024,16,1,1]):                  442314 runs -     2.30 us/run -      128 kB/run -   53.13 GB/s
  CUMSUM(type=f32,ne=[4096,16,1,1]):                  155629 runs -     6.68 us/run -      512 kB/run -   73.13 GB/s
  CUMSUM(type=f32,ne=[8192,16,1,1]):                   81910 runs -    12.68 us/run -     1024 kB/run -   77.00 GB/s
  CUMSUM(type=f32,ne=[16384,16,1,1]):                  40955 runs -    24.56 us/run -     2048 kB/run -   79.53 GB/s
  CUMSUM(type=f32,ne=[32768,16,1,1]):                  24573 runs -    47.52 us/run -     4096 kB/run -   82.21 GB/s
  CUMSUM(type=f32,ne=[65536,16,1,1]):                  12291 runs -    93.44 us/run -     8192 kB/run -   83.62 GB/s
  CUMSUM(type=f32,ne=[131072,16,1,1]):                 16392 runs -    63.36 us/run -    16384 kB/run -  246.68 GB/s
  CUMSUM(type=f32,ne=[200000,16,1,1]):                 16116 runs -    65.25 us/run -    25000 kB/run -  365.53 GB/s
  CUMSUM(type=f32,ne=[2000000,16,1,1]):                 3375 runs -   304.46 us/run -   250000 kB/run -  785.98 GB/s
  Backend CUDA0: OK
Backend 2/3: CUDA1
  Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
  Device memory: 97250 MB (96677 MB free)

  CUMSUM(type=f32,ne=[128,128,4,4]):                  376786 runs -     2.69 us/run -     2048 kB/run -  727.04 GB/s
  CUMSUM(type=f32,ne=[2048,16,5,4]):                  216282 runs -     4.64 us/run -     5120 kB/run - 1053.30 GB/s
  CUMSUM(type=f32,ne=[20000,10,4,1]):                  32214 runs -    34.21 us/run -     6250 kB/run -  174.27 GB/s
  CUMSUM(type=f32,ne=[128,1,1,1]):                    819100 runs -     1.22 us/run -        1 kB/run -    0.78 GB/s
  CUMSUM(type=f32,ne=[1024,1,1,1]):                   409550 runs -     2.47 us/run -        8 kB/run -    3.09 GB/s
  CUMSUM(type=f32,ne=[4096,1,1,1]):                   303067 runs -     3.31 us/run -       32 kB/run -    9.21 GB/s
  CUMSUM(type=f32,ne=[8192,1,1,1]):                   237539 runs -     4.33 us/run -       64 kB/run -   14.08 GB/s
  CUMSUM(type=f32,ne=[16384,1,1,1]):                  237539 runs -     4.33 us/run -      128 kB/run -   28.17 GB/s
  CUMSUM(type=f32,ne=[32768,1,1,1]):                  188393 runs -     5.37 us/run -      256 kB/run -   45.47 GB/s
  CUMSUM(type=f32,ne=[65536,1,1,1]):                  188393 runs -     5.41 us/run -      512 kB/run -   90.20 GB/s
  CUMSUM(type=f32,ne=[131072,1,1,1]):                 188393 runs -     5.41 us/run -     1024 kB/run -  180.41 GB/s
  CUMSUM(type=f32,ne=[200000,1,1,1]):                 188393 runs -     5.41 us/run -     1562 kB/run -  275.27 GB/s
  CUMSUM(type=f32,ne=[2000000,1,1,1]):                128880 runs -     7.76 us/run -    15625 kB/run - 1920.33 GB/s
  CUMSUM(type=f32,ne=[128,4,1,1]):                    802718 runs -     1.26 us/run -        4 kB/run -    3.03 GB/s
  CUMSUM(type=f32,ne=[1024,4,1,1]):                   401359 runs -     2.51 us/run -       32 kB/run -   12.18 GB/s
  CUMSUM(type=f32,ne=[4096,4,1,1]):                   139247 runs -     7.51 us/run -      128 kB/run -   16.26 GB/s
  CUMSUM(type=f32,ne=[8192,4,1,1]):                    73719 runs -    14.17 us/run -      256 kB/run -   17.23 GB/s
  CUMSUM(type=f32,ne=[16384,4,1,1]):                   40955 runs -    27.37 us/run -      512 kB/run -   17.84 GB/s
  CUMSUM(type=f32,ne=[32768,4,1,1]):                   40955 runs -    26.33 us/run -     1024 kB/run -   37.10 GB/s
  CUMSUM(type=f32,ne=[65536,4,1,1]):                   40955 runs -    26.19 us/run -     2048 kB/run -   74.59 GB/s
  CUMSUM(type=f32,ne=[131072,4,1,1]):                  40955 runs -    26.35 us/run -     4096 kB/run -  148.26 GB/s
  CUMSUM(type=f32,ne=[200000,4,1,1]):                  42952 runs -    24.18 us/run -     6250 kB/run -  246.51 GB/s
  CUMSUM(type=f32,ne=[2000000,4,1,1]):                 32757 runs -    31.01 us/run -    62500 kB/run - 1923.68 GB/s
  CUMSUM(type=f32,ne=[128,8,1,1]):                    786336 runs -     1.28 us/run -        8 kB/run -    5.95 GB/s
  CUMSUM(type=f32,ne=[1024,8,1,1]):                   393168 runs -     2.57 us/run -       64 kB/run -   23.73 GB/s
  CUMSUM(type=f32,ne=[4096,8,1,1]):                   131056 runs -     7.67 us/run -      256 kB/run -   31.82 GB/s
  CUMSUM(type=f32,ne=[8192,8,1,1]):                    73719 runs -    14.43 us/run -      512 kB/run -   33.84 GB/s
  CUMSUM(type=f32,ne=[16384,8,1,1]):                   40955 runs -    27.90 us/run -     1024 kB/run -   35.01 GB/s
  CUMSUM(type=f32,ne=[32768,8,1,1]):                   24573 runs -    54.63 us/run -     2048 kB/run -   35.75 GB/s
  CUMSUM(type=f32,ne=[65536,8,1,1]):                   16382 runs -    72.24 us/run -     4096 kB/run -   54.08 GB/s
  CUMSUM(type=f32,ne=[131072,8,1,1]):                  20485 runs -    52.66 us/run -     8192 kB/run -  148.37 GB/s
  CUMSUM(type=f32,ne=[200000,8,1,1]):                  21480 runs -    48.00 us/run -    12500 kB/run -  248.42 GB/s
  CUMSUM(type=f32,ne=[2000000,8,1,1]):                 16140 runs -    61.99 us/run -   125000 kB/run - 1926.51 GB/s
  CUMSUM(type=f32,ne=[128,16,1,1]):                   786336 runs -     1.28 us/run -       16 kB/run -   11.90 GB/s
  CUMSUM(type=f32,ne=[1024,16,1,1]):                  393168 runs -     2.57 us/run -      128 kB/run -   47.57 GB/s
  CUMSUM(type=f32,ne=[4096,16,1,1]):                  131056 runs -     7.65 us/run -      512 kB/run -   63.83 GB/s
  CUMSUM(type=f32,ne=[8192,16,1,1]):                   73719 runs -    14.42 us/run -     1024 kB/run -   67.74 GB/s
  CUMSUM(type=f32,ne=[16384,16,1,1]):                  40955 runs -    27.87 us/run -     2048 kB/run -   70.09 GB/s
  CUMSUM(type=f32,ne=[32768,16,1,1]):                  24573 runs -    54.54 us/run -     4096 kB/run -   71.63 GB/s
  CUMSUM(type=f32,ne=[65536,16,1,1]):                  12291 runs -   107.53 us/run -     8192 kB/run -   72.66 GB/s
  CUMSUM(type=f32,ne=[131072,16,1,1]):                 10245 runs -   105.10 us/run -    16384 kB/run -  148.70 GB/s
  CUMSUM(type=f32,ne=[200000,16,1,1]):                 10744 runs -    95.36 us/run -    25000 kB/run -  250.11 GB/s
  CUMSUM(type=f32,ne=[2000000,16,1,1]):                 5400 runs -   186.97 us/run -   250000 kB/run - 1279.90 GB/s

* sampling : expand support (wip)

* tests : fix memory leaks

* cont : fixes

* tests : check temp back to 0.0

* sampling : fix top-p

* sampling : handle n_probs case

* server : handle unsupported cases

* metal : print node names for debugging

* ggml : remove redundant src in ggml_cast

* ggml-alloc : fix reuse-parent logic for misaligned sizes

* Revert "ggml : remove redundant src in ggml_cast"

This reverts commit 62d1b0082d.

* CUDA: Add Cooperative-Groups-based parallelization of ncols in softmax

The old implementation parallelizes rows across SMs, which does not fit the
needs of backend-sampling (where we have ncols >> nrows and thus want to
parallelize ncols across SMs).
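
For illustration, a minimal CUDA sketch of the pattern (not the actual
softmax.cu kernel): one long row is split across all CTAs of the grid, with
grid-wide syncs between the max and sum phases. Buffer names, the 256-thread
block and the grid size are illustrative; a cooperative launch and separate
compilation (-rdc=true) are assumed.

```cpp
#include <cooperative_groups.h>
#include <cfloat>

namespace cg = cooperative_groups;

// One long row is processed by the whole grid: each CTA reduces its slice,
// publishes a partial result to a per-CTA scratch slot, and a grid-wide sync
// makes the partials visible to all CTAs. Assumes blockDim.x == 256.
__global__ void softmax_cols_across_ctas(const float * x, float * y,
                                         float * cta_max, float * cta_sum, int n) {
    cg::grid_group grid = cg::this_grid();

    const int tid    = blockIdx.x*blockDim.x + threadIdx.x;
    const int stride = gridDim.x*blockDim.x;

    __shared__ float smem[256];

    // Phase 1: per-CTA partial max.
    float m = -FLT_MAX;
    for (int i = tid; i < n; i += stride) {
        m = fmaxf(m, x[i]);
    }
    smem[threadIdx.x] = m;
    __syncthreads();
    for (int s = blockDim.x/2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            smem[threadIdx.x] = fmaxf(smem[threadIdx.x], smem[threadIdx.x + s]);
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        cta_max[blockIdx.x] = smem[0];
    }

    grid.sync(); // all partial maxima are now visible

    float row_max = -FLT_MAX;
    for (int b = 0; b < (int) gridDim.x; ++b) {
        row_max = fmaxf(row_max, cta_max[b]);
    }

    // Phase 2: per-CTA partial sum of exp(x - row_max), written to a separate
    // buffer so the maxima are never overwritten.
    float sum = 0.0f;
    for (int i = tid; i < n; i += stride) {
        sum += expf(x[i] - row_max);
    }
    smem[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x/2; s > 0; s >>= 1) {
        if (threadIdx.x < s) {
            smem[threadIdx.x] += smem[threadIdx.x + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        cta_sum[blockIdx.x] = smem[0];
    }

    grid.sync(); // all partial sums are now visible

    float row_sum = 0.0f;
    for (int b = 0; b < (int) gridDim.x; ++b) {
        row_sum += cta_sum[b];
    }

    for (int i = tid; i < n; i += stride) {
        y[i] = expf(x[i] - row_max) / row_sum;
    }
}

// Cooperative kernels must be launched with cudaLaunchCooperativeKernel,
// and the whole grid must be resident on the device at once.
void launch_softmax_cols(const float * x, float * y, float * cta_max, float * cta_sum,
                         int n, cudaStream_t stream) {
    dim3 block(256);
    dim3 grid(64); // must not exceed the device's cooperative-launch limit
    void * args[] = { &x, &y, &cta_max, &cta_sum, &n };
    cudaLaunchCooperativeKernel((void *) softmax_cols_across_ctas, grid, block, args, 0, stream);
}
```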

* Add TODOs to and adjust heuristics of row-wise soft_max in CUDA

Heuristics were selected based on the following numbers:

```
-- Before
Backend 1/2: CUDA0
  Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
  Device memory: 97250 MB (96691 MB free)

  SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                2236 runs -   450.34 us/run -   655360 kB/run - 1401.20 GB/s
  SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               17748 runs -    56.80 us/run -   128880 kB/run - 2168.19 GB/s
  SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 57204 runs -    18.35 us/run -    12320 kB/run -  640.57 GB/s
  SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               9840 runs -   102.46 us/run -    81920 kB/run -  763.45 GB/s
  SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98064 runs -    10.25 us/run -     6160 kB/run -  573.43 GB/s
  SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98310 runs -    10.25 us/run -    10240 kB/run -  953.20 GB/s
  SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 172011 runs -     5.99 us/run -      640 kB/run -  101.84 GB/s
  SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 172011 runs -     5.97 us/run -      770 kB/run -  123.02 GB/s
  SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 172011 runs -     6.00 us/run -       64 kB/run -   10.16 GB/s
  SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 163820 runs -     6.12 us/run -      256 kB/run -   39.91 GB/s
  SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                147438 runs -     6.88 us/run -     1024 kB/run -  141.92 GB/s
  SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                122865 runs -     8.20 us/run -      128 kB/run -   14.89 GB/s
  SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                114674 runs -     8.87 us/run -      512 kB/run -   55.06 GB/s
  SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98292 runs -    10.24 us/run -     2048 kB/run -  190.82 GB/s
  SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 49146 runs -    21.37 us/run -      256 kB/run -   11.43 GB/s
  SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 49146 runs -    22.54 us/run -     1024 kB/run -   43.33 GB/s
  SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                49146 runs -    23.92 us/run -     4096 kB/run -  163.32 GB/s
  SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 32764 runs -    38.94 us/run -      512 kB/run -   12.54 GB/s
  SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 24573 runs -    41.94 us/run -     2048 kB/run -   46.57 GB/s
  SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                24582 runs -    43.09 us/run -     8192 kB/run -  181.32 GB/s
  SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                16382 runs -    74.56 us/run -     1024 kB/run -   13.10 GB/s
  SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                16382 runs -    79.85 us/run -     4096 kB/run -   48.92 GB/s
  SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               12294 runs -    82.41 us/run -    16384 kB/run -  189.64 GB/s
  SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 8191 runs -   145.16 us/run -     2048 kB/run -   13.46 GB/s
  SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 8194 runs -   155.46 us/run -     8192 kB/run -   50.26 GB/s
  SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                7175 runs -   160.70 us/run -    32768 kB/run -  194.56 GB/s
  SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 8191 runs -   285.81 us/run -     4096 kB/run -   13.67 GB/s
  SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 4098 runs -   306.91 us/run -    16384 kB/run -   50.92 GB/s
  SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                3591 runs -   317.06 us/run -    65536 kB/run -  197.32 GB/s

-- After
Backend 1/2: CUDA0
  Device description: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
  Device memory: 97250 MB (96691 MB free)

  SOFT_MAX(type=f32,ne=[4096,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                2236 runs -   450.67 us/run -   655360 kB/run - 1400.15 GB/s
  SOFT_MAX(type=f32,ne=[12888,256,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               17748 runs -    56.97 us/run -   128880 kB/run - 2161.50 GB/s
  SOFT_MAX(type=f32,ne=[77,4096,5,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 57204 runs -    18.35 us/run -    12320 kB/run -  640.36 GB/s
  SOFT_MAX(type=f32,ne=[1024,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               9840 runs -   102.46 us/run -    81920 kB/run -  763.42 GB/s
  SOFT_MAX(type=f32,ne=[77,1024,10,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98064 runs -    10.25 us/run -     6160 kB/run -  573.43 GB/s
  SOFT_MAX(type=f32,ne=[256,256,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98310 runs -    10.25 us/run -    10240 kB/run -  953.21 GB/s
  SOFT_MAX(type=f32,ne=[64,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 147438 runs -     7.00 us/run -      640 kB/run -   87.26 GB/s
  SOFT_MAX(type=f32,ne=[77,64,20,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 147438 runs -     6.99 us/run -      770 kB/run -  105.05 GB/s
  SOFT_MAX(type=f32,ne=[8192,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 172011 runs -     6.02 us/run -       64 kB/run -   10.13 GB/s
  SOFT_MAX(type=f32,ne=[8192,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 163820 runs -     6.12 us/run -      256 kB/run -   39.87 GB/s
  SOFT_MAX(type=f32,ne=[8192,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                147438 runs -     6.91 us/run -     1024 kB/run -  141.40 GB/s
  SOFT_MAX(type=f32,ne=[16384,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                122865 runs -     8.20 us/run -      128 kB/run -   14.89 GB/s
  SOFT_MAX(type=f32,ne=[16384,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                114674 runs -     8.79 us/run -      512 kB/run -   55.54 GB/s
  SOFT_MAX(type=f32,ne=[16384,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98292 runs -    10.24 us/run -     2048 kB/run -  190.82 GB/s
  SOFT_MAX(type=f32,ne=[32768,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                131056 runs -     8.11 us/run -      256 kB/run -   30.12 GB/s
  SOFT_MAX(type=f32,ne=[32768,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 49146 runs -    22.54 us/run -     1024 kB/run -   43.33 GB/s
  SOFT_MAX(type=f32,ne=[32768,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                49146 runs -    23.32 us/run -     4096 kB/run -  167.50 GB/s
  SOFT_MAX(type=f32,ne=[65536,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                122865 runs -     8.19 us/run -      512 kB/run -   59.63 GB/s
  SOFT_MAX(type=f32,ne=[65536,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                 40955 runs -    24.59 us/run -     2048 kB/run -   79.43 GB/s
  SOFT_MAX(type=f32,ne=[65536,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                24582 runs -    43.21 us/run -     8192 kB/run -  180.84 GB/s
  SOFT_MAX(type=f32,ne=[131072,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               122865 runs -     8.19 us/run -     1024 kB/run -  119.25 GB/s
  SOFT_MAX(type=f32,ne=[131072,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                40955 runs -    24.59 us/run -     4096 kB/run -  158.87 GB/s
  SOFT_MAX(type=f32,ne=[131072,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               12294 runs -    82.37 us/run -    16384 kB/run -  189.74 GB/s
  SOFT_MAX(type=f32,ne=[262144,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):               122865 runs -     8.20 us/run -     2048 kB/run -  238.28 GB/s
  SOFT_MAX(type=f32,ne=[262144,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                36873 runs -    28.66 us/run -     8192 kB/run -  272.61 GB/s
  SOFT_MAX(type=f32,ne=[262144,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                9225 runs -   108.51 us/run -    32768 kB/run -  288.13 GB/s
  SOFT_MAX(type=f32,ne=[524288,1,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                98292 runs -    10.24 us/run -     4096 kB/run -  381.65 GB/s
  SOFT_MAX(type=f32,ne=[524288,4,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                32784 runs -    31.74 us/run -    16384 kB/run -  492.43 GB/s
  SOFT_MAX(type=f32,ne=[524288,16,1,1],mask=0,sinks=0,m_prec=f32,nr23=[1,1],scale=1.000000,max_bias=0.000000,inplace=0):                8721 runs -   121.20 us/run -    65536 kB/run -  516.19 GB/s
```

* Fix compiler warnings by casting `const` away

* llama : require backend samplers to be of type llama_sampler_chain

* sampling : use host buffer type for inputs

* Try fixing HIP build errors by adding corresponding #defines

Will likely have to disable for MUSA as I didn't find any docs online

* Fix launch logic when supports_cooperative_launch=false

* Disable cooperative groups for musa

Didn't find any docs online, so I don't know whether they support this

* server : reconnect the backend_sampling setting in the WebUI

* graph : make the compute graph constant with respect to active samplers

* batch : fix sequence id ownership

* graph : respect sampler order for graph reuse

* HIP/MUSA: fix build for backend sampling

* sampling : optimize logit_bias sampler

* cont : fix build

* sampling : generic ggml op support detection

* sampling : fix greedy

* tests : run backend sampler tests always on the CPU

* Apply suggestions from code review

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* webui : fix lint

* Fix data-race in `soft_max_f32_parallelize_cols_single_row`

By using `tmp_vals` to store both the max values and the exponential
accumulator there was a potential data race: the exponential accumulator
for a given CTA may have been written to `tmp_vals` before all other CTAs
had read the max value from it.

To avoid a third g.sync(), an additional temporary buffer was added. Given
the syncs already in place after writing to gmem, it is now guaranteed that
the previous max/sum values have been read by all CTAs.
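
Schematically (buffer and helper names are illustrative, not the actual
kernel code):

```cpp
// Racy: one scratch buffer reused for both phases.
//   tmp_vals[blockIdx.x] = partial_max;
//   g.sync();
//   row_max = reduce_over_ctas(tmp_vals);   // other CTAs may still be here ...
//   tmp_vals[blockIdx.x] = partial_sum;     // ... when this overwrites their input
//
// Fixed: separate buffers, so the partial sums can never clobber unread maxima
// and no third g.sync() is required.
//   tmp_max[blockIdx.x] = partial_max;
//   g.sync();
//   row_max = reduce_over_ctas(tmp_max);
//   tmp_sum[blockIdx.x] = partial_sum;
//   g.sync();
//   row_sum = reduce_over_ctas(tmp_sum);
```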

* Apply automated code-formatting to softmax.cu

* llama : clarify backend_accept/backend_set_input comments [no ci]

* llama : fix typo in comment [no ci]

* tests : use smart pointers for backend samplers

* tests : use smart pointers for model and context

* tests : remove vocab member from test_model_context

Also includes some minor cleanups related to nullptr checks.

* tests : extract batch info update to separate method

* tests : fix batch token position tracking in test_backend_sampler.cpp

* tests : add --device option support to backend sampler tests

This commit adds support for specifying a device to run the test on.

* common : disable backend sampling when grammar is involved

* Fix different RNG-states between backend-sampling and llama-sampling

By default, we perform a warm-up step where the ggml_cgraph is computed
once. For backend-sampling, this graph contains the sampler, and thus
the RNG state of the backend's dist sampler is advanced once.

The solution is to reset the samplers after the warm-up has finished.
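
A minimal sketch of the idea, assuming the public llama_sampler_reset() API;
run_warmup_decode() is a hypothetical placeholder for whatever performs the
warm-up decode:

```cpp
#include "llama.h"

// Hypothetical helper that performs the warm-up decode (computes the graph once).
void run_warmup_decode(llama_context * ctx);

// After the warm-up, rewind the sampler chain so that its RNG state matches a
// run without warm-up (and therefore matches CPU-side sampling).
void warmup_then_reset(llama_context * ctx, llama_sampler * chain) {
    run_warmup_decode(ctx);
    llama_sampler_reset(chain);
}
```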

* Make backend dist sampler use the same random numbers as the dist sampler

We sample in double precision and cast to float to match the random numbers
of llama_sampler_dist, which uses double precision (sampling from
std::uniform_real_distribution<double> and
std::uniform_real_distribution<float> with the same rng will produce
different sequences).
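
A standalone illustration of the distribution behavior (not llama.cpp code);
the exact values are implementation-dependent, but common standard libraries
consume the engine differently for float and double:

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng_f(42), rng_d(42), rng_c(42); // identical starting states

    std::uniform_real_distribution<float>  dist_f(0.0f, 1.0f);
    std::uniform_real_distribution<double> dist_d(0.0, 1.0);

    float  f  = dist_f(rng_f);          // float draw
    double d  = dist_d(rng_d);          // double draw (typically consumes more engine output)
    float  fc = (float) dist_d(rng_c);  // double draw cast to float -> consistent with d

    printf("float: %.9f\ndouble: %.9f\ndouble->float: %.9f\n", f, d, (double) fc);
    return 0;
}
```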

* Update CCCL version to v3.2.0-rc2

* Build with CCCL 3.2 for CUDA backends

Gives the best performance for backend-sampling on CUDA. The flag can be
removed once CCCL 3.2 is bundled with the CTK and that CTK version is used
in llama.cpp

* tests : revert server test changes (no longer needed)

* ggml : include cub/cub.cuh instead of block_scan.cuh

This commit updates the include directive in cumsum.cu to use
cub/cub.cuh instead of cub/block/block_scan.cuh.

The motivation for this change is that without it, compilation fails
with the following error:
```console
/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(196): error: name followed by "::" must be a class or namespace name
      cub::DeviceScan::InclusiveSum(nullptr,
           ^

/llama.cpp/ggml/src/ggml-cuda/cumsum.cu(207): error: name followed by "::" must be a class or namespace name
      cub::DeviceScan::InclusiveSum((void *) tmp_alloc.get(), tmp_size, src, dst, ne, stream);
           ^

2 errors detected in the compilation of "/llama.cpp/ggml/src/ggml-cuda/cumsum.cu".
gmake[2]: *** [ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/build.make:317: ggml/src/ggml-cuda/CMakeFiles/ggml-cuda.dir/cumsum.cu.o] Error 2
```
Commit 83b3b1c271 ("cuda: optimize
cumsum cub path (#18362)") updated the include directive, replacing
device_scan.cuh, which causes this issue.

This commit uses the cub/cub.cuh umbrella header, which is consistent with
other files in the ggml-cuda directory such as mean.cu and sum.cu.
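
For reference, the usual shape of the call with the umbrella header (a
simplified sketch; the real cumsum.cu routes the temporary buffer through the
CUDA memory pool instead of cudaMallocAsync):

```cpp
#include <cub/cub.cuh>  // umbrella header instead of <cub/block/block_scan.cuh>

static void inclusive_sum_f32(const float * src, float * dst, int ne, cudaStream_t stream) {
    // First call with a null temp pointer only queries the required scratch size.
    size_t tmp_size = 0;
    cub::DeviceScan::InclusiveSum(nullptr, tmp_size, src, dst, ne, stream);

    void * tmp = nullptr;
    cudaMallocAsync(&tmp, tmp_size, stream);
    cub::DeviceScan::InclusiveSum(tmp, tmp_size, src, dst, ne, stream);
    cudaFreeAsync(tmp, stream);
}
```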

* arg : add shorthand for --backend-sampling

* ci : add server workflow with backend sampling

* sampling : fix reshapes

* server : remove printfs

* sampling : zero-initialize input buffers

* minor : add comments + some cleanup

* llama : assert at most one output token per sequence

* tests : add more top_k tests

* CUDA: Fix non-determinism of CUB-based Top-K

DeviceTopK::MaxPairs is an iterative algorithm, where `d_keys_out` is
written after every iteration. As a consequence, it must not overlap
with `d_keys_in`, otherwise undefined behavior occurs (keys are no
longer unique in d_keys_in and may map to different values between
iterations).

* CUDA: Optimize index of top_k_cub

By using the fancy
[`counting_iterator`](https://nvidia.github.io/cccl/thrust/api/classthrust_1_1counting__iterator.html#classthrust_1_1counting__iterator)
exposed by CCCL, we can avoid materializing the index to GPU memory,
saving VRAM + 1 kernel invocation
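
A standalone Thrust illustration of the same trick (not the actual top-k.cu
code, which feeds the iterator to the CUB top-k call): the index sequence is
generated lazily instead of being written to a device buffer first.

```cpp
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <thrust/iterator/counting_iterator.h>

struct is_positive {
    __host__ __device__ bool operator()(float x) const { return x > 0.0f; }
};

// Indices of all positive values, without first materializing [0, n) in VRAM.
thrust::device_vector<int> positive_indices(const thrust::device_vector<float> & vals) {
    const int n = (int) vals.size();
    thrust::device_vector<int> out(n);

    // The input is the lazily generated index sequence; the data is the stencil.
    auto end = thrust::copy_if(thrust::counting_iterator<int>(0),
                               thrust::counting_iterator<int>(n),
                               vals.begin(), out.begin(), is_positive());

    out.resize(end - out.begin());
    return out;
}
```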

* Apply code-formatting to top-k.cu

* CUDA: Remove obsolete temp_keys from CUB

Since we use cuda::discard_iterator to avoid writing out the keys, we
can directly pass in src instead of copying it to `temp_keys`

* minor : cleanup, TODOs, etc.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Oliver Simons <osimons@nvidia.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2026-01-04 22:22:16 +02:00
Tarek Dakhran 4974bf53cf
model : mtmd : make input norm optional in LFM2-VL (#18594)
Upcoming LFM2-VL releases will have configurable input norm.
See https://github.com/huggingface/transformers/pull/43087 for details.
2026-01-04 18:50:02 +01:00
Aman Gupta 908a9e5a1e
CUDA: disable cuda graph when using n-cpu-moe (#18593)
* CUDA: disable cuda graph when using n-cpu-moe

* call ggml_cuda_set_device
2026-01-05 01:37:48 +08:00
Aman Gupta 5126c41c1c
ggml-cuda: remove unused params in ggml_cuda_graph (#18579) 2026-01-05 01:37:09 +08:00
Aldehir Rojas cef1d23c5a
common/grammar : replace problematic backtracking regex `[\s\S]*` (#18342)
* grammar : add support for std::regex_search() with trigger patterns

* common : update hermes2 pro trigger to search instead of match

* common : use regex_search with anchoring for partial matching

* common : adjust regex partial tests to use new pattern

* grammar : check pattern directly instead of adding a type

* common : adjust existing patterns to match new semantics
2026-01-03 16:02:43 -06:00
Georgi Gerganov c69c7ebc90
graph : fix graph reuse logic when `n_pos_per_embd > 1` (#18566) 2026-01-03 23:59:06 +02:00
Aman Gupta e57f52334b
ggml-cuda: fixes for concurrent streams (#18496) 2026-01-03 23:15:01 +08:00
Georgi Gerganov a554a1ecc7
context : fix reserve token padding to n_seqs (#18536) 2026-01-03 15:45:34 +02:00
Johannes Gäßler 0f2e42ca1d
CUDA: only allocate FA tmp buffer if needed (#18564) 2026-01-03 13:55:53 +01:00
pl752 9dba9f5352
(Bugfix, ggml-cuda) Pool alloc count fix + small size computation type adjustment (#18559)
* CUDA: Fixed obj byte size being passed to pool alloc instead of obj count (fattn-common, dst_tmp_meta)

* CUDA: Explicitly cast some of the int alloc counts before multiplication in argsort

---------

Co-authored-by: pl752 <maximpl752@gmail.com>
2026-01-03 11:13:40 +01:00
Shouyu bcfc8c3cec
ggml-hexagon: optimize activation function (#18393)
* refactor: refactor silu

* refactor: optimize swiglu

* refactor: remove unnecessary if in swiglu

* refactor: refactor swiglu_oai

* chore: fix formatting issue
2026-01-02 21:24:24 -08:00
Jeff Bolz 18ddaea2ae
vulkan: Optimize GGML_OP_CUMSUM (#18417)
* vulkan: Optimize GGML_OP_CUMSUM

There are two paths: The preexisting one that does a whole row per workgroup
in a single shader, and one that splits each row into multiple blocks and does
two passes. The first pass computes partials within a block, the second adds
the block partials to compute the final result. The multipass shader is used
when there are a small number of large rows.

In the whole-row shader, handle multiple elements per invocation.
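
A scalar host-side sketch of the two-pass scheme (the real code is a Vulkan
compute shader; the block size here is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Pass 1: inclusive scan within each block + record each block's total.
// Pass 2: add the running total of all preceding blocks to each element.
void cumsum_two_pass(const std::vector<float> & src, std::vector<float> & dst,
                     size_t block = 1024) {
    const size_t n = src.size();
    dst.resize(n);

    const size_t nblocks = (n + block - 1) / block;
    std::vector<float> block_total(nblocks, 0.0f);

    for (size_t b = 0; b < nblocks; ++b) { // pass 1 (one workgroup per block)
        float acc = 0.0f;
        for (size_t i = b*block; i < std::min(n, (b + 1)*block); ++i) {
            acc += src[i];
            dst[i] = acc;
        }
        block_total[b] = acc;
    }

    float carry = 0.0f;                    // pass 2
    for (size_t b = 0; b < nblocks; ++b) {
        for (size_t i = b*block; i < std::min(n, (b + 1)*block); ++i) {
            dst[i] += carry;
        }
        carry += block_total[b];
    }
}
```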

* use 2 ELEM_PER_THREAD for AMD/Intel

* address feedback
2026-01-02 15:32:30 -06:00
Jeff Bolz 706e3f93a6
vulkan: Implement mmvq for iq1_s/iq1_m (#18450) 2026-01-02 20:19:04 +01:00
Prabod 5755e52d15
model : Maincoder-1B support (#18534)
* Add Maincoder model support

* Removed SPM model vocabulary setting and MOE related GGUF parameters
Removed trailing spaces from maincoder.cpp

* removed set_vocab

* added new line

* Fix formatting

* Add a new line for PEP8
2026-01-02 20:11:59 +01:00
Georgi Gerganov f38de16341
metal : adjust extra size for FA buffer to avoid reallocations (#18545) 2026-01-02 19:02:18 +02:00
Georgi Gerganov af1e8e1a6c
graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00