llama.cpp/ggml/src/ggml-cuda
Oliver Simons 6028bf7435
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132)
* Factor out `reduce_rows_f32` from common.cuh

This speeds up the iteration cycle, since the other kernels no longer have to
be recompiled every time this one changes

* Hide memory-latency by loop unrolling in reduce_rows_f32
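
As an illustration only (a minimal sketch, not the merged kernel; the template parameters, helper name, and access pattern below are assumptions), the idea is to let each thread accumulate into several independent partial sums so multiple global loads are in flight before any add consumes them:

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch (not the merged llama.cpp kernel): one block per row,
// each thread keeps num_unroll independent partial sums so several global
// loads are outstanding before any add consumes them.
template <int block_size, int num_unroll>
__global__ void reduce_rows_unrolled(const float * __restrict__ x, float * __restrict__ dst, const int ncols) {
    const float * xi = x + (size_t) blockIdx.x * ncols;   // row handled by this block

    float sum_temp[num_unroll] = {0.0f};
    for (int col0 = threadIdx.x; col0 < ncols; col0 += block_size * num_unroll) {
#pragma unroll
        for (int i = 0; i < num_unroll; ++i) {
            const int col = col0 + i * block_size;         // keeps each access step coalesced
            if (col < ncols) {
                sum_temp[i] += xi[col];
            }
        }
    }

    float sum = 0.0f;                                      // fold the independent partials
#pragma unroll
    for (int i = 0; i < num_unroll; ++i) {
        sum += sum_temp[i];
    }

    __shared__ float s_sum[block_size];                    // simple tree reduction across the block
    s_sum[threadIdx.x] = sum;
    __syncthreads();
    for (int offset = block_size / 2; offset > 0; offset /= 2) {
        if (threadIdx.x < offset) {
            s_sum[threadIdx.x] += s_sum[threadIdx.x + offset];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        dst[blockIdx.x] = s_sum[0];
    }
}
```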

* Further optimizations to `reduce_rows_f32`

1. Increase threadblock size to better hide latency of memory requests.
   As a consequence of bigger threadblocks, do 2-step summation, using
   shared memory to communicate results between invocations (see the
   sketch after this list)
2. Use sum_temp array to reduce waits on sum
3. Adjust num_unroll to reflect bigger threadblock
4. Improve default block_dims, increase support for more block_dims
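
The 2-step summation could look roughly like the following (a hedged sketch; `warp_reduce_sum` and `block_reduce_sum` here are illustrative names, not necessarily the helpers in common.cuh): each warp first reduces its own partial with shuffles, the per-warp results are staged in shared memory, and the first warp then combines them.

```cuda
// Hypothetical two-step block reduction for block sizes above one warp
// (e.g. 512 threads); not necessarily the helpers used in common.cuh.
__device__ float warp_reduce_sum(float v) {
    // step 1: butterfly reduction within a warp, no shared memory needed
    for (int offset = 16; offset > 0; offset >>= 1) {
        v += __shfl_xor_sync(0xffffffff, v, offset, 32);
    }
    return v;
}

template <int block_size>
__device__ float block_reduce_sum(float v) {
    __shared__ float s_warp_sums[block_size / 32];

    v = warp_reduce_sum(v);
    if (threadIdx.x % 32 == 0) {
        s_warp_sums[threadIdx.x / 32] = v;    // one partial sum per warp
    }
    __syncthreads();

    // step 2: the first warp combines the per-warp partials from shared memory
    v = threadIdx.x < block_size / 32 ? s_warp_sums[threadIdx.x] : 0.0f;
    return warp_reduce_sum(v);                // result is valid in warp 0
}
```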

* Add perf tests for `reduce_rows_f32` kernel

* Add heuristic to toggle 128/512 threads based on sm count

The break-even point was the minimum of the following multiples.

| GPU Model                    | Nrow SM Count Multiple |
| ---------------------------- | ---------------------- |
| RTX 4000 SFF ADA             | 2.0x                   |
| RTX 6000 ADA                 | 2.5x                   |
| RTX PRO 6000 Blackwell Max-Q | 3.04x                  |
| RTX PRO 4500 Blackwell       | 3.15x                  |
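
A possible host-side shape of this heuristic (a sketch under the assumption that larger blocks pay off only while there are too few rows to keep every SM busy; the 2.0x multiple is the minimum break-even point from the table above, and the exact cutoff in the merged code may differ):

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// Hypothetical heuristic sketch: while there are too few rows to keep every SM
// busy, spend 512 threads on each row; otherwise 128 threads per row suffice.
static int reduce_rows_block_size(const int64_t nrows, const int device) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);
    return nrows < 2.0 * prop.multiProcessorCount ? 512 : 128;
}
```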

* Ensure perf gains also for small ncols and large nrows

As an alternative, the number of unroll steps could have been made a template
parameter, but that would require compiling the kernel multiple times,
increasing binary size unnecessarily

* Modify perf and unit-tests

* Apply auto-formatting by clang

* Fix CI build failure

See https://github.com/ggml-org/llama.cpp/actions/runs/16798370266/job/47573716079?pr=15132#step:7:486
Building with the VS generator worked, though.

* Remove sm_count property from `ggml_backend_cuda_context`

Requested by @JohannesGaessler; this should also fix the remaining CI issues as
a side effect

* Add CUB-based implementation for GGML_OP_MEAN

Currently this branch is only executed for nrows==1
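
A rough sketch of how a CUB-based single-row mean can be wired up (the two-phase temporary-storage pattern is standard CUB; the helper names and the separate scaling kernel are illustrative, and the real code reuses the CUDA backend's pool allocator instead of cudaMallocAsync):

```cuda
#include <cstdint>
#include <cub/cub.cuh>

// Hypothetical sketch of a CUB-based mean over a single row of ncols floats.
__global__ void scale_to_mean(float * dst, const int64_t ncols) {
    *dst /= (float) ncols;   // turn the sum into a mean
}

static void mean_row_cub(const float * d_x, float * d_dst, const int64_t ncols, cudaStream_t stream) {
    void * d_temp     = nullptr;
    size_t temp_bytes = 0;

    // first call only queries how much temporary storage the reduction needs
    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_x, d_dst, ncols, stream);
    cudaMallocAsync(&d_temp, temp_bytes, stream);

    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_x, d_dst, ncols, stream);
    cudaFreeAsync(d_temp, stream);

    scale_to_mean<<<1, 1, 0, stream>>>(d_dst, ncols);
}
```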

* Add heuristics to execute CUB branch only when it brings perf

Heuristics were determined on the following HW:

* RTX 4000 SFF ADA
* RTX 6000 ADA
* RTX PRO 6000 Blackwell Max-Q
* RTX PRO 4500 Blackwell

* Add unit-test for CUB-based mean

Tests should run with CUDA Graphs enabled by default on NVIDIA GPUs

* Rename `USE_CUB` to `GGML_CUDA_USE_CUB`

Suggested by @JohannesGaessler

* Unindent preprocessor directives

See
https://github.com/ggml-org/llama.cpp/pull/15132#discussion_r2269213506
2025-08-13 10:04:46 +02:00
template-instances llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
vendors HIP: disable sync warp shuffel operators from clr amd_warp_sync_functions.h (#15273) 2025-08-12 22:15:12 +02:00
CMakeLists.txt CUDA cmake: add `-lineinfo` for easier debug (#15260) 2025-08-12 17:21:45 +08:00
acc.cu llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
acc.cuh
add-id.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
add-id.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
arange.cu
arange.cuh
argmax.cu cuda : optimize argmax (#10441) 2024-11-21 18:18:50 +01:00
argmax.cuh ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) 2024-10-03 21:17:26 +03:00
argsort.cu
argsort.cuh
binbcast.cu Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121) 2025-03-03 18:18:11 +02:00
binbcast.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
clamp.cu cuda: unary ops as float + de-duplicate (ggml/1130) 2025-03-03 18:18:11 +02:00
clamp.cuh
common.cuh CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
concat.cu musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) 2025-03-30 10:59:38 +02:00
concat.cuh
conv-transpose-1d.cu musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) 2025-03-30 10:59:38 +02:00
conv-transpose-1d.cuh
conv2d-dw.cu CUDA: add conv_2d_dw (#14265) 2025-06-20 09:50:24 +08:00
conv2d-dw.cuh CUDA: add conv_2d_dw (#14265) 2025-06-20 09:50:24 +08:00
conv2d-transpose.cu CUDA: add conv_2d_transpose (#14287) 2025-06-20 22:48:24 +08:00
conv2d-transpose.cuh CUDA: add conv_2d_transpose (#14287) 2025-06-20 22:48:24 +08:00
convert.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
convert.cuh CUDA: add bf16 and f32 support to cublas_mul_mat_batched (#14361) 2025-06-29 01:30:53 +08:00
count-equal.cu ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) 2024-11-09 08:35:46 +01:00
count-equal.cuh ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) 2024-10-03 21:17:26 +03:00
cp-async.cuh CUDA: FA support for Deepseek (Ampere or newer) (#13306) 2025-05-09 13:34:58 +02:00
cpy-utils.cuh cuda : implement bf16 cpy ops and enable bf16 cont (#14763) 2025-07-22 12:33:10 +02:00
cpy.cu musa: upgrade musa sdk to rc4.2.0 (#14498) 2025-07-24 20:05:37 +01:00
cpy.cuh ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (#12970) 2025-04-17 15:19:42 +02:00
cross-entropy-loss.cu CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) 2025-07-03 07:45:11 +08:00
cross-entropy-loss.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
dequantize.cuh
diagmask.cu
diagmask.cuh
fattn-common.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
fattn-mma-f16.cuh CUDA: attention sinks for mma FlashAttention (#15157) 2025-08-08 08:19:58 +02:00
fattn-tile-f16.cu CUDA: add attention sinks for tile and wmma (#15178) 2025-08-09 20:00:24 +08:00
fattn-tile-f16.cuh
fattn-tile-f32.cu CUDA: add attention sinks for tile and wmma (#15178) 2025-08-09 20:00:24 +08:00
fattn-tile-f32.cuh
fattn-vec-f16.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
fattn-vec-f32.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
fattn-wmma-f16.cu HIP: disable sync warp shuffel operators from clr amd_warp_sync_functions.h (#15273) 2025-08-12 22:15:12 +02:00
fattn-wmma-f16.cuh CUDA: use mma PTX instructions for FlashAttention (#11583) 2025-02-02 19:31:09 +01:00
fattn.cu CUDA: add attention sinks for tile and wmma (#15178) 2025-08-09 20:00:24 +08:00
fattn.cuh
getrows.cu CUDA: add bf16 and i32 to getrows (#14529) 2025-07-07 21:45:43 +08:00
getrows.cuh CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) 2025-04-30 23:12:59 +02:00
ggml-cuda.cu ggml : fix field name when new ggml_backend (#14944) 2025-08-08 14:37:22 +02:00
gla.cu llama: add support for QRWKV6 model architecture (#11001) 2025-01-10 09:58:08 +08:00
gla.cuh llama: add support for QRWKV6 model architecture (#11001) 2025-01-10 09:58:08 +08:00
im2col.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
im2col.cuh
mean.cu CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
mean.cuh CUDA: add mean operation (#14313) 2025-06-22 12:39:54 +08:00
mma.cuh CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmf.cu CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmf.cuh CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmq.cu CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmq.cuh CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmvf.cu CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmvf.cuh CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) 2025-08-07 10:53:21 +02:00
mmvq.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
mmvq.cuh CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) 2025-04-22 21:27:40 +02:00
norm.cu CUDA: add fused rms norm (#14800) 2025-07-23 09:25:42 +08:00
norm.cuh CUDA: add fused rms norm (#14800) 2025-07-23 09:25:42 +08:00
opt-step-adamw.cu ggml: new optimization interface (ggml/988) 2024-11-17 08:30:29 +02:00
opt-step-adamw.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
out-prod.cu CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380) 2025-01-24 12:38:31 +01:00
out-prod.cuh ggml/examples: add backend support for numerical optimization (ggml/949) 2024-09-20 21:15:05 +03:00
pad.cu musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) 2025-03-30 10:59:38 +02:00
pad.cuh
pool2d.cu
pool2d.cuh
quantize.cu CUDA: fix crash on large batch size for quant. MoE (#13537) 2025-05-14 16:41:02 +02:00
quantize.cuh CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) 2025-04-30 23:12:59 +02:00
reduce_rows.cuh CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
roll.cu CUDA: add roll (#14919) 2025-07-29 14:45:18 +08:00
roll.cuh CUDA: add roll (#14919) 2025-07-29 14:45:18 +08:00
rope.cu cuda : fix rope with partial rotation and non-cont src (#14580) 2025-07-08 10:15:21 +03:00
rope.cuh RoPE: fix back, CUDA support for back + noncont. (#11240) 2025-01-15 12:51:37 +01:00
scale.cu ggml : add ggml_scale_bias (#14417) 2025-07-09 18:16:12 +02:00
scale.cuh
set-rows.cu musa: fix build warnings (unused variable) (#14869) 2025-07-26 10:36:02 +08:00
set-rows.cuh CUDA: add set rows for f32 and f16 (#14551) 2025-07-12 16:31:38 +03:00
softcap.cu cuda : add softcap fusion (#14907) 2025-07-29 14:22:03 +02:00
softcap.cuh cuda : add softcap fusion (#14907) 2025-07-29 14:22:03 +02:00
softmax.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
softmax.cuh CUDA: backwards pass for misc. ops, add tests (#11257) 2025-01-16 16:43:38 +01:00
ssm-conv.cu model : support LiquidAI LFM2 hybrid family (#14620) 2025-07-11 20:27:01 +02:00
ssm-conv.cuh ggml : faster ssm scan (#10558) 2025-03-31 18:05:13 +02:00
ssm-scan.cu cuda: refactored ssm_scan and use CUB (#13291) 2025-08-09 20:29:43 +02:00
ssm-scan.cuh ggml : faster ssm scan (#10558) 2025-03-31 18:05:13 +02:00
sum.cu CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
sum.cuh
sumrows.cu CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
sumrows.cuh CUDA: add mean operation (#14313) 2025-06-22 12:39:54 +08:00
tsembd.cu
tsembd.cuh
unary.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
unary.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
upscale.cu CUDA: add bilinear interpolation for upscale (#14563) 2025-07-08 10:11:18 +08:00
upscale.cuh
vecdotq.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
wkv.cu llama: Add support for RWKV v7 architecture (#12412) 2025-03-18 07:27:50 +08:00
wkv.cuh llama: Add support for RWKV v7 architecture (#12412) 2025-03-18 07:27:50 +08:00