llama.cpp/ggml/src/ggml-cuda
Latest commit: ed32089927 by Aman Gupta, ggml-cuda: reorder only relevant nodes (#17639), 2025-12-02 12:36:31 +08:00

Each row below lists the file name, the last commit message touching it (where captured), and that commit's date.
template-instances ggml: CUDA: add head size 72 for flash-attn (#16962) 2025-11-03 14:29:11 +01:00
vendors CUDA: add stream-based concurrency (#16991) 2025-11-30 08:17:55 +08:00
CMakeLists.txt CUDA: skip fusion for repeating adds in bias (#17080) 2025-11-08 16:58:05 +08:00
acc.cu llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
acc.cuh
add-id.cu musa: fix build warnings (#15258) 2025-08-20 10:17:37 +08:00
add-id.cuh llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
arange.cu
arange.cuh
argmax.cu
argmax.cuh
argsort.cu cuda : add error checking for cudaMemcpyAsync in argsort (#17599) 2025-11-30 08:16:28 +08:00
argsort.cuh
binbcast.cu ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (#16742) 2025-10-24 21:39:37 +02:00
binbcast.cuh CUDA: fuse adds, fuse add with rms norm (#15631) 2025-08-29 11:35:58 +08:00
clamp.cu
clamp.cuh
common.cuh ggml-cuda: reorder only relevant nodes (#17639) 2025-12-02 12:36:31 +08:00
concat.cu
concat.cuh
conv-transpose-1d.cu musa: add GGML_UNUSED_VARS (#15446) 2025-08-21 11:06:05 +08:00
conv-transpose-1d.cuh
conv2d-dw.cu CUDA: add conv_2d_dw (#14265) 2025-06-20 09:50:24 +08:00
conv2d-dw.cuh CUDA: add conv_2d_dw (#14265) 2025-06-20 09:50:24 +08:00
conv2d-transpose.cu CUDA: add conv_2d_transpose (#14287) 2025-06-20 22:48:24 +08:00
conv2d-transpose.cuh CUDA: add conv_2d_transpose (#14287) 2025-06-20 22:48:24 +08:00
conv2d.cu CUDA: fix build error from ambiguous __half conversions in conv2d (#15690) 2025-09-01 06:55:06 +05:30
conv2d.cuh CUDA: add conv2d (#15635) 2025-08-28 20:33:03 +02:00
convert.cu musa: add GGML_UNUSED_VARS (#15446) 2025-08-21 11:06:05 +08:00
convert.cuh HIP: RDNA4 tensor core support for MMF (#17077) 2025-11-22 00:03:24 +01:00
count-equal.cu
count-equal.cuh
cp-async.cuh CUDA: FA support for Deepseek (Ampere or newer) (#13306) 2025-05-09 13:34:58 +02:00
cpy-utils.cuh cuda : support non-contiguous i32 to i32 copy (#17326) 2025-11-23 11:13:34 +01:00
cpy.cu [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) 2025-11-28 14:08:29 +01:00
cpy.cuh cuda : remove legacy copy-op pointer indirection code (#16485) 2025-10-14 11:53:49 +02:00
cross-entropy-loss.cu CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) 2025-07-03 07:45:11 +08:00
cross-entropy-loss.cuh
dequantize.cuh CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) 2025-08-20 16:58:49 +02:00
diagmask.cu
diagmask.cuh
fattn-common.cuh CUDA: no FP16 arithmetic for vector FA kernel (#17558) 2025-11-28 10:29:09 +01:00
fattn-mma-f16.cuh musa: add GGML_UNUSED_VARS (#15446) 2025-08-21 11:06:05 +08:00
fattn-tile.cu ggml: CUDA: add head size 72 for flash-attn (#16962) 2025-11-03 14:29:11 +01:00
fattn-tile.cuh [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) 2025-11-28 14:08:29 +01:00
fattn-vec.cuh [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) 2025-11-28 14:08:29 +01:00
fattn-wmma-f16.cu HIP: Disable ROCWMMA fattn on CDNA when compiled against ROCWMMA 2.0.0 (#16221) 2025-10-01 23:09:25 +02:00
fattn-wmma-f16.cuh CUDA: faster tile FA, add oob checks, more HSs (#16492) 2025-10-11 20:54:32 +02:00
fattn.cu ggml: CUDA: add head size 72 for flash-attn (#16962) 2025-11-03 14:29:11 +01:00
fattn.cuh CUDA: refactor FA support/selection code (#15454) 2025-08-20 23:14:14 +02:00
getrows.cu CUDA: fix GET_ROWS for large tensors (#15882) 2025-09-09 08:11:01 +02:00
getrows.cuh CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) 2025-04-30 23:12:59 +02:00
ggml-cuda.cu ggml-cuda: reorder only relevant nodes (#17639) 2025-12-02 12:36:31 +08:00
gla.cu
gla.cuh
im2col.cu CUDA: fix im2col_3d to respect non-contiguous inputs (views) (#15956) 2025-09-16 00:28:31 +02:00
im2col.cuh ggml: add ops for WAN video model (cuda && cpu) (#15669) 2025-09-04 10:38:49 +02:00
mean.cu cuda : fix GGML_CUDA_GRAPHS=OFF (#15300) 2025-08-14 13:22:07 +03:00
mean.cuh CUDA: add mean operation (#14313) 2025-06-22 12:39:54 +08:00
mma.cuh [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) 2025-11-28 14:08:29 +01:00
mmf.cu HIP: enable mul_mat_f for RDNA4 (#17437) 2025-11-28 08:24:30 +01:00
mmf.cuh HIP: RDNA4 tensor core support for MMF (#17077) 2025-11-22 00:03:24 +01:00
mmid.cu CUDA: add fp kernel for larger batch size MoE (#16512) 2025-10-14 13:15:15 +02:00
mmid.cuh CUDA: add fp kernel for larger batch size MoE (#16512) 2025-10-14 13:15:15 +02:00
mmq.cu HIP: WMMA-MMQ kernels for RDNA 4 (#17156) 2025-11-24 20:00:10 +01:00
mmq.cuh HIP: Patch failed testcase in WMMA-MMQ kernels for RDNA 4 (#17502) 2025-11-26 11:18:48 +01:00
mmvf.cu CUDA: fix should_use_mmvf for ne11 == 1 (#17085) 2025-11-07 20:53:14 +01:00
mmvf.cuh CUDA: fix crash on uneven context without FA (#16988) 2025-11-06 14:05:47 +01:00
mmvq.cu CUDA: Remove unneeded bias/gate dims in fused mmvq (#16858) 2025-11-01 13:13:26 +08:00
mmvq.cuh CUDA: General GEMV fusion (#16715) 2025-10-26 19:28:04 +08:00
norm.cu CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715) 2025-09-03 19:59:16 +02:00
norm.cuh CUDA: fuse adds, fuse add with rms norm (#15631) 2025-08-29 11:35:58 +08:00
opt-step-adamw.cu
opt-step-adamw.cuh
opt-step-sgd.cu finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
opt-step-sgd.cuh finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
out-prod.cu
out-prod.cuh
pad.cu ggml: add ops for WAN video model (cuda && cpu) (#15669) 2025-09-04 10:38:49 +02:00
pad.cuh
pad_reflect_1d.cu musa: fix build warnings (#15611) 2025-09-26 02:56:10 +02:00
pad_reflect_1d.cuh cuda : add Pad Reflect 1D support (#14659) 2025-08-22 13:06:29 +02:00
pool2d.cu
pool2d.cuh
quantize.cu CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802) 2025-09-05 16:07:02 +02:00
quantize.cuh CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) 2025-04-30 23:12:59 +02:00
reduce_rows.cuh musa: fix build warnings (#15258) 2025-08-20 10:17:37 +08:00
roll.cu CUDA: add roll (#14919) 2025-07-29 14:45:18 +08:00
roll.cuh CUDA: add roll (#14919) 2025-07-29 14:45:18 +08:00
rope.cu CUDA: fuse rope + set_rows (#16884) 2025-11-13 08:50:01 +08:00
rope.cuh CUDA: fuse rope + set_rows (#16884) 2025-11-13 08:50:01 +08:00
scale.cu ggml: add ops for WAN video model (cuda && cpu) (#15669) 2025-09-04 10:38:49 +02:00
scale.cuh
set-rows.cu CUDA: use fastdiv in set-rows (#16834) 2025-10-29 21:11:53 +08:00
set-rows.cuh CUDA: add set rows for f32 and f16 (#14551) 2025-07-12 16:31:38 +03:00
set.cu cuda: add SET operation support (#16804) 2025-10-28 20:10:28 +01:00
set.cuh cuda: add SET operation support (#16804) 2025-10-28 20:10:28 +01:00
softcap.cu cuda : add softcap fusion (#14907) 2025-07-29 14:22:03 +02:00
softcap.cuh cuda : add softcap fusion (#14907) 2025-07-29 14:22:03 +02:00
softmax.cu llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
softmax.cuh
solve_tri.cu SOLVE_TRI CUDA kernel for small matrices (#17457) 2025-11-28 12:15:32 +08:00
solve_tri.cuh SOLVE_TRI CUDA kernel for small matrices (#17457) 2025-11-28 12:15:32 +08:00
ssm-conv.cu model : support LiquidAI LFM2 hybrid family (#14620) 2025-07-11 20:27:01 +02:00
ssm-conv.cuh
ssm-scan.cu ggml : fix SSM_SCAN for n_groups > 1 (#15625) 2025-08-28 10:11:36 -04:00
ssm-scan.cuh
sum.cu CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
sum.cuh
sumrows.cu CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) 2025-08-13 10:04:46 +02:00
sumrows.cuh CUDA: add mean operation (#14313) 2025-06-22 12:39:54 +08:00
topk-moe.cu CUDA: support for weight clamp in top-k norm (#16702) 2025-10-27 09:06:16 +08:00
topk-moe.cuh CUDA: support for weight clamp in top-k norm (#16702) 2025-10-27 09:06:16 +08:00
tsembd.cu ggml : fix padding in timestep embedding kernels (#15932) 2025-09-16 15:25:57 +02:00
tsembd.cuh
unary.cu ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
unary.cuh ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
upscale.cu model: LFM2-VL fixes (#17577) 2025-11-30 21:57:31 +01:00
upscale.cuh
vecdotq.cuh CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451) 2025-08-25 23:21:22 +02:00
wkv.cu
wkv.cuh
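Most operators in this directory follow a simple convention: an `<op>.cuh` header that declares a host-side entry point, and a matching `<op>.cu` that holds the device kernel plus its launcher, with `common.cuh` supplying the shared backend context, error checking, and hardware-capability helpers; a few header-only helpers (mma.cuh, vecdotq.cuh, cp-async.cuh, dequantize.cuh) are the exception. Below is a minimal sketch of that pattern using an element-wise scale op; the kernel and launcher names and the simplified launcher signature are illustrative, not the repo's exact code:

```cuda
// <op>.cuh (sketch): the header exposes only the host-side entry point that
// ggml-cuda.cu dispatches to, e.g.
//   void ggml_cuda_op_scale(ggml_backend_cuda_context & ctx, ggml_tensor * dst);

// <op>.cu (sketch): device kernel + host launcher behind that entry point.
#include <cuda_runtime.h>

static __global__ void scale_f32(const float * x, float * dst, const float s, const int k) {
    const int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= k) {
        return; // out-of-bounds guard for the last, partially filled block
    }
    dst[i] = s * x[i];
}

static void scale_f32_cuda(const float * x, float * dst, const float s, const int k, cudaStream_t stream) {
    const int block_size = 256;
    const int num_blocks = (k + block_size - 1) / block_size; // ceil-divide over elements
    scale_f32<<<num_blocks, block_size, 0, stream>>>(x, dst, s, k);
}
```

The `ggml_cuda_op_<name>` wrapper declared in the .cuh would unpack the tensor pointers and CUDA stream from the backend context and call the launcher; keeping each op in its own .cu/.cuh pair is what lets the per-file commit history above stay this fine-grained.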