llama.cpp/ggml/src/ggml-cuda
Latest commit: 4df6e859e9 by Sigbjørn Skjæret, "cuda : add missing support check for xielu (#17895)", 2025-12-10 16:16:20 +01:00
| File | Last commit | Date |
|------|-------------|------|
| template-instances/ | ggml: CUDA: add head size 72 for flash-attn (#16962) | 2025-11-03 14:29:11 +01:00 |
| vendors/ | CUDA: add stream-based concurrency (#16991) | 2025-11-30 08:17:55 +08:00 |
| CMakeLists.txt | CUDA: skip fusion for repeating adds in bias (#17080) | 2025-11-08 16:58:05 +08:00 |
| acc.cu | | |
| acc.cuh | | |
| add-id.cu | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| add-id.cuh | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| arange.cu | | |
| arange.cuh | | |
| argmax.cu | | |
| argmax.cuh | | |
| argsort.cu | cuda : add error checking for cudaMemcpyAsync in argsort (#17599) | 2025-11-30 08:16:28 +08:00 |
| argsort.cuh | | |
| binbcast.cu | ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (#16742) | 2025-10-24 21:39:37 +02:00 |
| binbcast.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| clamp.cu | | |
| clamp.cuh | | |
| common.cuh | HIP: enable WMMA-MMQ INT kernels for RDNA 3 (#17576) | 2025-12-05 09:17:37 +01:00 |
| concat.cu | | |
| concat.cuh | | |
| conv-transpose-1d.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| conv-transpose-1d.cuh | | |
| conv2d-dw.cu | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-dw.cuh | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-transpose.cu | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d-transpose.cuh | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d.cu | CUDA: fix build error from ambiguous __half conversions in conv2d (#15690) | 2025-09-01 06:55:06 +05:30 |
| conv2d.cuh | CUDA: add conv2d (#15635) | 2025-08-28 20:33:03 +02:00 |
| convert.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| convert.cuh | HIP: RDNA4 tensor core support for MMF (#17077) | 2025-11-22 00:03:24 +01:00 |
| count-equal.cu | | |
| count-equal.cuh | | |
| cp-async.cuh | | |
| cpy-utils.cuh | cuda : support non-contiguous i32 to i32 copy (#17326) | 2025-11-23 11:13:34 +01:00 |
| cpy.cu | [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) | 2025-11-28 14:08:29 +01:00 |
| cpy.cuh | cuda : remove legacy copy-op pointer indirection code (#16485) | 2025-10-14 11:53:49 +02:00 |
| cross-entropy-loss.cu | CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) | 2025-07-03 07:45:11 +08:00 |
| cross-entropy-loss.cuh | | |
| cumsum.cu | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| cumsum.cuh | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| dequantize.cuh | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) | 2025-08-20 16:58:49 +02:00 |
| diag.cu | Add DIAG for CUDA (#17873) | 2025-12-09 20:28:57 +01:00 |
| diag.cuh | Add DIAG for CUDA (#17873) | 2025-12-09 20:28:57 +01:00 |
| diagmask.cu | | |
| diagmask.cuh | | |
| fattn-common.cuh | CUDA: fix FA VKQ accumulator overflow (#17746) | 2025-12-05 09:18:10 +01:00 |
| fattn-mma-f16.cuh | CUDA: fix unpadded strides in MMA FA kernel (#17891) | 2025-12-10 12:39:56 +01:00 |
| fattn-tile.cu | ggml: CUDA: add head size 72 for flash-attn (#16962) | 2025-11-03 14:29:11 +01:00 |
| fattn-tile.cuh | CUDA: fix FP16 overflow in tile FA kernel (#17875) | 2025-12-09 09:34:02 +01:00 |
| fattn-vec.cuh | CUDA: fix FA VKQ accumulator overflow (#17746) | 2025-12-05 09:18:10 +01:00 |
| fattn-wmma-f16.cu | CUDA: fix FA VKQ accumulator overflow (#17746) | 2025-12-05 09:18:10 +01:00 |
| fattn-wmma-f16.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn.cu | CUDA: fix unpadded strides in MMA FA kernel (#17891) | 2025-12-10 12:39:56 +01:00 |
| fattn.cuh | CUDA: refactor FA support/selection code (#15454) | 2025-08-20 23:14:14 +02:00 |
| fill.cu | ggml : allow fill node alloc inplace (#17870) | 2025-12-09 12:23:47 +01:00 |
| fill.cuh | cuda : add FILL op support (#17851) | 2025-12-08 21:10:12 +08:00 |
| getrows.cu | CUDA: fix GET_ROWS for large tensors (#15882) | 2025-09-09 08:11:01 +02:00 |
| getrows.cuh | | |
| ggml-cuda.cu | cuda : add missing support check for xielu (#17895) | 2025-12-10 16:16:20 +01:00 |
| gla.cu | | |
| gla.cuh | | |
| im2col.cu | CUDA: fix im2col_3d to respect non-contiguous inputs (views) (#15956) | 2025-09-16 00:28:31 +02:00 |
| im2col.cuh | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| mean.cu | cuda : fix GGML_CUDA_GRAPHS=OFF (#15300) | 2025-08-14 13:22:07 +03:00 |
| mean.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| mma.cuh | HIP : fix RDNA4 build (#17792) | 2025-12-05 13:47:52 +01:00 |
| mmf.cu | HIP: fix RDNA3 FP16/BF16 matrix multiplication (#17817) | 2025-12-06 13:45:36 +01:00 |
| mmf.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| mmid.cu | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmid.cuh | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmq.cu | HIP: enable WMMA-MMQ INT kernels for RDNA 3 (#17576) | 2025-12-05 09:17:37 +01:00 |
| mmq.cuh | HIP: enable WMMA-MMQ INT kernels for RDNA 3 (#17576) | 2025-12-05 09:17:37 +01:00 |
| mmvf.cu | CUDA: fix should_use_mmvf for ne11 == 1 (#17085) | 2025-11-07 20:53:14 +01:00 |
| mmvf.cuh | CUDA: fix crash on uneven context without FA (#16988) | 2025-11-06 14:05:47 +01:00 |
| mmvq.cu | CUDA: Remove unneded bias/gate dims in fused mmvq (#16858) | 2025-11-01 13:13:26 +08:00 |
| mmvq.cuh | CUDA: General GEMV fusion (#16715) | 2025-10-26 19:28:04 +08:00 |
| norm.cu | CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715) | 2025-09-03 19:59:16 +02:00 |
| norm.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| opt-step-adamw.cu | | |
| opt-step-adamw.cuh | | |
| opt-step-sgd.cu | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| opt-step-sgd.cuh | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| out-prod.cu | | |
| out-prod.cuh | | |
| pad.cu | ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (#16985) | 2025-12-06 15:07:02 +01:00 |
| pad.cuh | | |
| pad_reflect_1d.cu | musa: fix build warnings (#15611) | 2025-09-26 02:56:10 +02:00 |
| pad_reflect_1d.cuh | cuda : add Pad Reflect 1D support (#14659) | 2025-08-22 13:06:29 +02:00 |
| pool2d.cu | | |
| pool2d.cuh | | |
| quantize.cu | CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802) | 2025-09-05 16:07:02 +02:00 |
| quantize.cuh | | |
| reduce_rows.cuh | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| roll.cu | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| roll.cuh | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| rope.cu | CUDA: fuse rope + set_rows (#16884) | 2025-11-13 08:50:01 +08:00 |
| rope.cuh | CUDA: fuse rope + set_rows (#16884) | 2025-11-13 08:50:01 +08:00 |
| scale.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| scale.cuh | | |
| set-rows.cu | CUDA: use fastdiv in set-rows (#16834) | 2025-10-29 21:11:53 +08:00 |
| set-rows.cuh | CUDA: add set rows for f32 and f16 (#14551) | 2025-07-12 16:31:38 +03:00 |
| set.cu | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| set.cuh | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| softcap.cu | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softcap.cuh | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softmax.cu | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| softmax.cuh | | |
| solve_tri.cu | cuda: optimize SOLVE_TRI using registers and FMAF (#17703) | 2025-12-08 10:41:08 +01:00 |
| solve_tri.cuh | SOLVE_TRI CUDA kernel for small matrices (#17457) | 2025-11-28 12:15:32 +08:00 |
| ssm-conv.cu | model : support LiquidAI LFM2 hybrid family (#14620) | 2025-07-11 20:27:01 +02:00 |
| ssm-conv.cuh | | |
| ssm-scan.cu | ggml : fix SSM_SCAN for n_groups > 1 (#15625) | 2025-08-28 10:11:36 -04:00 |
| ssm-scan.cuh | | |
| sum.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sum.cuh | | |
| sumrows.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sumrows.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| topk-moe.cu | CUDA: support for weight clamp in top-k norm (#16702) | 2025-10-27 09:06:16 +08:00 |
| topk-moe.cuh | CUDA: support for weight clamp in top-k norm (#16702) | 2025-10-27 09:06:16 +08:00 |
| tri.cu | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| tri.cuh | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| tsembd.cu | ggml : fix padding in timestep embedding kernels (#15932) | 2025-09-16 15:25:57 +02:00 |
| tsembd.cuh | | |
| unary.cu | ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) | 2025-11-13 20:54:47 +02:00 |
| unary.cuh | ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) | 2025-11-13 20:54:47 +02:00 |
| upscale.cu | model: LFM2-VL fixes (#17577) | 2025-11-30 21:57:31 +01:00 |
| upscale.cuh | | |
| vecdotq.cuh | CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451) | 2025-08-25 23:21:22 +02:00 |
| wkv.cu | | |
| wkv.cuh | | |