| File | Last commit | Date |
|---|---|---|
| template-instances/ | CUDA: Add mul_mat_id support for the mmf kernel (#15767) | 2025-09-09 14:38:02 +08:00 |
| vendors/ | CUDA: fix FA occupancy, optimize tile kernel (#15982) | 2025-09-17 15:32:42 +02:00 |
| CMakeLists.txt | CUDA : conditionally add cuda architectures (ggml/1341) | 2025-09-20 13:02:14 +03:00 |
| acc.cu | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| acc.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| add-id.cu | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| add-id.cuh | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| arange.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| argmax.cu | cuda : optimize argmax (#10441) | 2024-11-21 18:18:50 +01:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argsort.cu | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00 |
| argsort.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| binbcast.cu | CUDA: Add `fastdiv` to `k_bin_bcast*`, giving 1-3% E2E performance (#15872) | 2025-09-10 22:04:03 +02:00 |
| binbcast.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| clamp.cu | cuda: unary ops as float + de-duplicate (ggml/1130) | 2025-03-03 18:18:11 +02:00 |
| clamp.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| common.cuh | CUDA: Optimize PAD_REFLECT_1D (#15957) | 2025-09-18 20:26:03 +02:00 |
| concat.cu | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00 |
| concat.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| conv-transpose-1d.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for `ggml_conv_transpose_1d` (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| conv2d-dw.cu | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-dw.cuh | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-transpose.cu | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d-transpose.cuh | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d.cu | CUDA: fix build error from ambiguous __half conversions in conv2d (#15690) | 2025-09-01 06:55:06 +05:30 |
| conv2d.cuh | CUDA: add conv2d (#15635) | 2025-08-28 20:33:03 +02:00 |
| convert.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| convert.cuh | ggml: allow casting between f32 and i32 (#15783) | 2025-09-08 12:33:01 +02:00 |
| count-equal.cu | ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) | 2024-11-09 08:35:46 +01:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| cp-async.cuh | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00 |
| cpy-utils.cuh | HIP: Cleanup hipification header (#15285) | 2025-08-14 16:23:56 +02:00 |
| cpy.cu | cuda : add missing F32<->I32 entries in ggml_cuda_cpy_fn (#16060) | 2025-09-18 13:28:22 +02:00 |
| cpy.cuh | ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (#12970) | 2025-04-17 15:19:42 +02:00 |
| cross-entropy-loss.cu | CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) | 2025-07-03 07:45:11 +08:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| dequantize.cuh | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) | 2025-08-20 16:58:49 +02:00 |
| diagmask.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-common.cuh | CUDA: fix FA occupancy, optimize tile kernel (#15982) | 2025-09-17 15:32:42 +02:00 |
| fattn-mma-f16.cuh | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| fattn-tile.cu | CUDA: fix compilation on CC 6.0 (#16091) | 2025-09-18 19:28:32 +02:00 |
| fattn-tile.cuh | CUDA: faster tile FA (Pascal/AMD), headsize 256 (#15769) | 2025-09-07 00:26:28 +02:00 |
| fattn-vec-f16.cuh | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| fattn-vec-f32.cuh | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| fattn-wmma-f16.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| fattn-wmma-f16.cuh | CUDA: use mma PTX instructions for FlashAttention (#11583) | 2025-02-02 19:31:09 +01:00 |
| fattn.cu | CUDA: faster tile FA (Pascal/AMD), headsize 256 (#15769) | 2025-09-07 00:26:28 +02:00 |
| fattn.cuh | CUDA: refactor FA support/selection code (#15454) | 2025-08-20 23:14:14 +02:00 |
| getrows.cu | CUDA: fix GET_ROWS for large tensors (#15882) | 2025-09-09 08:11:01 +02:00 |
| getrows.cuh | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00 |
| ggml-cuda.cu | ggml : implement set_rows with i32 index (#16159) | 2025-09-22 19:13:00 +02:00 |
| gla.cu | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| gla.cuh | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| im2col.cu | CUDA: fix im2col_3d to respect non-contiguous inputs (views) (#15956) | 2025-09-16 00:28:31 +02:00 |
| im2col.cuh | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| mean.cu | cuda : fix GGML_CUDA_GRAPHS=OFF (#15300) | 2025-08-14 13:22:07 +03:00 |
| mean.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| mma.cuh | CUDA: Add mul_mat_id support for the mmf kernel (#15767) | 2025-09-09 14:38:02 +08:00 |
| mmf.cu | CUDA: Add mul_mat_id support for the mmf kernel (#15767) | 2025-09-09 14:38:02 +08:00 |
| mmf.cuh | CUDA: some micro-optimizations in mmf.cuh for mul_mat_id (#15926) | 2025-09-15 17:35:11 +08:00 |
| mmq.cu | CUDA: MoE helper in device code, better tile sizes (#15525) | 2025-08-25 17:23:40 +02:00 |
| mmq.cuh | CUDA: MoE helper in device code, better tile sizes (#15525) | 2025-08-25 17:23:40 +02:00 |
| mmvf.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| mmvf.cuh | CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (#15131) | 2025-08-07 10:53:21 +02:00 |
| mmvq.cu | CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802) | 2025-09-05 16:07:02 +02:00 |
| mmvq.cuh | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00 |
| norm.cu | CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715) | 2025-09-03 19:59:16 +02:00 |
| norm.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| opt-step-adamw.cu | ggml: new optimization interface (ggml/988) | 2024-11-17 08:30:29 +02:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| opt-step-sgd.cu | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| opt-step-sgd.cuh | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| out-prod.cu | CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380) | 2025-01-24 12:38:31 +01:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| pad.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| pad.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pad_reflect_1d.cu | CUDA: Optimize PAD_REFLECT_1D (#15957) | 2025-09-18 20:26:03 +02:00 |
| pad_reflect_1d.cuh | cuda : add Pad Reflect 1D support (#14659) | 2025-08-22 13:06:29 +02:00 |
| pool2d.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| quantize.cu | CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802) | 2025-09-05 16:07:02 +02:00 |
| quantize.cuh | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00 |
| reduce_rows.cuh | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| roll.cu | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| roll.cuh | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| rope.cu | cuda : fix rope with partial rotation and non-cont src (#14580) | 2025-07-08 10:15:21 +03:00 |
| rope.cuh | RoPE: fix back, CUDA support for back + noncont. (#11240) | 2025-01-15 12:51:37 +01:00 |
| scale.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| scale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| set-rows.cu | ggml : implement set_rows with i32 index (#16159) | 2025-09-22 19:13:00 +02:00 |
| set-rows.cuh | CUDA: add set rows for f32 and f16 (#14551) | 2025-07-12 16:31:38 +03:00 |
| softcap.cu | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softcap.cuh | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softmax.cu | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| softmax.cuh | CUDA: backwards pass for misc. ops, add tests (#11257) | 2025-01-16 16:43:38 +01:00 |
| ssm-conv.cu | model : support LiquidAI LFM2 hybrid family (#14620) | 2025-07-11 20:27:01 +02:00 |
| ssm-conv.cuh | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00 |
| ssm-scan.cu | ggml : fix SSM_SCAN for n_groups > 1 (#15625) | 2025-08-28 10:11:36 -04:00 |
| ssm-scan.cuh | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00 |
| sum.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| sumrows.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sumrows.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| tsembd.cu | ggml : fix padding in timestep embedding kernels (#15932) | 2025-09-16 15:25:57 +02:00 |
| tsembd.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| unary.cu | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| unary.cuh | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| upscale.cu | CUDA: add bilinear interpolation for upscale (#14563) | 2025-07-08 10:11:18 +08:00 |
| upscale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vecdotq.cuh | CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451) | 2025-08-25 23:21:22 +02:00 |
| wkv.cu | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| wkv.cuh | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |