| Name | Last commit message | Last commit date |
| --- | --- | --- |
| template-instances | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| vendors | HIP: Remove unesscary NCCL_CHECK (#21914) | 2026-04-19 12:59:44 +02:00 |
| CMakeLists.txt | ggml: backend-agnostic tensor parallelism (experimental) (#19378) | 2026-04-09 16:42:19 +02:00 |
| acc.cu | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| acc.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| add-id.cu | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| add-id.cuh | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| arange.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| argmax.cu | ggml : use WARP_SIZE/2 for argmax reduction offset (#18092) | 2025-12-17 11:47:01 +08:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argsort.cu | CUDA: Limit DeviceSegmentedSort to immediate mode (#21718) | 2026-04-13 11:14:06 +02:00 |
| argsort.cuh | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| binbcast.cu | CUDA: fuse muls (#21665) | 2026-04-10 10:24:09 +08:00 |
| binbcast.cuh | CUDA: fuse muls (#21665) | 2026-04-10 10:24:09 +08:00 |
| clamp.cu | cuda: unary ops as float + de-duplicate (ggml/1130) | 2025-03-03 18:18:11 +02:00 |
| clamp.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| common.cuh | CUDA: refactor mma data loading for AMD (#22051) | 2026-04-19 18:26:59 +02:00 |
| concat.cu | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00 |
| concat.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| conv-transpose-1d.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for `ggml_conv_transpose_1d` (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| conv2d-dw.cu | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-dw.cuh | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-transpose.cu | CUDA & CPU: support F32 kernel type for `CONV_TRANSPOSE_2D` (#17094) | 2026-03-26 10:19:14 +08:00 |
| conv2d-transpose.cuh | CUDA & CPU: support F32 kernel type for `CONV_TRANSPOSE_2D` (#17094) | 2026-03-26 10:19:14 +08:00 |
| conv2d.cu | CUDA: fix build error from ambiguous __half conversions in conv2d (#15690) | 2025-09-01 06:55:06 +05:30 |
| conv2d.cuh | CUDA: add conv2d (#15635) | 2025-08-28 20:33:03 +02:00 |
| convert.cu | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| convert.cuh | CUDA: fix BF16 FA compilation (#20865) | 2026-03-22 17:53:33 +01:00 |
| count-equal.cu | ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) | 2024-11-09 08:35:46 +01:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| cp-async.cuh | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00 |
| cpy-utils.cuh | cuda : support non-contiguous i32 to i32 copy (#17326) | 2025-11-23 11:13:34 +01:00 |
| cpy.cu | Fix data race in CUDA's "cpy" kernel (influences GGML's DUP, CONT operations). (#20507) | 2026-03-14 13:19:44 +08:00 |
| cpy.cuh | cuda : remove legacy copy-op pointer indirection code (#16485) | 2025-10-14 11:53:49 +02:00 |
| cross-entropy-loss.cu | CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) | 2025-07-03 07:45:11 +08:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| cumsum.cu | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| cumsum.cuh | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| dequantize.cuh | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| diag.cu | Add DIAG for CUDA (#17873) | 2025-12-09 20:28:57 +01:00 |
| diag.cuh | Add DIAG for CUDA (#17873) | 2025-12-09 20:28:57 +01:00 |
| diagmask.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-common.cuh | [CUDA ] Write an optimized flash_attn_stream_k_fixup kernel (#21159) | 2026-04-06 20:34:29 +02:00 |
| fattn-mma-f16.cuh | CUDA: refactor mma data loading for AMD (#22051) | 2026-04-19 18:26:59 +02:00 |
| fattn-tile.cu | CUDA: Add Flash Attention Support for Head Dimension 512 (#20998) | 2026-04-01 09:07:24 +02:00 |
| fattn-tile.cuh | CUDA: Add Flash Attention Support for Head Dimension 512 (#20998) | 2026-04-01 09:07:24 +02:00 |
| fattn-vec.cuh | ggml-cuda: native bf16 flash attention for vec kernel (#20525) | 2026-03-22 11:05:51 +01:00 |
| fattn-wmma-f16.cu | Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm veresions (#19591) | 2026-02-16 14:46:08 +01:00 |
| fattn-wmma-f16.cuh | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| fattn.cu | CUDA: skip compilation of superfluous FA kernels (#21768) | 2026-04-11 18:52:11 +02:00 |
| fattn.cuh | CUDA: refactor FA support/selection code (#15454) | 2025-08-20 23:14:14 +02:00 |
| fill.cu | ggml : allow fill node alloc inplace (#17870) | 2025-12-09 12:23:47 +01:00 |
| fill.cuh | cuda : add FILL op support (#17851) | 2025-12-08 21:10:12 +08:00 |
| gated_delta_net.cu | CUDA: GDN hide memory latency (#20537) | 2026-03-16 11:41:45 +08:00 |
| gated_delta_net.cuh | ggml: add GATED_DELTA_NET op (#19504) | 2026-03-07 15:41:10 +08:00 |
| getrows.cu | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| getrows.cuh | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00 |
| ggml-cuda.cu | ggml: add graph_reused (#21764) | 2026-04-16 17:21:28 +08:00 |
| gla.cu | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| gla.cuh | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| im2col.cu | CUDA: fix im2col_3d to respect non-contiguous inputs (views) (#15956) | 2025-09-16 00:28:31 +02:00 |
| im2col.cuh | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| mean.cu | ggml-cuda: enable cuda-graphs for `n-cpu-moe` (#18934) | 2026-01-24 14:25:20 +08:00 |
| mean.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| mma.cuh | CUDA: refactor mma data loading for AMD (#22051) | 2026-04-19 18:26:59 +02:00 |
| mmf.cu | HIP: add mmf for CDNA (#18896) | 2026-01-29 11:10:53 +01:00 |
| mmf.cuh | HIP: add mmf for CDNA (#18896) | 2026-01-29 11:10:53 +01:00 |
| mmid.cu | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmid.cuh | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmq.cu | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| mmq.cuh | CUDA: refactor mma data loading for AMD (#22051) | 2026-04-19 18:26:59 +02:00 |
| mmvf.cu | CUDA: use mmvq for mul-mat-id for small batch sizes (#18958) | 2026-02-03 23:31:23 +08:00 |
| mmvf.cuh | CUDA: use mmvq for mul-mat-id for small batch sizes (#18958) | 2026-02-03 23:31:23 +08:00 |
| mmvq.cu | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| mmvq.cuh | Optimize MOE GEMV kernel for BS > 1. (#20905) | 2026-03-29 18:35:18 +02:00 |
| norm.cu | CUDA: Factor out and re-use `block_reduce` function (#18785) | 2026-01-15 10:44:54 +08:00 |
| norm.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| opt-step-adamw.cu | ggml: new optimization interface (ggml/988) | 2024-11-17 08:30:29 +02:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| opt-step-sgd.cu | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| opt-step-sgd.cuh | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| out-prod.cu | CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380) | 2025-01-24 12:38:31 +01:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| pad.cu | cuda : extend GGML_OP_PAD to work with non-cont src0 (#19429) | 2026-02-10 08:07:16 +02:00 |
| pad.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pad_reflect_1d.cu | musa: fix build warnings (#15611) | 2025-09-26 02:56:10 +02:00 |
| pad_reflect_1d.cuh | cuda : add Pad Reflect 1D support (#14659) | 2025-08-22 13:06:29 +02:00 |
| pool2d.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| quantize.cu | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| quantize.cuh | CUDA: experimental native mxfp4 support for blackwell (#17906) | 2025-12-24 22:28:26 +08:00 |
| reduce_rows.cuh | CUDA: Factor out and re-use `block_reduce` function (#18785) | 2026-01-15 10:44:54 +08:00 |
| roll.cu | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| roll.cuh | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| rope.cu | CUDA: Fix non-contig rope (#19338) | 2026-02-08 15:12:51 +02:00 |
| rope.cuh | CUDA: fuse rope + set_rows (#16884) | 2025-11-13 08:50:01 +08:00 |
| scale.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| scale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| set-rows.cu | CUDA: use fastdiv in set-rows (#16834) | 2025-10-29 21:11:53 +08:00 |
| set-rows.cuh | CUDA: add set rows for f32 and f16 (#14551) | 2025-07-12 16:31:38 +03:00 |
| set.cu | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| set.cuh | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| softcap.cu | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softcap.cuh | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softmax.cu | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| softmax.cuh | CUDA: backwards pass for misc. ops, add tests (#11257) | 2025-01-16 16:43:38 +01:00 |
| solve_tri.cu | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| solve_tri.cuh | SOLVE_TRI CUDA kernel for small matrices (#17457) | 2025-11-28 12:15:32 +08:00 |
| ssm-conv.cu | mtmd: add Gemma 4 audio conformer encoder support (#21421) | 2026-04-12 14:15:26 +02:00 |
| ssm-conv.cuh | CUDA: use shared mem for ssm_conv (#20128) | 2026-03-06 23:09:59 +08:00 |
| ssm-scan.cu | ggml : optimize cuda ssm_scan using warp-level reduction (#18505) | 2026-01-07 02:24:34 +08:00 |
| ssm-scan.cuh | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00 |
| sum.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| sumrows.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sumrows.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| top-k.cu | ggml : check return value of CUB calls used in argsort and top-k (they all return cudaError_t) (#21676) | 2026-04-09 21:17:11 +08:00 |
| top-k.cuh | sampling : add support for backend sampling (#17004) | 2026-01-04 22:22:16 +02:00 |
| topk-moe.cu | ggml-cuda: add mem check for fusion (#19916) | 2026-03-07 00:05:43 +08:00 |
| topk-moe.cuh | CUDA: refactor topk-moe to enable more models (GLM 4.7, Nemotron etc.) (#19126) | 2026-01-29 10:31:28 +08:00 |
| tri.cu | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| tri.cuh | Add support for CUMSUM and TRI for CUDA. (#17584) | 2025-12-04 22:19:51 +01:00 |
| tsembd.cu | ggml : fix padding in timestep embedding kernels (#15932) | 2025-09-16 15:25:57 +02:00 |
| tsembd.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| unary.cu | CUDA: use shared mem for ssm_conv (#20128) | 2026-03-06 23:09:59 +08:00 |
| unary.cuh | CUDA: use shared mem for ssm_conv (#20128) | 2026-03-06 23:09:59 +08:00 |
| upscale.cu | model: LFM2-VL fixes (#17577) | 2025-11-30 21:57:31 +01:00 |
| upscale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vecdotq.cuh | cuda: Q1_0 initial backend (#21629) | 2026-04-15 18:38:38 +02:00 |
| wkv.cu | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| wkv.cuh | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |