| Name | Last commit message | Last commit date |
| --- | --- | --- |
| template-instances | ggml: CUDA: add head size 72 for flash-attn (#16962) | 2025-11-03 14:29:11 +01:00 |
| vendors | CUDA: add stream-based concurrency (#16991) | 2025-11-30 08:17:55 +08:00 |
| CMakeLists.txt | CUDA: skip fusion for repeating adds in bias (#17080) | 2025-11-08 16:58:05 +08:00 |
| acc.cu | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| acc.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| add-id.cu | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| add-id.cuh | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| arange.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| arange.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| argmax.cu | cuda : optimize argmax (#10441) | 2024-11-21 18:18:50 +01:00 |
| argmax.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| argsort.cu | cuda : add error checking for cudaMemcpyAsync in argsort (#17599) | 2025-11-30 08:16:28 +08:00 |
| argsort.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| binbcast.cu | ggml: fix CUDA grid launch condition for large block_nums.y in binbcast (#16742) | 2025-10-24 21:39:37 +02:00 |
| binbcast.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| clamp.cu | cuda: unary ops as float + de-duplicate (ggml/1130) | 2025-03-03 18:18:11 +02:00 |
| clamp.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| common.cuh | ggml-cuda: reorder only relevant nodes (#17639) | 2025-12-02 12:36:31 +08:00 |
| concat.cu | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00 |
| concat.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| conv-transpose-1d.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| conv-transpose-1d.cuh | feat: cuda implementation for `ggml_conv_transpose_1d` (ggml/854) | 2024-07-08 12:23:00 +03:00 |
| conv2d-dw.cu | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-dw.cuh | CUDA: add conv_2d_dw (#14265) | 2025-06-20 09:50:24 +08:00 |
| conv2d-transpose.cu | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d-transpose.cuh | CUDA: add conv_2d_transpose (#14287) | 2025-06-20 22:48:24 +08:00 |
| conv2d.cu | CUDA: fix build error from ambiguous __half conversions in conv2d (#15690) | 2025-09-01 06:55:06 +05:30 |
| conv2d.cuh | CUDA: add conv2d (#15635) | 2025-08-28 20:33:03 +02:00 |
| convert.cu | musa: add GGML_UNUSED_VARS (#15446) | 2025-08-21 11:06:05 +08:00 |
| convert.cuh | HIP: RDNA4 tensor core support for MMF (#17077) | 2025-11-22 00:03:24 +01:00 |
| count-equal.cu | ggml: fix zero division in ‘dne’ calculation in CUDA COUNT_EQUAL operator when ‘ne’ is small (#10213) | 2024-11-09 08:35:46 +01:00 |
| count-equal.cuh | ggml/ex: calculate accuracy in graph, adapt MNIST (ggml/980) | 2024-10-03 21:17:26 +03:00 |
| cp-async.cuh | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00 |
| cpy-utils.cuh | cuda : support non-contiguous i32 to i32 copy (#17326) | 2025-11-23 11:13:34 +01:00 |
| cpy.cu | [MUSA] enable fp16/fast_fp16/bf16_mma on PH1 (#17551) | 2025-11-28 14:08:29 +01:00 |
| cpy.cuh | cuda : remove legacy copy-op pointer indirection code (#16485) | 2025-10-14 11:53:49 +02:00 |
| cross-entropy-loss.cu | CUDA: add dynamic shared mem to softmax, refactor general usage (#14497) | 2025-07-03 07:45:11 +08:00 |
| cross-entropy-loss.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| dequantize.cuh | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) | 2025-08-20 16:58:49 +02:00 |
| diagmask.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| diagmask.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| fattn-common.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn-mma-f16.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn-tile.cu | ggml: CUDA: add head size 72 for flash-attn (#16962) | 2025-11-03 14:29:11 +01:00 |
| fattn-tile.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn-vec.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn-wmma-f16.cu | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn-wmma-f16.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn.cu | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| fattn.cuh | CUDA: refactor FA support/selection code (#15454) | 2025-08-20 23:14:14 +02:00 |
| getrows.cu | CUDA: fix GET_ROWS for large tensors (#15882) | 2025-09-09 08:11:01 +02:00 |
| getrows.cuh | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00 |
| ggml-cuda.cu | ggml-cuda: reorder only relevant nodes (#17639) | 2025-12-02 12:36:31 +08:00 |
| gla.cu | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| gla.cuh | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| im2col.cu | CUDA: fix im2col_3d to respect non-contiguous inputs (views) (#15956) | 2025-09-16 00:28:31 +02:00 |
| im2col.cuh | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| mean.cu | cuda : fix GGML_CUDA_GRAPHS=OFF (#15300) | 2025-08-14 13:22:07 +03:00 |
| mean.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| mma.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| mmf.cu | HIP: enable mul_mat_f for RDNA4 (#17437) | 2025-11-28 08:24:30 +01:00 |
| mmf.cuh | CUDA: generalized (mma) FA, add Volta support (#17505) | 2025-12-03 16:57:05 +01:00 |
| mmid.cu | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmid.cuh | CUDA: add fp kernel for larger batch size MoE (#16512) | 2025-10-14 13:15:15 +02:00 |
| mmq.cu | HIP: WMMA-MMQ kernels for RDNA 4 (#17156) | 2025-11-24 20:00:10 +01:00 |
| mmq.cuh | HIP: Patch failed testcase in WMMA-MMQ kernels for RDNA 4 (#17502) | 2025-11-26 11:18:48 +01:00 |
| mmvf.cu | CUDA: fix should_use_mmvf for ne11 == 1 (#17085) | 2025-11-07 20:53:14 +01:00 |
| mmvf.cuh | CUDA: fix crash on uneven context without FA (#16988) | 2025-11-06 14:05:47 +01:00 |
| mmvq.cu | CUDA: Remove unneded bias/gate dims in fused mmvq (#16858) | 2025-11-01 13:13:26 +08:00 |
| mmvq.cuh | CUDA: General GEMV fusion (#16715) | 2025-10-26 19:28:04 +08:00 |
| norm.cu | CUDA: Optimize `rms_norm_f32` kernel and its fused variants, giving 1-6% perf E2E (#15715) | 2025-09-03 19:59:16 +02:00 |
| norm.cuh | CUDA: fuse adds, fuse add with rms norm (#15631) | 2025-08-29 11:35:58 +08:00 |
| opt-step-adamw.cu | ggml: new optimization interface (ggml/988) | 2024-11-17 08:30:29 +02:00 |
| opt-step-adamw.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| opt-step-sgd.cu | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| opt-step-sgd.cuh | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| out-prod.cu | CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380) | 2025-01-24 12:38:31 +01:00 |
| out-prod.cuh | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-20 21:15:05 +03:00 |
| pad.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| pad.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pad_reflect_1d.cu | musa: fix build warnings (#15611) | 2025-09-26 02:56:10 +02:00 |
| pad_reflect_1d.cuh | cuda : add Pad Reflect 1D support (#14659) | 2025-08-22 13:06:29 +02:00 |
| pool2d.cu | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| pool2d.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| quantize.cu | CUDA: fastdiv, launch bounds for mmvq + q8_1 quant (#15802) | 2025-09-05 16:07:02 +02:00 |
| quantize.cuh | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00 |
| reduce_rows.cuh | musa: fix build warnings (#15258) | 2025-08-20 10:17:37 +08:00 |
| roll.cu | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| roll.cuh | CUDA: add roll (#14919) | 2025-07-29 14:45:18 +08:00 |
| rope.cu | CUDA: fuse rope + set_rows (#16884) | 2025-11-13 08:50:01 +08:00 |
| rope.cuh | CUDA: fuse rope + set_rows (#16884) | 2025-11-13 08:50:01 +08:00 |
| scale.cu | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00 |
| scale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| set-rows.cu | CUDA: use fastdiv in set-rows (#16834) | 2025-10-29 21:11:53 +08:00 |
| set-rows.cuh | CUDA: add set rows for f32 and f16 (#14551) | 2025-07-12 16:31:38 +03:00 |
| set.cu | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| set.cuh | cuda: add SET operation support (#16804) | 2025-10-28 20:10:28 +01:00 |
| softcap.cu | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softcap.cuh | cuda : add softcap fusion (#14907) | 2025-07-29 14:22:03 +02:00 |
| softmax.cu | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| softmax.cuh | CUDA: backwards pass for misc. ops, add tests (#11257) | 2025-01-16 16:43:38 +01:00 |
| solve_tri.cu | SOLVE_TRI CUDA kernel for small matrices (#17457) | 2025-11-28 12:15:32 +08:00 |
| solve_tri.cuh | SOLVE_TRI CUDA kernel for small matrices (#17457) | 2025-11-28 12:15:32 +08:00 |
| ssm-conv.cu | model : support LiquidAI LFM2 hybrid family (#14620) | 2025-07-11 20:27:01 +02:00 |
| ssm-conv.cuh | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00 |
| ssm-scan.cu | ggml : fix SSM_SCAN for n_groups > 1 (#15625) | 2025-08-28 10:11:36 -04:00 |
| ssm-scan.cuh | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00 |
| sum.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sum.cuh | tests: add gradient tests for all backends (ggml/932) | 2024-09-08 11:05:55 +03:00 |
| sumrows.cu | CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (#15132) | 2025-08-13 10:04:46 +02:00 |
| sumrows.cuh | CUDA: add mean operation (#14313) | 2025-06-22 12:39:54 +08:00 |
| topk-moe.cu | CUDA: support for weight clamp in top-k norm (#16702) | 2025-10-27 09:06:16 +08:00 |
| topk-moe.cuh | CUDA: support for weight clamp in top-k norm (#16702) | 2025-10-27 09:06:16 +08:00 |
| tsembd.cu | ggml : fix padding in timestep embedding kernels (#15932) | 2025-09-16 15:25:57 +02:00 |
| tsembd.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| unary.cu | ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) | 2025-11-13 20:54:47 +02:00 |
| unary.cuh | ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) | 2025-11-13 20:54:47 +02:00 |
| upscale.cu | model: LFM2-VL fixes (#17577) | 2025-11-30 21:57:31 +01:00 |
| upscale.cuh | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| vecdotq.cuh | CUDA: Accelerate MXFP4 table lookup using `__byte_perm` (#15451) | 2025-08-25 23:21:22 +02:00 |
| wkv.cu | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| wkv.cuh | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |