llama.cpp/ggml/src/ggml-cuda
Latest commit: 4696d56749 (Johannes Gäßler), "CUDA: fix crash on large batch size for quant. MoE (#13537)", 2025-05-14 16:41:02 +02:00
template-instances/    | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
vendors/               | CUDA/HIP: Share the same unified memory allocation logic. (#12934) | 2025-04-15 11:20:38 +02:00
CMakeLists.txt         | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
acc.cu                 | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
acc.cuh
arange.cu
arange.cuh
argmax.cu
argmax.cuh
argsort.cu
argsort.cuh
binbcast.cu            | Support pure float16 add/sub/mul/div operations in the CUDA (and CPU) backend (ggml/1121) | 2025-03-03 18:18:11 +02:00
binbcast.cuh
clamp.cu               | cuda: unary ops as float + de-duplicate (ggml/1130) | 2025-03-03 18:18:11 +02:00
clamp.cuh
common.cuh             | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
concat.cu              | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00
concat.cuh
conv-transpose-1d.cu   | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00
conv-transpose-1d.cuh
convert.cu             | CUDA: fix non-cont. inputs for batched mat mul (#13155) | 2025-04-29 16:00:27 +02:00
convert.cuh            | CUDA: fix non-cont. inputs for batched mat mul (#13155) | 2025-04-29 16:00:27 +02:00
count-equal.cu
count-equal.cuh
cp-async.cuh           | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
cpy.cu                 | cuda : fix unused variable compile warning (whisper/0) | 2025-05-01 09:58:44 +03:00
cpy.cuh                | ggml: Re-enable CUDA graphs in presence of CONT and DUP nodes (#12970) | 2025-04-17 15:19:42 +02:00
cross-entropy-loss.cu  | MUSA: support ARM64 and enable dp4a .etc (#11843) | 2025-02-21 09:46:23 +02:00
cross-entropy-loss.cuh
dequantize.cuh
diagmask.cu
diagmask.cuh
fattn-common.cuh       | CUDA: faster Deepseek FA, add Turing support (#13435) | 2025-05-14 16:08:20 +02:00
fattn-mma-f16.cuh      | CUDA: faster Deepseek FA, add Turing support (#13435) | 2025-05-14 16:08:20 +02:00
fattn-tile-f16.cu      | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
fattn-tile-f16.cuh
fattn-tile-f32.cu      | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
fattn-tile-f32.cuh
fattn-vec-f16.cuh      | CUDA: fix race conditions FlashAttention kernels (#13438) | 2025-05-10 22:22:48 +02:00
fattn-vec-f32.cuh      | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
fattn-wmma-f16.cu      | CUDA: FA support for Deepseek (Ampere or newer) (#13306) | 2025-05-09 13:34:58 +02:00
fattn-wmma-f16.cuh
fattn.cu               | CUDA: faster Deepseek FA, add Turing support (#13435) | 2025-05-14 16:08:20 +02:00
fattn.cuh
getrows.cu             | CUDA: fix crash on large batch size for MoE models (#13384) | 2025-05-09 12:14:04 +02:00
getrows.cuh            | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00
ggml-cuda.cu           | CUDA: faster Deepseek FA, add Turing support (#13435) | 2025-05-14 16:08:20 +02:00
gla.cu
gla.cuh
im2col.cu
im2col.cuh
mma.cuh                | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00
mmq.cu                 | CUDA: fix crash on large batch size for quant. MoE (#13537) | 2025-05-14 16:41:02 +02:00
mmq.cuh                | cuda : remove nrows_x in mul_mat_q_process_tile (#13325) | 2025-05-07 09:48:23 +02:00
mmv.cu                 | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00
mmv.cuh                | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00
mmvq.cu                | CUDA: fix crash with partial offloading of MoE (#13439) | 2025-05-11 16:09:33 +02:00
mmvq.cuh               | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00
norm.cu                | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00
norm.cuh               | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00
opt-step-adamw.cu
opt-step-adamw.cuh
out-prod.cu
out-prod.cuh
pad.cu                 | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00
pad.cuh
pool2d.cu
pool2d.cuh
quantize.cu            | CUDA: fix crash on large batch size for quant. MoE (#13537) | 2025-05-14 16:41:02 +02:00
quantize.cuh           | CUDA: batched+noncont MMQ, refactor bs>1 MoE code (#13199) | 2025-04-30 23:12:59 +02:00
rope.cu
rope.cuh
scale.cu
scale.cuh
softmax.cu
softmax.cuh
ssm-conv.cu            | fix MUSA compiler warning (#12704) | 2025-04-03 09:32:55 +02:00
ssm-conv.cuh           | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00
ssm-scan.cu            | fix MUSA compiler warning (#12704) | 2025-04-03 09:32:55 +02:00
ssm-scan.cuh           | ggml : faster ssm scan (#10558) | 2025-03-31 18:05:13 +02:00
sum.cu                 | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
sum.cuh
sumrows.cu
sumrows.cuh
tsembd.cu
tsembd.cuh
unary.cu               | cuda: unary ops as float + de-duplicate (ggml/1130) | 2025-03-03 18:18:11 +02:00
unary.cuh              | cuda/cpu: Increase support for fp16 unary operations (ggml/1125) | 2025-03-03 18:18:11 +02:00
upscale.cu             | musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) | 2025-03-30 10:59:38 +02:00
upscale.cuh
vecdotq.cuh            | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00
wkv.cu                 | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00
wkv.cuh                | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00