llama.cpp/ggml-cuda
Latest commit 7d1a378b8f by Johannes Gäßler, 2024-06-05 16:53:00 +02:00
CUDA: refactor mmq, dmmv, mmvq (#7716)

* CUDA: refactor mmq, dmmv, mmvq
* fix out-of-bounds write
* struct for qk, qr, qi
* fix cmake build
* mmq_type_traits
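The commit body mentions a "struct for qk, qr, qi" and an "mmq_type_traits". As a rough illustration only, a compile-time traits struct grouping those per-quantization-type constants might look like the sketch below; the field names follow ggml's qk/qr/qi convention (values per block, quantization ratio, 32-bit ints of quant data per block), but the struct and enum names here are hypothetical and the actual layout in mmq.cuh / common.cuh may differ.

```cuda
// Hypothetical sketch of per-quantization-type traits (names illustrative,
// not copied from the commit). Grouping qk/qr/qi in one struct lets kernels
// be instantiated per type without threading three template parameters around.
#include <cstdint>

enum class quant_type { Q4_0, Q8_0 };

template <quant_type T> struct quant_type_traits;   // primary template, undefined on purpose

template <> struct quant_type_traits<quant_type::Q4_0> {
    static constexpr int qk = 32;            // values per quantization block
    static constexpr int qr = 2;             // two quantized values packed per byte
    static constexpr int qi = qk / (4 * qr); // 32-bit ints of quant data per block
};

template <> struct quant_type_traits<quant_type::Q8_0> {
    static constexpr int qk = 32;            // values per quantization block
    static constexpr int qr = 1;             // one quantized value per byte
    static constexpr int qi = qk / (4 * qr); // 32-bit ints of quant data per block
};

// A kernel can then pull the constants from the traits struct at compile time:
template <quant_type T>
__global__ void dequantize_stub(const void * qdata, float * dst) {
    constexpr int qk = quant_type_traits<T>::qk;
    // ... per-type dequantization would use qk/qr/qi here ...
    (void) qdata; (void) dst; (void) qk;
}
```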
template-instances CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
acc.cu
acc.cuh
arange.cu
arange.cuh
argsort.cu
argsort.cuh
binbcast.cu ggml : group all experts in a single ggml_mul_mat_id (#6505) 2024-04-18 15:18:48 +02:00
binbcast.cuh
clamp.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
clamp.cuh
common.cuh CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
concat.cu cuda : non-cont concat support (#7610) 2024-05-29 15:38:26 +03:00
concat.cuh
convert.cu ggml : drop support for QK_K=64 (#7473) 2024-05-23 10:00:21 +03:00
convert.cuh
cpy.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
cpy.cuh Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
dequantize.cuh
diagmask.cu
diagmask.cuh
dmmv.cu CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
dmmv.cuh
fattn-common.cuh CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn-tile-f16.cu CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn-tile-f16.cuh CUDA: faster large batch FA without tensor cores (#7314) 2024-05-17 18:54:52 +02:00
fattn-tile-f32.cu CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn-tile-f32.cuh CUDA: faster large batch FA without tensor cores (#7314) 2024-05-17 18:54:52 +02:00
fattn-vec-f16.cuh CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn-vec-f32.cuh Fix FlashAttention debug test, FP32 assert (#7684) 2024-06-01 23:26:10 +02:00
fattn-wmma-f16.cuh CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn.cu CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (#7681) 2024-06-01 15:47:04 +02:00
fattn.cuh ggml : add Flash Attention (#5021) 2024-04-30 12:16:08 +03:00
getrows.cu
getrows.cuh
im2col.cu
im2col.cuh
mmq.cu CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
mmq.cuh CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
mmvq.cu CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00
mmvq.cuh
norm.cu ggml : fix YARN + add tests + add asserts (#7617) 2024-05-29 20:17:31 +03:00
norm.cuh
pad.cu
pad.cuh
pool2d.cu
pool2d.cuh
quantize.cu
quantize.cuh
rope.cu ggml : refactor rope norm/neox (#7634) 2024-06-05 11:29:20 +03:00
rope.cuh
scale.cu Introduction of CUDA Graphs to LLama.cpp (#6766) 2024-05-08 22:55:49 +02:00
scale.cuh
softmax.cu CUDA: deduplicate FlashAttention code (#7352) 2024-05-18 12:36:25 +02:00
softmax.cuh
sumrows.cu
sumrows.cuh
tsembd.cu
tsembd.cuh
unary.cu feat: implemented sigmoid function (ggml/806) 2024-05-11 15:38:34 +03:00
unary.cuh feat: implemented sigmoid function (ggml/806) 2024-05-11 15:38:34 +03:00
upscale.cu ggml : add `ggml_upscale_ext` (ggml/814) 2024-05-15 13:23:33 +03:00
upscale.cuh
vecdotq.cuh CUDA: refactor mmq, dmmv, mmvq (#7716) 2024-06-05 16:53:00 +02:00