llama.cpp/ggml/src/ggml-cpu
Taimur Ahmad b908baf182
ggml-cpu: add RVV vec dot kernels for quantization types (#18784)
* ggml-cpu: add rvv vec_dot for iq2_s

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for iq3_s

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add rvv vec_dot for tq1_0, tq2_0

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
* ggml-cpu: add rvv vec_dot for iq1_s, iq1_m

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: add vlen switch for rvv vec_dot

---------

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-02-20 13:30:07 +02:00
amx ggml : fix unaligned access in AMX code (#16315) 2025-10-06 16:05:27 +03:00
arch ggml-cpu: add RVV vec dot kernels for quantization types (#18784) 2026-02-20 13:30:07 +02:00
cmake ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
kleidiai kleidiai: add and integrate SVE 256-bit vector-length kernel (#18458) 2025-12-30 14:04:53 +02:00
llamafile llamafile: powerpc: add FP16 MMA path for Q4/Q8 matmul (#19709) 2026-02-19 14:28:53 +08:00
spacemit ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629) 2025-10-17 13:01:23 +03:00
CMakeLists.txt ggml: ggml-cpu: force-no-lto-for-cpu-feats (#19609) 2026-02-17 13:22:46 +02:00
arch-fallback.h ggml-cpu: add RVV vec dot kernels for quantization types (#18784) 2026-02-20 13:30:07 +02:00
binary-ops.cpp ggml : extend bin bcast for permuted src1 (#19484) 2026-02-11 07:52:00 +02:00
binary-ops.h cpu: de-duplicate some of the operators and refactor (ggml/1144) 2025-03-30 08:33:31 +03:00
common.h ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ggml-cpu-impl.h ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
ggml-cpu.c ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ggml-cpu.cpp ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
hbm.cpp ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
hbm.h ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
ops.cpp ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ops.h ggml : add ggml_top_k (#17365) 2025-11-25 15:31:43 +02:00
quants.c llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
repack.cpp Fix wrong memcpy length for block_interleave == 4 (#19575) 2026-02-13 20:32:14 +08:00
repack.h ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (#19360) 2026-02-10 10:47:45 +00:00
simd-gemm.h ggml : avoid UB in gemm ukernel (#19642) 2026-02-15 14:56:35 +02:00
simd-mappings.h ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
traits.cpp ggml : fix fallback to CPU for ununsupported ops (#15118) 2025-08-06 14:37:35 +02:00
traits.h ggml : fix fallback to CPU for ununsupported ops (#15118) 2025-08-06 14:37:35 +02:00
unary-ops.cpp ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511) 2026-02-11 18:58:43 +02:00
unary-ops.h ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
vec.cpp ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
vec.h ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00