llama.cpp/ggml/src/ggml-cpu
Taimur Ahmad f716588e63
ggml-cpu: extend support for RVV floating-point kernels (#17318)
* cmake: add BF16 RVV flag for ggml-cpu

* ggml-cpu: add floating-point conversion kernels

* ggml: add floating-point kernels

Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>

* ggml-cpu: fix LMUL in vec_dot_bf16

* ggml-cpu: change redsum (reduction sum) to LMUL=4, fix leftover handling

2025-12-18 16:02:09 +02:00
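
The vec.cpp/vec.h and ggml-cpu.c changes in this commit extend the RISC-V Vector (RVV) paths for the floating-point vector kernels. As a rough illustration of the pattern the bullets above describe (an LMUL=4 accumulator, a single reduction sum at the end, explicit leftover handling), here is a minimal f32 dot-product sketch using standard RVV v1.0 intrinsics. The function name and structure are hypothetical and are not the actual ggml implementation.

// Illustrative only: an RVV f32 dot product in the spirit of the vec.cpp
// changes (LMUL=4 accumulation, one reduction sum at the end). Not ggml code.
#include <riscv_vector.h>
#include <stddef.h>

static float dot_f32_rvv(size_t n, const float * x, const float * y) {
    const size_t vl = __riscv_vsetvlmax_e32m4();
    // LMUL=4 accumulator, kept live across the whole strip-mined loop
    vfloat32m4_t acc = __riscv_vfmv_v_f_f32m4(0.0f, vl);

    size_t i = 0;
    for (; i + vl <= n; i += vl) {
        vfloat32m4_t vx = __riscv_vle32_v_f32m4(x + i, vl);
        vfloat32m4_t vy = __riscv_vle32_v_f32m4(y + i, vl);
        acc = __riscv_vfmacc_vv_f32m4(acc, vx, vy, vl);   // acc += vx * vy
    }

    // single unordered reduction sum ("redsum") over the LMUL=4 accumulator
    vfloat32m1_t zero = __riscv_vfmv_v_f_f32m1(0.0f, 1);
    float sum = __riscv_vfmv_f_s_f32m1_f32(
                    __riscv_vfredusum_vs_f32m4_f32m1(acc, zero, vl));

    // leftover elements that do not fill a full vector
    for (; i < n; ++i) {
        sum += x[i] * y[i];
    }
    return sum;
}

Keeping the accumulator at LMUL=4 and reducing only once avoids a per-iteration reduction. The BF16 variants follow the same pattern but additionally depend on the RVV bfloat16 extensions, which the new CMake flag presumably gates.
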
amx ggml : fix unaligned access in AMX code (#16315) 2025-10-06 16:05:27 +03:00
arch ggml-cpu: ARM64: repack version of q8_0 (dotprod and i8mm) (#18096) 2025-12-17 13:39:13 +02:00
cmake ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
kleidiai kleidiai: fix zero-size array declaration (#17240) 2025-11-20 11:45:49 +02:00
llamafile Q4/Q8 Tiled Gemm Optimization. (#16999) 2025-12-05 19:41:51 +08:00
spacemit ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629) 2025-10-17 13:01:23 +03:00
CMakeLists.txt ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00
arch-fallback.h ggml-cpu: ARM64: repack version of q8_0 (dotprod and i8mm) (#18096) 2025-12-17 13:39:13 +02:00
binary-ops.cpp cpu: de-duplicate some of the operators and refactor (ggml/1144) 2025-03-30 08:33:31 +03:00
binary-ops.h cpu: de-duplicate some of the operators and refactor (ggml/1144) 2025-03-30 08:33:31 +03:00
common.h ggml : refactor forward_dup for cpu backend (#16062) 2025-09-19 06:31:56 +02:00
ggml-cpu-impl.h ggml : LoongArch fixes (#16958) 2025-11-03 08:40:02 +02:00
ggml-cpu.c ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00
ggml-cpu.cpp ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (#17951) 2025-12-12 16:26:03 +02:00
hbm.cpp ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
hbm.h ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
ops.cpp ggml : add circular tiling support to pad, for Vulkan, CUDA, and CPU (used for making seamless textures) (#16985) 2025-12-06 15:07:02 +01:00
ops.h ggml : add ggml_top_k (#17365) 2025-11-25 15:31:43 +02:00
quants.c llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
repack.cpp ggml-cpu: ARM64: repack version of q8_0 (dotprod and i8mm) (#18096) 2025-12-17 13:39:13 +02:00
repack.h ggml-cpu: ARM64: repack version of q8_0 (dotprod and i8mm) (#18096) 2025-12-17 13:39:13 +02:00
simd-mappings.h ggml : remove useless and error-prone variadic macros (#17399) 2025-11-20 11:18:27 +01:00
traits.cpp ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
traits.h ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
unary-ops.cpp ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
unary-ops.h ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
vec.cpp ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00
vec.h ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00