llama.cpp/ggml/src/ggml-cpu
Latest commit 267ba5a1d9 by abhijain1204fujitsu
ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (#19132)
* Updated repack.cpp

* Added an if condition so the SVE path is used only for a 256-bit vector length.

* Reformatted the code, removed comments and a duplicate variable.

* When SVE 256 was not available, the generic function was used for the computation, which hurt performance, so a fallback was added: if SVE 256 is not present, use the NEON code.

* Applied a suggested code-format change.

---------

Co-authored-by: Vithule, Prashant <Prashant.Vithule@fujitsu.com>
2026-02-16 14:38:43 +08:00
amx ggml : fix unaligned access in AMX code (#16315) 2025-10-06 16:05:27 +03:00
arch ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (#19132) 2026-02-16 14:38:43 +08:00
cmake ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
kleidiai kleidiai: add and integrate SVE 256-bit vector-length kernel (#18458) 2025-12-30 14:04:53 +02:00
llamafile ggml-cpu: Enable FP16 MMA kernels on PPC (#19060) 2026-01-27 11:52:34 +08:00
spacemit ggml : fix SpaceMit IME array out-of-bounds in task assignment (#16629) 2025-10-17 13:01:23 +03:00
CMakeLists.txt cmake : check if KleidiAI API has been fetched (#19640) 2026-02-15 13:59:38 +01:00
arch-fallback.h ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (#19360) 2026-02-10 10:47:45 +00:00
binary-ops.cpp ggml : extend bin bcast for permuted src1 (#19484) 2026-02-11 07:52:00 +02:00
binary-ops.h cpu: de-duplicate some of the operators and refactor (ggml/1144) 2025-03-30 08:33:31 +03:00
common.h ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ggml-cpu-impl.h ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
ggml-cpu.c ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ggml-cpu.cpp ggml-cpu: FA split across kv for faster TG (#19209) 2026-02-03 01:19:55 +08:00
hbm.cpp ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
hbm.h ggml-cpu : split arch-specific implementations (#13892) 2025-06-09 16:47:13 +02:00
ops.cpp ggml-cpu: FA add GEMM microkernel (#19422) 2026-02-15 11:09:24 +05:30
ops.h ggml : add ggml_top_k (#17365) 2025-11-25 15:31:43 +02:00
quants.c llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
repack.cpp Fix wrong memcpy length for block_interleave == 4 (#19575) 2026-02-13 20:32:14 +08:00
repack.h ggml-cpu: arm64: q6_K repack gemm and gemv (and generic) implementations (dotprod) (#19360) 2026-02-10 10:47:45 +00:00
simd-gemm.h ggml : avoid UB in gemm ukernel (#19642) 2026-02-15 14:56:35 +02:00
simd-mappings.h ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
traits.cpp ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
traits.h ggml : fix fallback to CPU for unsupported ops (#15118) 2025-08-06 14:37:35 +02:00
unary-ops.cpp ggml : unary ops support non-cont src0 + metal F16 unary ops (#19511) 2026-02-11 18:58:43 +02:00
unary-ops.h ggml : add ops SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM (#17063) 2025-11-13 20:54:47 +02:00
vec.cpp ggml-cpu: optimize ggml_vec_dot_bf16 for s390x (#19399) 2026-02-15 18:20:35 +08:00
vec.h ggml-cpu: extend support for RVV floating-point kernels (#17318) 2025-12-18 16:02:09 +02:00