happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
ggml/src/ggml-cpu/arch at commit 4febe1b725
Latest commit: 4febe1b725 by taimur-10x
ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0
Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
2026-01-27 16:28:45 +05:00
arm        ggml-cpu: aarm64: q6_K repack gemm and gemv (and generic) implementations (i8mm) #18860 (#18888)    2026-01-27 11:08:10 +02:00
loongarch  ggml : LoongArch fixes (#16958)                                                                     2025-11-03 08:40:02 +02:00
powerpc    ggml-cpu: add mxfp4 VSX intrinsics for Power9+ (ppc64le) hardware (#15385)                         2025-08-19 11:54:31 +03:00
riscv      ggml-cpu: add rvv ggml_quantize_mat_4x8 for q8_0                                                   2026-01-27 16:28:45 +05:00
s390       ggml: add s390x cpu-feats (#16774)                                                                  2025-11-02 08:48:23 +08:00
wasm       ggml-cpu : deduplicate scalar implementations (#14897)                                              2025-07-28 17:40:24 +02:00
x86        ggml : add missing AVX512 feature checks (#17270)                                                   2025-11-17 12:12:00 +01:00