happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
Directory listing: llama.cpp/ggml/src/ggml-cpu/llamafile (at commit f01ce7ba30)
Latest commit: 7afdfc9b84 by shalinib-ibm, "ggml-cpu: Enable FP16 MMA kernels on PPC" (#19060), 2026-01-27 11:52:34 +08:00
sgemm-ppc.h    Q4/Q8 Tiled Gemm Optimization. (#16999)             2025-12-05 19:41:51 +08:00
sgemm.cpp      ggml-cpu: Enable FP16 MMA kernels on PPC (#19060)   2026-01-27 11:52:34 +08:00
sgemm.h        Q4/Q8 Tiled Gemm Optimization. (#16999)             2025-12-05 19:41:51 +08:00