llama.cpp/ggml/src/ggml-cpu/llamafile
Latest commit: 7afdfc9b84 by shalinib-ibm, "ggml-cpu: Enable FP16 MMA kernels on PPC (#19060)", 2026-01-27 11:52:34 +08:00
sgemm-ppc.h    Q4/Q8 Tiled Gemm Optimization. (#16999)              2025-12-05 19:41:51 +08:00
sgemm.cpp      ggml-cpu: Enable FP16 MMA kernels on PPC (#19060)    2026-01-27 11:52:34 +08:00
sgemm.h        Q4/Q8 Tiled Gemm Optimization. (#16999)              2025-12-05 19:41:51 +08:00