llama.cpp/ggml/src/ggml-cpu
Latest commit: 3420909dff by Diego Devesa, 2024-12-01 16:12:41 +01:00

ggml : automatic selection of best CPU backend (#10606)

* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
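As the commit title says, the CPU backend is now picked automatically at runtime: cpu-feats-x86.cpp (listed below) probes the host's instruction-set extensions so that the most capable backend variant can be loaded. The following is a minimal sketch of this kind of feature scoring using GCC/Clang's `__builtin_cpu_supports` on x86. It is an illustration only, not ggml's actual code, and the backend variant names are hypothetical.

```cpp
// Sketch of runtime feature scoring for automatic CPU backend selection.
// Hypothetical: ggml's real detection lives in cpu-feats-x86.cpp and
// checks a broader feature set.
#include <cstdio>

// Rank the host CPU by SIMD tier. The feature strings follow
// GCC/Clang's __builtin_cpu_supports (x86 targets only).
static int cpu_feature_tier() {
    int tier = 0;
    if (__builtin_cpu_supports("sse4.2"))  tier = 1;
    if (__builtin_cpu_supports("avx"))     tier = 2;
    if (__builtin_cpu_supports("avx2"))    tier = 3;
    if (__builtin_cpu_supports("avx512f")) tier = 4;
    return tier;
}

int main() {
    // Hypothetical backend variant names, one per tier; a loader would
    // pick the highest tier the machine actually supports.
    static const char * variants[] = {
        "ggml-cpu",        // scalar baseline
        "ggml-cpu-sse42",
        "ggml-cpu-avx",
        "ggml-cpu-avx2",
        "ggml-cpu-avx512",
    };
    std::printf("selected variant: %s\n", variants[cpu_feature_tier()]);
    return 0;
}
```

Scoring features once at load time lets a single distribution ship several precompiled CPU backends and transparently pick the fastest one the machine can run; the GGML_AVX_VNNI build option added in the same commit similarly gates avx-vnni code paths at compile time.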
Name                Last commit                                                Date
amx/                ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
cmake/              ggml : build backends as libraries (#10256)               2024-11-14 18:04:35 +01:00
llamafile/          ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00
CMakeLists.txt      ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
cpu-feats-x86.cpp   ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
ggml-cpu-aarch64.c  ggml : automatic selection of best CPU backend (#10606)   2024-12-01 16:12:41 +01:00
ggml-cpu-aarch64.h  ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)   2024-11-28 13:52:03 +01:00
ggml-cpu-impl.h     ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00
ggml-cpu-quants.c   ggml : fix I8MM Q4_1 scaling factor conversion (#10562)   2024-11-29 16:25:39 +02:00
ggml-cpu-quants.h   ggml : build backends as libraries (#10256)               2024-11-14 18:04:35 +01:00
ggml-cpu.c          ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00
ggml-cpu.cpp        ggml : move AMX to the CPU backend (#10570)               2024-11-29 21:54:58 +01:00