llama.cpp/ggml
Latest commit: 4b12d409ff — ggml-cpu: add 128-bit impls for iq2_xs, iq3_s, iq3_xxs, tq2_0
Author: Rehan Qasim (Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>)
Date: 2026-03-18 17:02:12 +05:00
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)   2025-08-07 13:45:41 +02:00
include         ggml : add OpenVINO backend (#15307)                                       2026-03-14 07:56:55 +02:00
src             ggml-cpu: add 128-bit impls for iq2_xs, iq3_s, iq3_xxs, tq2_0              2026-03-18 17:02:12 +05:00
.gitignore      vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add OpenVINO backend (#15307)                                       2026-03-14 07:56:55 +02:00