llama.cpp/ggml

Latest commit: 77f8b96515 "Try manually unrolled q4_0 quant" by Reese Levine, 2025-09-12 14:54:32 -07:00

Contents:

  cmake            ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)   2025-08-07 13:45:41 +02:00
  include          ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797)      2025-09-11 22:47:38 +02:00
  src              Try manually unrolled q4_0 quant                                           2025-09-12 14:54:32 -07:00
  .gitignore       vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
  CMakeLists.txt   ggml-cpu: drop support for nnpa intrinsics (#15821)                        2025-09-06 11:27:28 +08:00