llama.cpp/ggml
Latest commit: 9b9d51dddf by zhanmyz, 2026-01-15 10:05:40 -08:00
* Configure the device (default CPU) that uses OpenVINO to compile the model
* Add OpenVINO ADD operator to Llama.cpp. The output is somewhat abnormal and needs further debugging.
Name            Last commit message                                                           Date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)     2025-08-07 13:45:41 +02:00
include         add openvino as optional backend for Llama.cpp ggml                           2026-01-15 10:05:40 -08:00
src             Configure the device (default CPU) that uses OpenVINO to compile the model    2026-01-15 10:05:40 -08:00
.gitignore      vulkan : cmake integration (#8119)                                            2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : bump version to 0.9.5 (ggml/1410)                                      2025-12-31 18:54:43 +02:00
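The cmake entry above notes that backend library linking is skipped when GGML_BACKEND_DL=ON, meaning optional backends such as the OpenVINO one added in this branch are built as dynamically loaded libraries rather than linked into libggml. Below is a minimal sketch of how an application could confirm at runtime that the OpenVINO backend was loaded, using the standard ggml backend-registry API from ggml-backend.h. The registration name "OpenVINO" is an assumption; this listing does not show the string under which the new backend actually registers itself.

```c
// Sketch: enumerate dynamically loaded ggml backends and look for OpenVINO.
// Assumes ggml was built with GGML_BACKEND_DL=ON and that the OpenVINO
// backend registers under the name "OpenVINO" (unverified assumption).
#include <stdio.h>
#include <string.h>

#include "ggml-backend.h"

int main(void) {
    // With GGML_BACKEND_DL=ON, this scans for backend shared libraries
    // (e.g. ggml-*.so / ggml-*.dll) and registers each one it finds.
    ggml_backend_load_all();

    ggml_backend_reg_t openvino_reg = NULL;

    for (size_t i = 0; i < ggml_backend_reg_count(); i++) {
        ggml_backend_reg_t reg  = ggml_backend_reg_get(i);
        const char *       name = ggml_backend_reg_name(reg);
        printf("registered backend: %s\n", name);
        if (strcmp(name, "OpenVINO") == 0) { // assumed registration name
            openvino_reg = reg;
        }
    }

    if (openvino_reg == NULL) {
        fprintf(stderr, "OpenVINO backend not found; CPU backend will be used instead\n");
    }
    return 0;
}
```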