llama.cpp/ggml
Latest commit: d757849741 "Put kvcache on GPU" by Yu, Zijun (2026-01-15 11:39:08 -08:00)
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)  2025-08-07 13:45:41 +02:00
include         Add ov_backend_host_buffer; Use cached remote context                     2026-01-15 11:39:08 -08:00
src             Put kvcache on GPU                                                        2026-01-15 11:39:08 -08:00
.gitignore      vulkan : cmake integration (#8119)                                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  Refactor: clean, fix warning                                              2026-01-15 10:20:18 -08:00