llama.cpp/ggml
Latest commit: hongruichen `15f5cc450c` "bug: fix allocation size overflow at log" (2024-07-18 19:44:05 +08:00)

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| cmake | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| include | register qnn backend | 2024-07-17 21:25:55 +08:00 |
| src | bug: fix allocation size overflow at log | 2024-07-18 19:44:05 +08:00 |
| .gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | add build step of QNN backend at ggml | 2024-07-17 19:43:01 +08:00 |