llama.cpp/include
Latest commit: 24f461b66d by Ruben Ortlam — use no_alloc to get memory requirements for model load (2026-04-03 10:12:20 +02:00)
llama-cpp.h    llama : re-enable manual LoRA adapter free (#19983)       2026-03-18 12:03:26 +02:00
llama.h        use no_alloc to get memory requirements for model load    2026-04-03 10:12:20 +02:00