llama.cpp/tools
Latest commit c81f7cdf9f by Ed Addario: Merge branch 'master' into imatrix (2025-10-20 21:00:11 +01:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| batched-bench | | |
| cvector-generator | | |
| export-lora | | |
| gguf-split | | |
| imatrix | | |
| llama-bench | llama : add --no-host to disable host buffers (#16310) | 2025-10-06 19:55:53 +02:00 |
| main | llama-cli: prevent spurious assistant token (#16202) | 2025-09-29 10:03:12 +03:00 |
| mtmd | mtmd : support home-cooked Mistral Small Omni (#14928) | 2025-10-16 19:00:31 +02:00 |
| perplexity | | |
| quantize | | |
| rpc | rpc : report actual free memory (#16616) | 2025-10-17 18:02:52 +03:00 |
| run | common: introduce http.h for httplib-based client (#16373) | 2025-10-01 20:22:18 +03:00 |
| server | Handle legacy 'context' attachments (#16687) | 2025-10-20 19:49:02 +02:00 |
| tokenize | | |
| tts | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| CMakeLists.txt | | |