llama.cpp/tools

Latest commit: c5fef0fcea by Oleksandr Kuvshynov (2025-10-06 10:53:31 +03:00)
server: update readme to mention n_past_max metric (#16436)
https://github.com/ggml-org/llama.cpp/pull/15361 added a new exported metric, but it was not documented at the time.
| Name              | Last commit                                                          | Date                      |
|-------------------|----------------------------------------------------------------------|---------------------------|
| batched-bench     | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| cvector-generator | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| export-lora       | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| gguf-split        | ci : use smaller model (#16168)                                      | 2025-09-22 09:11:39 +03:00 |
| imatrix           | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| llama-bench       | rpc : add support for multiple devices (#16276)                      | 2025-10-04 12:49:16 +03:00 |
| main              | llama-cli : prevent spurious assistant token (#16202)                | 2025-09-29 10:03:12 +03:00 |
| mtmd              | model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206)  | 2025-10-05 14:57:47 +02:00 |
| perplexity        | perplexity : show more kl-divergence data (#16321)                   | 2025-09-29 09:30:45 +03:00 |
| quantize          | ci : use smaller model (#16168)                                      | 2025-09-22 09:11:39 +03:00 |
| rpc               | rpc : add support for multiple devices (#16276)                      | 2025-10-04 12:49:16 +03:00 |
| run               | common : introduce http.h for httplib-based client (#16373)          | 2025-10-01 20:22:18 +03:00 |
| server            | server : update readme to mention n_past_max metric (#16436)         | 2025-10-06 10:53:31 +03:00 |
| tokenize          | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| tts               | model : Apertus model implementation (#15852)                        | 2025-10-02 20:43:22 +03:00 |
| CMakeLists.txt    | mtmd : rename llava directory to mtmd (#13311)                       | 2025-05-05 16:02:55 +02:00 |