llama.cpp/tools

Latest commit: 47eb12b953 by Pascal — server: fix query params lost when proxying requests in multi-model router mode (#19854) — 2026-02-24 21:46:06 +01:00

* server: fix query params lost when proxying requests in multi-model router mode
* server: re-encode query params using httplib::encode_query_component in proxy
| Name | Last commit | Date |
| --- | --- | --- |
| batched-bench | tool/ex/tests: consistently free ctx, then model (#18168) | 2025-12-22 11:00:37 +01:00 |
| cli | cli : provide model with text filename (#19783) | 2026-02-22 22:33:49 +01:00 |
| completion | llama : remove write/read of output ids/logits/embeddings (#18862) | 2026-02-23 07:04:30 +01:00 |
| cvector-generator | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| export-lora | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| fit-params | llama-fit-params: keep explicit --ctx-size 0 (#19070) | 2026-01-24 22:13:08 +01:00 |
| gguf-split | cli: new CLI experience (#17824) | 2025-12-10 15:28:59 +01:00 |
| imatrix | common : refactor common_sampler + grammar logic changes (#17937) | 2025-12-14 10:11:13 +02:00 |
| llama-bench | Setting mmap and direct_io to false as default in llama-bench.cpp (#18841) | 2026-01-16 09:46:51 +01:00 |
| mtmd | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| perplexity | perplexity: add proper batching (#19661) | 2026-02-16 18:44:44 +02:00 |
| quantize | quantize : add --dry-run option (#19526) | 2026-02-20 09:20:16 +01:00 |
| rpc | NetBSD build support (#19589) | 2026-02-14 09:47:01 +01:00 |
| server | server: fix query params lost when proxying requests in multi-model router mode (#19854) | 2026-02-24 21:46:06 +01:00 |
| tokenize | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| tts | model : fix wavtokenizer embedding notions (#19479) | 2026-02-11 07:52:20 +02:00 |
| CMakeLists.txt | cmake: only build cli when server is enabled (#18670) | 2026-01-09 16:43:26 +01:00 |