llama.cpp/tools/server/tests/unit
Latest commit b931f81b5a by Georgi Gerganov: server : adjust spec tests to generate up to 16 tokens (#19093), 2026-01-28 09:11:40 +02:00
test_basic.py                  server: add router multi-model tests (#17704) (#17722)                   2025-12-03 15:10:37 +01:00
test_chat_completion.py        server: improve slots scheduling for n_cmpl (#18789)                     2026-01-15 17:10:28 +01:00
test_compat_anthropic.py       server : add thinking content blocks to Anthropic Messages API (#18551)  2026-01-06 16:17:13 +01:00
test_compat_oai_responses.py   server: /v1/responses (partial) (#18486)                                 2026-01-21 17:47:23 +01:00
test_completion.py             server : adjust unified KV cache tests (#18716)                          2026-01-10 17:51:56 +02:00
test_ctx_shift.py
test_embedding.py
test_infill.py
test_lora.py
test_rerank.py
test_router.py                 server: add router multi-model tests (#17704) (#17722)                   2025-12-03 15:10:37 +01:00
test_security.py               server: add --media-path for local media files (#17697)                  2025-12-02 22:49:20 +01:00
test_sleep.py                  server: add auto-sleep after N seconds of idle (#18228)                  2025-12-21 02:24:42 +01:00
test_slot_save.py
test_speculative.py            server : adjust spec tests to generate up to 16 tokens (#19093)          2026-01-28 09:11:40 +02:00
test_template.py
test_tokenize.py
test_tool_call.py
test_vision_api.py