llama.cpp/tools/server/tests/unit
Latest commit: 3c3635d2f2 by Xuan-Son Nguyen, 2025-09-06 14:45:24 +02:00

server : speed up tests (#15836)

* server : speed up tests
* clean up
* restore timeout_seconds in some places
* flake8
* explicit offline
File                      Last commit message                                                                    Date
test_basic.py             server : speed up tests (#15836)                                                       2025-09-06 14:45:24 +02:00
test_chat_completion.py   server : implement prompt processing progress report in stream mode (#15827)           2025-09-06 13:35:04 +02:00
test_completion.py        server : Support multimodal completion and embeddings prompts in JSON format (#15108)  2025-08-22 10:10:14 +02:00
test_ctx_shift.py         llama: use FA + max. GPU layers by default (#15434)                                    2025-08-30 16:32:10 +02:00
test_embedding.py         server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_infill.py            server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_lora.py              server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_rerank.py            server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_security.py          server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_slot_save.py         server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_speculative.py       llama: use FA + max. GPU layers by default (#15434)                                    2025-08-30 16:32:10 +02:00
test_template.py          server : speed up tests (#15836)                                                       2025-09-06 14:45:24 +02:00
test_tokenize.py          server : disable context shift by default (#15416)                                     2025-08-19 16:46:37 +03:00
test_tool_call.py         server : speed up tests (#15836)                                                       2025-09-06 14:45:24 +02:00
test_vision_api.py        server : speed up tests (#15836)                                                       2025-09-06 14:45:24 +02:00