llama.cpp/tools/server/tests
Daniel Bevenius c8ac02fa1b
requirements : update transformers to 5.5.1 (#21617)
* requirements : update transformers to 5.5.0

This commit updates the transformers dependency to version 5.5.0.

The motivation for this is that transformers 5.5.0 includes support for
Gemma4 and is required to be able to convert Gemma4 models. This is also
causing issues for users of gguf-my-repo.

Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/202

* fix huggingface_hub version

* set version of transformers to 5.5.0

* convert : add ty ignore directives to convert_hf_to_gguf.py

This commit adds `ty: ignore` directives to transformers tokenizer
fields/methods to avoid type check errors. There might be better ways to
handle this, which could be explored in a follow-up commit.

The motivation for this is that in transformers 5.5.0,
AutoTokenizer.from_pretrained can apparently return generic tokenizer
types or None, and the type checker now produces an error when the
conversion script accesses fields like tokenizer.vocab.

* convert : add ty ignore to suppress type check errors

* convert : remove incorrect type ignores

* convert : fix remaining python checks

I was running a newer version of ty locally, but after switching to
version 0.0.26, which is what CI uses, I was able to reproduce the
errors. Sorry about the noise.

* update transformers version to 5.5.1
2026-04-09 12:36:29 +02:00
Name | Last commit | Date
unit | server: respect the ignore eos flag (#21203) | 2026-04-08 17:12:15 +02:00
.gitignore | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
README.md | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00
conftest.py | server : add Anthropic Messages API support (#17570) | 2025-11-28 12:57:04 +01:00
pytest.ini | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
requirements.txt | requirements : update transformers to 5.5.1 (#21617) | 2026-04-09 12:36:29 +02:00
tests.sh | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00
utils.py | server: save and clear idle slots on new task (`--clear-idle`) (#20993) | 2026-04-03 19:02:27 +02:00

README.md

Server tests

Python-based server tests using pytest.

The tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, the parallel scenarios may randomly fail. To mitigate this, you can increase the n_predict and kv_size values.

Install dependencies

pip install -r requirements.txt

Run tests

  1. Build the server:
cd ../../..
cmake -B build
cmake --build build --target llama-server
  2. Start the tests: ./tests.sh

It's possible to override some scenario step values with environment variables (an example follows the table):

Variable | Description
PORT | context.server_port, sets the listening port of the server during the scenario (default: 8080)
LLAMA_SERVER_BIN_PATH | changes the server binary path (default: ../../../build/bin/llama-server)
DEBUG | enables verbose mode for the steps and the server (--verbose)
N_GPU_LAYERS | number of model layers to offload to VRAM (-ngl, --n-gpu-layers)
LLAMA_CACHE | by default the server tests re-download models to the tmp subfolder; set this to your cache directory (e.g. $HOME/Library/Caches/llama.cpp on macOS or $HOME/.cache/llama.cpp on Unix) to avoid that
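
For example, to point the tests at a separate debug build listening on a different port (the build-debug path here is purely illustrative):

PORT=8081 LLAMA_SERVER_BIN_PATH=../../../build-debug/bin/llama-server ./tests.sh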

To run the slow tests (these download many models, so make sure to set LLAMA_CACHE if needed):

SLOW_TESTS=1 ./tests.sh
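
On Linux, for example, this can be combined with the cache location from the table above:

LLAMA_CACHE=$HOME/.cache/llama.cpp SLOW_TESTS=1 ./tests.sh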

To run with stdout/stderr display in real time (verbose output, but useful for debugging):

DEBUG=1 ./tests.sh -s -v -x

To run all the tests in a file:

./tests.sh unit/test_chat_completion.py -v -x

To run a single test:

./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
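
Since tests.sh forwards its arguments to pytest, standard pytest selection options should also work; for example, -k selects tests whose names contain a substring (the pattern here is illustrative):

./tests.sh unit/test_chat_completion.py -k "invalid" -v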

Hint: You can compile and run the tests in a single command, which is useful for local development:

cmake --build build -j --target llama-server && ./tools/server/tests/tests.sh

To see all available arguments, please refer to the pytest documentation.

Debugging an external llama-server

It can sometimes be useful to run the server in a debugger when investigating test failures. Setting the environment variable DEBUG_EXTERNAL=1 causes the tests to skip starting a llama-server themselves, so the server can instead be started in a debugger.

Example using gdb:

$ gdb --args ../../../build/bin/llama-server \
    --host 127.0.0.1 --port 8080 \
    --temp 0.8 --seed 42 \
    --hf-repo ggml-org/models --hf-file tinyllamas/stories260K.gguf \
    --batch-size 32 --no-slots --alias tinyllama-2 --ctx-size 512 \
    --parallel 2 --n-predict 64

A breakpoint can then be set before running:

(gdb) br server.cpp:4604
(gdb) r
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv  update_slots: all slots are idle

The test in question can then be run in another terminal:

(venv) $ env DEBUG_EXTERNAL=1 ./tests.sh unit/test_chat_completion.py -v -x

This should trigger the breakpoint and allow inspecting the server state in the debugger terminal.
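
For lldb users, an equivalent session would look like the following (same server arguments as the gdb example; the breakpoint location is the same illustrative one):

$ lldb -- ../../../build/bin/llama-server \
    --host 127.0.0.1 --port 8080 \
    --temp 0.8 --seed 42 \
    --hf-repo ggml-org/models --hf-file tinyllamas/stories260K.gguf \
    --batch-size 32 --no-slots --alias tinyllama-2 --ctx-size 512 \
    --parallel 2 --n-predict 64
(lldb) breakpoint set --file server.cpp --line 4604
(lldb) run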