llama.cpp/examples/llama-eval

Latest commit: 62b04cef54 by Georgi Gerganov (2026-02-15 21:08:23 +02:00)

examples: add threading support and model parameter to llama-eval-new.py

- Add a ThreadPoolExecutor for parallel request processing, controlled by --threads
- Add a --model argument to specify the model name in request data
- Refactor process() to use a thread-safe _process_single_case() method
- Update progress tracking to work with concurrent execution
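The commit bullets above can be sketched roughly as follows. This is a minimal illustration of the described pattern (a ThreadPoolExecutor fanning out cases to a thread-safe worker, with lock-guarded progress tracking), not the actual code from llama-eval-new.py; the class name, the request payload shape, and the echo-style worker body are assumptions made to keep the sketch self-contained.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import threading

class EvalRunner:
    """Illustrative runner mirroring the commit's description, not the real script."""

    def __init__(self, model: str, threads: int = 4):
        self.model = model             # value of --model, included in each request
        self.threads = threads         # value of --threads
        self._lock = threading.Lock()  # guards the shared progress counter
        self.done = 0

    def _process_single_case(self, case: dict) -> dict:
        # The real script would send an HTTP request to llama-server here;
        # this sketch just echoes the case so it runs standalone.
        result = {"model": self.model, "case": case["id"]}
        with self._lock:               # thread-safe progress update
            self.done += 1
        return result

    def process(self, cases: list) -> list:
        # Fan the cases out across the pool and collect results as they finish.
        with ThreadPoolExecutor(max_workers=self.threads) as pool:
            futures = [pool.submit(self._process_single_case, c) for c in cases]
            return [f.result() for f in as_completed(futures)]
```

Routing each case through a single worker method keeps all shared state behind one lock, which is what makes the parallel fan-out safe.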
| File | Last commit | Date |
|------|-------------|------|
| llama-eval-discussion.md | docs: update llama-eval-discussion.md with session work summary | 2026-02-15 21:08:23 +02:00 |
| llama-eval-new.py | examples: add threading support and model parameter to llama-eval-new.py | 2026-02-15 21:08:23 +02:00 |
| llama-eval.py | add checkpointing | 2026-02-15 21:08:22 +02:00 |
| llama-server-simulator-plan.md | examples: add llama-server simulator for testing eval scripts | 2026-02-15 21:08:22 +02:00 |
| llama-server-simulator.py | examples: use cached dataset path in simulator to avoid HF Hub requests | 2026-02-15 21:08:23 +02:00 |
| simulator-summary.md | examples: add llama-server simulator for testing eval scripts | 2026-02-15 21:08:22 +02:00 |
| test-grader.py | examples: implement flexible grader system for answer validation | 2026-02-15 21:08:23 +02:00 |
| test-simulator.sh | examples: refactor test-simulator.sh for better readability | 2026-02-15 21:08:22 +02:00 |