llama.cpp/examples/llama-eval

llama-eval.py is a single-script evaluation runner that sends prompts to any OpenAI-compatible HTTP server (by default, llama-server) and scores the responses against reference answers.

./llama-server -m model.gguf --port 8033
python examples/llama-eval/llama-eval.py --path_server http://localhost:8033 --n_prompts 100 --prompt_source arc
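
Under the hood this amounts to posting each prompt to the server's OpenAI-compatible chat endpoint and reading the text out of the JSON reply. Below is a minimal sketch in Python, assuming the standard /v1/chat/completions route that llama-server exposes; the SERVER address and ask() helper are illustrative, and the payload llama-eval.py actually sends may differ.

# Minimal sketch of an OpenAI-compatible chat request; not the exact payload
# used by llama-eval.py.
import requests

SERVER = "http://localhost:8033"  # same address as --path_server above

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.0,  # deterministic decoding is typical for evals
    }
    resp = requests.post(f"{SERVER}/v1/chat/completions", json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("What is 7 * 8? Answer with just the number."))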

The supported tasks are:

  • GSM8K — grade-school math
  • AIME — competition math (integer answers)
  • MMLU — multi-domain multiple choice
  • HellaSwag — commonsense reasoning multiple choice
  • ARC — grade-school science multiple choice
  • WinoGrande — commonsense coreference multiple choice
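
How a response is judged correct depends on the task type. The helpers below are a hypothetical illustration (score_multiple_choice and score_integer_answer are not names taken from llama-eval.py): multiple-choice answers can be checked by extracting the model's chosen letter, and GSM8K/AIME-style answers by comparing the final integer in the response.

# Hypothetical scoring helpers, for illustration only; the actual answer
# extraction in llama-eval.py may differ.
import re

def score_multiple_choice(response: str, reference: str) -> bool:
    """Take the last standalone A-D letter in the response as the chosen
    option and compare it to the reference (MMLU, HellaSwag, ARC, WinoGrande)."""
    letters = re.findall(r"\b([A-D])\b", response.upper())
    return bool(letters) and letters[-1] == reference.upper()

def score_integer_answer(response: str, reference: int) -> bool:
    """Take the last integer in the response as the final answer
    (GSM8K or AIME-style problems)."""
    numbers = re.findall(r"-?\d+", response.replace(",", ""))
    return bool(numbers) and int(numbers[-1]) == reference

print(score_multiple_choice("The answer is C.", "C"))       # True
print(score_integer_answer("... so the total is 42.", 42))  # True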