llama.cpp/examples/training

This directory contains examples related to language model training using llama.cpp/GGML. So far, finetuning is technically functional (for FP32 models and limited hardware setups), but the code is very much a work in progress. Finetuning of Stories 260K and LLaMA 3.2 1B seems to work with 24 GB of memory.

For CPU training, compile llama.cpp without any additional backends such as CUDA. For CUDA training, offload the maximum number of GPU layers (e.g. -ngl 999).
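
As a minimal sketch, the two build configurations might look like the following (assuming a standard CMake setup from the repository root; adjust flags for your environment):

# CPU-only build: no additional backends
cmake -B build
cmake --build build --config Release

# CUDA build: enables GPU offload; pass -ngl 999 at runtime to offload all layers
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release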

Proof of concept:

export model_name=llama_3.2-1b && export quantization=f32
./build/bin/llama-finetune --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf -c 512 -b 512 -ub 512
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model finetuned-model.gguf
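
The commands above assume that the wikitext-2-raw dataset has already been downloaded and that an FP32 GGUF of the model exists under models/. A rough sketch of preparing both (the helper script path and converter options are assumptions based on the main llama.cpp tree; verify them against your checkout):

# Download the raw wikitext-2 dataset (creates wikitext-2-raw/)
./scripts/get-wikitext-2.sh

# Convert a Hugging Face checkpoint to an FP32 GGUF
python convert_hf_to_gguf.py path/to/Llama-3.2-1B --outtype f32 --outfile models/llama_3.2-1b-f32.gguf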

After training on the test set for 2 epochs, the perplexity of the finetuned model on that set should be lower than that of the original model.
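
To make the comparison concrete, the baseline can be obtained by running llama-perplexity on the original model with the same dataset and comparing the two reported PPL values (this mirrors the proof of concept above):

# Baseline perplexity of the unmodified model, for comparison
./build/bin/llama-perplexity --file wikitext-2-raw/wiki.test.raw -ngl 999 --model models/${model_name}-${quantization}.gguf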