quantize

This tool takes a GGUF input model file, typically in a high-precision format such as F32 or BF16, and converts it to a quantized format. Quantization reduces the precision of the model weights (e.g., from 32-bit floats to 4-bit integers), which shrinks the model's size and can speed up inference. This process, however, may introduce some accuracy loss, which is usually measured in perplexity (ppl) and/or Kullback–Leibler divergence (kld). The loss can be minimized by using a suitable imatrix file.
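For instance, an importance matrix can be produced with the llama-imatrix tool and then passed to the quantizer. A minimal sketch, assuming a calibration text file named calibration.txt (file names are illustrative):

# build an importance matrix from calibration text (file names are illustrative)
./llama-imatrix -m ./models/mymodel/ggml-model-f16.gguf -f calibration.txt -o imatrix.gguf

# use it when quantizing
./llama-quantize --imatrix imatrix.gguf ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M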

You can also use the GGUF-my-repo space on Hugging Face to build your own quants without any setup.

Note: the space is synced with llama.cpp main every 6 hours.

Example usage:

./llama-quantize [options] input-model-f32.gguf [output-model-quant.gguf] type [threads]

# from Hugging Face, obtain the official meta-llama/Llama-3.1-8B model weights and place them in ./models
ls ./models
config.json             model-00001-of-00004.safetensors  model-00004-of-00004.safetensors  README.md                tokenizer.json
generation_config.json  model-00002-of-00004.safetensors  model.safetensors.index.json      special_tokens_map.json  USE_POLICY.md
LICENSE                 model-00003-of-00004.safetensors  original                          tokenizer_config.json

# [Optional] for PyTorch .bin models like Mistral-7B
ls ./models
<folder containing weights and tokenizer json>

# install Python dependencies
python3 -m pip install -r requirements.txt

# convert the model to GGUF FP16 format
python3 convert_hf_to_gguf.py ./models/mymodel/

# quantize the model to 4-bits (using Q4_K_M method)
./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M

# update the gguf filetype to current version if older version is now unsupported
./llama-quantize ./models/mymodel/ggml-model-Q4_K_M.gguf ./models/mymodel/ggml-model-Q4_K_M-v2.gguf COPY

Run the quantized model:

# start inference on a gguf model
./llama-cli -m ./models/mymodel/ggml-model-Q4_K_M.gguf -cnv -p "You are a helpful assistant"
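
To quantify the accuracy loss mentioned above, the original and quantized models can be evaluated on the same text with the llama-perplexity tool and their perplexities compared. A minimal sketch, assuming an evaluation file named wiki.test.raw (the file name is illustrative):

# compare perplexity of the F16 and Q4_K_M models on the same evaluation text
./llama-perplexity -m ./models/mymodel/ggml-model-f16.gguf -f wiki.test.raw
./llama-perplexity -m ./models/mymodel/ggml-model-Q4_K_M.gguf -f wiki.test.raw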

Options:

  • --allow-requantize allows requantizing tensors that have already been quantized. Warning: this can severely reduce quality compared to quantizing from a 16-bit or 32-bit source
  • --leave-output-tensor will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing
  • --pure disables k-quant mixtures and quantizes all tensors to the same type
  • --imatrix uses data in file generated by llama-imatrix as importance matrix for quant optimizations (highly recommended)
  • --include-weights use the importance matrix only for the tensor(s) in the list. Cannot be used with --exclude-weights
  • --exclude-weights do not use the importance matrix for the tensor(s) in the list. Cannot be used with --include-weights
  • --output-tensor-type use a specific quant type for the output.weight tensor
  • --token-embedding-type use a specific quant type for the token embeddings tensor
  • --keep-split will generate the quantized model in the same shards as the input file; otherwise a single quantized file is produced

Advanced options:

  • --tensor-type quantize specific tensor(s) to specific quant types. Supports regex syntax. May be specified multiple times
  • --prune-layers prune (remove) the layers in the list
  • --target-bpw automatically choose quant types so that the overall model size matches a given bits per weight (bpw) average
  • --no-importance during bpw computation, treat all tensors equally instead of prioritizing some; may yield better quality for some models
  • --override-kv override model metadata by key in the quantized model. May be specified multiple times

Examples:

# naive Q4_K_M quantization using default settings and 8 CPU threads. Output will be "ggml-model-Q4_K_M.gguf"
./llama-quantize input-model-f32.gguf q4_k_m 8
# quantize model enabling re-quantization, leaving the output tensor unquantized and all others quantized at the same level (Q4_K)
./llama-quantize --allow-requantize --leave-output-tensor --pure input-model-f32.gguf q4_k_m 8
# quantize model using an importance matrix for specified tensors only (attn_v and ffn_down)
./llama-quantize --imatrix imatrix.gguf --include-weights attn_v --include-weights ffn_down input-model-f32.gguf q4_k_m 8
# quantize model setting output tensor to Q5_K, token embeddings to Q3_K, and keeping the input file's shards
./llama-quantize --imatrix imatrix.gguf --output-tensor-type q5_k --token-embedding-type q3_k --keep-split input-model-f32.gguf q4_k_m 8
# quantize model using a regex to quantize attn_k tensors in odd layers to Q5_K and attn_q tensors in even layers to Q3_K
./llama-quantize --imatrix imatrix.gguf --tensor-type "\.(\d*[13579])\.attn_k=q5_k" --tensor-type "\.(\d*[02468])\.attn_q=q3_k" input-model-f32.gguf q4_k_m 8
# quantize model setting tensors attn_v and ffn_down to Q5_K and pruning layers 20, 21, and 22
./llama-quantize --imatrix imatrix.gguf --tensor-type attn_v=q5_k --tensor-type ffn_down=q5_k --prune-layers 20,21,22 input-model-f32.gguf q4_k_m 8
# override expert used count metadata to 16, prune layers 20, 21, and 22 without quantizing the model (copy tensors), and use the specified name for the output file
./llama-quantize --imatrix imatrix.gguf --override-kv qwen3moe.expert_used_count=int:16 --prune-layers 20,21,22 input-model-f32.gguf pruned-model-f32.gguf copy 8
# quantize model targeting a specific bpw average and save the bpw computations to the default file. The quant type is optional and can be omitted
./llama-quantize --target-bpw 4.567 --keep-bpw-state --imatrix imatrix.gguf input-model-f32.gguf 8

Memory/Disk Requirements

When running the larger models, make sure you have enough disk space to store all the intermediate files. As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same. For example (Llama 3.1):

| Model | Original size | Quantized size (Q4_K_M) |
|------:|--------------:|------------------------:|
| 8B    | 32.1 GB       | 4.9 GB                  |
| 70B   | 280.9 GB      | 43.1 GB                 |
| 405B  | 1,625.1 GB    | 249.1 GB                |
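
As a rule of thumb, the quantized size is roughly parameter count × bits per weight / 8 bytes. A minimal sketch of the estimate for the 8B model at Q4_K_M (~4.89 bits/weight; the 8.03e9 parameter count is approximate):

# estimated Q4_K_M size in GB for an ~8.03B-parameter model
echo "8.03 * 10^9 * 4.8944 / 8 / 10^9" | bc -l   # ≈ 4.9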

Quantization

Several quantization methods are supported. They differ in the resulting model disk size and inference speed. For example:

meta-llama/Llama-3.1-8B

| Quant type | bits/weight | Size (GiB) | Prompt processing t/s @ 512 | Text generation t/s @ 128 |
|------------|------------:|-----------:|----------------------------:|--------------------------:|
| IQ1_S   | 2.0042  | 1.87  | 858.88 ±1.22  | 79.73 ±0.79 |
| IQ1_M   | 2.1460  | 2.01  | 847.99 ±0.47  | 72.92 ±0.14 |
| IQ2_XXS | 2.3824  | 2.23  | 852.39 ±0.85  | 79.86 ±0.22 |
| IQ2_XS  | 2.5882  | 2.42  | 826.99 ±12.51 | 78.04 ±0.46 |
| IQ2_S   | 2.7403  | 2.56  | 783.55 ±13.73 | 77.30 ±2.47 |
| IQ2_M   | 2.9294  | 2.74  | 787.68 ±7.00  | 74.44 ±0.15 |
| IQ3_XXS | 3.2548  | 3.04  | 813.88 ±6.53  | 73.95 ±0.20 |
| IQ3_XS  | 3.4977  | 3.27  | 708.71 ±1.26  | 71.67 ±0.54 |
| IQ3_S   | 3.6606  | 3.42  | 798.78 ±8.81  | 69.31 ±0.63 |
| IQ3_M   | 3.7628  | 3.52  | 768.70 ±13.73 | 70.15 ±0.33 |
| IQ4_XS  | 4.4597  | 4.17  | 771.80 ±11.38 | 77.51 ±0.20 |
| IQ4_NL  | 4.6818  | 4.38  | 818.55 ±9.58  | 76.71 ±0.20 |
| Q2_K_S  | 2.9697  | 2.78  | 798.91 ±6.40  | 90.01 ±0.12 |
| Q2_K    | 3.1593  | 2.95  | 784.45 ±7.85  | 79.85 ±0.20 |
| Q3_K_S  | 3.6429  | 3.41  | 752.17 ±7.94  | 71.68 ±0.22 |
| Q3_K_L  | 4.2979  | 4.02  | 761.17 ±7.55  | 69.38 ±0.49 |
| Q4_K_S  | 4.6672  | 4.36  | 818.55 ±9.58  | 76.71 ±0.20 |
| Q4_K_M  | 4.8944  | 4.58  | 821.81 ±21.44 | 71.93 ±1.52 |
| Q5_K_S  | 5.5704  | 5.21  | 752.52 ±0.99  | 69.53 ±0.18 |
| Q5_K_M  | 5.7036  | 5.33  | 758.69 ±7.43  | 67.23 ±1.08 |
| Q6_K    | 6.5633  | 6.14  | 812.01 ±10.82 | 58.67 ±3.13 |
| Q8_0    | 8.5008  | 7.95  | 865.09 ±8.30  | 50.93 ±0.08 |
| F16     | 16.0005 | 14.96 | 923.49 ±0.53  | 29.17 ±0.04 |
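
Figures of this kind can be reproduced with the llama-bench tool, although absolute numbers depend heavily on hardware and build options. A minimal sketch, assuming the quantized model from the earlier examples:

# measure prompt processing (512 tokens) and text generation (128 tokens) throughput
./llama-bench -m ./models/mymodel/ggml-model-Q4_K_M.gguf -p 512 -n 128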

Background information on llama-quantize