llama.cpp/models/templates
Xuan-Son Nguyen c15395f73c
common : implement new jinja template engine (#18462)
2026-01-16 11:22:06 +01:00
Apertus-8B-Instruct.jinja model : Apertus model implementation (#15852) 2025-10-02 20:43:22 +03:00
ByteDance-Seed-OSS.jinja chat : Seed OSS thinking + tool call support (#15552) 2025-08-29 14:53:41 +02:00
CohereForAI-c4ai-command-r-plus-tool_use.jinja
CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
GLM-4.6.jinja common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) 2025-11-18 18:54:15 +01:00
Kimi-K2-Instruct.jinja Fix Kimi-K2 tool-call parsing issues (#17376) 2025-12-08 14:32:04 +01:00
Kimi-K2-Thinking.jinja Fix Kimi-K2 tool-call parsing issues (#17376) 2025-12-08 14:32:04 +01:00
MiMo-VL.jinja common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) 2025-11-18 18:54:15 +01:00
MiniMax-M2.jinja common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) 2025-11-18 18:54:15 +01:00
Mistral-Small-3.2-24B-Instruct-2506.jinja jinja : Add Mistral-Small-3.2-24B-Instruct-2506.jinja (#14349) 2025-06-24 09:17:58 +03:00
NVIDIA-Nemotron-3-Nano-30B-A3B-BF16.jinja common : implement new jinja template engine (#18462) 2026-01-16 11:22:06 +01:00
NVIDIA-Nemotron-Nano-v2.jinja chat : nemotron thinking & toolcalling support (#15676) 2025-09-05 01:22:22 +02:00
NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
Qwen-QwQ-32B.jinja `server`: streaming of tool calls and thoughts when `--jinja` is on (#12379) 2025-05-25 01:48:08 +01:00
Qwen-Qwen2.5-7B-Instruct.jinja
Qwen-Qwen3-0.6B.jinja `server`: add `--reasoning-budget 0` to disable thinking (incl. qwen3 w/ enable_thinking:false) (#13771) 2025-05-26 00:30:51 +01:00
Qwen3-Coder.jinja common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) 2025-11-18 18:54:15 +01:00
README.md chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533) 2025-09-08 16:59:48 +02:00
deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja `server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607) 2025-02-13 10:05:16 +00:00
deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja `server`: fix tool-call of DeepSeek R1 Qwen, return reasoning_content (Command 7RB & DeepSeek R1) unless `--reasoning-format none` (#11607) 2025-02-13 10:05:16 +00:00
deepseek-ai-DeepSeek-V3.1.jinja chat : Deepseek V3.1 reasoning and tool calling support (OpenAI Style) (#15533) 2025-09-08 16:59:48 +02:00
fireworks-ai-llama-3-firefunction-v2.jinja
google-gemma-2-2b-it.jinja
ibm-granite-granite-3.3-2B-Instruct.jinja chat : support Granite model reasoning and tool call (#14864) 2025-08-06 20:27:30 +02:00
llama-cpp-deepseek-r1.jinja common : default content to an empty string (#18485) 2025-12-30 12:00:57 -06:00
llama-cpp-lfm2.jinja chat: Add LFM2 tool handling (#16763) 2025-10-27 23:54:01 +01:00
llama-cpp-rwkv-world.jinja llama : add jinja template for rwkv-world (#14665) 2025-07-14 07:43:43 +08:00
meetkai-functionary-medium-v3.1.jinja
meetkai-functionary-medium-v3.2.jinja
meta-llama-Llama-3.1-8B-Instruct.jinja
meta-llama-Llama-3.2-3B-Instruct.jinja
meta-llama-Llama-3.3-70B-Instruct.jinja
microsoft-Phi-3.5-mini-instruct.jinja
mistralai-Ministral-3-14B-Reasoning-2512.jinja common : add parser for ministral/mistral large 3/devstral 2 (#17713) 2025-12-09 17:31:04 -06:00
mistralai-Mistral-Nemo-Instruct-2407.jinja
moonshotai-Kimi-K2.jinja model : add Kimi-K2 support (#14654) 2025-07-15 21:54:22 +02:00
openai-gpt-oss-120b.jinja gpt-oss: implement harmony parsing (#15181) 2025-08-14 17:23:11 +03:00
unsloth-Apriel-1.5.jinja common : Generalized XML-style tool-call parsing with streaming support (GLM 4.5/4.6 + MiniMax M2 + SeedOSS + Kimi-K2 + Qwen3-Coder + Apriel-1.5 + Xiaomi-MiMo) (#16932) 2025-11-18 18:54:15 +01:00
unsloth-mistral-Devstral-Small-2507.jinja mtmd : add support for Voxtral (#14862) 2025-07-28 15:01:48 +02:00

README.md

These templates can be updated with the following commands:

./scripts/get_chat_template.py CohereForAI/c4ai-command-r-plus tool_use      > models/templates/CohereForAI-c4ai-command-r-plus-tool_use.jinja
./scripts/get_chat_template.py CohereForAI/c4ai-command-r7b-12-2024 default  > models/templates/CohereForAI-c4ai-command-r7b-12-2024-default.jinja
./scripts/get_chat_template.py CohereForAI/c4ai-command-r7b-12-2024 rag      > models/templates/CohereForAI-c4ai-command-r7b-12-2024-rag.jinja
./scripts/get_chat_template.py CohereForAI/c4ai-command-r7b-12-2024 tool_use > models/templates/CohereForAI-c4ai-command-r7b-12-2024-tool_use.jinja
./scripts/get_chat_template.py deepseek-ai/DeepSeek-R1-Distill-Llama-8B      > models/templates/deepseek-ai-DeepSeek-R1-Distill-Llama-8B.jinja
./scripts/get_chat_template.py deepseek-ai/DeepSeek-R1-Distill-Qwen-32B      > models/templates/deepseek-ai-DeepSeek-R1-Distill-Qwen-32B.jinja
./scripts/get_chat_template.py fireworks-ai/llama-3-firefunction-v2          > models/templates/fireworks-ai-llama-3-firefunction-v2.jinja
./scripts/get_chat_template.py google/gemma-2-2b-it                          > models/templates/google-gemma-2-2b-it.jinja
./scripts/get_chat_template.py meetkai/functionary-medium-v3.1               > models/templates/meetkai-functionary-medium-v3.1.jinja
./scripts/get_chat_template.py meetkai/functionary-medium-v3.2               > models/templates/meetkai-functionary-medium-v3.2.jinja
./scripts/get_chat_template.py meta-llama/Llama-3.1-8B-Instruct              > models/templates/meta-llama-Llama-3.1-8B-Instruct.jinja
./scripts/get_chat_template.py meta-llama/Llama-3.2-3B-Instruct              > models/templates/meta-llama-Llama-3.2-3B-Instruct.jinja
./scripts/get_chat_template.py meta-llama/Llama-3.3-70B-Instruct             > models/templates/meta-llama-Llama-3.3-70B-Instruct.jinja
./scripts/get_chat_template.py microsoft/Phi-3.5-mini-instruct               > models/templates/microsoft-Phi-3.5-mini-instruct.jinja
./scripts/get_chat_template.py mistralai/Mistral-Nemo-Instruct-2407          > models/templates/mistralai-Mistral-Nemo-Instruct-2407.jinja
./scripts/get_chat_template.py NousResearch/Hermes-2-Pro-Llama-3-8B tool_use > models/templates/NousResearch-Hermes-2-Pro-Llama-3-8B-tool_use.jinja
./scripts/get_chat_template.py NousResearch/Hermes-3-Llama-3.1-8B tool_use   > models/templates/NousResearch-Hermes-3-Llama-3.1-8B-tool_use.jinja
./scripts/get_chat_template.py Qwen/Qwen2.5-7B-Instruct                      > models/templates/Qwen-Qwen2.5-7B-Instruct.jinja
./scripts/get_chat_template.py Qwen/QwQ-32B                                  > models/templates/Qwen-QwQ-32B.jinja
./scripts/get_chat_template.py Qwen/Qwen3-0.6B                               > models/templates/Qwen-Qwen3-0.6B.jinja
./scripts/get_chat_template.py zai-org/GLM-4.5                               > models/templates/zai-org-GLM-4.5.jinja
./scripts/get_chat_template.py deepseek-ai/DeepSeek-V3.1                     > models/templates/deepseek-ai-DeepSeek-V3.1.jinja
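
For a model whose template is not listed above, the same invocation pattern should apply. The model id and output filename below are hypothetical placeholders, and the last line is only a sketch of how such a file is typically supplied at runtime, assuming llama-server's --jinja and --chat-template-file options:

./scripts/get_chat_template.py some-org/Some-New-Model tool_use              > models/templates/some-org-Some-New-Model-tool_use.jinja   # hypothetical model id and output path
llama-server -m model.gguf --jinja --chat-template-file models/templates/some-org-Some-New-Model-tool_use.jinja                           # sketch: enable Jinja rendering with a custom template file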