llama.cpp/tools/mtmd/models
Kwa Jie Hao 98d2d2884e
mtmd: Add support for Reka Edge 2603 (#21616)
* feat: (vocab) fix stray text appended in llama_decode_text

Remove accidental concatenation of the full `text` string when
formatting UNK_BYTE hex escapes. Only the closing "]" should be appended.
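A minimal stdlib-only sketch of the corrected behavior described above; the exact `[UNK_BYTE_0x..]` format string and helper name are assumptions, not the actual llama_decode_text code:

```cpp
#include <cstdio>
#include <string>

// For each un-decodable byte, emit a hex escape closed by "]" alone.
// The bug described above appended the full remaining `text` string
// before the closing bracket as well.
static std::string format_unk_bytes(const std::string & text) {
    std::string result;
    for (unsigned char c : text) {
        char buf[16];
        std::snprintf(buf, sizeof(buf), "[UNK_BYTE_0x%02x", c);
        result += buf;
        result += "]"; // only the closing bracket, not "]" + text
    }
    return result;
}
```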

* feat(mtmd): add Yasa2 vision encoder support

Add a Yasa2 (ConvNeXtV2-based) vision encoder for reka-edge:
- Register PROJECTOR_TYPE_YASA2 and tensor name definitions
- Add yasa2_block/yasa2_stage model structs
- Implement graph builder with ConvNeXt stages, GRN, adaptive pooling
- Wire into clip.cpp switch statements and mtmd.cpp init_vision
- Use mtmd_image_preprocessor_fixed_size for image preprocessing

* feat(chat): add reka-edge template handler (tools, thinking)

- Add chat-reka.cpp/h implementing PEG-based parser for reka-edge format
- Add Reka-Edge.jinja chat template
- Detect reka-edge template in try_specialized_template()
- Add LLAMA_EXAMPLE_MTMD to chat-template-file arg

* feat: add reka vlm to gguf conversion script

Converts Reka Yasa2 Hugging Face checkpoints to GGUF format:
- Text decoder: Llama-arch with tiktoken/BPE vocab
- Mmproj (--mmproj): ConvNeXt vision backbone + language_projection
- Generates 2D sincos positional embeddings for vision encoder
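For context, a sketch of fixed 2D sin-cos positional embeddings in the common ViT/MAE style; the exact layout the conversion script generates is an assumption. Half of `dim` encodes the row index and half the column index, each alternating sin/cos over a geometric frequency ladder:

```cpp
#include <cmath>
#include <vector>

// Returns grid_h * grid_w * dim floats, row-major over (h, w).
// Assumes dim is divisible by 4.
static std::vector<float> sincos_pos_embd_2d(int grid_h, int grid_w, int dim) {
    const int half = dim / 2; // channels per axis
    std::vector<float> out((size_t) grid_h * grid_w * dim);
    for (int h = 0; h < grid_h; ++h) {
        for (int w = 0; w < grid_w; ++w) {
            float * row = out.data() + ((size_t) h * grid_w + w) * dim;
            for (int i = 0; i < half / 2; ++i) {
                const float omega = std::pow(10000.0f, -2.0f * i / half);
                row[2*i]            = std::sin(h * omega); // row axis, sin
                row[2*i + 1]        = std::cos(h * omega); // row axis, cos
                row[half + 2*i]     = std::sin(w * omega); // col axis, sin
                row[half + 2*i + 1] = std::cos(w * omega); // col axis, cos
            }
        }
    }
    return out;
}
```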

* test: add Reka Edge chat template and parser tests

- test-chat-template: oracle tests comparing Jinja engine output vs
  common_chat_templates_apply for text, tools, thinking, images, video
- test-chat: PEG parser tests for Reka Edge format, round-trip tests
  for image/video content parts, common path integration tests

* scripts: add Reka Edge mixed quantization helper

Q4_0 base quantization with Q8_0 override for the last 8 transformer
blocks (layers 24-31) via --tensor-type regex.
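A sketch of the kind of layer-range pattern such a --tensor-type override would use; the exact regex passed to the quantization tool is an assumption:

```cpp
#include <regex>
#include <string>

// Match tensor names belonging to transformer blocks 24..31,
// e.g. "blk.24.attn_q.weight", so they can be overridden to Q8_0.
static bool is_q8_override_block(const std::string & tensor_name) {
    static const std::regex re("blk\\.(2[4-9]|3[01])\\..*");
    return std::regex_match(tensor_name, re);
}
```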

* fix: adapt chat-reka and tests to upstream API

- Use autoparser::generation_params (not templates_params)
- Add p.prefix(generation_prompt) to PEG parser
- Simplify reasoning parser to match LFM2 pattern
- Remove image/video oracle tests (unsupported by oaicompat parser;
  no other multimodal models test this path)

* fix: avoid duplicate tensor loading in yasa2 vision encoder

TN_YASA_PATCH_W and TN_PATCH_EMBD both resolve to "v.patch_embd.weight",
causing the same tensor to be loaded twice into ctx_data and overflowing
the memory pool. Reuse the tensors already loaded by the common section.
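A minimal sketch of that kind of dedup: when two logical names resolve to the same GGUF tensor name, load the backing data once and reuse it. The struct and types here are illustrative stand-ins, not the actual clip.cpp loader API:

```cpp
#include <map>
#include <string>

struct tensor_loader {
    std::map<std::string, int> loaded; // gguf name -> slot (stand-in for ggml_tensor *)
    int next_slot = 0;

    // Returns the same slot for repeated requests of one gguf name,
    // so the data is never copied into the pool twice.
    int get_tensor(const std::string & gguf_name) {
        auto it = loaded.find(gguf_name);
        if (it != loaded.end()) {
            return it->second; // reuse the already-loaded tensor
        }
        return loaded[gguf_name] = next_slot++;
    }
};
```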

* chore: update image pre-processing settings

The reka-edge model depends on the following settings in an older
fork of llama.cpp:
1. Fixed square resize
2. BICUBIC
3. add_padding=false

In current llama.cpp, this means setting:
- image_resize_algo = RESIZE_ALGO_BICUBIC
- image_resize_pad = false

* chore: remove reka gguf conversion script

* chore: remove reka quantization script

* chore: remove unnecessary changes from PR scope

This commit removes a few changes that are out of scope for this PR:
1. BPE decoder bug fix - this affects Reka Edge because a bug in our
tokenization fails to represent <think> tokens as special tokens.
However, this isn't meant to be a thinking model, so when run with
--reasoning off the edge case does not affect us.

2. --chat-template-file support from llama-mtmd-cli - the focus is on
llama-server and the reka edge gguf contains the necessary metadata
to detect the chat template

3. reka edge oracle test cases - no other model has similar test cases,
so I removed them for consistency

* chore: remove unnecessary ggml_cast

This commit removes unnecessary ggml_cast after updating the
reka vlm -> gguf conversion script on hugging face.

* chore: remove redundant code

* chore: remove unnecessary ggml_cont calls

This commit removes all ggml_cont calls except the four that
precede ggml_reshape_3d/ggml_reshape_4d. Those are necessary
because ggml_reshape recomputes strides assuming contiguous
layout and asserts ggml_is_contiguous.

Other operations (ggml_mean, ggml_add, ggml_mul, etc.) use
stride-based indexing and handle non-contiguous inputs correctly,
so ggml_cont can be safely removed for those.
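The contiguity requirement above can be illustrated with a simplified 2D version of the check; this mirrors the idea behind ggml_is_contiguous rather than the real implementation (ggml tracks strides in bytes across four dimensions):

```cpp
#include <cstdint>

struct tensor2d {
    int64_t ne[2]; // extents
    int64_t nb[2]; // strides, in elements for this sketch
};

// A reshape that recomputes strides from the new shape implicitly assumes a
// densely packed layout; a strided view would be silently misindexed, which
// is why reshape asserts contiguity instead.
static bool is_contiguous(const tensor2d & t) {
    return t.nb[0] == 1 && t.nb[1] == t.ne[0];
}
```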

* chore: remove unnecessary ggml_repeat calls

This commit removes unnecessary ggml_repeat calls because the underlying
ops already broadcast automatically.

Every ggml_repeat in yasa2.cpp was expanding a smaller tensor to match
a larger one's shape before passing both to an elementwise op (ggml_add,
ggml_sub, ggml_mul, or ggml_div). This is unnecessary because all four
of these ops already support broadcasting internally.

* chore: restore ggml_cont needed for cpu operations

* refactor: locate reka chat template handler in chat.cpp

* chore: remove unnecessary warmup tokens

* chore: add code comments on image_resize_pad

* chore: remove custom reka parsing code

* chore: revert common/chat.cpp

* Uncomment debug logging for PEG input parsing

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
2026-04-21 20:02:49 +02:00
cogvlm.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
conformer.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
deepseekocr.cpp mtmd: Add DeepSeekOCR Support (#17400) 2026-03-25 19:57:40 +01:00
dotsocr.cpp mtmd: support dots.ocr (#17575) 2026-04-09 12:16:38 +02:00
gemma4a.cpp mtmd: add Gemma 4 audio conformer encoder support (#21421) 2026-04-12 14:15:26 +02:00
gemma4v.cpp model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) 2026-04-02 17:10:32 +02:00
glm4v.cpp mtmd: Add DeepSeekOCR Support (#17400) 2026-03-25 19:57:40 +01:00
hunyuanocr.cpp model : add HunyuanOCR support (#21395) 2026-04-05 23:32:14 +02:00
internvl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
kimik25.cpp model: Add Kimi-K2.5 support (#19170) 2026-02-11 16:47:30 +01:00
kimivl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
llama4.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
llava.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
minicpmv.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
mobilenetv5.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
models.h mtmd: Add support for Reka Edge 2603 (#21616) 2026-04-21 20:02:49 +02:00
nemotron-v2-vl.cpp mtmd : Add Nemotron Nano 12B v2 VL support (#19547) 2026-02-14 14:07:00 +01:00
paddleocr.cpp model: Add PaddleOCR-VL model support (#18825) 2026-02-19 17:05:25 +01:00
pixtral.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
qwen2vl.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
qwen3a.cpp mtmd: qwen3 audio support (qwen3-omni and qwen3-asr) (#19441) 2026-04-12 23:57:25 +02:00
qwen3vl.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00
siglip.cpp mtmd: Add DeepSeekOCR Support (#17400) 2026-03-25 19:57:40 +01:00
step3vl.cpp model : support step3-vl-10b (#21287) 2026-04-08 09:51:31 +02:00
whisper-enc.cpp mtmd : add MERaLiON-2 multimodal audio support (#21756) 2026-04-11 14:15:48 +02:00
yasa2.cpp mtmd: Add support for Reka Edge 2603 (#21616) 2026-04-21 20:02:49 +02:00
youtuvl.cpp mtmd: add clip_graph::build_mm() (#20751) 2026-03-19 13:11:39 +01:00