llama.cpp/tools/mtmd/models
Simranjeet Singh a61c8bc3bf
mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256)
* Add Gemma3nVisionModel, a MobileNetV5 vision encoder converter, to convert_hf_to_gguf.py. Add gemma3n to the vision projectors in gguf-py/gguf/constants.py.

* Add mobilenetv5 impl

* Fix comments, remove unused vars

* Fix permute and remove transpose of projection weights

* Fix comments, remove debugging prints from hf_to_gguf

* 1. Hard-code image_mean = 0 and image_std = 1 (see the sketch after this list)
2. Use the existing tensor mapping logic
3. Remove the redundant chat-template replacement of the soft-token placeholder with the media placeholder
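A minimal sketch of why hard-coding zero mean and unit std works; `normalize_image_u8_to_f32` is the helper named in a later commit, but its exact signature in clip.cpp is assumed here:

```cpp
#include <cstdint>

// With mean = 0 and std = 1, the usual (x/255 - mean)/std normalization
// collapses to a plain u8 -> [0, 1] rescale, so no separate
// rescale_image_u8_to_f32 helper is needed.
static float normalize_pixel_u8_to_f32(uint8_t v, float mean, float std) {
    return (v / 255.0f - mean) / std; // mean = 0, std = 1  =>  v / 255.0f
}
```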

* 1. Move mobilenetv5 helper declarations into the `clip_graph_mobilenetv5` struct and their definitions into mobilenetv5.cpp
2. Remove the unused `clip_is_gemma3n` function declaration and definition
3. Remove the redundant `rescale_image_u8_to_f32` function and use `normalize_image_u8_to_f32` with zero mean and unit std
4. Calculate n_patches using image_size / patch_size (sketched below)
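A hedged sketch of the n_patches calculation; the function and parameter names are illustrative, not the exact clip.cpp ones:

```cpp
// A convolutional encoder like MobileNetV5 has no [CLS]-style token, so the
// number of output embeddings is just the spatial grid of the final feature
// map: one patch per (patch_size x patch_size) cell of the input image.
static int n_patches_from_image(int image_size, int patch_size) {
    const int n_per_side = image_size / patch_size;
    return n_per_side * n_per_side;
}
```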

* Remove obsolete comments

* - convert_hf_to_gguf.py & constants.py & tensor_mapping.py: Use an explicit mapping: a custom map for the double-indexed blocks, and tensor_mapping.py for the rest
- convert_hf_to_gguf.py: Unsqueeze the stem bias and layer-scale tensors to the correct shape while converting to GGUF (see the sketch after this list)
- mobilenetv5.cpp: Remove the explicit reshaping of the stem bias and layer scale, which is now handled while converting to GGUF; replace fprintf with LOG_*
- clip.cpp: Remove unused embedding and hard_emb_norm tensor loading
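A hedged sketch of why the 1-D tensors are unsqueezed at conversion time; the function below is illustrative, not the actual mobilenetv5.cpp code:

```cpp
#include "ggml.h"

// In ggml, a conv2d result has shape [W, H, C, N] (ne0 fastest), and
// ggml_add broadcasts the second operand only where its dims are 1 (or
// divide the first operand's). A raw [C] bias or layer-scale tensor
// therefore cannot be added directly; storing it as [1, 1, C, 1] in the
// GGUF removes the per-graph reshape the C++ side previously did.
static struct ggml_tensor * add_channel_bias(
        struct ggml_context * ctx,
        struct ggml_tensor  * conv_out, // [W, H, C, N]
        struct ggml_tensor  * bias) {   // stored as [1, 1, C, 1]
    return ggml_add(ctx, conv_out, bias); // broadcasts over W, H and N
}
```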

* - Rename tensors to v.conv..., v.blk..., v.msfa... to better align with existing terminology

* Fix stem conv bias name

* Remove explicit handling of bias term for stem conv

* - Change the order of addition in `project_per_layer_inputs` so the vision `inp_per_layer` tensor can be broadcast (see the sketch after this list)
- Simplify the vision-embeddings path of `get_per_layer_inputs` to output a broadcastable [n_embd_altup, n_layer, 1] tensor
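A hedged sketch of the operand-order fix; the tensor and function names are illustrative rather than the exact ones in the model code:

```cpp
#include "ggml.h"

// ggml_add(ctx, a, b) broadcasts only the second operand b into the shape
// of a, so the smaller tensor must come second:
//
//   proj          : [n_embd_altup, n_layer, n_tokens]
//   inp_per_layer : [n_embd_altup, n_layer, 1]   (vision path)
//
// ggml_add(ctx, inp_per_layer, proj) fails the broadcast check, while
// ggml_add(ctx, proj, inp_per_layer) repeats inp_per_layer across tokens.
static struct ggml_tensor * add_per_layer_inputs(
        struct ggml_context * ctx,
        struct ggml_tensor  * proj,
        struct ggml_tensor  * inp_per_layer) {
    return ggml_add(ctx, proj, inp_per_layer);
}
```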

* clean up conversion script

* fix code style

* also preserve audio tensors

* trailing space

* split arch into A (audio) and V (vision)

* rm unused gemma3 func

* fix alignment

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2026-01-09 23:42:38 +01:00
..
cogvlm.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
conformer.cpp model : add ASR support for LFM2-Audio-1.5B (conformer) (#18106) 2025-12-19 00:18:01 +01:00
glm4v.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
internvl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
kimivl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
llama4.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
llava.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
minicpmv.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
mobilenetv5.cpp mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) 2026-01-09 23:42:38 +01:00
models.h mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) 2026-01-09 23:42:38 +01:00
pixtral.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
qwen2vl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
qwen3vl.cpp clip: move model cgraphs into their own files (#17965) 2025-12-12 21:14:48 +01:00
siglip.cpp model : mtmd : make input norm optional in LFM2-VL (#18594) 2026-01-04 18:50:02 +01:00
whisper-enc.cpp mtmd : Adding support for Nvidia Music Flamingo Model (#18470) 2025-12-31 12:13:23 +01:00
youtuvl.cpp model: support youtu-vl model (#18479) 2026-01-01 19:25:54 +01:00