llama.cpp/tools/mtmd

| File | Last commit | Date |
|------|-------------|------|
| legacy-models | requirements : update transformers/torch for Embedding Gemma (#15828) | 2025-09-09 |
| models | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| CMakeLists.txt | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| README.md | mtmd : remove libllava, remove clip-quantize-cli (⚠️ breaking change) (#13460) | 2025-05-13 |
| clip-graph.h | model: support GLM4V vision encoder (#18042) | 2025-12-16 |
| clip-impl.h | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| clip-model.h | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| clip.cpp | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| clip.h | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| deprecation-warning.cpp | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 |
| mtmd-audio.cpp | mtmd: mtmd_audio_streaming_istft (#18645) | 2026-01-06 |
| mtmd-audio.h | mtmd: mtmd_audio_streaming_istft (#18645) | 2026-01-06 |
| mtmd-cli.cpp | model : add ASR support for LFM2-Audio-1.5B (conformer) (#18106) | 2025-12-19 |
| mtmd-helper.cpp | mtmd: explicitly forbidden inclusion of private header and libcommon (#17946) | 2025-12-12 |
| mtmd-helper.h | mtmd: add mtmd_log_set (#17268) | 2025-11-14 |
| mtmd.cpp | mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256) | 2026-01-09 |
| mtmd.h | mtmd: clarify that we no longer accept AI-generated PRs (#18406) | 2025-12-28 |
| requirements.txt | requirements : update transformers/torch for Embedding Gemma (#15828) | 2025-09-09 |
| test-1.jpeg | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 |
| test-2.mp3 | mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784) | 2025-05-27 |
| tests.sh | model : add ASR support for LFM2-Audio-1.5B (conformer) (#18106) | 2025-12-19 |

README.md

Multimodal Support in llama.cpp

This directory provides multimodal capabilities for llama.cpp. Initially intended as a showcase for running LLaVA models, it has expanded significantly over time to support various other vision-capable models. As a result, LLaVA is no longer the only multimodal architecture supported.

[!IMPORTANT]

Multimodal support can be viewed as a sub-project within llama.cpp. It is under very heavy development, and breaking changes are expected.

The naming and structure related to multimodal support have evolved, which might cause some confusion. Here's a brief timeline to clarify:

  • #3436: Initial support for LLaVA 1.5 was added, introducing llava.cpp and clip.cpp. The llava-cli binary was created for model interaction.
  • #4954: Support for MobileVLM was added, becoming the second vision model supported. This built upon the existing llava.cpp, clip.cpp, and llava-cli infrastructure.
  • Expansion & Fragmentation: Many new models were subsequently added (e.g., #7599, #10361, #12344, and others). However, llava-cli lacked support for the increasingly complex chat templates required by these models. This led to the creation of model-specific binaries like qwen2vl-cli, minicpmv-cli, and gemma3-cli. While functional, this proliferation of command-line tools became confusing for users.
  • #12849: libmtmd was introduced as a replacement for llava.cpp. Its goals include providing a single, unified command-line interface, improving the user/developer experience (UX/DX), and supporting both audio and image inputs.
  • #13012: mtmd-cli was added, consolidating the various model-specific CLIs into a single tool powered by libmtmd.

Pre-quantized models

See the list of pre-quantized models here.
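
As a loose illustration, pre-quantized models can typically be fetched and run directly from Hugging Face via the -hf flag of llama-mtmd-cli; the repository name below is only a placeholder for the kind of entry found in that list, and for supported repositories the matching mmproj file is typically downloaded alongside the model.

```sh
# Sketch only: pull a pre-quantized multimodal model from Hugging Face and
# run it on the sample image in this directory.
# The repository name is illustrative; see the list referenced above.
llama-mtmd-cli -hf ggml-org/gemma-3-4b-it-GGUF \
    --image test-1.jpeg \
    -p "Describe this image."
```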

How it works and what is mmproj?

Multimodal support in llama.cpp works by encoding images into embeddings using a separate model component, and then feeding these embeddings into the language model.

This approach keeps the multimodal components distinct from the core libllama library. Separating these allows for faster, independent development cycles. While many modern vision models are based on Vision Transformers (ViTs), their specific pre-processing and projection steps can vary significantly. Integrating this diverse complexity directly into libllama is currently challenging.

Consequently, running a multimodal model typically requires two GGUF files:

  1. The standard language model file.
  2. A corresponding multimodal projector (mmproj) file, which handles the image encoding and projection.
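
A minimal sketch of what this looks like in practice with mtmd-cli, assuming placeholder names for the two GGUF files and using the sample image shipped in this directory:

```sh
# Sketch only: the language model GGUF and the mmproj GGUF are passed
# separately; mtmd-cli combines them at load time. File names are placeholders.
llama-mtmd-cli \
    -m model.gguf \
    --mmproj mmproj-model-f16.gguf \
    --image test-1.jpeg \
    -p "What is in this picture?"
```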

What is libmtmd?

As outlined in the history, libmtmd is the modern library designed to replace the original llava.cpp implementation for handling multimodal inputs.

Built upon clip.cpp (similar to llava.cpp), libmtmd offers several advantages:

  • Unified Interface: Aims to consolidate interaction for various multimodal models.
  • Improved UX/DX: Features a more intuitive API, inspired by the Processor class in the Hugging Face transformers library.
  • Flexibility: Designed to support multiple input types (text, audio, images) while respecting the wide variety of chat templates used by different models.

How to obtain mmproj

Multimodal projector (mmproj) files are specific to each model architecture.

For the following models, you can use convert_hf_to_gguf.py with the --mmproj flag to get the mmproj file (see the conversion sketch after the list):

  • Gemma 3; see the guide here. Note: the 1B variant does not have vision support
  • SmolVLM (from HuggingFaceTB)
  • SmolVLM2 (from HuggingFaceTB)
  • Pixtral 12B - only works with a transformers-compatible checkpoint
  • Qwen 2 VL and Qwen 2.5 VL (from Qwen)
  • Mistral Small 3.1 24B
  • InternVL 2.5 and InternVL 3 from OpenGVLab (note: conversion of the InternVL3-*-hf models is not supported, only the non-HF versions; the InternLM2Model text model is also not supported)
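
As a hedged sketch of the conversion step, assuming the command is run from the repository root against a locally downloaded Hugging Face checkpoint (the input directory and output file name below are placeholders):

```sh
# Sketch only: produce the mmproj GGUF for a supported Hugging Face checkpoint.
# ./my-model-hf and the --outfile name are placeholders.
python convert_hf_to_gguf.py ./my-model-hf \
    --mmproj \
    --outfile mmproj-my-model-f16.gguf
```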

For older models, please refer to the relevant guide for instructions on how to obtain or create them:

NOTE: conversion scripts are located under tools/mtmd/legacy-models