llama.cpp/tools/mtmd/models
Latest commit: 6e89a8b296 by liyang, "Refactor JinaCLIP vision mmproj mapping to use tensor_mapping table" (2026-01-28 13:06:10 +00:00)
cogvlm.cpp       clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
conformer.cpp    mtmd : Fix ASR for LFM2.5-Audio-1.5B (#18876)  2026-01-16 11:23:08 +01:00
glm4v.cpp        model: support GLM4V vision encoder (#18042)  2025-12-16 11:25:26 +01:00
internvl.cpp     clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
jinaclip2.cpp    Refactor JinaCLIP vision mmproj mapping to use tensor_mapping table  2026-01-28 13:06:10 +00:00
kimivl.cpp       clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
llama4.cpp       clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
llava.cpp        clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
minicpmv.cpp     clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
mobilenetv5.cpp  mtmd: Add Gemma3n multimodal support with MobileNetV5 vision encoder (#18256)  2026-01-09 23:42:38 +01:00
models.h         address #16574; fold CLI into mtmd-cli; use ggml_rope_ext + bicubic; switch to 'jinaclip2'; fix converter constants  2026-01-26 08:44:46 +00:00
pixtral.cpp      clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
qwen2vl.cpp      clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
qwen3vl.cpp      clip: move model cgraphs into their own files (#17965)  2025-12-12 21:14:48 +01:00
siglip.cpp       model : mtmd : make input norm optional in LFM2-VL (#18594)  2026-01-04 18:50:02 +01:00
whisper-enc.cpp  mtmd : Adding support for Nvidia Music Flamingo Model (#18470)  2025-12-31 12:13:23 +01:00
youtuvl.cpp      model: support youtu-vl model (#18479)  2026-01-01 19:25:54 +01:00