llama.cpp/src/models
Ryan Mangeno dfc959b886
model : Granite Embedding support (#15641)
ModernBERT, but without `head.norm`, so converting and running any other ModernBERT model will currently fail. PRs adding `head.norm` support are welcome!

* constants and tensor mappings for ModernBERT support; model not supported yet, working on getting encoder-only conversion to work

* conversion now working, hf -> gguf

* working on support, now working on building graph

* some cleanup

* cleanup

* continuing

* correct tensor shape for qkv

* fixed tensor mappings and working on building graph

* tensor debugging now works (llama-eval-callback); instead of simulating the gate split with views, GEGLU is now used, which does exactly this
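The GEGLU change above fuses the gate split and activation: the fused up/gate projection output is split in half, GELU is applied to one half, and the result is multiplied elementwise with the other half. A minimal sketch in plain Python; which half is the gate is a conversion detail, and activating the first half here is an assumption, not read from the actual graph code:

```python
import math

def gelu(x: float) -> float:
    # tanh approximation of GELU, as commonly used in inference kernels
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def geglu(wi_out: list[float]) -> list[float]:
    # wi_out is the output of the fused up/gate projection:
    # first half = gate (activated), second half = up (an assumption here).
    # GEGLU(x) = GELU(gate) * up -- equivalent to splitting with views,
    # activating one view, and multiplying elementwise.
    n = len(wi_out) // 2
    gate, up = wi_out[:n], wi_out[n:]
    return [gelu(g) * u for g, u in zip(gate, up)]
```

A fused GEGLU op avoids materializing the two view tensors and the separate activation node in the graph.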

* cleanup

* cleanup

* cleanup

* more cleanup

* ubatch issues: the equal-seqs assert in llama-graph.cpp keeps failing when building attention; running llama-embedding with --ubatch-size 1 makes it work, but this needs more investigation

* added CLS token per the previous ModernBERT attempt; still working through the rest

* fixed pre-tokenizer and still working through the previous PR

* working through the previous attempt: implemented more accurate conversion per it, and added local sliding-window attention that alternates every third layer

* fixed pre-tokenizer

* working on swa with local and global alternating attention

* some cleanup and now fails on build attn

* starting to work; some cleanup; currently failing on last-layer construction in graph build

* alternating rope implemented and modern bert graph build succeeds

* fixed assert for equal ubatch seqs

* cleanup

* added mask check in vocab

* fixed alternating RoPE: hparams.rope_freq_base_train and hparams.rope_freq_base_train_swa were the same; set them to the correct values
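With the fix above, global-attention and local (SWA) layers each get their own RoPE frequency base. A minimal sketch of the per-layer selection; the default values are illustrative assumptions only (ModernBERT-style configs commonly use a large global theta and a smaller local one), not values read from any model:

```python
def rope_freq_base(is_swa_layer: bool,
                   freq_base_train: float = 160000.0,
                   freq_base_train_swa: float = 10000.0) -> float:
    # Sketch of the fixed behavior: SWA (local-attention) layers use their
    # own theta base instead of silently sharing the global one, which was
    # the bug described above. Default values are assumptions.
    return freq_base_train_swa if is_swa_layer else freq_base_train
```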

* reuse variable

* removed repeat

* the standard SWA method can be used instead of adding a new enum value LLAMA_SWA_TYPE_LOCAL

* corrected SWA layer indexing: it is supposed to be 0, 3, 6, ... instead of 1, 4, 7, ...

* more modular hparam setting

* replaced attn out norm with ffn_norm, and cosine similarity between HF embeddings and llama.cpp embeddings went way up, from 0.05 to 0.24; replaced the cacheless KV with SWA (TODO per the previous conversion)
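The metric used above to validate graph changes is cosine similarity between an embedding produced by HF transformers and the one produced by llama.cpp for the same input; values near 1.0 mean the two graphs agree. A dependency-free sketch of that check:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Compare a reference embedding (e.g. from HF transformers) against the
    # llama.cpp one: dot(a, b) / (|a| * |b|). Near 1.0 = the graphs agree;
    # near 0.0 = something in the graph build is off.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Because the measure is scale-invariant, it tolerates differing normalization between the two stacks while still catching structural graph errors.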

* Update gguf-py/gguf/tensor_mapping.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf_update.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-vocab.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-graph.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-arch.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* removed redundant hparam set

* enums for model sizes

* conversion supported for ModernBERT models rather than just granite-small

* Update src/llama-model.cpp

Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>

* fixed ordering of enum for freq_base_swa

* fixed where the residual was added; now gives much better embeddings

* re-added cacheless logic

* removing whitespace

* conversion now working for SWA pattern (dense every n layers)
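The "dense every n layers" pattern reduces to a simple per-layer predicate: one full-attention (dense) layer per period, the rest using local sliding-window attention. A hedged sketch; the period of 3 and anchoring the dense layer at index 0 are assumptions for illustration, not values read from any config:

```python
def is_dense_layer(il: int, swa_period: int = 3) -> bool:
    # Hypothetical "dense every n layers" predicate: layer il gets full
    # (dense) attention once per period, local SWA otherwise. Both the
    # period and the anchor index (0 here) are illustrative assumptions.
    return il % swa_period == 0

# layers 0..8 -> dense at 0, 3, 6; SWA elsewhere
pattern = ["dense" if is_dense_layer(il) else "swa" for il in range(9)]
```

Storing only the period in the model metadata (rather than a per-layer list) is enough to reconstruct the whole pattern at load time.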

* ModernBERT moved into a separate src file

* removing whitespace

* fixed whitespace and newline errors in editorconfig job

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* better naming convention, n_swa_pattern -> swa_period

* reusing the sliding_window_pattern key rather than adding a new dense_every_n_layers key; added writing and reading support

* fixed pyright type-check failure

* Update convert_hf_to_gguf.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update gguf-py/gguf/gguf_writer.py

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-hparams.h

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model-saver.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/models/modern-bert.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Update src/llama-model-loader.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* added descriptions in llama-model

* fixed tensor mappings for conversion

* Update src/llama-model.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* mapping name for size

* nits

* unused

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
2025-12-23 00:28:19 +01:00
afmoe.cpp model : add AfmoeForCausalLM support (#16477) 2025-11-14 13:54:10 +01:00
apertus.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
arcee.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
arctic.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
arwkv7.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
baichuan.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
bailingmoe.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
bailingmoe2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
bert.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
bitnet.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
bloom.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
chameleon.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
chatglm.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
codeshell.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
cogvlm.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
cohere2-iswa.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
command-r.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
dbrx.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
deci.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
deepseek.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
deepseek2.cpp models : fix the attn_factor for mistral3 graphs + improve consistency (#17945) 2025-12-12 17:12:40 +02:00
dots1.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
dream.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
ernie4-5-moe.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
ernie4-5.cpp models : move build_inp_out_ids outside loop (#17151) 2025-11-10 22:55:30 +01:00
exaone.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
exaone4.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
falcon-h1.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
falcon.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
gemma-embedding.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
gemma.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
gemma2-iswa.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
gemma3.cpp model : support Rnj-1 (#17811) 2025-12-09 04:49:03 +01:00
gemma3n-iswa.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
glm4-moe.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
glm4.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
gpt2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
gptneox.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
granite-hybrid.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
granite.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
graph-context-mamba.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
grok.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
grovemoe.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
hunyuan-dense.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
hunyuan-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
internlm2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jais.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jamba.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
lfm2.cpp models : fix LFM2 tensors (#17548) 2025-11-27 16:04:29 +02:00
llada-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llada.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llama-iswa.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
llama.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
mamba.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
minicpm3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
minimax-m2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
mistral3.cpp model: support Ministral3 (#17644) 2025-12-01 12:26:52 +01:00
models.h model : Granite Embedding support (#15641) 2025-12-23 00:28:19 +01:00
modern-bert.cpp model : Granite Embedding support (#15641) 2025-12-23 00:28:19 +01:00
mpt.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
nemotron-h.cpp llama : add support for NVIDIA Nemotron 3 Nano (#18058) 2025-12-16 07:19:26 +01:00
nemotron.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
neo-bert.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmoe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
openai-moe-iswa.cpp models : move build_inp_out_ids outside loop (#17151) 2025-11-10 22:55:30 +01:00
openelm.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
orion.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
pangu-embedded.cpp model : add openPangu-Embedded (#16941) 2025-11-05 10:28:58 +01:00
phi2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
phi3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
plm.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
qwen2.cpp model : add KORMo model (#18032) 2025-12-15 18:51:43 +01:00
qwen2moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen2vl.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3next.cpp Optimization: Qwen3 next autoregressive pass (#17996) 2025-12-16 11:59:53 +01:00
qwen3vl-moe.cpp hparams : add n_embd_inp() to support extended embed (#16928) 2025-11-07 19:27:58 +01:00
qwen3vl.cpp hparams : add n_embd_inp() to support extended embed (#16928) 2025-11-07 19:27:58 +01:00
refact.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
rnd1.cpp models : Added support for RND1 Diffusion Language Model (#17433) 2025-11-24 14:16:56 +08:00
rwkv6-base.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
rwkv6.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
rwkv6qwen2.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
rwkv7-base.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
rwkv7.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
seed-oss.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
smallthinker.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
smollm3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
stablelm.cpp refactor : llama-model.cpp (#16252) 2025-10-31 23:40:23 +01:00
starcoder.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
starcoder2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
t5-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
t5-enc.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
wavtokenizer-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
xverse.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00