llama.cpp/src/models
ymcki 3688c4f504
Kimi-Linear support (backend agnostic + MLA KV cache) (#18755)
* kimi linear model implementation

* kimi linear convert_hf_to_gguf

* kimi linear constants.py tensor_mapping.py

* Kimi Linear ggml.h

* kimi linear ggml-cpu

* Kimi Linear ggml-cuda

* Kimi Linear ggml.c

* kimi linear src/llama

* remove "const int64_t n_seq_tokens = q->ne[2];" to get rid of unused variable warning

* remove type mismatch warning

* read MoE params

* removed some hard-coded code

* removed all hard-coded code

* use DeepseekV2 tokenizer

* removed unnecessary internal methods called by the old set_vocab of KimiLinear

* rewrite get_vocab for KimiLinear. Removed all kda_scan code

* removed all traces of kda_scan

* reduce OP count by 1 due to removal of kda_scan

* Move KIMI_LINEAR to llm_arch_is_hybrid to enable KV cache

* set n_embd_head_k/v to ensure the KV cache works

* don't quantize conv1d of Kimi Linear
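
  A minimal sketch of how such an exclusion is typically expressed when picking per-tensor quantization types; the name substring and the helper itself are assumptions, not the actual llama-quant.cpp logic:

  ```cpp
  #include <string>

  // Illustrative only: keep the short causal conv1d weights unquantized,
  // similar in spirit to how other SSM-style tensors are handled.
  // The substring checked here is an assumption.
  static bool keep_conv1d_fp32_sketch(const std::string & tensor_name) {
      return tensor_name.find("ssm_conv1d.weight") != std::string::npos;
  }
  ```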

* Kimi Linear backend agnostic

* removed LOG_INFO

* naive chunking form implemented

* fixed some comments

* add Kimi-K2-specific tokens to be recognized as EOG

* implemented build_kda_autoregressive to replace build_kda_recurrent for faster inference; synced to b7682

* replaced Akk and Aqk with mul_mat and clamp
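
  A rough sketch of the pattern referred to here, with placeholder tensor roles and clamp bounds (the clamp is dropped again two entries below):

  ```cpp
  #include "ggml.h"

  // Illustrative only: build the Aqk/Akk-style score blocks with a plain
  // matrix product and bound the result with ggml_clamp for numerical safety.
  // Operand roles and clamp limits are placeholders, not the real KDA math.
  static ggml_tensor * decay_scores_sketch(ggml_context * ctx,
                                           ggml_tensor  * a,   // e.g. decayed keys
                                           ggml_tensor  * b) { // e.g. queries or keys
      ggml_tensor * scores = ggml_mul_mat(ctx, a, b);
      return ggml_clamp(ctx, scores, -1e6f, 1e6f);
  }
  ```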

* no clamp version

* Moved Aqk computation out of the loop

* fixed typo and split wkv_b into wk_b and wv_b

* MLA KV cache support

* fix trailing spaces

* moved `const llama_model & model;` around to follow the qwen3next format and see if it can pass the -Wunused-private-field error

* fix trailing whitespace

* removed trailing whitespace on empty lines and made sure indentation is a multiple of 4

* try to make lint happy

* remove blank lines to make lint happy

* removed at least one blank line containing whitespace

* fixed flake8 complaints locally

* return a ggml_tensor * pair from kda_autoregressive and kda_chunking, as in ngxson's Qwen3Next improvement
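
  A hedged sketch of what returning the pair might look like; the function name, parameters, and the ops inside are illustrative, not the actual kimi-linear.cpp implementation:

  ```cpp
  #include <utility>
  #include "ggml.h"

  // Illustrative only: return both the layer output and the updated recurrent
  // state from the KDA builder instead of passing the state back through an
  // out-parameter, mirroring the Qwen3Next-style refactor mentioned above.
  static std::pair<ggml_tensor *, ggml_tensor *> kda_autoregressive_sketch(
          ggml_context * ctx,
          ggml_tensor  * q,         // query stream after conv/norm (placeholder)
          ggml_tensor  * state) {   // recurrent state carried across tokens
      // ... the real recurrence is built here ...
      ggml_tensor * out       = ggml_mul_mat(ctx, state, q); // placeholder op
      ggml_tensor * new_state = state;                        // placeholder update
      return {out, new_state};
  }
  ```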

* removed a Kimi-Linear-specific change that caused a failure on server-windows

* removed private: from kimi_linear to make build checks happy

* removed unnecessary ggml_cont before ggml_reshape

* created static function causal_conv1d to abstract similar code for q/k/v
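
  A minimal sketch of the shared helper, assuming a Mamba-style depthwise causal convolution via ggml_ssm_conv; the exact signature, shapes, and activation in the real code may differ:

  ```cpp
  #include "ggml.h"

  // Illustrative only: one helper applies the same short causal conv1d to
  // each of the q/k/v streams so the three code paths are not duplicated.
  static ggml_tensor * causal_conv1d_sketch(
          ggml_context * ctx,
          ggml_tensor  * conv_x,    // [d_conv - 1 + n_tokens, d_inner, n_seqs] window
          ggml_tensor  * conv_w) {  // [d_conv, d_inner] depthwise conv weights
      ggml_tensor * cur = ggml_ssm_conv(ctx, conv_x, conv_w);
      // a SiLU after the convolution is an assumption borrowed from other
      // SSM-style layers in this directory
      return ggml_silu(ctx, cur);
  }
  ```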

* merged dt_bias into SSM_DT; compute -exp(log_A) in convert_hf_to_gguf.py

* reverted to original

* fixed find_hparam calls. Fixed e_score_correction_bias to use bias instead of weight. Removed all ssm_conv bias terms.

* remove DT_B from constants.py. remove one comment line in llama-model.cpp

* new class llm_graph_input_mem_hybrid_k to work around the new MLA change; switched the concat order of the ggml_concat calls in kimi-linear.cpp to accommodate the MLA changes; removed support for exp_probs_b.weight
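
  To illustrate the concat-order point: ggml_concat(ctx, a, b, dim) places b after a along dim, so which tensor ends up first in the cache row is controlled purely by argument order. The tensor names and the chosen order below are assumptions:

  ```cpp
  #include "ggml.h"

  // Illustrative only: build the MLA cache row by concatenating the compressed
  // KV latent with the RoPE'd key part; swapping the first two arguments swaps
  // their position in the resulting row.
  static ggml_tensor * concat_kv_row_sketch(ggml_context * ctx,
                                            ggml_tensor  * kv_compressed,
                                            ggml_tensor  * k_rope) {
      return ggml_concat(ctx, kv_compressed, k_rope, 0);
  }
  ```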

* remove ssm_o_norm_b

* remove ssm_o_norm_b

* changed hparams.kda_head_dim to hparams.n_embd_head_kda. added TODO comment for class llama_graph_mem_hybrid_k

* removed all ggml_cont before ggml_reshape_4d
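
  For context, a hedged before/after sketch: ggml_reshape_4d expects a contiguous input, so an explicit ggml_cont (and the copy it implies) is only needed when the producing op leaves the tensor non-contiguous. Dimension names below are placeholders:

  ```cpp
  #include "ggml.h"

  // Illustrative only: reshape an already-contiguous activation into heads
  // without an intermediate copy.
  static ggml_tensor * split_heads_sketch(ggml_context * ctx, ggml_tensor * cur,
                                          int64_t d_head, int64_t n_head,
                                          int64_t n_tokens, int64_t n_seqs) {
      // was: cur = ggml_reshape_4d(ctx, ggml_cont(ctx, cur), d_head, n_head, n_tokens, n_seqs);
      return ggml_reshape_4d(ctx, cur, d_head, n_head, n_tokens, n_seqs);
  }
  ```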

* Whitespace

* replaced all hparams.get with find_hparams

* added new names for n_experts, n_experts_used and score_func in TextModel and removed their code in KimiLinear in convert_hf_to_gguf.py. Removed unnecessary ggml_cont and GGML_ASSERT in kimi-linear.cpp

* use is_mla to switch between different mem_hybrid types

* fixed logical errors in convert_hf_to_gguf.py pointed out by CISC

* removed if/else for the required parameters kv_lora_rank and qk_rope_head_dim

* add back ggml_cont for Vcur

* minor changes

* removed extra line in llama-vocab.cpp. Added back the comment in llama-graph.cpp

* f16 GGUF cannot run without a context length

* made a mistake of adding back n_ctx parsing

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
2026-02-06 11:39:58 +01:00
afmoe.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
apertus.cpp
arcee.cpp
arctic.cpp
arwkv7.cpp
baichuan.cpp
bailingmoe.cpp
bailingmoe2.cpp
bert.cpp model : add support for JinaBertModel with non-gated ffn (#18475) 2026-01-01 18:38:51 +01:00
bitnet.cpp
bloom.cpp
chameleon.cpp
chatglm.cpp
codeshell.cpp
cogvlm.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
cohere2-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
command-r.cpp
dbrx.cpp
deci.cpp
deepseek.cpp
deepseek2.cpp docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
dots1.cpp
dream.cpp
ernie4-5-moe.cpp
ernie4-5.cpp models : move build_inp_out_ids outside loop (#17151) 2025-11-10 22:55:30 +01:00
exaone-moe.cpp model : add EXAONE MoE (#18543) 2026-01-13 23:28:38 +01:00
exaone.cpp
exaone4.cpp
falcon-h1.cpp
falcon.cpp
gemma-embedding.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
gemma.cpp
gemma2-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
gemma3.cpp graph : reduce topology branching (#18548) 2026-01-02 19:01:56 +02:00
gemma3n-iswa.cpp graph : utilize `ggml_build_forward_select()` to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
glm4-moe.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
glm4.cpp model: support GLM4V vision encoder (#18042) 2025-12-16 11:25:26 +01:00
gpt2.cpp
gptneox.cpp
granite-hybrid.cpp
granite.cpp
graph-context-mamba.cpp
grok.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
grovemoe.cpp
hunyuan-dense.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
hunyuan-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
internlm2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jais.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
jamba.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
kimi-linear.cpp Kimi-Linear support (backend agnostic + MLA KV cache) (#18755) 2026-02-06 11:39:58 +01:00
lfm2.cpp models : fix LFM2 tensors (#17548) 2025-11-27 16:04:29 +02:00
llada-moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llada.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
llama-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
llama.cpp model : support for LlamaBidirectionalModel architecture (#18220) 2025-12-24 14:02:36 +01:00
maincoder.cpp model : Maincoder-1B support (#18534) 2026-01-02 20:11:59 +01:00
mamba.cpp
mimo2-iswa.cpp model: support MiMo-V2-Flash (#18328) 2025-12-24 23:07:08 +01:00
minicpm3.cpp mla : make the V tensor a view of K (#18986) 2026-01-22 22:09:01 +02:00
minimax-m2.cpp
mistral3.cpp model: support Ministral3 (#17644) 2025-12-01 12:26:52 +01:00
models.h Kimi-Linear support (backend agnostic + MLA KV cache) (#18755) 2026-02-06 11:39:58 +01:00
modern-bert.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
mpt.cpp
nemotron-h.cpp llama : clarify nemotron-h.cpp comment about RoPE [no ci] (#18997) 2026-01-21 18:31:34 +01:00
nemotron.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
neo-bert.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmo2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
olmoe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
openai-moe-iswa.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
openelm.cpp models : remove unnecessary cont in openelm (#19289) 2026-02-03 14:20:57 +01:00
orion.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
pangu-embedded.cpp model : add openPangu-Embedded (#16941) 2025-11-05 10:28:58 +01:00
phi2.cpp
phi3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
plamo2.cpp
plamo3.cpp model : Plamo3 support (#17304) 2025-12-28 17:28:31 +01:00
plm.cpp mla : make the V tensor a view of K (#18986) 2026-01-22 22:09:01 +02:00
qwen.cpp
qwen2.cpp model : add KORMo model (#18032) 2025-12-15 18:51:43 +01:00
qwen2moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen2vl.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3moe.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
qwen3next.cpp debug: make common_debug_print_tensor readable (#19331) 2026-02-04 17:55:31 +01:00
qwen3vl-moe.cpp graph : utilize `ggml_build_forward_select()` to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
qwen3vl.cpp graph : utilize `ggml_build_forward_select()` to avoid reallocations (#18898) 2026-01-23 18:22:34 +02:00
refact.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
rnd1.cpp models : Added support for RND1 Diffusion Language Model (#17433) 2025-11-24 14:16:56 +08:00
rwkv6-base.cpp
rwkv6.cpp
rwkv6qwen2.cpp
rwkv7-base.cpp
rwkv7.cpp
seed-oss.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
smallthinker.cpp llama : refactor rope_freq_base/scale_swa conversion and init (#18553) 2026-01-05 09:14:04 +01:00
smollm3.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
stablelm.cpp
starcoder.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
starcoder2.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
t5-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
t5-enc.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
wavtokenizer-dec.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00
xverse.cpp chore : fix models indent after refactor (#16992) 2025-11-04 12:29:15 +01:00