| Name | Last commit | Last commit date |
| --- | --- | --- |
| models | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| CMakeLists.txt | model : add Jina Embeddings v5 Nano (partial EuroBERT) support (#19826) | 2026-02-26 12:14:09 +01:00 |
| llama-adapter.cpp | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-adapter.h | llama : re-enable manual LoRA adapter free (#19983) | 2026-03-18 12:03:26 +02:00 |
| llama-arch.cpp | add missing ROPE_FACTORS_LONG/SHORT for MiniCPM (#21150) | 2026-03-29 19:45:40 +02:00 |
| llama-arch.h | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00 |
| llama-batch.cpp | kv-cache : fix M-RoPE checkpoints (#20132) | 2026-03-06 08:46:51 +02:00 |
| llama-batch.h | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-chat.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00 |
| llama-chat.h | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00 |
| llama-context.cpp | Use internal cb_eval for attention extraction to eliminate graph splits | 2026-03-31 22:13:17 +02:00 |
| llama-context.h | Use internal cb_eval for attention extraction to eliminate graph splits | 2026-03-31 22:13:17 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : add attention weights extraction API [EXPERIMENTAL] | 2026-03-31 22:13:17 +02:00 |
| llama-ext.h | test-backend-ops: allow loading tests from file and parsing model operators into file (#19896) | 2026-03-12 13:26:00 +01:00 |
| llama-grammar.cpp | common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) | 2026-03-21 18:43:35 +01:00 |
| llama-grammar.h | common/grammar : replace problematic backtracking regex `[\s\S]*` (#18342) | 2026-01-03 16:02:43 -06:00 |
| llama-graph.cpp | Use internal cb_eval for attention extraction to eliminate graph splits | 2026-03-31 22:13:17 +02:00 |
| llama-graph.h | llama : add attention weights extraction API [EXPERIMENTAL] | 2026-03-31 22:13:17 +02:00 |
| llama-hparams.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00 |
| llama-hparams.h | llama : add support for Nemotron 3 Super (#20411) | 2026-03-11 19:27:53 +01:00 |
| llama-impl.cpp | impl : use 6 digits for tensor dims (#20094) | 2026-03-04 09:53:38 +01:00 |
| llama-impl.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00 |
| llama-io.cpp | … | … |
| llama-io.h | … | … |
| llama-kv-cache-iswa.cpp | model : support Step3.5-Flash (#19283) | 2026-02-06 21:06:14 +01:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00 |
| llama-kv-cache.h | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid-iswa.cpp | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | memory : fix seq_id bounds in llama_memory_recurrent::state_read_meta() (#20887) | 2026-03-23 14:08:46 +02:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-mmap.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-loader.cpp | llama-model-loader: print warning when using overrides with mmap (#20978) | 2026-03-30 17:40:17 +08:00 |
| llama-model-loader.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model.cpp | convert : support Qwen3.5/Qwen3.5 Moe NVFP4 and add input scales (#20505) | 2026-03-26 16:52:06 +01:00 |
| llama-model.h | convert : support Qwen3.5/Qwen3.5 Moe NVFP4 and add input scales (#20505) | 2026-03-26 16:52:06 +01:00 |
| llama-quant.cpp | mtmd: fix "v.patch_embd" quant and unsupported im2col ops on Metal for deepseek-ocr (#21027) | 2026-03-27 00:07:55 +01:00 |
| llama-quant.h | … | … |
| llama-sampler.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-sampler.h | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-vocab.cpp | mtmd: Add DeepSeekOCR Support (#17400) | 2026-03-25 19:57:40 +01:00 |
| llama-vocab.h | model : add JAIS-2 architecture support (#19488) | 2026-02-19 13:30:17 +01:00 |
| llama.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| unicode-data.cpp | … | … |
| unicode-data.h | … | … |
| unicode.cpp | chore : correct typos [no ci] (#20041) | 2026-03-05 08:50:21 +01:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |