| File | Last commit | Date |
| --- | --- | --- |
| models | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| CMakeLists.txt | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| llama-adapter.cpp | lora: make sure model keep track of associated adapters (#18490) | 2026-01-15 10:24:28 +01:00 |
| llama-adapter.h | graph : fix KQ mask, lora, cvec reuse checks (#19644) | 2026-02-16 09:21:11 +02:00 |
| llama-arch.cpp | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| llama-arch.h | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| llama-batch.cpp | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-batch.h | batch : fix sequence id ownership (#17915) | 2025-12-11 14:29:47 +02:00 |
| llama-chat.cpp | docs : Minor cleanups (#19252) | 2026-02-02 08:38:55 +02:00 |
| llama-chat.h | model : add EXAONE MoE (#18543) | 2026-01-13 23:28:38 +01:00 |
| llama-context.cpp | llama : use output_resolve_row() in get_logits_ith/get_embeddings_ith (#19663) | 2026-02-19 09:48:08 +01:00 |
| llama-context.h | graph : fix KQ mask, lora, cvec reuse checks (#19644) | 2026-02-16 09:21:11 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | context : reserve new scheduler when graph topology changes (#18547) | 2026-01-15 16:39:17 +02:00 |
| llama-grammar.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-grammar.h | common/grammar : replace problematic backtracking regex `[\s\S]*` (#18342) | 2026-01-03 16:02:43 -06:00 |
| llama-graph.cpp | model : add JAIS-2 architecture support (#19488) | 2026-02-19 13:30:17 +01:00 |
| llama-graph.h | model : add tokenizer from LFM2.5-Audio-1.5B (#19687) | 2026-02-19 09:54:48 +01:00 |
| llama-hparams.cpp | Kimi-Linear support (backend agnostic + MLA KV cache) (#18755) | 2026-02-06 11:39:58 +01:00 |
| llama-hparams.h | model: support GLM MoE DSA arch (NOTE: indexer is not yet supported) (#19460) | 2026-02-13 14:56:53 +01:00 |
| llama-impl.cpp | quantize : add --dry-run option (#19526) | 2026-02-20 09:20:16 +01:00 |
| llama-impl.h | llama : refactor sampling_info to use buffer_view template (#19368) | 2026-02-11 05:38:13 +01:00 |
| llama-io.cpp | … | |
| llama-io.h | … | |
| llama-kv-cache-iswa.cpp | model : support Step3.5-Flash (#19283) | 2026-02-06 21:06:14 +01:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | model : support Step3.5-Flash (#19283) | 2026-02-06 21:06:14 +01:00 |
| llama-kv-cache.h | kv-cache : optimize KQ mask construction (#18842) | 2026-01-17 15:42:42 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid-iswa.cpp | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | memory : clarify comments for r_l and s_l tensors [no ci] (#19203) | 2026-01-30 15:18:41 +01:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | mmap: Fix Windows handle lifetime (#19598) | 2026-02-14 10:05:12 +02:00 |
| llama-mmap.h | llama : add `use_direct_io` flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-loader.cpp | llama : disable Direct IO by default (#19109) | 2026-01-28 09:11:13 +02:00 |
| llama-model-loader.h | llama : add `use_direct_io` flag for model loading (#18166) | 2026-01-08 08:35:30 +02:00 |
| llama-model-saver.cpp | model : full modern bert support (#18330) | 2026-02-19 08:52:21 +01:00 |
| llama-model-saver.h | … | |
| llama-model.cpp | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| llama-model.h | model : add tokenizer from LFM2.5-Audio-1.5B (#19687) | 2026-02-19 09:54:48 +01:00 |
| llama-quant.cpp | quantize : add --dry-run option (#19526) | 2026-02-20 09:20:16 +01:00 |
| llama-quant.h | … | |
| llama-sampler.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-sampler.h | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-vocab.cpp | model: Add PaddleOCR-VL model support (#18825) | 2026-02-19 17:05:25 +01:00 |
| llama-vocab.h | model : add JAIS-2 architecture support (#19488) | 2026-02-19 13:30:17 +01:00 |
| llama.cpp | llama: fix integer type consistency in split helpers (#18894) | 2026-01-25 09:10:52 +02:00 |
| unicode-data.cpp | … | |
| unicode-data.h | … | |
| unicode.cpp | model: Add support for Tiny Aya Models (#19611) | 2026-02-16 16:28:46 +01:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |