| Name | Last commit message | Last commit date |
| --- | --- | --- |
| models/ | Optimization: Qwen3 next autoregressive pass (#17996) | 2025-12-16 11:59:53 +01:00 |
| CMakeLists.txt | cmake: fix Mach-O current version number (#17877) | 2025-12-09 13:17:41 +02:00 |
| llama-adapter.cpp | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-adapter.h | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-arch.cpp | model: fix LFM2_MOE missing tensors (#18132) | 2025-12-17 12:17:11 +01:00 |
| llama-arch.h | arch: refactor LLM_TENSOR_NAMES (#18051) | 2025-12-16 13:22:30 +01:00 |
| llama-batch.cpp | Merge branch 'master' into HEAD | 2025-12-11 14:42:56 +02:00 |
| llama-batch.h | Merge branch 'master' into HEAD | 2025-12-11 14:42:56 +02:00 |
| llama-chat.cpp | model : add openPangu-Embedded (#16941) | 2025-11-05 10:28:58 +01:00 |
| llama-chat.h | model : add openPangu-Embedded (#16941) | 2025-11-05 10:28:58 +01:00 |
| llama-context.cpp | common : disable backend sampling when grammar is involved | 2025-12-18 10:52:21 +02:00 |
| llama-context.h | Merge remote-tracking branch 'upstream/master' into backend-sampling | 2025-12-16 09:45:08 +01:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | server : support unified cache across slots (#16736) | 2025-11-02 18:14:04 +02:00 |
| llama-grammar.cpp | llama : add token matching support to llama-grammar (#17816) | 2025-12-09 00:32:57 -06:00 |
| llama-grammar.h | llama : add token matching support to llama-grammar (#17816) | 2025-12-09 00:32:57 -06:00 |
| llama-graph.cpp | llama : fix typo in comment [no ci] | 2025-12-17 09:02:30 +01:00 |
| llama-graph.h | Merge remote-tracking branch 'upstream/master' into backend-sampling | 2025-12-16 09:45:08 +01:00 |
| llama-hparams.cpp | model: support GLM4V vision encoder (#18042) | 2025-12-16 11:25:26 +01:00 |
| llama-hparams.h | model: support GLM4V vision encoder (#18042) | 2025-12-16 11:25:26 +01:00 |
| llama-impl.cpp | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-impl.h | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | kv-cache : pad the cache size to 256 for performance (#17046) | 2025-11-07 20:03:25 +02:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | kv-cache: Fix state restore fragmented cache (#17982) | 2025-12-15 19:28:35 +02:00 |
| llama-kv-cache.h | kv-cache: Fix state restore fragmented cache (#17982) | 2025-12-15 19:28:35 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | graph : reuse SSM graphs (#16490) | 2025-12-16 09:36:21 +02:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | memory: Hybrid context shift (#17009) | 2025-11-10 17:14:23 +02:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | llama : Async DirectIO model loading on Linux (#18012) | 2025-12-18 08:27:19 +02:00 |
| llama-mmap.h | llama : Async DirectIO model loading on Linux (#18012) | 2025-12-18 08:27:19 +02:00 |
| llama-model-loader.cpp | llama : Async DirectIO model loading on Linux (#18012) | 2025-12-18 08:27:19 +02:00 |
| llama-model-loader.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | Merge branch 'master' into HEAD | 2025-12-18 10:12:47 +02:00 |
| llama-model.h | llama : add support for NVIDIA Nemotron 3 Nano (#18058) | 2025-12-16 07:19:26 +01:00 |
| llama-quant.cpp | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| llama-quant.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | Make backend dist sampler use same rnd's as dist sampler | 2025-12-19 11:43:19 +01:00 |
| llama-sampling.h | sampling : check backend support during init | 2025-12-04 17:29:08 +02:00 |
| llama-vocab.cpp | model : add KORMo model (#18032) | 2025-12-15 18:51:43 +01:00 |
| llama-vocab.h | model : add AfmoeForCausalLM support (#16477) | 2025-11-14 13:54:10 +01:00 |
| llama.cpp | Merge branch 'master' into HEAD | 2025-12-18 10:12:47 +02:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | fix: prevent segfault in tokenizer on highly repetitive input (#17786) | 2025-12-05 13:52:23 +02:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |