| File | Last commit | Last updated |
| --- | --- | --- |
| `CMakeLists.txt` | kv-cache : drop the "unified" prefix (#15467) | 2025-08-21 17:00:33 +03:00 |
| `llama-adapter.cpp` | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| `llama-adapter.h` | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| `llama-arch.cpp` | model : add GroveMoE support (#15510) | 2025-09-25 19:50:28 +02:00 |
| `llama-arch.h` | model : add GroveMoE support (#15510) | 2025-09-25 19:50:28 +02:00 |
| `llama-batch.cpp` | perplexity : provide a helpful hint for has_cpl case in split_equal error. (#15304) | 2025-08-14 14:03:30 +03:00 |
| `llama-batch.h` | llama : reuse compute graphs (#14482) | 2025-07-17 19:08:33 +03:00 |
| `llama-chat.cpp` | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| `llama-chat.h` | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| `llama-context.cpp` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-context.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-cparams.cpp` | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| `llama-cparams.h` | llama : bump max seq limit from 64 to 256 (#15916) | 2025-09-18 12:47:56 +03:00 |
| `llama-grammar.cpp` | `server`: streaming of tool calls and thoughts when `--jinja` is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| `llama-grammar.h` | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| `llama-graph.cpp` | model : add GroveMoE support (#15510) | 2025-09-25 19:50:28 +02:00 |
| `llama-graph.h` | llama : add support for qwen3 reranker (#15824) | 2025-09-25 11:53:09 +03:00 |
| `llama-hparams.cpp` | kv-cache : fix SWA checks + disable cacheless iSWA (#15811) | 2025-09-05 10:39:22 +03:00 |
| `llama-hparams.h` | llama : parameter conversion and loading fixes for PLaMo2 variants (#16075) | 2025-10-01 23:08:15 +02:00 |
| `llama-impl.cpp` | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| `llama-impl.h` | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| `llama-io.cpp` | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| `llama-io.h` | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| `llama-kv-cache-iswa.cpp` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-kv-cache-iswa.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-kv-cache.cpp` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-kv-cache.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-kv-cells.h` | llama : remove KV cache defragmentation logic (#15473) | 2025-08-22 12:22:13 +03:00 |
| `llama-memory-hybrid.cpp` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-memory-hybrid.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-memory-recurrent.cpp` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-memory-recurrent.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-memory.cpp` | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| `llama-memory.h` | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| `llama-mmap.cpp` | llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) | 2025-06-05 11:57:42 +02:00 |
| `llama-mmap.h` | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| `llama-model-loader.cpp` | nvidia nemotron nano v2 (nemotronh) (#15507) | 2025-08-28 18:39:31 -06:00 |
| `llama-model-loader.h` | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| `llama-model-saver.cpp` | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| `llama-model-saver.h` | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| `llama-model.cpp` | llama : parameter conversion and loading fixes for PLaMo2 variants (#16075) | 2025-10-01 23:08:15 +02:00 |
| `llama-model.h` | model : add GroveMoE support (#15510) | 2025-09-25 19:50:28 +02:00 |
| `llama-quant.cpp` | llama-quant : fix the verification of attention layers for encoder-decoder models (#16023) | 2025-09-17 09:30:55 +02:00 |
| `llama-quant.h` | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| `llama-sampling.cpp` | sampling : optimize dist sampler (#15704) | 2025-09-03 18:16:26 +03:00 |
| `llama-sampling.h` | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| `llama-vocab.cpp` | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |
| `llama-vocab.h` | model : add grok-2 support (#15539) | 2025-09-14 23:00:59 +02:00 |
| `llama.cpp` | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) | 2025-09-11 22:47:38 +02:00 |
| `unicode-data.cpp` | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| `unicode-data.h` | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| `unicode.cpp` | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |
| `unicode.h` | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |