llama.cpp/src

Latest commit: 58190cc84d by anchortense (2026-04-06 01:40:38 +02:00)
llama : correct platform-independent loading of BOOL metadata (#21428)
* model-loader : fix GGUF bool array conversion
* model-loader : fix remaining GGUF bool pointer uses
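The latest commit touches a classic portability trap worth illustrating: GGUF encodes BOOL metadata as one byte per value, while C++ leaves `sizeof(bool)` and the valid object representations of `bool` implementation-defined, so reinterpreting the raw metadata buffer as `const bool *` is not portable. A minimal sketch of a byte-wise conversion (a hypothetical helper for illustration, not the actual llama-model-loader code):

```cpp
// Sketch only: portable reading of a GGUF-style BOOL array.
// Assumes the on-disk encoding is one byte per value (0 = false, nonzero = true).
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<bool> read_gguf_bool_array(const void * data, size_t n) {
    // Read the raw bytes as int8_t instead of casting to const bool *,
    // since sizeof(bool) and its bit patterns vary across platforms.
    const int8_t * bytes = static_cast<const int8_t *>(data);
    std::vector<bool> out(n);
    for (size_t i = 0; i < n; ++i) {
        out[i] = bytes[i] != 0; // normalize any nonzero byte to true
    }
    return out;
}
```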
| File | Last commit | Date |
| --- | --- | --- |
| models/ | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| CMakeLists.txt | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| llama-adapter.cpp | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-adapter.h | llama : re-enable manual LoRA adapter free (#19983) | 2026-03-18 12:03:26 +02:00 |
| llama-arch.cpp | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| llama-arch.h | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| llama-batch.cpp | kv-cache : fix M-RoPE checkpoints (#20132) | 2026-03-06 08:46:51 +02:00 |
| llama-batch.h | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-chat.cpp | model : add HunyuanOCR support (#21395) | 2026-04-05 23:32:14 +02:00 |
| llama-chat.h | model : add HunyuanOCR support (#21395) | 2026-04-05 23:32:14 +02:00 |
| llama-context.cpp | fix: correct misspellings in code comments (#21217) | 2026-03-31 13:50:51 +02:00 |
| llama-context.h | graph : fix KQ mask, lora, cvec reuse checks (#19644) | 2026-02-16 09:21:11 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00 |
| llama-ext.h | tests : add unit test coverage for llama_tensor_get_type (#20112) | 2026-04-02 22:53:58 +02:00 |
| llama-grammar.cpp | common/grammar: fix grammar parsing issues to prevent stack overflow and hangs (#18604) | 2026-03-21 18:43:35 +01:00 |
| llama-grammar.h | common/grammar : replace problematic backtracking regex `[\s\S]*` (#18342) | 2026-01-03 16:02:43 -06:00 |
| llama-graph.cpp | llama : rotate activations for better quantization (#21038) | 2026-04-01 16:58:01 +03:00 |
| llama-graph.h | llama : rotate activations for better quantization (#21038) | 2026-04-01 16:58:01 +03:00 |
| llama-hparams.cpp | llama: dynamic head_dim and n_rot for SWA (#20301) | 2026-03-09 22:22:39 +01:00 |
| llama-hparams.h | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| llama-impl.cpp | llama : correct platform-independent loading of BOOL metadata (#21428) | 2026-04-06 01:40:38 +02:00 |
| llama-impl.h | llama : enable chunked fused GDN path (#20340) | 2026-03-11 22:46:40 +02:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | (revert) kv-cache : do not quantize SWA KV cache (#21332) | 2026-04-03 09:07:01 +03:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | llama : rotate activations for better quantization (#21038) | 2026-04-01 16:58:01 +03:00 |
| llama-kv-cache.h | llama : rotate activations for better quantization (#21038) | 2026-04-01 16:58:01 +03:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid-iswa.cpp | memory: respect unified KV cache in hybrid memory for eval tasks (#21224) | 2026-04-01 12:50:17 +03:00 |
| llama-memory-hybrid-iswa.h | memory : add llama_memory_hybrid_iswa (#18601) | 2026-01-21 14:30:23 +02:00 |
| llama-memory-hybrid.cpp | memory: respect unified KV cache in hybrid memory for eval tasks (#21224) | 2026-04-01 12:50:17 +03:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | memory : fix seq_id bounds in llama_memory_recurrent::state_read_meta() (#20887) | 2026-03-23 14:08:46 +02:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-mmap.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-loader.cpp | llama : correct platform-independent loading of BOOL metadata (#21428) | 2026-04-06 01:40:38 +02:00 |
| llama-model-loader.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model-saver.h | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| llama-model.cpp | llama-model: read final_logit_softcapping for Gemma 4 (#21390) | 2026-04-04 13:05:10 +02:00 |
| llama-model.h | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309) | 2026-04-02 17:10:32 +02:00 |
| llama-quant.cpp | tests : add unit test coverage for llama_tensor_get_type (#20112) | 2026-04-02 22:53:58 +02:00 |
| llama-quant.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampler.cpp | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-sampler.h | llama : rename llama-sampling to llama-sampler (#19363) | 2026-02-06 07:26:54 +01:00 |
| llama-vocab.cpp | common : add gemma 4 specialized parser (#21418) | 2026-04-04 20:39:00 +02:00 |
| llama-vocab.h | vocab: fix Gemma4 tokenizer (#21343) | 2026-04-03 10:33:03 +02:00 |
| llama.cpp | llama: fix llama-model-saver (#20503) | 2026-03-25 12:53:16 +02:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | llama: add custom newline split for Gemma 4 (#21406) | 2026-04-04 15:06:34 +08:00 |
| unicode.h | vocab: fix Gemma4 tokenizer (#21343) | 2026-04-03 10:33:03 +02:00 |