llama.cpp/src
Latest commit 3464bdac37 by Xuan-Son Nguyen: llama: fix ASAN error with M-RoPE (#16848), 2025-10-29 20:11:39 +01:00
| File | Last commit | Date |
|---|---|---|
| CMakeLists.txt | kv-cache : drop the "unified" prefix (#15467) | 2025-08-21 17:00:33 +03:00 |
| llama-adapter.cpp | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-adapter.h | aLoRA Support (#15327) | 2025-09-05 17:32:39 -06:00 |
| llama-arch.cpp | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-arch.h | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-batch.cpp | llama: fix ASAN error with M-RoPE (#16848) | 2025-10-29 20:11:39 +01:00 |
| llama-batch.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-chat.cpp | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-chat.h | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-context.cpp | llama : disable pipeline parallelism if compute buffer allocation fails (#16748) | 2025-10-27 21:51:28 +01:00 |
| llama-context.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-cparams.cpp | cparams : rename LLAMA_MAX_PARALLEL_SEQUENCES to LLAMA_MAX_SEQ (#14188) | 2025-06-15 10:08:58 +03:00 |
| llama-cparams.h | llama : bump max seq limit from 64 to 256 (#15916) | 2025-09-18 12:47:56 +03:00 |
| llama-grammar.cpp | `server`: streaming of tool calls and thoughts when `--jinja` is on (#12379) | 2025-05-25 01:48:08 +01:00 |
| llama-grammar.h | `tool-call`: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00 |
| llama-graph.cpp | graph : add clamping to ffn_moe_weights_sum to avoid div-by-zero (#16655) | 2025-10-26 17:20:32 +01:00 |
| llama-graph.h | graph : support cacheless embeddings with FA and iSWA (#16528) | 2025-10-13 22:42:37 +03:00 |
| llama-hparams.cpp | hparams : add check for layer index in is_recurrent (#16511) | 2025-10-12 07:19:06 +02:00 |
| llama-hparams.h | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-impl.cpp | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| llama-impl.h | llama: use FA + max. GPU layers by default (#15434) | 2025-08-30 16:32:10 +02:00 |
| llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama-kv-cache-iswa.cpp | server : context checkpointing for hybrid and recurrent models (#16382) | 2025-10-03 21:34:51 +03:00 |
| llama-kv-cache-iswa.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-kv-cache.cpp | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-kv-cache.h | memory : remove KV cache size padding (#16812) | 2025-10-28 20:19:44 +02:00 |
| llama-kv-cells.h | llama: store mrope data in KV cell (#16825) | 2025-10-29 18:09:18 +01:00 |
| llama-memory-hybrid.cpp | memory : use sequential equal splits for recurrent modules (#16442) | 2025-10-07 08:24:17 +03:00 |
| llama-memory-hybrid.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-memory-recurrent.cpp | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory-recurrent.h | llama: consistent ctx <-> buf order for KV cache (#16746) | 2025-10-28 11:23:54 +01:00 |
| llama-memory.cpp | memory : correctly handle failure in apply() (#14438) | 2025-06-30 18:03:03 +03:00 |
| llama-memory.h | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00 |
| llama-mmap.cpp | llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013) | 2025-06-05 11:57:42 +02:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| llama-model-loader.h | model: support GLM 4.5 family of models (#14939) | 2025-08-04 20:29:25 +02:00 |
| llama-model-saver.cpp | llama : improve sep token handling (#14272) | 2025-06-20 14:04:09 +02:00 |
| llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00 |
| llama-model.cpp | memory : remove KV cache size padding (#16812) | 2025-10-28 20:19:44 +02:00 |
| llama-model.h | memory : remove KV cache size padding (#16812) | 2025-10-28 20:19:44 +02:00 |
| llama-quant.cpp | llama-quant: add support for mmproj (#16592) | 2025-10-15 14:48:08 +02:00 |
| llama-quant.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | vocab : mark EOT token for Granite models (#16499) | 2025-10-10 17:17:31 +03:00 |
| llama-sampling.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-vocab.cpp | model : add BailingMoeV2 support (#16063) | 2025-10-20 21:38:20 +02:00 |
| llama-vocab.h | model : Granite docling + Idefics3 preprocessing (SmolVLM) (#16206) | 2025-10-05 14:57:47 +02:00 |
| llama.cpp | llama-quant: add support for mmproj (#16592) | 2025-10-15 14:48:08 +02:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | model : add Kimi-K2 support (#14654) | 2025-07-15 21:54:22 +02:00 |
| unicode.h | devops: add s390x & ppc64le CI (#15925) | 2025-09-27 02:03:33 +08:00 |