llama.cpp/examples
Daniel Bevenius 2b6dfe824d
llama : remove write/read of output ids/logits/embeddings (#18862)
* llama : remove write/read of output ids/logits/embeddings

This commit removes the write/read of output ids, logits and
embeddings from the llama context state.

Refs: https://github.com/ggml-org/llama.cpp/pull/18862#issuecomment-3756330941

* completion : add replaying of session state

This commit updates the session handling in the completion tool to account
for the fact that logits are no longer stored in the session file. Instead, we
need to replay the last token to get the logits for sampling.
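A rough sketch of that replay step, assuming the session tokens were already restored into a vector named `session_tokens` (an illustrative name) and `ctx` is the active `llama_context`:

```cpp
// The restored state no longer carries logits, so decode the last session
// token once more to repopulate them before sampling.
llama_token last = session_tokens.back();

llama_batch batch = llama_batch_get_one(&last, 1);
if (llama_decode(ctx, batch) != 0) {
    fprintf(stderr, "failed to replay last session token\n");
    return 1;
}

// logits of the replayed token, used to sample the next token
const float * logits = llama_get_logits_ith(ctx, -1);
```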

* common : add common_prompt_batch_decode function

This commit adds a new function which is responsible for decoding the prompt
and optionally handling the saving of session data.
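The actual signature of `common_prompt_batch_decode` is not shown in this log; the following is only a hypothetical sketch of such a helper, built on the public `llama_batch_get_one`/`llama_decode`/`llama_state_save_file` APIs, to illustrate decoding a prompt in chunks and optionally persisting the session afterwards:

```cpp
#include "llama.h"

#include <algorithm>
#include <string>
#include <vector>

// Hypothetical helper (illustrative only): decode prompt tokens in chunks of
// n_batch and, if a session path is given, save the context state afterwards.
static bool prompt_batch_decode_sketch(llama_context * ctx,
                                       std::vector<llama_token> & prompt,
                                       int32_t n_batch,
                                       const std::string & path_session) {
    for (size_t i = 0; i < prompt.size(); i += n_batch) {
        const int32_t n_eval = (int32_t) std::min((size_t) n_batch, prompt.size() - i);

        // llama_batch_get_one wraps a contiguous span of tokens into a batch
        llama_batch batch = llama_batch_get_one(prompt.data() + i, n_eval);

        if (llama_decode(ctx, batch) != 0) {
            return false; // decode failed
        }
    }

    if (!path_session.empty()) {
        // persist the context state together with the prompt tokens
        llama_state_save_file(ctx, path_session.c_str(), prompt.data(), prompt.size());
    }

    return true;
}
```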

* update save-state.cpp to use llama_state_load_file

This commit updates the save-load-state example to use llama_state_load_file
for loading the model state from a file. It also replays the last token after
loading, since the state is now stored before the last token is processed.
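For context, `llama_state_load_file` returns the tokens stored alongside the state; a minimal sketch of the loading step (file name and capacity are illustrative), followed by the same single-token replay described above:

```cpp
// load the saved state plus the tokens that were stored with it
std::vector<llama_token> tokens(n_ctx);
size_t n_loaded = 0;

if (!llama_state_load_file(ctx3, "state.bin", tokens.data(), tokens.size(), &n_loaded)) {
    fprintf(stderr, "failed to load state file\n");
    return 1;
}
tokens.resize(n_loaded);

// the state was saved before the last token was processed, so replay it
llama_token last = tokens.back();
llama_decode(ctx3, llama_batch_get_one(&last, 1));
```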

* examples : set n_seq_max = 2 for ctx3

This commit updates the save-load-state example to set the n_seq_max
parameter to 2 when initializing the ctx3 context.

The motivation for this change is that with n_parallel/n_seq_max set to 1 the
context only supports a single sequence, but the test later tries to
use a second sequence, which results in the following error:
```console
main : loaded state with 4 tokens
main : seq 0 copied, 225760 bytes
main : kv cache cleared
find_slot: seq_id=1 >= n_seq_max=1 Try using a bigger --parallel value
state_read_meta: failed to find available cells in kv cache
```
This seems to only happen for recurrent/hybrid models.
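A minimal sketch of the change, assuming ctx3 is created from default context params in the example (only the n_seq_max line differs from the defaults):

```cpp
llama_context_params cparams = llama_context_default_params();
cparams.n_seq_max = 2; // the test later copies state into a second sequence (seq_id=1)

llama_context * ctx3 = llama_init_from_model(model, cparams);
```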
2026-02-23 07:04:30 +01:00
batched context : reserve new scheduler when graph topology changes (#18547) 2026-01-15 16:39:17 +02:00
batched.swift examples : remove references to `make` in examples [no ci] (#15457) 2025-08-21 06:12:28 +02:00
convert-llama2c-to-ggml gguf: gguf_writer refactor (#15691) 2025-09-05 11:34:28 +02:00
debug Restore clip's cb() to its rightful glory - extract common debugging elements in llama (#17914) 2026-01-14 20:29:35 +01:00
deprecation-warning docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
diffusion llama : add `use_direct_io` flag for model loading (#18166) 2026-01-08 08:35:30 +02:00
embedding model : add LFM2-ColBert-350M (#18607) 2026-01-05 19:52:56 +01:00
eval-callback tests : download models only when running ctest (#18843) 2026-01-15 09:47:29 +01:00
gen-docs gen-docs: automatically update markdown file (#18294) 2025-12-22 19:30:19 +01:00
gguf examples(gguf): GGUF example outputs (#17025) 2025-11-05 19:58:16 +02:00
gguf-hash GGUF: C++ refactor, backend support, misc fixes (#11030) 2025-01-07 18:01:58 +01:00
idle metal : add residency sets keep-alive heartbeat (#17766) 2025-12-05 19:38:54 +02:00
llama.android refactor : remove libcurl, use OpenSSL when available (#18828) 2026-01-14 18:02:47 +01:00
llama.swiftui llama : deprecate llama_kv_self_ API (#14030) 2025-06-06 14:11:15 +03:00
lookahead lookup, lookahead: fix crash when n_ctx not specified (#18729) 2026-01-30 22:10:24 +02:00
lookup lookup, lookahead: fix crash when n_ctx not specified (#18729) 2026-01-30 22:10:24 +02:00
model-conversion model-conversion : add option to print tensor values (#19692) 2026-02-17 20:43:22 +01:00
parallel common : refactor common_sampler + grammar logic changes (#17937) 2025-12-14 10:11:13 +02:00
passkey examples : remove references to `make` in examples [no ci] (#15457) 2025-08-21 06:12:28 +02:00
retrieval model : add LFM2-ColBert-350M (#18607) 2026-01-05 19:52:56 +01:00
save-load-state llama : remove write/read of output ids/logits/embeddings (#18862) 2026-02-23 07:04:30 +01:00
simple examples : support encoder-decoder models in the simple example (#16002) 2025-09-17 10:29:00 +03:00
simple-chat simple-chat : fix context-exceeded condition (#14494) 2025-07-02 14:12:07 +03:00
simple-cmake-pkg examples : add missing code block end marker [no ci] (#17756) 2025-12-04 14:17:30 +01:00
speculative spec : add self‑speculative decoding (no draft model required) + refactor (#18471) 2026-01-28 19:42:42 +02:00
speculative-simple spec : add self‑speculative decoding (no draft model required) + refactor (#18471) 2026-01-28 19:42:42 +02:00
sycl create test.sh to enhance the parameters for testing, update the guide, rm useless script (#19243) 2026-02-01 18:24:00 +08:00
training common : refactor common_sampler + grammar logic changes (#17937) 2025-12-14 10:11:13 +02:00
CMakeLists.txt examples : add debug utility/example (#18464) 2026-01-07 10:42:19 +01:00
convert_legacy_llama.py metadata: Detailed Dataset Authorship Metadata (#8875) 2024-11-13 21:10:38 +11:00
json_schema_pydantic_example.py
json_schema_to_grammar.py docs : Minor cleanups (#19252) 2026-02-02 08:38:55 +02:00
llama.vim llama : remove KV cache defragmentation logic (#15473) 2025-08-22 12:22:13 +03:00
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
reason-act.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
regex_to_grammar.py
server-llama2-13B.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00
server_embd.py llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) 2025-04-08 19:54:51 +03:00
ts-type-to-grammar.sh scripts : make the shell scripts cross-platform (#14341) 2025-06-30 10:17:18 +02:00