llama.cpp/tools
Christopher Albert a19c7a30ad server: add full streaming compliance for Responses API events
- Add sequence_number to ALL streaming events (created, in_progress,
  output_item.added, content_part.added, all delta events)
- Add output_index to all events referencing output items
- Add content_index to content-related events
- Populate full response object in response.created and
  response.in_progress events (was only {id, object, status})
- Add id field to function_call output_item.added events
- Add status: completed to reasoning output_item.done events
- Counter state persisted across streaming chunks via task_result_state

Fixes parsing failures in spec-compliant client libraries (e.g. async-openai):
clients that require these fields can now consume all streaming events without error.
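The per-event fields listed above can be sketched as follows. The event type names match the OpenAI Responses API, but the builder function, payload contents, and IDs such as `resp_1` are illustrative only, not the server's actual output; the counter corresponds to the state the commit persists in `task_result_state`.

```python
import json

def make_events():
    """Build a minimal Responses API streaming sequence carrying the
    fields the commit adds: sequence_number on every event, output_index
    on item-level events, content_index on content-level events.
    Shapes are illustrative, not the server's exact payloads."""
    seq = 0  # counter persisted across streaming chunks
    events = []

    def emit(event):
        nonlocal seq
        event["sequence_number"] = seq  # stamped on ALL streaming events
        seq += 1
        events.append(event)

    # response.created now carries a full response object, not just {id, object, status}
    emit({"type": "response.created",
          "response": {"id": "resp_1", "object": "response", "status": "in_progress"}})
    # function_call output items now include an id field
    emit({"type": "response.output_item.added", "output_index": 0,
          "item": {"id": "fc_1", "type": "function_call"}})
    # content-related events carry both output_index and content_index
    emit({"type": "response.content_part.added", "output_index": 0, "content_index": 0})
    emit({"type": "response.output_text.delta", "output_index": 0,
          "content_index": 0, "delta": "Hi"})
    return events

for e in make_events():
    print(json.dumps(e))
```

A strict client can then validate that `sequence_number` increases monotonically across the stream and reject any event missing its index fields.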

Refs: ggml-org/llama.cpp#21174 (fumlig review comment)
2026-04-03 08:48:53 +02:00
Name              | Last commit                                                                          | Date
batched-bench     | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
cli               | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
completion        | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
cvector-generator | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
export-lora       | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
fit-params        | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
gguf-split        | gguf-split : clarify operation of gguf-split (#19749)                                | 2026-03-25 13:12:50 +02:00
imatrix           | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
llama-bench       | llama-bench: print `-n-cpu-moe` when offloaded layers > 1 (#20984)                   | 2026-03-25 21:17:27 +08:00
mtmd              | model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)                    | 2026-04-02 17:10:32 +02:00
parser            | common/parser: add proper reasoning tag prefill reading (#20424)                     | 2026-03-19 16:58:21 +01:00
perplexity        | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
quantize          | llama : refactor llama_model_quantize_params to expose a pure C interface (#20346)   | 2026-04-01 08:43:00 +03:00
results           | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
rpc               | Fix locale-dependent float printing in GGUF metadata (#17331)                        | 2026-03-04 09:30:40 +01:00
server            | server: add full streaming compliance for Responses API events                       | 2026-04-03 08:48:53 +02:00
tokenize          | Fix locale-dependent float printing in GGUF metadata (#17331)                        | 2026-03-04 09:30:40 +01:00
tts               | common : move up common_init() and fix Windows UTF-8 logs (#21176)                   | 2026-03-31 12:53:41 +02:00
CMakeLists.txt    | llama: end-to-end tests (#19802)                                                     | 2026-03-08 12:30:21 +01:00