llama.cpp/tools/server/webui/src/lib/services
Latest commit 99c53d6558 by Aleksander Grygier: webui: Add a "Continue" Action for Assistant Message (#16971)
* feat: Add "Continue" action for assistant messages

* feat: Continuation logic & prompt improvements

* chore: update webui build output

* feat: Improve logic for continuing the assistant message

* chore: update webui build output

* chore: Linting

* chore: update webui build output

* fix: Remove synthetic prompt logic, use the prefill feature by sending the conversation payload ending with an assistant message (see the sketch below)

* chore: update webui build output

* feat: Enable the "Continue" button based on config and non-reasoning model type

* chore: update webui build output

* chore: Update packages with `npm audit fix`

* fix: Remove redundant error

* chore: update webui build output

* chore: Update `.gitignore`

* fix: Add missing change

* feat: Add auto-resizing for Edit Assistant/User Message textareas

* chore: update webui build output
2025-11-19 14:39:50 +01:00
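
The prefill-based continuation referenced in the commit log above can be illustrated with a short sketch. This is a minimal illustration, assuming an OpenAI-compatible `/v1/chat/completions` endpoint that treats a trailing assistant message as a prefix to continue; `ChatMessage` and `continueAssistantMessage` are hypothetical names, not the actual `chat.ts` API.

```ts
// Minimal sketch of the prefill-based "Continue" action. The key idea from
// the commits above: resend the conversation with the partial assistant
// message as the final entry, and the server continues that message instead
// of starting a fresh reply. `ChatMessage` and `continueAssistantMessage`
// are illustrative names, not the actual webui identifiers.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

async function continueAssistantMessage(
  history: ChatMessage[],
  partialAssistantText: string
): Promise<Response> {
  const payload = {
    // Ending the payload with an assistant message triggers prefill:
    // the model generates a continuation of `partialAssistantText`.
    messages: [...history, { role: 'assistant', content: partialAssistantText }],
    stream: true,
  };
  return fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
}
```

Because the server receives the partial text as the final assistant turn, no synthetic "please continue" prompt is needed, which is what the "Remove synthetic prompt logic" commit above describes.
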
chat.ts                 webui: Add a "Continue" Action for Assistant Message (#16971) (see the continuation sketch above)   2025-11-19 14:39:50 +01:00
index.ts                webui: remove client-side context pre-check and rely on backend for limits (#16506) (sketch below)   2025-10-12 18:06:41 +02:00
models.ts               webui: introduce OpenAI-compatible model selector in JSON payload (#16562) (sketch below)   2025-10-22 16:58:23 +02:00
parameter-sync.spec.ts  Add server-driven parameter defaults and syncing (#16515)   2025-10-15 16:22:20 +02:00
parameter-sync.ts       Add server-driven parameter defaults and syncing (#16515) (sketch below)   2025-10-15 16:22:20 +02:00
slots.ts                Enable per-conversation loading states to allow having parallel conversations (#16327) (sketch below)   2025-10-20 12:41:13 +02:00
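
For `index.ts`, the change above removes the client-side context pre-check; a hedged sketch of what "rely on backend for limits" can look like follows. The error-body shape and the `sendChat` helper name are assumptions, not the actual service code.

```ts
// Sketch of backend-driven limit handling: no client-side token estimate,
// just forward the request and surface the server's own error. The error
// body shape (`error.message`) follows the OpenAI-style convention and is
// an assumption here; `sendChat` is a hypothetical helper name.
async function sendChat(payload: unknown): Promise<Response> {
  const res = await fetch('/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  if (!res.ok) {
    const body = await res.json().catch(() => null);
    throw new Error(body?.error?.message ?? `request failed with status ${res.status}`);
  }
  return res;
}
```
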
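For `models.ts`, a sketch of the OpenAI-compatible model selector idea: attach a `model` field to the JSON payload only when one is selected. Field names follow the OpenAI chat-completions schema; `buildChatPayload` is a hypothetical name, not the actual `models.ts` export.

```ts
// Sketch of the model-selector idea: include an OpenAI-compatible `model`
// field in the request body only when the user picked a model, otherwise
// let the server fall back to its default.
interface ChatPayload {
  model?: string;
  messages: { role: string; content: string }[];
  stream: boolean;
}

function buildChatPayload(
  messages: ChatPayload['messages'],
  selectedModel: string | null
): ChatPayload {
  const payload: ChatPayload = { messages, stream: true };
  if (selectedModel) {
    payload.model = selectedModel; // omitted entirely when no model is chosen
  }
  return payload;
}
```
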
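For `parameter-sync.ts`, a sketch of server-driven parameter defaults and syncing. `llama-server` exposes defaults via its `/props` endpoint, but the exact response shape assumed below, and both function names, are illustrative assumptions rather than the actual implementation.

```ts
// Sketch of server-driven parameter syncing: pull defaults from the server
// and let locally edited values win. The nesting under
// `default_generation_settings.params` is an assumed shape for illustration.
async function fetchServerDefaults(baseUrl = ''): Promise<Record<string, number>> {
  const res = await fetch(`${baseUrl}/props`);
  if (!res.ok) throw new Error(`/props request failed: ${res.status}`);
  const props = await res.json();
  return props?.default_generation_settings?.params ?? {};
}

function syncParameters(
  local: Record<string, number>,
  userModified: Set<string>,
  serverDefaults: Record<string, number>
): Record<string, number> {
  const merged = { ...local };
  for (const [key, value] of Object.entries(serverDefaults)) {
    // Server defaults apply only to parameters the user has not touched.
    if (!userModified.has(key)) merged[key] = value;
  }
  return merged;
}
```
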
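For `slots.ts`, a sketch of per-conversation loading state, the mechanism that allows parallel conversations: track loading per conversation id rather than with a single global flag. Function names are illustrative, not the actual exports.

```ts
// Sketch of per-conversation loading state: a set keyed by conversation id
// instead of one global "is loading" flag, so several conversations can
// stream responses at the same time.
const loadingConversations = new Set<string>();

function setConversationLoading(conversationId: string, loading: boolean): void {
  if (loading) loadingConversations.add(conversationId);
  else loadingConversations.delete(conversationId);
}

function isConversationLoading(conversationId: string): boolean {
  return loadingConversations.has(conversationId);
}
```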