llama.cpp/tools/server/webui/docs/flows
| File | Last commit | Date |
| --- | --- | --- |
| chat-flow.md | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| conversations-flow.md | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| data-flow-simplified-model-mode.md | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| data-flow-simplified-router-mode.md | Use OpenAI-compatible `/v1/models` endpoint by default (#17689) | 2025-12-03 20:49:09 +01:00 |
| database-flow.md | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| models-flow.md | Use OpenAI-compatible `/v1/models` endpoint by default (#17689) | 2025-12-03 20:49:09 +01:00 |
| server-flow.md | server: introduce API for serving / loading / unloading multiple models (#17470) | 2025-12-01 19:41:04 +01:00 |
| settings-flow.md | webui: Add switcher to Chat Message UI to show raw LLM output (#19571) | 2026-02-12 19:55:51 +01:00 |