llama.cpp/tools/server/webui/src/lib/services
Latest commit: afb79b2970 by Pascal: webui: split raw output into backend parsing and frontend display options (2026-02-13 13:17:33 +01:00)
File                            Last change                                                                               Date
chat.ts                         refactor: Tool call handling                                                              2026-02-13 12:57:03 +01:00
database.service.ts             WebUI Architecture Cleanup (#19541)                                                       2026-02-12 11:22:27 +01:00
database.ts                     webui: Per-conversation system message with UI displaying, edition & branching (#17275)   2025-12-06 13:19:05 +01:00
index.ts                        server: introduce API for serving / loading / unloading multiple models (#17470)          2025-12-01 19:41:04 +01:00
models.service.ts               WebUI Architecture Cleanup (#19541)                                                       2026-02-12 11:22:27 +01:00
models.ts                       Use OpenAI-compatible `/v1/models` endpoint by default (#17689)                           2025-12-03 20:49:09 +01:00
parameter-sync.service.spec.ts  WebUI Architecture Cleanup (#19541)                                                       2026-02-12 11:22:27 +01:00
parameter-sync.service.ts       WebUI Architecture Cleanup (#19541)                                                       2026-02-12 11:22:27 +01:00
parameter-sync.spec.ts          server: (webui) add --webui-config (#18028)                                               2025-12-17 21:45:45 +01:00
parameter-sync.ts               webui: split raw output into backend parsing and frontend display options                 2026-02-13 13:17:33 +01:00
props.service.ts                WebUI Architecture Cleanup (#19541)                                                       2026-02-12 11:22:27 +01:00
props.ts                        server: introduce API for serving / loading / unloading multiple models (#17470)          2025-12-01 19:41:04 +01:00