llama.cpp/tools/server/webui/src/routes
Pascal 6ce3d85796
server: (webui) add --webui-config (#18028)
* server/webui: add server-side WebUI config support

Add CLI arguments --webui-config (inline JSON) and --webui-config-file
(file path) to configure the WebUI's default settings from the server side.
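The concrete setting keys are not listed in this message; purely as a hypothetical illustration (key names invented, not taken from the actual WebUI schema), the value accepted by either flag is a flat JSON object along these lines:

```typescript
// Hypothetical illustration only: the real keys come from the WebUI's own
// settings schema, which is not spelled out in this commit message.
// Roughly equivalent invocations (shape, not exact keys):
//   llama-server -m model.gguf --webui-config '{"theme":"dark"}'
//   llama-server -m model.gguf --webui-config-file ./webui-config.json
const exampleWebuiConfig: Record<string, unknown> = {
  theme: 'dark',              // hypothetical key
  showTokensPerSecond: true,  // hypothetical key
};
```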

Backend changes:
- Parse JSON once in server_context::load_model() for performance
- Cache the parsed config in the webui_settings member (no re-parsing on each /props request)
- Add proper error handling in router mode with try/catch
- Expose webui_settings in the /props endpoint for both router and child modes (response shape sketched after this list)
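As a rough sketch of the resulting /props payload: the webui_settings key follows the member name mentioned above, while the other fields shown are abbreviated assumptions rather than the full response.

```typescript
// Sketch of what a client might see from GET /props after this change.
// Only webui_settings is the point here; other fields are illustrative.
interface ServerProps {
  model_path?: string; // abbreviated, assumed existing field
  // Parsed once server-side from --webui-config / --webui-config-file and
  // returned as-is by both router and child modes.
  webui_settings?: Record<string, unknown>;
}
```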

Frontend changes:
- Add 14 configurable WebUI settings via parameter sync
- Add tests for webui settings extraction
- Fix subpath support by including the base path in API calls (see the sketch after this list)
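For the parameter sync above, a minimal sketch (not the actual webui code, names illustrative) of reading server-provided defaults from /props while honouring a base path for subpath deployments:

```typescript
// Minimal sketch, assuming a webui_settings key on /props (per the backend
// change above); the helper name and merge order are illustrative.
async function fetchServerWebuiDefaults(base = ''): Promise<Record<string, unknown>> {
  // `base` is the server's base path, so the call still resolves when the
  // server is mounted under a subpath (e.g. base = '/llama').
  const res = await fetch(`${base}/props`);
  if (!res.ok) return {};
  const props: { webui_settings?: Record<string, unknown> } = await res.json();
  return props.webui_settings ?? {};
}

// Built-in UI defaults are overridden by server config, which is in turn
// overridden by whatever the user has changed locally:
// const effective = { ...builtinDefaults, ...(await fetchServerWebuiDefaults(base)), ...userOverrides };
```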

Addresses feedback from @ngxson and @ggerganov

* server: address review feedback from ngxson

* server: regenerate README with llama-gen-docs
2025-12-17 21:45:45 +01:00
chat/[id] server: introduce API for serving / loading / unloading multiple models (#17470) 2025-12-01 19:41:04 +01:00
+error.svelte webui: switch to hash-based routing (alternative of #16079) (#16157) 2025-09-26 18:36:48 +03:00
+layout.svelte server: (webui) add --webui-config (#18028) 2025-12-17 21:45:45 +01:00
+page.svelte server: introduce API for serving / loading / unloading multiple models (#17470) 2025-12-01 19:41:04 +01:00
+page.ts server: introduce API for serving / loading / unloading multiple models (#17470) 2025-12-01 19:41:04 +01:00