Commit Graph

450 Commits

Author SHA1 Message Date
Aleksander Grygier 172e93d494 Merge remote-tracking branch 'ggml-org/master' into allozaur/mcp-mvp 2026-01-24 15:13:58 +01:00
Aleksander Grygier da9c245838 chore: update webui build output 2026-01-24 13:59:52 +01:00
Aleksander Grygier 7c4bedda87 feat: Improve formatting performance 2026-01-24 13:58:23 +01:00
Aleksander Grygier c39c6ef436 fix: System prompt sorting 2026-01-24 13:44:41 +01:00
Aleksander Grygier 2601bf0f59 fix: Save draft message in Chat Form when adding System Prompt from new chat view 2026-01-24 13:32:49 +01:00
Aleksander Grygier a647edfc0b fix: Chat Form submission 2026-01-24 12:33:24 +01:00
Aleksander Grygier bd16b6145c chore: update webui build output 2026-01-24 01:32:36 +01:00
Aleksander Grygier 8428741034 feat: MCP Prompts WIP 2026-01-24 01:26:17 +01:00
Aleksander Grygier 3d88d0b6b2 chore: update webui build output 2026-01-23 15:21:56 +01:00
Aleksander Grygier 9c391d8e0d feat: UI improvements 2026-01-23 15:21:03 +01:00
Xuan-Son Nguyen 51fa458a92
server : support preserving reasoning_content in assistant message (#18994)
* support reasoning_content input

* report template caps to webui

* add docs

* rm commented code
2026-01-22 21:30:06 +01:00
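
To illustrate what this change accepts, here is a minimal sketch of an assistant turn that passes its earlier `reasoning_content` back to the server so the chat template can re-render it. The endpoint URL, model name, and message text are placeholders; only the `reasoning_content` field itself comes from the PR.

```ts
// Sketch: resend an assistant message with its reasoning preserved, instead
// of dropping it on the next turn. All values are illustrative placeholders.
const body = {
  model: "llama",
  messages: [
    { role: "user", content: "Why is the sky blue?" },
    {
      role: "assistant",
      content: "Rayleigh scattering favors shorter wavelengths.",
      reasoning_content: "The question concerns light scattering...", // preserved input
    },
    { role: "user", content: "And at sunset?" },
  ],
};

const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(body),
});
console.log((await res.json()).choices?.[0]?.message);
```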
Xuan-Son Nguyen 4e595b250a
server: do not log certain endpoints (avoid log spam) (#19028) 2026-01-22 19:24:37 +01:00
Aleksander Grygier 963711cccb chore: update webui build output 2026-01-22 18:20:55 +01:00
Aleksander Grygier 6018f85c65 feat: Architectural improvements 2026-01-22 18:19:37 +01:00
Aleksander Grygier c02e83c32a feat: Per-conversation agentic loop state 2026-01-22 17:38:51 +01:00
손희준 c6926d1d95
server: Reorder methods in `server-task.cpp` (#19016)
* Move `task_result_state::update_chat_msg` to match with header

* Move `server_task_result_cmpl_partial::to_json_anthropic()` to match with header

---------

Co-authored-by: openingnow <>
2026-01-22 14:36:04 +01:00
Hendrik Erz 3802d3c78f
fix: Use `tabular-nums` for chat message statistics (#18915)
* fix: Use `tabular-nums` for chat message statistics

* fix: Rebuild WebUI
2026-01-21 18:46:01 +01:00
손희준 fbbf3ad190
server: /v1/responses (partial) (#18486)
* from previous PR

* Make instruction (system) the first message

* Convert [input_message] (text/image/file)

* Rename convert_responses_to_chatcmpl(body) -> response_body

* Initial tool call support

* Erase instructions field from chatcmpl body

* Feed reasoning texts to chat template

* Use std::vector instead of opaque json array

* Make output_item.added events consistent

* Move `server_task_result_cmpl_partial::update` from header to source

* Match ID of output_item.added and .done events

* Add function_call only if there is no "fc_" prefix

* Add function call output at non-streaming API

* Test if ID is persistent

* Add doc

* Fix style - use trailing comma

* Rewrite state management

* catch up with upstream/master

* Fix style - "type" is the first item of SSE data

* Explicitly check "instructions" from response_body

* Make lambdas static

* Check if reasoning content exists

* Add `oai_resp_id` to task_result_state(also initialized at ctor), server_task_result_cmpl_partial, and server_task_result_cmpl_final

* Reject `input_file` since it is not supported by chatcmpl

* Add "fc_" prefix to non-straming function call id as coderabbit pointed out

---------

Co-authored-by: openingnow <>
2026-01-21 17:47:23 +01:00
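
A rough sketch of calling the new endpoint follows. The `instructions` and `input` shapes mirror the OpenAI Responses API that this PR partially implements (`instructions` becomes the first, system message; `input` items are converted to chat-completion messages); the URL and model name are placeholders, and the exact supported subset is defined by the PR, not this example.

```ts
// Sketch: minimal non-streaming request against the partial /v1/responses API.
const res = await fetch("http://localhost:8080/v1/responses", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama",
    instructions: "Answer tersely.", // injected as the first (system) message
    input: [
      { role: "user", content: [{ type: "input_text", text: "Ping?" }] },
    ],
  }),
});
console.log(await res.json());
```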
Adrien Gallouët 1c7cf94b22
common, server : use the same User-Agent by default (#18957)
This commit also ensures that if a custom User-Agent is used, it will be
the only one sent.

Signed-off-by: Adrien Gallouët <angt@huggingface.co>
2026-01-20 18:28:43 +01:00
Xuan-Son Nguyen 2c1f199653
cli : fix reasoning responses in CLI (#18961)
* cli : fix reasoning responses in CLI

* fix build

* fix build (2)
2026-01-20 18:23:25 +01:00
Xuan-Son Nguyen 6df686bee6
server : refactor oai_parser_opt, move it to server_chat_params (#18937)
* server_chat_params

* move chat format into CLI

* use meta whenever possible

* clean up, no more chatml fallback
2026-01-19 23:28:01 +01:00
Lennart Austenfeld 18361c579c
server: fix memory reservations in populate_token_probs (#18787) 2026-01-19 19:13:31 +01:00
Aleksander Grygier 39d0ff485d chore: update webui build output 2026-01-19 19:02:40 +01:00
Aleksander Grygier 8a95ec3ea6 feat: Improve MCP Server selection UI + lazy load health checks 2026-01-19 19:01:32 +01:00
Aleksander Grygier cafb9c09d3 feat: UI improvements 2026-01-19 16:56:02 +01:00
Aleksander Grygier 54192b05fb feat: Simplify MCP server enabling logic per chat
Refactors MCP server enabling logic to remove the dependency on global settings.

This simplifies the logic by directly checking the per-chat override status, and removes the need to pass the global enabled state as a parameter.

Additionally:
- The selector now only shows MCP servers that are enabled in settings.
- Sorts the servers by whether they are enabled for the current chat.
2026-01-19 16:43:53 +01:00
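
A minimal sketch of the simplified check this commit describes, with hypothetical names (`ChatMcpOverrides`, `isServerEnabledForChat`) rather than the actual webui identifiers: the per-chat override is consulted directly, with no global enabled flag passed in.

```ts
// Hypothetical shape: per-chat map of MCP server id -> enabled override.
type ChatMcpOverrides = Record<string, boolean>;

function isServerEnabledForChat(overrides: ChatMcpOverrides, serverId: string): boolean {
  // Before the refactor the global enabled state had to be passed in as a
  // parameter; now the per-chat override decides on its own (the default
  // shown here is an assumption, not the webui's actual behavior).
  return overrides[serverId] ?? false;
}

console.log(isServerEnabledForChat({ "weather-mcp": true }, "weather-mcp")); // true
```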
Aleksander Grygier 62ed7f112d chore: update webui build output 2026-01-19 16:26:16 +01:00
Aleksander Grygier d37683942b fix: Missing onModelChange callback running assistant message re-generation 2026-01-19 16:25:49 +01:00
Pascal d6dfe8e064 chore: update webui build output 2026-01-19 12:12:52 +01:00
Pascal 058929d453 fix: accurate tool_response display 2026-01-19 12:11:06 +01:00
Pascal d92b621346 fix: unify MCP server label logic with simplified fallback 2026-01-18 13:10:03 +01:00
Pascal 16a03eea36 chore: update webui build output 2026-01-18 10:43:45 +01:00
Pascal d8af98f1ed refactor: remove multimodal validation from model selector
Remove all frontend validation logic that prevented users from selecting models based on multimodal capabilities. This refactoring removes restrictive UI code while maintaining full functionality:

- Vision models can describe images as text
- That text remains useful for non-vision models
- Chaining vision -> non-vision is a valid workflow
- Users know their use case better than the UI
- Users can return to vision models when needed
2026-01-18 10:42:01 +01:00
Pascal 5c28b7a2ee chore: update webui build output 2026-01-17 18:38:50 +01:00
Pascal fca7177eae fix: ignore assistant attachments (MCP) for modality detection 2026-01-17 18:36:41 +01:00
Pascal 3572667788 chore: update webui build output 2026-01-17 16:35:54 +01:00
Pascal 506da17931 refactor: eliminate MCP circular dependency
- Change architecture from mcpStore <-> mcpClient to mcpClient -> mcpStore
- Remove bidirectional callback pattern (set*Callback, notify* methods)
- Add updateState/updateHealthCheck public methods in mcpStore
- Replace callback calls with direct mcpStore method calls
- Remove unused imports (browser, HealthCheckState) and constructor
- Fixes CI: `ReferenceError: Cannot access 'mcpClient' before initialization`
2026-01-17 16:30:42 +01:00
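
A sketch of the resulting one-way dependency, using the `updateState`/`updateHealthCheck` method names from the commit but otherwise illustrative code: the client calls the store's public methods directly, and the store never calls back into the client.

```ts
// Would live in mcp-store.ts: plain public update methods, no client callbacks.
const mcpStore = {
  health: new Map<string, "ok" | "failed">(),
  updateHealthCheck(serverId: string, status: "ok" | "failed") {
    this.health.set(serverId, status);
  },
};

// Would live in mcp-client.ts, importing mcpStore (client -> store, one way).
async function checkServer(serverId: string, url: string): Promise<void> {
  try {
    const res = await fetch(url, { method: "HEAD" });
    mcpStore.updateHealthCheck(serverId, res.ok ? "ok" : "failed");
  } catch {
    mcpStore.updateHealthCheck(serverId, "failed"); // network error
  }
}

await checkServer("demo", "http://localhost:9000/health");
console.log(mcpStore.health.get("demo"));
```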
Pascal 9b3417703f fix: remove obsolete modality UI tests causing CI failures
- Remove VisionModality/AudioModality test stories
- Remove mockServerProps usage and imports
- Simplify Default test (remove dropdown interaction checks)
- Simplify FileAttachments test (remove mocks)
2026-01-17 16:30:36 +01:00
Pascal a723238245 chore: update webui build output 2026-01-16 19:52:23 +01:00
Pascal 229aba7c3e fix: strip reasoning content and UI proprietary tags from prompts
TODO: add toggle and ensure backend API compliance for reasoning format
2026-01-16 19:50:36 +01:00
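
A sketch of the kind of stripping this fix performs, assuming `<think>...</think>` blocks as the inline reasoning markers; the actual tags the webui strips may differ.

```ts
// Remove inline reasoning blocks before a conversation is resent as a prompt.
// The <think> tag name is an assumption for illustration only.
function stripReasoning(text: string): string {
  return text
    .replace(/<think>[\s\S]*?<\/think>/g, "") // closed reasoning blocks
    .replace(/<think>[\s\S]*$/, "")           // an unterminated trailing block
    .trim();
}

console.log(stripReasoning("<think>chain of thought</think>The answer is 4."));
// -> "The answer is 4."
```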
Pascal f09395821b chore: update webui build output 2026-01-16 15:22:46 +01:00
Pascal 78c6380222 refactor: remove reasoning after first turn filter 2026-01-16 15:19:50 +01:00
Pascal 2973c64609 refactor: inline reasoning with tags, remove fixed thinking field 2026-01-16 15:19:42 +01:00
Xuan-Son Nguyen c15395f73c
common : implement new jinja template engine (#18462)
* jinja vm

* lexer

* add vm types

* demo

* clean up

* parser ok

* binary_expression::execute

* shadow naming

* bin ops work!

* fix map object

* add string builtins

* add more builtins

* wip

* use mk_val

* eval with is_user_input

* render gemma tmpl ok

* track input string even after transformations

* support bound functions

* keyword arguments and slicing array

* use shared_ptr for values

* add mk_stmt

* allow print source on exception

* fix negate test

* testing more templates

* mostly works

* add filter_statement

* allow func to access ctx

* add jinja-value.cpp

* impl global_from_json

* a lot of fixes

* more tests

* more fix, more tests

* more fixes

* rm workarounds

* demo: type inference

* add placeholder for tojson

* improve function args handling

* rm type inference

* no more std::regex

* trailing spaces

* make testing more flexible

* make output a bit cleaner

* (wip) redirect minja calls

* test: add --output

* fix crash on macro kwargs

* add minimal caps system

* add some workarounds

* rm caps_apply_workarounds

* get rid of preprocessing

* more fixes

* fix test-chat-template

* move test-chat-jinja into test-chat-template

* rm test-chat-jinja from cmake

* test-chat-template: use common

* fix build

* fix build (2)

* rename vm --> interpreter

* improve error reporting

* correct lstrip behavior

* add tojson

* more fixes

* disable tests for COMMON_CHAT_FORMAT_GENERIC

* make sure tojson output correct order

* add object.length

* fully functional selectattr / rejectattr

* improve error reporting

* more builtins added, more fixes

* create jinja rendering tests

* fix testing.h path

* adjust whitespace rules

* more fixes

* temporary disable test for ibm-granite

* r/lstrip behavior matched with hf.js

* minimax, glm4.5 ok

* add append and pop

* kimi-k2 ok

* test-chat passed

* fix lstrip_block

* add more jinja tests

* cast to unsigned char

* allow dict key to be numeric

* nemotron: rm windows newline

* tests ok

* fix test

* rename interpreter --> runtime

* fix build

* add more checks

* bring back generic format support

* fix Apertus

* [json.exception.out_of_range.403] key 'content' not found

* rm generic test

* refactor input marking

* add docs

* fix windows build

* clarify error message

* improved tests

* split/rsplit with maxsplit

* non-inverse maxsplit

forgot to change after simplifying

* implement separators for tojson and fix indent

* i like to move it move it

* rename null -> none

* token::eof

* some nits + comments

* add exception classes for lexer and parser

* null -> none

* rename global -> env

* rm minja

* update docs

* docs: add input marking caveats

* implement missing jinja-tests functions

* oops

* support trim filter with args, remove bogus to_json reference

* numerous argument fixes

* updated tests

* implement optional strip chars parameter

* use new chars parameter

* float filter also has default

* always leave at least one decimal in float string

* jinja : static analysis + header cleanup + minor fixes

* add fuzz test

* add string.cpp

* fix chat_template_kwargs

* nits

* fix build

* revert

* unrevert

sorry :)

* add fuzz func_args, refactor to be safer

* fix array.map()

* loosen ensure_vals max count condition, add not impl for map(int)

* hopefully fix windows

* check if empty first

* normalize newlines

---------

Co-authored-by: Alde Rojas <hello@alde.dev>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2026-01-16 11:22:06 +01:00
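
One behavioral detail from the entry above that is easy to get wrong: `split`/`rsplit` with `maxsplit`, presumably matching Python's `str.rsplit` semantics that Jinja templates rely on, cuts at most `maxsplit` times from the right. A sketch of that behavior (not the engine's actual C++ implementation):

```ts
// Python-style rsplit: at most `maxsplit` cuts, counted from the right.
function rsplit(s: string, sep: string, maxsplit = -1): string[] {
  const parts = s.split(sep);
  if (maxsplit < 0 || parts.length - 1 <= maxsplit) return parts;
  const head = parts.slice(0, parts.length - maxsplit).join(sep);
  return [head, ...parts.slice(parts.length - maxsplit)];
}

console.log(rsplit("a,b,c,d", ",", 1)); // ["a,b,c", "d"], like Python's rsplit
```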
Pascal a1550ab77d chore: update webui build output 2026-01-16 11:02:17 +01:00
Pascal db37b712b2 feat: resolve MCP attachment images via rehype plugin
The LLM can reference tool-generated images via markdown links; the plugin resolves attachment names to base64 from message.extra when present, and regular HTTP/data URLs pass through unchanged (no regression).

- rehypeResolveAttachmentImages plugin in markdown pipeline
- Pass message prop to MarkdownContent and AgenticContent
- Force processor reactivity on message.extra changes
- Filter assistant images from API context (display-only)
2026-01-16 10:49:28 +01:00
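
A sketch of such a rehype pass, using the real `unist-util-visit` helper but hypothetical option and attachment shapes: `img` sources naming a tool-result attachment are swapped for base64 data URLs, while regular `http(s):`/`data:` URLs pass through untouched.

```ts
import { visit } from "unist-util-visit";
import type { Root, Element } from "hast";

// Hypothetical attachment shape; the webui's message.extra entries differ.
interface Attachment { name: string; mimeType: string; base64: string }

export function rehypeResolveAttachmentImages(attachments: Attachment[]) {
  return (tree: Root) => {
    visit(tree, "element", (node: Element) => {
      if (node.tagName !== "img") return;
      const src = String(node.properties?.src ?? "");
      if (/^(https?:|data:)/.test(src)) return; // no regression for real URLs
      const match = attachments.find((a) => a.name === src);
      if (match) {
        node.properties.src = `data:${match.mimeType};base64,${match.base64}`;
      }
    });
  };
}
```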
Pascal a3c2144c1d feat: persist base64 attachments from tool results 2026-01-16 08:07:20 +01:00
Pascal a377605f60 webui: fix custom headers persistence in UI (derived) 2026-01-15 20:36:14 +01:00
Pascal 3360f60b94 webui: fix custom headers persistence in UI 2026-01-15 20:13:01 +01:00
ddh0 13f1e4a9ca
llama : add adaptive-p sampler (#17927)
* initial commit for branch

* simplify constants

* add params to `struct common_params_sampling`, add reference to PR

* explicitly clamp `min_target` and `max_target` to `[0.0, 1.0]`

* add args, rename `queue_size` -> `window_size`

* improved comments

* minor

* remove old unused code from algorithm

* minor

* add power law case to `common_sampler_init`, add sampler name mappings

* clarify behaviour when `window_size = 0`

* add missing enums

* remove `target_range` param, make `target == 1` no-op, cleanup code

* oops, straggler

* add missing parameters in `server-task.cpp`

* copy from author

ref:
https://gist.github.com/MrJackSpade/9be99c7efbba7b95a41377e123b7b069

* remove old debug log, style nit

* fix compiler warning, add commented-out logging per token

* re-write + change parameters + simplify

* oops forgot args.cpp

* fix leftover `window_size`

* add missing values to `common_params_sampling::print()`

* with logging

* does this fix it?

* no, but does this?

* update default decay

* optimize

* fix bad merge

my git skills are lacking

* silence `missing initializer for member`

* update default decay to 0.9

* fix logging

* format (double)

* add power law to the new `samplers` vector

* log sampler init values

* improve logging messages in llama_sampler_power_law

* remove extraneous logging

* simplify target computation

last commit with debug logging!

* remove debug logging, explicitly clamp params at init

* add `use_power_law` flag + logic, minor cleanup

* update `power-law` -> `adaptive-p`

* fix cold start EMA

- `ctx->weighted_sum` is now initialized and reset to `target / (1.0f - clamped_decay)`
- `ctx->total_weight` is now initialized and reset to `1.0f / (1.0f - clamped_decay)`

this fixes a "cold start" problem with the moving average (see the sketch after this entry)

* update `SHARPNESS` constant to `10.0f`

* minor style fixes

no functional changes

* minor style fixes cont.

* update `llama_sampler_adaptive_p_i` for backend sampling (ref: #17004)

* separate into `apply` + `accept` functions

* `pending_token_idx`: switch from `llama_token` to `int32`

functionally identical (`llama.h` has `typedef int32_t llama_token;`), but it's more correct now

* don't transform logits <= -1e9f

* fix masking in backend top-p, min-p

* address review comments

* typo in comments `RND` -> `RNG`

* add docs

* add recommended values in completion docs

* address PR feedback

* remove trailing whitespace (for CI `editorconfig`)

* add adaptive-p to `common_sampler_types_from_chars`
2026-01-15 19:16:29 +02:00
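
The cold-start fix described in this entry seeds the exponential moving average at its steady state, so `weighted_sum / total_weight` equals `target` before any token is accepted instead of drifting up from zero. A small sketch of the arithmetic (assumed variable names, not the actual sampler code):

```ts
// Seed an EMA at steady state: since 1 + d + d^2 + ... = 1 / (1 - d),
// weightedSum = target / (1 - d) and totalWeight = 1 / (1 - d) give an
// initial average of exactly `target`.
function makeEma(target: number, decay: number) {
  let weightedSum = target / (1 - decay);
  let totalWeight = 1 / (1 - decay);
  return {
    accept(observed: number) {
      weightedSum = weightedSum * decay + observed;
      totalWeight = totalWeight * decay + 1;
    },
    average: () => weightedSum / totalWeight,
  };
}

const ema = makeEma(0.5, 0.9);
console.log(ema.average()); // ~0.5 -- already at target before any observation
ema.accept(0.8);
console.log(ema.average()); // ~0.53 -- moves smoothly toward observations
```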