Commit Graph

140 Commits

Author SHA1 Message Date
Xuan Son Nguyen 4a1c05c383 fix invalid ptr to shutdown_handler 2025-11-30 15:31:05 +01:00
Xuan Son Nguyen 23cb411317 also route anthropic endpoints 2025-11-29 23:29:06 +01:00
Xuan Son Nguyen a82dbbfb30 decouple server_models from server_routes 2025-11-29 23:00:35 +01:00
Xuan Son Nguyen c1dfccd078 Merge branch 'master' into xsn/server_model_management_v1_2 2025-11-29 22:34:16 +01:00
Xuan-Son Nguyen ab49f094d2
server: move server-context to its own cpp|h (#17595)
* git mv

* add server-context.h

* add server-context.h

* clean up headers

* cont : cleanup

* also expose server_response_reader (to be used by CLI)

* fix windows build

* decouple server_routes and server_http

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-29 22:04:44 +01:00
o7si 3ce7a65c2f
server: fix: /metrics endpoint returning JSON-escaped Prometheus format (#17386)
* fix: /metrics endpoint returning JSON-escaped Prometheus format

* mod: remove string overload from ok() method
2025-11-28 19:14:00 +01:00
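
The fix above boils down to content-type handling: a Prometheus exposition body must be served as plain text, and routing it through a JSON response path escapes its newlines and quotes. A minimal sketch of the idea, with hypothetical type and function names (not the actual server code):

```cpp
#include <string>

// Hypothetical response type, for illustration only.
struct http_response {
    int         status;
    std::string content_type;
    std::string body;
};

// The bug, in shape: serializing the exposition text as a JSON string turns
// every '\n' into the two characters '\' 'n', which Prometheus rejects.
// The fix is to send the raw text with the proper content type instead:
http_response metrics_ok(const std::string & exposition) {
    return {200, "text/plain; version=0.0.4; charset=utf-8", exposition};
}
```
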
Xuan Son Nguyen bdaf44a13c Merge branch 'master' into xsn/server_model_management_v1_2 2025-11-28 13:07:36 +01:00
Fredrik Hultin ddf9f94389
server : add Anthropic Messages API support (#17570)
* server : add Anthropic Messages API support

* remove -@pytest.mark.slow from tool calling/jinja tests

* server : remove unused code and slow/skip on test_anthropic_vision_base64_with_multimodal_model in test_anthropic_api.py

* server : removed redundant n field logic in anthropic_params_from_json

* server : use single error object instead of error_array in streaming response handler for /v1/chat/completions and use unordered_set instead of set in to_json_anthropic_stream()

* server : refactor Anthropic API to use OAI conversion

* make sure basic tests always go first

* clean up

* clean up api key check, add test

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-11-28 12:57:04 +01:00
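
The final shape of the Anthropic support, per the refactor commit above, is a conversion onto the existing OpenAI-compatible path. A sketch of that mapping, assuming the public Anthropic Messages schema (top-level `system` field, required `max_tokens`) and hypothetical helper names:

```cpp
#include <nlohmann/json.hpp>
using json = nlohmann::json;

// Hypothetical helper: map an Anthropic Messages request onto the
// OpenAI-style body that the existing /v1/chat/completions path consumes.
static json anthropic_to_oai(const json & req) {
    json oai;
    oai["model"]      = req.value("model", "");
    oai["max_tokens"] = req.value("max_tokens", 1024);

    json messages = json::array();
    // Anthropic carries the system prompt as a top-level field;
    // the OAI schema expects it as the first message.
    if (req.contains("system")) {
        messages.push_back({{"role", "system"}, {"content", req["system"]}});
    }
    for (const auto & m : req["messages"]) {
        messages.push_back({{"role", m["role"]}, {"content", m["content"]}});
    }
    oai["messages"] = messages;
    return oai;
}
```
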
Xuan Son Nguyen e2731c3767 set hf_repo/docker_repo as model alias when possible 2025-11-26 15:57:20 +01:00
Xuan Son Nguyen e40f35fb61 remove support for extra args 2025-11-26 15:43:27 +01:00
Xuan Son Nguyen 399b39f21b Merge branch 'master' into xsn/server_model_management_v1_2 2025-11-24 14:45:57 +01:00
Xuan-Son Nguyen b8372eecd9
server: split server.cpp code into server/common/task/queue (#17362)
* add server-task, server-common

* add server-queue

* rm redundant includes

* move enum stop_type to server-task

* server : headers cleanup

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-24 14:41:53 +01:00
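
Of the four pieces named in the split, the queue is the most self-contained. A sketch of the pattern, with illustrative types rather than the actual server_queue API:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Sketch of the task-queue idea behind the split: producers post tasks,
// the server loop blocks until one is available.
template <typename Task>
struct task_queue {
    void post(Task t) {
        {
            std::lock_guard<std::mutex> lk(m);
            q.push_back(std::move(t));
        }
        cv.notify_one();
    }

    Task wait_pop() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !q.empty(); });
        Task t = std::move(q.front());
        q.pop_front();
        return t;
    }

private:
    std::deque<Task>        q;
    std::mutex              m;
    std::condition_variable cv;
};
```
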
Xuan Son Nguyen 2c6b58f785 nits 2025-11-24 12:20:34 +01:00
Xuan Son Nguyen 6ed192b4dd add --models-allow-extra-args for security 2025-11-24 12:01:16 +01:00
Xuan Son Nguyen d65be9170b address review comments 2025-11-23 19:31:21 +01:00
Xuan Son Nguyen 5ad594e6d6 cleaner 2025-11-23 19:02:07 +01:00
Xuan Son Nguyen 2e355c7f8e oai-compat /models endpoint 2025-11-23 17:25:24 +01:00
Xuan Son Nguyen 74685f4194 allow reusing args if auto_load 2025-11-23 15:42:33 +01:00
Xuan Son Nguyen f927e21ffc support extra_args on loading model 2025-11-23 15:39:03 +01:00
Xuan Son Nguyen 7ef6312f85 add note 2025-11-23 15:08:31 +01:00
Xuan Son Nguyen f25bfaba4d expose args and exit_code in API 2025-11-23 14:59:04 +01:00
Xuan Son Nguyen d32bbfec82 add endpoint docs 2025-11-22 18:01:48 +01:00
Xuan Son Nguyen 7cd929076d remove default model path 2025-11-21 22:33:04 +01:00
Xuan Son Nguyen 62ee883d5a implement LRU 2025-11-21 22:22:57 +01:00
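
The LRU here governs which loaded model gets unloaded first when the management layer hits its capacity. A sketch of the bookkeeping, with hypothetical names (the real structure would track running model instances, not just strings):

```cpp
#include <list>
#include <string>
#include <unordered_map>

// Illustrative LRU over model names: touch() marks a model as most recently
// used and returns the name of the model to evict, if the cap is exceeded.
struct model_lru {
    explicit model_lru(size_t cap) : capacity(cap) {}

    std::string touch(const std::string & name) {
        if (auto it = pos.find(name); it != pos.end()) {
            order.erase(it->second);
        }
        order.push_front(name);               // MRU at the front
        pos[name] = order.begin();
        if (order.size() > capacity) {
            std::string victim = order.back(); // LRU at the back
            order.pop_back();
            pos.erase(victim);
            return victim;                     // caller unloads this model
        }
        return {};
    }

private:
    size_t capacity;
    std::list<std::string> order;
    std::unordered_map<std::string, std::list<std::string>::iterator> pos;
};
```
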
Xuan Son Nguyen 6610724f8e fix unsafe pointer 2025-11-20 16:13:30 +01:00
Xuan Son Nguyen b9ebdf616a more stable 2025-11-20 15:49:40 +01:00
Xuan Son Nguyen 7c6eb17fad fix windows 2025-11-20 13:14:56 +01:00
Xuan Son Nguyen 5423d42a35 use subprocess.h, better logging 2025-11-20 00:05:29 +01:00
Xuan Son Nguyen 399f536dc7 fix compile error 2025-11-19 21:33:44 +01:00
Xuan Son Nguyen fc5901a449 server: add model management and proxy 2025-11-19 21:23:00 +01:00
Xuan-Son Nguyen 0de8878c96
server: split HTTP into its own interface (#17216)
* server: split HTTP into its own interface

* move server-http and httplib to its own file

* add the remaining endpoints

* fix exception/error handling

* renaming

* missing header

* fix missing windows header

* fix error responses from http layer

* fix slot save/restore handler

* fix case where only one stream chunk is returned

* add NOMINMAX

* do not call sink.write on empty data

* use safe_json_to_str for SSE

* clean up

* add some comments

* improve usage of next()

* bring back the "server is listening on" message

* more generic handler

* add req.headers

* move the chat template print to init()

* add req.path

* cont : minor

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-11-17 22:05:44 +01:00
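
The split above means route handlers see only plain request/response values, keeping the httplib dependency behind one translation unit. An illustrative shape of such an interface (hypothetical names, not the actual one):

```cpp
#include <functional>
#include <map>
#include <string>

// Handlers depend only on these plain structs, never on httplib types.
struct http_req { std::string path; std::map<std::string, std::string> headers; std::string body; };
struct http_res { int status = 200; std::string body; };

using http_handler = std::function<http_res(const http_req &)>;

// Routes are registered against the abstract interface; a concrete backend
// (e.g. one built on httplib) dispatches into these maps.
struct http_iface {
    std::map<std::string, http_handler> get_routes;
    std::map<std::string, http_handler> post_routes;

    void get (const std::string & path, http_handler h) { get_routes [path] = std::move(h); }
    void post(const std::string & path, http_handler h) { post_routes[path] = std::move(h); }
};
```
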
Georgi Gerganov 5b2093becc
server : handle context overflow during decode (#17267)
* server : handle context overflow during decode

* server : minor refactor
2025-11-16 09:23:37 +02:00
Xuan-Son Nguyen 9b17d74ab7
mtmd: add mtmd_log_set (#17268) 2025-11-14 15:56:19 +01:00
Georgi Gerganov d396b43748
server : fix "can batch with" bug (#17263) 2025-11-14 14:03:45 +02:00
Xuan-Son Nguyen c4abcb2457
server: fixing naming conflict res_error (#17243) 2025-11-13 20:53:47 +01:00
Xuan-Son Nguyen 00c94083b3
server: (refactor) implement generator-based API for task results (#17174)
* server: (refactor) implement generator-based API for task results

* improve

* moving some code

* fix "Response ended prematurely"

* add sink.done before return false

* rm redundant check

* rm unused var

* rename generator --> reader
2025-11-12 18:50:52 +01:00
Xuan-Son Nguyen ee8dd5c658
server: move res_error/res_ok to static function (#17167) 2025-11-12 14:17:24 +01:00
Georgi Gerganov cb1adf8851
server : handle failures to restore host cache (#17078)
* server : handle failures to restore host cache

* server : add tests for the prompt cache
2025-11-09 14:27:05 +02:00
Aidan eeee367de5
server: fix correct time_ms calculation in prompt_progress (#17093)
* fix: correct time_ms calculation in send_partial_response

The time_ms field was incorrectly calculated. The division was happening
before the subtraction leading to incorrect values.

Before: (ggml_time_us() - slot.t_start_process_prompt / 1000) After:
(ggml_time_us() - slot.t_start_process_prompt) / 1000

* docs : document time_ms field in prompt_progress
2025-11-08 15:12:11 +02:00
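
The commit message above already shows the before/after; as a self-contained illustration of the precedence bug (stub values in place of the real timer):

```cpp
#include <cstdint>
#include <cstdio>

// stand-in for ggml_time_us() so the sketch runs on its own (values are made up)
static int64_t time_us() { return 5'000'000; }

int main() {
    const int64_t t_start = 2'000'000; // prompt processing started at t = 2 s

    // before the fix: '/' binds tighter than '-', so only t_start is divided
    const int64_t wrong = time_us() - t_start / 1000;   // 5'000'000 - 2'000 = 4'998'000

    // after the fix: subtract first, then convert microseconds to milliseconds
    const int64_t right = (time_us() - t_start) / 1000; // 3'000'000 / 1000 = 3'000 ms

    std::printf("wrong = %lld, right = %lld\n", (long long) wrong, (long long) right);
}
```
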
Georgi Gerganov 8c0d6bb455
server : print the samplers chain for each request (#17070) 2025-11-07 12:24:47 +02:00
Georgi Gerganov b7f9010d24
server : disable checkpoints with mtmd (#17045) 2025-11-06 12:09:29 +02:00
Georgi Gerganov 13b339bcd9
server : do not default to multiple slots with speculative decoding (#17017)
* server : do not default to multiple slots with speculative decoding

* cont : fix
2025-11-05 14:32:55 +02:00
Georgi Gerganov 66d8eccd42
server : do context shift only while generating (#17000) 2025-11-04 19:21:36 +02:00
Georgi Gerganov 48bd26501b
server : add props.model_alias (#16943)
* server : add props.model_alias

* webui : npm run format
2025-11-03 14:38:23 +01:00
Xuan-Son Nguyen 070ff4d535
mtmd: add --image-min/max-tokens (#16921) 2025-11-03 11:11:18 +01:00
Georgi Gerganov 2f966b8ed8
clip : use FA (#16837)
* clip : use FA

* cont : add warning about unsupported ops

* implement "auto" mode for clip flash attn

* clip : print more detailed op support info during warmup

* cont : remove obsolete comment [no ci]

* improve debugging message

* trailing space

* metal : remove stray return

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-11-02 21:21:48 +01:00
Georgi Gerganov cd5e3b5754
server : support unified cache across slots (#16736)
* server : support unified context across slots

* cont : fix speculative decoding initialization

* context : fix n_ctx_per_seq computation

* server : purge slots one by one

* tests : add unified cache server tests

* llama : update per-seq context computation

* test-thread-safety : handle tiny training context of the input model

* server : fix server_tokens clear()

* server : use 4 slots + unified KV by default

* llama : add note about context size queries

* cont : update todos [no ci]

* context : do not cap the size of the context

* tests : adjust parameters to be CI friendlier

* context : add warning
2025-11-02 18:14:04 +02:00
Georgi Gerganov c22473b580
server : don't print user inputs to console (#16871) 2025-10-31 10:54:19 +02:00
Daniel Bevenius 0f715b4e75
server : fix typos in server.cpp comments [no ci] (#16883) 2025-10-31 09:51:26 +01:00
Georgi Gerganov b52edd2558
server : remove n_past (#16818)
* server : remove n_past

* server : replace slot.n_prompt_tokens() with slot.task->n_tokens()

* server : fixes + clean-up

* cont : fix context shift

* server : add server_tokens::pos_next()

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>

* server : fix pos_next() usage

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>

---------

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
2025-10-30 18:42:57 +02:00