server: update readme to mention n_past_max metric (#16436)
https://github.com/ggml-org/llama.cpp/pull/15361 added a new exported metric, but I missed updating this doc.
This commit is contained in:
parent ca71fb9b36
commit c5fef0fcea
@@ -1045,6 +1045,7 @@ Available metrics:
 - `llamacpp:kv_cache_tokens`: KV-cache tokens.
 - `llamacpp:requests_processing`: Number of requests processing.
 - `llamacpp:requests_deferred`: Number of requests deferred.
+- `llamacpp:n_past_max`: High watermark of the context size observed.
 
 ### POST `/slots/{id_slot}?action=save`: Save the prompt cache of the specified slot to a file.
 
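For reference, these metrics are served from the server's `/metrics` endpoint when it is started with `--metrics`. Below is a minimal sketch of reading the new `llamacpp:n_past_max` value; the host/port (`localhost:8080`) and the `read_metric` helper are assumptions for illustration, not part of this commit.

```python
# Minimal sketch: scrape the server's Prometheus-style /metrics endpoint and
# read the new context-size high-watermark metric.
# Assumes llama-server was started with --metrics and listens on
# localhost:8080 (adjust for your setup).
from typing import Optional
import urllib.request

METRICS_URL = "http://localhost:8080/metrics"  # assumed default address

def read_metric(name: str) -> Optional[float]:
    """Return the value of a single metric line from the exposition output, if present."""
    with urllib.request.urlopen(METRICS_URL) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        if line.startswith("#"):        # skip HELP/TYPE comment lines
            continue
        if line.startswith(name):
            # Exposition format: "<metric_name> <value>"
            return float(line.split()[-1])
    return None

if __name__ == "__main__":
    print("llamacpp:n_past_max =", read_metric("llamacpp:n_past_max"))
```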