llama.cpp/tools
hanishkvc a4e023d21c SimpleChatTCRV:Config++:Cleanup the initial go
Ensure the toolNames array is reset each time setup is called, so
that it doesn't end up with duplicate entries, and equally doesn't
end up with entries for tool calls which are no longer available,
for example because some config changed.

Ensure the ChatId is logged for the toolweb-related setup actions.

Ensure that the ExternalAi tool-call-related chat session has its
tools config disabled at the time it is created, so that the end
user doesn't get confused, given that the external_ai tool call
explicitly forces tools support to disabled.

Update some of the notes and readme
2025-12-04 19:41:40 +05:30
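The commit message above describes two small patterns: reset the tool-name registry on every setup pass, and disable the tools config for the external_ai chat session at creation time. The sketch below is only an illustration of those patterns, not the actual simplechat code; the names ToolsManager, setup, createExternalAiSession and the session shape are assumptions made for this example.

```typescript
// Hedged sketch, assuming hypothetical names; illustrates the behaviour the
// commit message describes, not the real simplechat implementation.

class ToolsManager {
    toolNames: string[] = [];

    // Re-register the currently available tools. Resetting toolNames first
    // avoids duplicate entries across repeated setup() calls and drops tools
    // that are no longer available after a config change.
    setup(chatId: string, availableTools: string[]): void {
        this.toolNames = [];                                   // reset, never append blindly
        for (const name of availableTools) {
            this.toolNames.push(name);
            console.log(`ToolWeb:${chatId}: registered tool ${name}`);  // log wrt the ChatId
        }
    }
}

// When the chat session for the external_ai tool call is created, disable its
// tools config up front, since external_ai forces tools support off anyway;
// doing it at creation avoids confusing the end user later.
function createExternalAiSession(chatId: string) {
    return { chatId, config: { toolsEnabled: false } };        // disabled at creation itself
}
```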
Name               | Last commit message                                               | Last commit date
batched-bench      | batched-bench : add "separate text gen" mode (#17103)             | 2025-11-10 12:59:29 +02:00
cvector-generator  | cmake : Do not install tools on iOS targets (#15903)              | 2025-09-16 09:54:44 +07:00
export-lora        | cmake : Do not install tools on iOS targets (#15903)              | 2025-09-16 09:54:44 +07:00
gguf-split         | ci : use smaller model (#16168)                                   | 2025-09-22 09:11:39 +03:00
imatrix            | Manually link -lbsd to resolve flock symbol on AIX (#16610)       | 2025-10-23 19:37:31 +08:00
llama-bench        | bench : cache the llama_context state at computed depth (#16944)  | 2025-11-07 21:23:11 +02:00
main               | cli: add migration warning (#17620)                               | 2025-11-30 15:32:43 +01:00
mtmd               | mtmd: fix --no-warmup (#17695)                                    | 2025-12-02 22:48:08 +01:00
perplexity         | perplexity : show more kl-divergence data (#16321)                | 2025-09-29 09:30:45 +03:00
quantize           | ci : use smaller model (#16168)                                   | 2025-09-22 09:11:39 +03:00
rpc                | Install rpc-server when GGML_RPC is ON. (#17149)                  | 2025-11-11 10:53:59 +00:00
run                | Manually link -lbsd to resolve flock symbol on AIX (#16610)       | 2025-10-23 19:37:31 +08:00
server             | SimpleChatTCRV:Config++:Cleanup the initial go                    | 2025-12-04 19:41:40 +05:30
tokenize           | cmake : Do not install tools on iOS targets (#15903)              | 2025-09-16 09:54:44 +07:00
tts                | model : Apertus model implementation (#15852)                     | 2025-10-02 20:43:22 +03:00
CMakeLists.txt     | mtmd : rename llava directory to mtmd (#13311)                    | 2025-05-05 16:02:55 +02:00