llama.cpp/tools
hanishkvc 2981033fac SimpleChatTCRV:UICleanup:Ai Reasoning/Live response
Implement the todo noted in the last commit, and a bit more.

This brings in clearing of the external AI tool call's special chat
session divStream during chat show, which ensures that it stays
hidden by default for all other chat sessions and only gets shown
again if the user triggers a new tool call that involves the
external AI tool call.

This patch also ensures that if the external AI tool call takes too
long and the logic therefore hands control back with a timed-out
response as a possible reply to the AI for that tool call, then the
external AI tool call's live response is no longer visible in the
current chat session UI. The user can then go ahead with the
timed-out response, or some other response of their choosing, as the
reply to the tool call, and take the chat in whatever direction they
and the AI prefer.
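A rough sketch of that timed-out hand-back, again with hypothetical
names (callExternalAi, EXT_AI_TIMEOUT_MS) rather than the actual
SimpleChat code; the underlying call keeps running in the background,
which is what lets the special chat session keep streaming it:

    // Sketch only: names and the timeout value are hypothetical.
    const EXT_AI_TIMEOUT_MS = 60_000;

    // Stand-in for the real external-AI tool call, which streams its
    // live response into the special chat session elsewhere.
    async function callExternalAi(prompt: string): Promise<string> {
        return `external AI reply to: ${prompt}`;
    }

    // Hand control back with a timed-out placeholder if the call takes
    // too long; the user can send that, or any other response of their
    // choosing, back to the AI for the tool call.
    async function runExtAiToolCall(prompt: string): Promise<string> {
        const timedOut = new Promise<string>((resolve) =>
            setTimeout(
                () => resolve("external ai tool call timed out"),
                EXT_AI_TIMEOUT_MS,
            ),
        );
        return Promise.race([callExternalAi(prompt), timedOut]);
    }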

Alternatively, if they want to, they can switch to the external-AI
specific special chat session and continue to monitor the response
from the tool call there, to see what the final response to that
tool call would have been.

Overall, this should keep the UI flow clean.

ALERT: If the user triggers a new external AI tool call while the
old one is still alive in the background, the responses from both
will race for user visibility, so beware of this.
2025-12-04 19:41:40 +05:30

Directory | Last commit | Last commit date
batched-bench | batched-bench : add "separate text gen" mode (#17103) | 2025-11-10 12:59:29 +02:00
cvector-generator | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00
export-lora | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00
gguf-split | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00
imatrix | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00
llama-bench | bench : cache the llama_context state at computed depth (#16944) | 2025-11-07 21:23:11 +02:00
main | cli: add migration warning (#17620) | 2025-11-30 15:32:43 +01:00
mtmd | mtmd: fix --no-warmup (#17695) | 2025-12-02 22:48:08 +01:00
perplexity | perplexity : show more kl-divergence data (#16321) | 2025-09-29 09:30:45 +03:00
quantize | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00
rpc | Install rpc-server when GGML_RPC is ON. (#17149) | 2025-11-11 10:53:59 +00:00
run | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00
server | SimpleChatTCRV:UICleanup:Ai Reasoning/Live response | 2025-12-04 19:41:40 +05:30
tokenize | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00
tts | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00
CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00