llama.cpp/tools
hanishkvc e79faebde1 SimpleChatTC:ToolTemp and ChatShow
Add a new role, ToolTemp, which is used to hold any tool call
response on the client UI side without submitting it to the server,
i.e. until the user or auto-submit triggers submission of that tool
call response.
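
A minimal sketch of how such a role constant might sit alongside the
existing roles; the names and values here are illustrative, the real
constants live in simplechat.js and may differ:

```js
// Illustrative only: the actual role constants in simplechat.js may be
// named/valued differently.
let Roles = {
    System: "system",
    User: "user",
    Assistant: "assistant",
    Tool: "tool",
    ToolTemp: "tooltemp",  // client-ui-only marker, never sent as-is to the server
};
```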

Whenever a tool call response is received, create a ToolTemp role based
message in the corresponding chat session. Don't update the user query
input area directly; instead leave that to the updated SimpleChat.show
and the new MultiChatUI chat_show helper, which in turn act based on
whether the chat session currently active in the UI is the same one for
which the tool call response was received.
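
Roughly, that handling could look like the sketch below; the function
and member names (handle_toolcall_response, simpleChats, curChatId,
add, chat_show) are assumptions for illustration, not the exact
simplechat.js API:

```js
// Hypothetical handler sketch; names are assumed, not the exact API.
function handle_toolcall_response(mcui, toolResult) {
    // For now the response lands in the currently active session (see the
    // TODO below about tracking the chatId through the async cycle).
    let chatId = mcui.curChatId;
    let chat = mcui.simpleChats[chatId];
    // Keep the response client-side as a ToolTemp message instead of
    // writing it straight into the user query input area.
    chat.add(Roles.ToolTemp, toolResult);
    // chat_show decides whether the visible UI actually needs refreshing.
    mcui.chat_show(chatId);
}
```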

TODO: Currently the response message is added to the currently
active chat session. This needs to change to tracking the
chatId/session through the full tool call cycle and then adding
the tool call response to the related chat session, and in turn
updating the UI or not, based on whether that chat session is
still the one active in the UI, given that the tool call is
handled asynchronously.

Now, when that tool call response is submitted, promote the
equivalent ToolTemp role based message, which should be the last
message in the session's chat history, into a normal tool response
message.
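
A hedged sketch of that promotion step, assuming the session keeps its
messages in an xchat array of {role, content} entries (assumed layout):

```js
// Hedged sketch; xchat and its entry shape are assumptions.
function promote_tooltemp(chat) {
    let last = chat.xchat[chat.xchat.length - 1];
    if (last && last.role == Roles.ToolTemp) {
        // Become a regular tool response, so it is included in what gets
        // submitted to the server on the next request.
        last.role = Roles.Tool;
    }
    return last;
}
```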

SimpleChat.show has been updated to show any ToolTemp role message
in the user query input area.
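
For illustration, the updated show behaviour might look roughly like
this; class and field names are placeholders, and elInUser is assumed
to be the element backing the user query input area:

```js
// Placeholder class/field names; not the real SimpleChat implementation.
class SimpleChatSketch {
    show(elDiv, elInUser) {
        // ... render the normal chat messages into elDiv as before ...
        let last = this.xchat[this.xchat.length - 1];
        if (last && last.role == Roles.ToolTemp) {
            // Surface the pending tool call response for the user to review
            // or edit before it is actually submitted to the server.
            elInUser.value = last.content;
        }
    }
}
```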

A new chat_show helper has been added to MultiChatUI. It calls
SimpleChat.show, provided chat_show is being requested for the chat
session currently active in the UI, and it takes care of passing both
the ChatDiv and elInUser. Callers of SimpleChat.show have been
converted to use MultiChatUI.chat_show.
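
A sketch of what that helper could look like, assuming MultiChatUI
tracks the active session id and caches the chat div and user input
element (member names below are guesses based on the description):

```js
// Member names (simpleChats, curChatId, elDivChat, elInUser) are
// assumptions, not necessarily the real fields.
class MultiChatUISketch {
    chat_show(chatId) {
        // Only touch the DOM when the requested session is the one that is
        // currently visible; background sessions just keep their history.
        if (chatId !== this.curChatId) {
            return;
        }
        this.simpleChats[chatId].show(this.elDivChat, this.elInUser);
    }
}
```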
2025-12-04 19:41:39 +05:30
batched-bench batched-bench : add "separate text gen" mode (#17103) 2025-11-10 12:59:29 +02:00
cvector-generator cmake : Do not install tools on iOS targets (#15903) 2025-09-16 09:54:44 +07:00
export-lora cmake : Do not install tools on iOS targets (#15903) 2025-09-16 09:54:44 +07:00
gguf-split ci : use smaller model (#16168) 2025-09-22 09:11:39 +03:00
imatrix Manually link -lbsd to resolve flock symbol on AIX (#16610) 2025-10-23 19:37:31 +08:00
llama-bench bench : cache the llama_context state at computed depth (#16944) 2025-11-07 21:23:11 +02:00
main cli: add migration warning (#17620) 2025-11-30 15:32:43 +01:00
mtmd mtmd: fix --no-warmup (#17695) 2025-12-02 22:48:08 +01:00
perplexity perplexity : show more kl-divergence data (#16321) 2025-09-29 09:30:45 +03:00
quantize ci : use smaller model (#16168) 2025-09-22 09:11:39 +03:00
rpc Install rpc-server when GGML_RPC is ON. (#17149) 2025-11-11 10:53:59 +00:00
run Manually link -lbsd to resolve flock symbol on AIX (#16610) 2025-10-23 19:37:31 +08:00
server SimpleChatTC:ToolTemp and ChatShow 2025-12-04 19:41:39 +05:30
tokenize cmake : Do not install tools on iOS targets (#15903) 2025-09-16 09:54:44 +07:00
tts model : Apertus model implementation (#15852) 2025-10-02 20:43:22 +03:00
CMakeLists.txt mtmd : rename llava directory to mtmd (#13311) 2025-05-05 16:02:55 +02:00