SimpleChatTC: Update readme with a bit more details, Cleaner UI

Also avoid showing the Tool calling UI elements, when they are not
needed.
hanishkvc 2025-10-13 02:57:17 +05:30
parent bfe789706e
commit 1e5b638beb
2 changed files with 53 additions and 6 deletions

View File

@@ -33,6 +33,10 @@ Allows developer/end-user to control some of the behaviour by updating gMe membe
console. In parallel, some of the settings directly useful to the end user can also be changed using the
provided settings ui.
For GenAi/LLM models supporting tool / function calling, allows one to interact with them and explore the use of
ai driven augmenting of the knowledge used for generating answers, by using the predefined tools/functions.
The end user is given control over tool calling and over submitting the tool responses.
NOTE: The current web service api doesn't expose the model context length directly, so the client logic doesn't
provide any adaptive culling of old messages, nor replacing them with a summary of their content et al. However
there is an optional sliding window based chat logic, which provides a simple-minded culling of old messages from
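The sliding-window culling mentioned above roughly amounts to the following (an illustrative sketch only, not
the actual SimpleChat code; only iRecentUserMsgCnt, documented further below, is taken from this readme):

```javascript
// Illustrative only: keep the system prompt plus the tail of the chat that
// still contains the most recent iRecentUserMsgCnt user messages.
function cull_old_messages(messages, iRecentUserMsgCnt) {
    const system = messages.filter((m) => m.role === "system");
    const rest = messages.filter((m) => m.role !== "system");
    let userSeen = 0;
    let start = 0;
    for (let i = rest.length - 1; i >= 0; i--) {
        if (rest[i].role === "user") {
            userSeen += 1;
        }
        if (userSeen >= iRecentUserMsgCnt) {
            start = i;
            break;
        }
    }
    return system.concat(rest.slice(start));
}
```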
@@ -117,12 +121,15 @@ Once inside
* the user input box will be disabled and a working message will be shown in it.
* if trim garbage is enabled, the logic will try to trim repeating text kind of garbage to some extent.
* tool calling flow, when working with ai models which support tool / function calling
  * if tool calling is enabled and the user query results in need for one of the builtin tools to be
    called, then the ai response might include a request for a tool call.
  * the SimpleChat client will show details of the requested tool call (ie tool name and args passed)
    and allow the user to trigger it as is, or after modifying things as needed.
  * in turn the returned / generated result is placed into the user query entry text area with appropriate
    tags, ie <tool_response> generated result </tool_response> (a rough sketch of this follows the list).
  * if the user is ok with the tool response, they can click submit to send the same to the GenAi/LLM.
    The user can even modify the response generated by the tool, if required, before submitting.
* just refresh the page, to reset wrt the chat history and or system prompt and start afresh.
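The tag wrapping in the flow above boils down to something like this (a sketch with illustrative names, not
the actual SimpleChat implementation):

```javascript
// Illustrative only: place a tool result into the user query text area,
// wrapped in the tags described above, so the user can review, edit and submit it.
function place_tool_response(elInUser, toolResult) {
    elInUser.value = `<tool_response> ${toolResult} </tool_response>`;
    elInUser.disabled = false;  // re-enable so the user can modify before submitting
}
```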
@@ -170,11 +177,15 @@ It is attached to the document object. Some of these can also be updated using t
remember to enable this only for GenAi/LLM models which support tool/function calling.
the builtin tools' meta data is sent to the ai model in the requests sent to it.
in turn if the ai model requests a tool call to be made, the same will be done and the response
sent back to the ai model, under user control.
as tool calling will involve a bit of back and forth between the ai assistant and the end user, it is
recommended to set iRecentUserMsgCnt to 5 or more, so that enough context is retained while
chatting with ai models with tool support.
apiEP - select between the /completions and /chat/completions endpoints provided by the server/ai-model.
bCompletionFreshChatAlways - whether Completion mode collates complete/sliding-window history when
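For instance, the iRecentUserMsgCnt recommendation above can be applied from the browser's devel console
(a minimal sketch; gMe is attached to the document object, so the exact access path may differ):

```javascript
// from the browser developer console; depending on how gMe is exposed,
// document.gMe.iRecentUserMsgCnt may be the exact path needed
gMe.iRecentUserMsgCnt = 5;  // keep enough recent context for the tool-calling back and forth
```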
@@ -300,9 +311,31 @@ work.
### Tool Calling
ALERT: Given the way this is currently implemented, it is dangerous to use, unless one manually verifies
all the requested tool calls and the generated responses, to ensure everything is fine during
interaction with ai models with tools support.
#### Builtin Tools
The following tools/functions are currently provided by default
* simple_calculator - which can solve simple arithmetic expressions
* run_javascript_function_code - which can be used to run some javascript code in the browser
  context.
Currently the generated code / expression is run through a simple dynamic function mechanism.
May update things in future, so that a WebWorker is used to avoid exposing the browser global scope
to the generated code directly. Either way, always remember to cross check the tool requests and the
generated responses when using tool calling.
May add
* web_fetch - along with a corresponding simple local proxy server logic that can bypass the
  CORS restrictions applied when trying to directly fetch from the browser js runtime environment
  (a rough sketch of such a proxy follows).
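Such a local proxy could be as simple as the following (a hypothetical Node.js sketch; neither the proxy nor
web_fetch exists in SimpleChat yet, and the port and query parameter name here are made up):

```javascript
// Hypothetical local proxy: fetch the requested url server side and return it
// with permissive CORS headers, so the browser based client can read it.
const http = require("http");

http.createServer(async (req, res) => {
    const target = new URL(req.url, "http://localhost").searchParams.get("url");
    try {
        const upstream = await fetch(target);   // needs Node 18+ for the global fetch
        const body = await upstream.text();
        res.writeHead(upstream.status, {
            "Content-Type": upstream.headers.get("content-type") ?? "text/plain",
            "Access-Control-Allow-Origin": "*", // relax CORS for the local client
        });
        res.end(body);
    } catch (err) {
        res.writeHead(502, { "Access-Control-Allow-Origin": "*" });
        res.end(String(err));
    }
}).listen(3128, () => console.log("proxy listening, use /?url=<encoded target url>"));
```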
#### Extending with new tools
Provide descriptive meta data explaining the tool / function being provided for tool calling,
as well as its arguments.
Provide a handler which should implement the specified tool / function call. It should place
the result to be sent back to the ai model in the result key of the tc_switch entry for the
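A hedged sketch of what such an entry could look like (tc_switch and its result key come from the text above;
the meta/handler field names and the example tool are assumptions made purely for illustration):

```javascript
// Illustrative tc_switch entry; field names other than "result" are assumptions.
tc_switch["get_current_time"] = {
    // descriptive meta data (tool name, purpose, arguments) sent to the ai model
    meta: {
        name: "get_current_time",
        description: "Get the current date and time as a string",
        parameters: { type: "object", properties: {}, required: [] },
    },
    // handler implementing the tool call; it places the outcome in the result key
    handler: function (args) {
        this.result = new Date().toString();
    },
    result: "",
};
```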
@@ -330,6 +363,15 @@ TODO: Need to think later, whether to continue this simple flow, or atleast use
the tool call responses or even go further and have the logically separate tool_call request
structures also.
#### ToDo
Update to use a web worker.
Make the tool call related ui elements use up horizontal space properly.
Try and trap promise based flows, to ensure any generated results or errors are caught
before responding back to the ai model.
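For the promise trapping mentioned above, the intent is roughly the following (an illustrative sketch, not
existing code):

```javascript
// Illustrative: capture both the resolved value and any error from a
// promise based tool run, before anything is sent back to the ai model.
async function run_tool_trapped(toolPromise) {
    try {
        const value = await toolPromise;
        return { ok: true, result: String(value) };
    } catch (err) {
        return { ok: false, result: `error: ${err.message ?? err}` };
    }
}
```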
### Debugging the handshake

View File

@@ -649,6 +649,8 @@ class MultiChatUI {
this.handle_session_switch(this.curChatId);
}
this.ui_reset_toolcall_as_needed(new AssistantResponse());
this.elBtnSettings.addEventListener("click", (ev)=>{
this.elDivChat.replaceChildren();
gMe.show_settings(this.elDivChat);
@@ -729,6 +731,8 @@ class MultiChatUI {
chat.clear();
}
this.ui_reset_toolcall_as_needed(new AssistantResponse());
chat.add_system_anytime(this.elInSystem.value, chatId);
let content = this.elInUser.value;
@@ -766,6 +770,7 @@ class MultiChatUI {
}
/**
* Handle running of specified tool call if any, for the specified chat session.
* @param {string} chatId
*/
async handle_tool_run(chatId) {