SimpleChatTCRV:AiCallingAi ToolCall: flow cleanup and flexibility

By default ensure the external_ai tool call related special chat session
starts with tool calls disabled and a client side sliding window of 1.

Add a helper in the SimpleChat class to set these, along with clearing
any chat history.

In turn, now give the user the flexibility to change this from within
the program, if they need to for whatever reason, till the program
restarts.
hanishkvc 2025-11-24 22:41:19 +05:30
parent 1751ed1827
commit 073c570cad
6 changed files with 66 additions and 18 deletions

View File

@@ -296,6 +296,9 @@ Chat Session specific settings
chat sliding window size (which takes care of trying to avoid overloading the ai model context size) selected
by user already. User can always change the sliding window size to view past messages beyond the currently
active sliding window size and then switch back again, if they want to.
* More flexibility to the user wrt the ExternalAi tool call, ie ai calling ai
* the user can change the default behaviour of tool calls being disabled and the sliding window of 1 (see the sketch below)
* a program restart will reset these back to the defaults
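As a rough sketch only (assuming the global `gMe` object is reachable from the browser devtools console, and guessing the special session key as `TCExternalAi`; the code itself looks it up via `mChatMagic.AI_TC_SESSIONNAME`), changing these could look something like:

```js
// Hypothetical devtools console sketch; the session key 'TCExternalAi' is an assumption,
// the code refers to it via mChatMagic.AI_TC_SESSIONNAME.
let sc = gMe.multiChat.simpleChats['TCExternalAi'];
sc.cfg.tools.enabled = true;            // re-enable tool calls for the external_ai session
sc.cfg.chatProps.iRecentUserMsgCnt = 5; // widen the client side sliding window
// Or force the isolating defaults on every external_ai call, not just at program load
gMe.tcexternalaiForceIsolatingDefaultsAlways = true;
```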
## ToDo

View File

@@ -254,16 +254,18 @@ It is attached to the document object. Some of these can also be updated using t
* if a very long text is being generated, which leads to no user interaction for some time and in turn the machine goes into power saving mode or so, the platform may stop the network connection, leading to an exception.
* iRecentUserMsgCnt - a simple minded SlidingWindow to limit context window load at Ai Model end. This is set to 5 by default. So in addition to latest system message, last/latest iRecentUserMsgCnt user messages after the latest system prompt and its responses from the ai model will be sent to the ai-model, when querying for a new response. Note that if enabled, only user messages after the latest system message/prompt will be considered.
* iRecentUserMsgCnt - a simple minded ClientSide SlidingWindow logic to limit context window load at the Ai Model end. This is set to 5 by default. So in addition to the latest system message, the last/latest iRecentUserMsgCnt user messages (after the latest system prompt) and their responses from the ai model, along with any associated tool calls, will be sent to the ai-model when querying for a new response. Note that if enabled, only user messages after the latest system message/prompt will be considered.
This specified sliding window user message count also includes the latest user query.
* less than 0 : Send entire chat history to server
* 0 : Send only the system message if any to the server
* 0 : Send only the system message, if any, to the server. Even the latest user message won't be sent.
* greater than 0 : Send the latest chat history from the latest system prompt, limited to the specified cnt.
* NOTE: the latest user message (query/response/...) for which we need an ai response will also be counted as belonging to the iRecentUserMsgCnt (see the sketch after this list).
* bCompletionFreshChatAlways - whether Completion mode collates complete/sliding-window history when communicating with the server or only sends the latest user query/message.
* bCompletionInsertStandardRolePrefix - whether Completion mode inserts role related prefix wrt the messages that get inserted into prompt field wrt /Completion endpoint.
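Below is a minimal sketch of the sliding window selection described above, assuming an array of `{role, content}` message objects; it is an illustration only, not the actual `recent_chat` implementation from simplechat.js.

```js
// Simplified illustration of the client side sliding window selection; not the real recent_chat.
function recent_window(messages, iRecentUserMsgCnt) {
    if (iRecentUserMsgCnt < 0) {
        return messages;                              // send the entire chat history
    }
    let iSys = messages.map((m) => m.role).lastIndexOf('system');
    if (iRecentUserMsgCnt == 0) {
        return (iSys < 0) ? [] : [messages[iSys]];    // only the latest system message, if any
    }
    let tail = messages.slice((iSys < 0) ? 0 : iSys); // only msgs from the latest system prompt on
    let picked = [];
    let userCnt = 0;
    for (let i = tail.length - 1; i >= 0; i--) {      // walk back from the latest message
        if (tail[i].role == 'user') {
            userCnt += 1;
            if (userCnt > iRecentUserMsgCnt) {
                break;                                // allowed number of user msgs reached
            }
        }
        picked.unshift(tail[i]);
    }
    if ((iSys >= 0) && (picked[0]?.role != 'system')) {
        picked.unshift(messages[iSys]);               // always keep the latest system message
    }
    return picked;
}
```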
@@ -467,16 +469,22 @@ The following tools/functions are currently provided by default
* data_store_get/set/delete/list - allows for a basic data store to be used, to maintain data
and or context across sessions and so...
* external_ai - allows ai to use an independent session of itself / different instance of ai,
* external_ai - allows ai to use an independent, fresh by default, session of itself / a different ai,
with a custom system prompt of the ai's choosing and similarly a user message of its choosing (see the
example arguments after this list), in order to get any job it deems necessary done in an uncluttered independent session.
* helps ai to process stuff that it needs, without having to worry about any previous chat history
etal messing with the current data's context and processing.
* in its default configuration, helps ai to process stuff that it needs, without having to worry
about any previous chat history et al messing with the current data's context and processing.
* helps ai to process stuff with targeted system prompts of its choosing, for the job at hand.
* tool calling is disabled wrt the external_ai's independent session, for now.
* it was noticed that else even the external_ai may call into more external_ai calls trying to
find answer to the same question. maybe one can enable tool calling, while explicitly disabling
external_ai tool call from within external_ai tool call or so later...
* by default
* tool calling is disabled wrt the external_ai's independent session.
* it was noticed that otherwise even external_ai may call into more external_ai calls trying to
find answers to the same question/situation.
* maybe one can later enable tool calling, while explicitly disabling the external_ai tool call
from within the external_ai tool call related session or so...
* the client side sliding window size is set to 1 so that only the system prompt and the user message
set by the ai get handshaked with the external_ai instance
* The end user can change this behaviour by changing the corresponding settings of the TCExternalAi
special chat session, which is used internally for this tool call.
* Could be used by ai for example to
* summarise a large text content, where it could use the context of the text to generate a
suitable system prompt for summarising things suitably
@@ -486,7 +494,7 @@ The following tools/functions are currently provided by default
* given the fuzzy nature of the generative ai, sometimes the model may even use this tool call
to get an answer to questions like what is your name ;>
* end user can use this mechanism to try and bring in an instance of ai running on a more powerful
machine, but then to be used only if needed or so
machine with more compute and memory capabilities, but then to be used only if needed or so
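As a purely illustrative example (the texts are made up; the field names mirror what the tool call handler reads from its arguments), the calling ai might pass something like:

```js
// Hypothetical arguments an ai might pass to the external_ai tool call; the field names
// mirror what externalai_run reads, ie obj['system_prompt'] and obj['user_message'].
let exampleArgs = {
    system_prompt: "You are a precise summariser. Reply with at most 5 bullet points.",
    user_message: "Summarise the following text: ...",
};
```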
Most of the above (except for external ai call) are run from inside web worker contexts. Currently the
ai generated code / expression is run through a simple minded eval inside a web worker mechanism. Use

View File

@@ -32,7 +32,7 @@ function startme() {
sL.push(gMe.multiChat.new_chat_session(cid));
}
await Promise.allSettled(sL)
gMe.multiChat.simpleChats[mChatMagic.AI_TC_SESSIONNAME].cfg.tools.enabled = false
gMe.multiChat.simpleChats[mChatMagic.AI_TC_SESSIONNAME].default_isolating()
gMe.multiChat.setup_ui(gMe.defaultChatIds[0]);
gMe.multiChat.show_sessions();
gMe.multiChat.handle_session_switch(gMe.multiChat.curChatId)

View File

@@ -112,10 +112,17 @@ A lightweight simple minded ai chat client with a web front-end that supports mu
- verify and optionally edit the tool call response, before submitting the same
- user can update the settings for auto executing these actions, if needed
- external_ai allows invoking a separate fresh ai instance
- external_ai allows invoking a separate ai instance, fresh by default (the user can optionally change this)
- ai could run self modified targeted versions of itself/... using custom system prompts and user messages as needed
- user can setup an ai instance with additional compute access, which should be used only if needed
- tool calling is currently kept disabled in such a instance
- by default, in such an instance
- tool calling is kept disabled, along with
- a client side sliding window of 1,
ie only the system prompt and the latest user message are sent to the ai server (as sketched below).
- TCExternalAI is the special chat session used internally for this,
and the default behaviour will get impacted if you modify the settings of this special chat session.
- Restarting this chat client logic will force reset things to the default behaviour,
however any other settings wrt TCExternalAi that were changed will persist across restarts.
- Client side Sliding window Context control, using `iRecentUserMsgCnt`, helps limit context sent to ai model
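Roughly speaking (an illustrative sketch, with made up message texts), the chat handshake for the TCExternalAI session in its default isolating state then reduces to:

```js
// Illustrative shape of what gets sent for the TCExternalAI session in its default isolating
// state (tools disabled, client side sliding window of 1); the contents are placeholders.
let messages = [
    { role: "system", content: "<system prompt chosen by the calling ai>" },
    { role: "user",   content: "<user message chosen by the calling ai>" },
];
```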

View File

@@ -486,9 +486,10 @@ function usage_note(sRecentUserMsgCnt) {
<ul class="ul2">
<li> ChatHistInCtxt, MaxTokens, ModelCtxt window to expand</li>
</ul>
<li> ${AI_TC_SESSIONNAME} session keeps tool calls disabled, to avoid recursive...</li>
<li> ${AI_TC_SESSIONNAME} session used for external_ai tool call, ie ai calling ai</li>
<ul class="ul2">
<li> Used by external_ai tool call, which allows ai calling ai, as needed.</li>
<li> by default keeps tool calls disabled, with a client side sliding window of 1</li>
<li> if you change these for some reason, you may want to change them back later</li>
</ul>
</ul>
</details>`;
@@ -572,6 +573,18 @@ export class SimpleChat {
this.latestResponse = new ChatMessageEx();
}
/**
* A relatively isolating default setup
* * clear any chat history
* * disable tool calls
* * set the client side sliding window to 1 so that only the system prompt is sent along with the latest user message
*/
default_isolating() {
this.clear()
this.cfg.tools.enabled = false
this.cfg.chatProps.iRecentUserMsgCnt = 1
}
setup() {
return this.toolsMgr.setup(this.chatId)
}
@@ -669,6 +682,7 @@ export class SimpleChat {
*
* Else Return chat messages from latest going back till the last/latest system prompt.
* While keeping track that the number of user queries/messages doesn't exceed iRecentUserMsgCnt.
*
* @param {number} iRecentUserMsgCnt
*/
recent_chat(iRecentUserMsgCnt) {
@@ -2074,6 +2088,13 @@ export class Config {
this.chatProps = {
apiEP: ApiEP.Type.Chat,
stream: true,
/**
* How many recent user msgs to consider and include along with their corresponding
* assistant responses and tool calls if any, wrt the client side sliding window logic.
* * the user specified System prompt is outside this count.
* * the latest user query/response to send to the ai server is part of this.
* * only user messages following the latest system prompt are considered.
*/
iRecentUserMsgCnt: 5,
bCompletionFreshChatAlways: true,
bCompletionInsertStandardRolePrefix: false,
@@ -2201,7 +2222,15 @@ export class Me {
this.dataURLs = []
this.houseKeeping = {
clear: true,
}
};
/**
* Control if the externalai toolcall related special chat session starts in a forced isolating state
* * always, ie every time the external_ai tool call is made
* * Or only at the time of program loading, in which case
* the user has the flexibility to change this characteristic till this program is restarted,
* for whatever reason they may deem fit.
*/
this.tcexternalaiForceIsolatingDefaultsAlways = false;
}
/**

View File

@@ -92,10 +92,11 @@ let externalai_meta = {
*/
function externalai_run(chatid, toolcallid, toolname, obj) {
let sc = gMe.multiChat.simpleChats[mChatMagic.AI_TC_SESSIONNAME];
sc.clear()
if (gMe.tcexternalaiForceIsolatingDefaultsAlways) {
sc.default_isolating()
}
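// Seed the session with the system prompt and user message chosen by the calling ai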
sc.add_system_anytime(obj['system_prompt'], 'TC:ExternalAI')
sc.add(new mChatMagic.ChatMessageEx(new mChatMagic.NSChatMessage(mChatMagic.Roles.User, obj['user_message'])))
sc.cfg.tools.enabled = false
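// Handshake with the ai server and forward its response as this tool call's result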
sc.handle_chat_hs(sc.cfg.baseURL, mChatMagic.ApiEP.Type.Chat, gMe.multiChat.elDivStreams).then((resp)=>{
gMe.toolsMgr.workers_postmessage_for_main(gMe.toolsMgr.workers.js, chatid, toolcallid, toolname, resp.content_equiv());
}).catch((err)=>{