SimpleChatTCRV:ToolCall Ai - Decouple SimpleChat from Me a bit
Had a custom struct for the parts of Me needed by SimpleChat, which in turn need to be controlled when starting an independent parallel ai session. For now, whether chat streaming is used and whether tools are enabled can be explicitly controlled when handle_chat_hs is called. In turn, when toolai triggers the parallel ai session, it disables tool calling support wrt this child/external ai tool call. Otherwise even the external ai session may end up triggering a tool call wrt the external ai for the same job and thus get into an infinite recursive loop. Also:
* update the toolai metadata a bit.
* increase the time allowed for a tool call to return a response, especially useful for the external_ai tool call.
Clean up some old notes and update the readme in general.
parent
18529445ce
commit
5025001bd4
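The decoupling this commit describes can be sketched with a minimal stand-in. The SCHandshakeProps field names (chatPropsStream, toolsEnabled) and handle_chat_hs are from this commit; the MiniChat class here is a simplified illustration, not the actual SimpleChat code.

```javascript
// Simplified stand-in (not the actual SimpleChat class) for the idea in
// this commit: each handshake carries its own stream/tools flags instead
// of SimpleChat reading the global Me settings directly.
class MiniChat {
    constructor() {
        // per-handshake props, overwritten on every handle_chat_hs call
        this.handshakeProps = { chatPropsStream: false, toolsEnabled: true };
    }
    // hsProps is now an explicit argument, so each caller decides the flags
    handle_chat_hs(hsProps) {
        this.handshakeProps = { ...hsProps };
        return this.handshakeProps;
    }
}

// normal user-driven chat: inherit the global settings as-is
const parentProps = new MiniChat().handle_chat_hs({ chatPropsStream: true, toolsEnabled: true });
// child session spawned by the external_ai tool call: tools forced off,
// so the child cannot recursively trigger external_ai for the same job
const childProps = new MiniChat().handle_chat_hs({ chatPropsStream: true, toolsEnabled: false });
```

Passing the flags per call, rather than reading Me inside SimpleChat, is what lets the external_ai tool call spawn a child session with tools disabled while the parent keeps them on.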
@@ -461,10 +461,25 @@ The following tools/functions are currently provided by default
 * data_store_get/set/delete/list - allows for a basic data store to be used.

-All of the above are run from inside web worker contexts. Currently the ai generated code / expression
-is run through a simple minded eval inside a web worker mechanism. Use of WebWorker helps avoid exposing
-browser global scope to the generated code directly. However any shared web worker scope isnt isolated.
-Either way always remember to cross check the tool requests and generated responses when using tool calling.
+* external_ai - allows ai to use an independent session of itself/ai, with a custom system prompt
+  of ai's choosing and similarly a user message of ai's choosing, in order to get any job it deems
+  necessary done in an uncluttered independent session.
+  * helps ai to process stuff that it needs, without having to worry about any previous chat history
+    et al. messing with the current data's processing.
+  * helps ai to process stuff with targeted system prompts of its choosing, for the job at hand.
+  * could be used by ai, for example, to
+    * summarise a large text content, where it could use the context of the text to generate a
+      suitable system prompt for summarising things suitably
+    * create structured data from raw textual data
+    * act as a literary critic or any domain expert as the case may be
+    * and so on ...
+
+Most of the above (except for the external ai call) are run from inside web worker contexts. Currently the
+ai generated code / expression is run through a simple minded eval inside a web worker mechanism. Use
+of WebWorker helps avoid exposing browser global scope to the generated code directly. However any
+shared web worker scope isn't isolated.
+
+Either way always remember to cross check tool requests and generated responses when using tool calling.
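As an illustration of the external_ai tool described above: a tool-call request's arguments might decode to something like the following. The parameter names system_prompt and user_message follow the externalai_meta in this commit; the values here are made up.

```javascript
// Illustrative only: arguments the ai might pass to the external_ai tool.
// Parameter names follow externalai_meta; the values are invented examples.
const externalAiArgs = {
    // role/behavior for the independent child ai session
    system_prompt: "You are a professional summarizer. Summarize the following text suitably:",
    // the full content to process, since the child session has no chat history
    user_message: "This is a long document about climate change. ..."
};
```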
##### using bundled simpleproxy.py (helps bypass browser cors restriction, ...)
@@ -475,7 +490,7 @@ Either way always remember to cross check the tool requests and generated respon
 nav, ... blocks.

 * search_web_text - search for the specified words using the configured search engine and return the
   plain textual content from the search result page.

 * fetch_pdf_as_text - fetch/read specified pdf file and extract its textual content
   * this depends on the pypdf python based open source library
@@ -838,6 +853,7 @@ Cleanup in general
 * Allow user to load multiple images and submit to ai as part of a single user message.
 * Use popover ui to allow user to view larger versions of loaded images as well as remove before submitting
   to ai, if and when needed.
+* add external_ai toolcall


 #### ToDo
@@ -866,9 +882,6 @@ potentially.
 MAYBE add a special ClientSideOnly role for use wrt Chat history to maintain things to be shown in a chat
 session to the end user, but in turn not to be sent to the ai server. Ex current settings or so ...

-Update UIRefresh helper to optionally remove messages no longer in the sliding window, so user only sees
-what is sent to the ai server in the chat session messages ui.
-
 Updating system prompt, will reset user input area fully now, which seems a good enough behaviour, while
 keeping the code flow also simple and straight; do I need to change it, I don't think so as of now.

@@ -877,13 +890,8 @@ or show all the messages (ie even beyond the sliding window)?
 * rather previously with chat_show only whats in current sliding window was being shown, but now with
   the uirefresh based logic, all messages from last chat_show will be shown irrespective of whether still
   in ai server handshake related sliding window or not.

-Add support for submitting multiple images in a single user query/response.
-
-Allow ai to use a independent session of itself/ai as a tool call, with a custom system prompt it generates,
-in order to get any job it deems necessary to be done in a uncluttered independent session. Could be used by
-ai to summarise a large text content, where it could using the context of the text to generate a suitable
-system prompt for summarising things suitably or so and so and so ...
+* Update UIRefresh helper to optionally remove messages no longer in the sliding window, so user only sees
+  what is sent to the ai server in the chat session messages ui.

 For now am not bringing in mozilla/github/standard-entities pdf, md, mathslatex et al. javascript libraries for
 their respective functionalities.
@@ -475,6 +475,8 @@ function usage_note(sRecentUserMsgCnt) {
 }


+/** @typedef {{ chatPropsStream: boolean, toolsEnabled: boolean }} SCHandshakeProps */
+
 /** @typedef {ChatMessageEx[]} ChatMessages */

 /** @typedef {{iLastSys: number, xchat: ChatMessages}} SimpleChatODS */
@@ -495,6 +497,11 @@ class SimpleChat {
         this.iLastSys = -1;
         this.latestResponse = new ChatMessageEx();
         this.me = me;
+        /** @type {SCHandshakeProps} */
+        this.handshakeProps = {
+            chatPropsStream: false,
+            toolsEnabled: true,
+        }
     }

     clear() {
@@ -775,10 +782,10 @@ class SimpleChat {
         for(let k in this.me.apiRequestOptions) {
             obj[k] = this.me.apiRequestOptions[k];
         }
-        if (this.me.chatProps.stream) {
+        if (this.handshakeProps.chatPropsStream) {
             obj["stream"] = true;
         }
-        if (this.me.tools.enabled) {
+        if (this.handshakeProps.toolsEnabled) {
             obj["tools"] = this.me.toolsMgr.meta();
         }
         return JSON.stringify(obj);
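A minimal sketch of what the request_jsonstr change above amounts to (a stand-alone stand-in, not the repo's method): the request body is gated by the per-handshake props rather than the global Me.chatProps / Me.tools settings.

```javascript
// Stand-in for SimpleChat.request_jsonstr: build the request body from
// per-handshake props instead of global settings.
function requestJsonStr(apiRequestOptions, toolsMeta, handshakeProps) {
    let obj = { ...apiRequestOptions };
    if (handshakeProps.chatPropsStream) {
        obj["stream"] = true;
    }
    if (handshakeProps.toolsEnabled) {
        obj["tools"] = toolsMeta;
    }
    return JSON.stringify(obj);
}

// A child external_ai session (toolsEnabled: false) gets a body with no
// "tools" key, so the ai server never offers it tool calling.
const childBody = JSON.parse(requestJsonStr(
    { model: "any-model" },          // hypothetical request options
    [{ name: "external_ai" }],       // hypothetical tools meta
    { chatPropsStream: false, toolsEnabled: false }
));
```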
@@ -935,8 +942,6 @@ class SimpleChat {
     /**
      * Handle the response from the server be it in oneshot or multipart/stream mode.
      * Also take care of the optional garbage trimming.
-     * TODO: Need to handle tool calling and related flow, including how to show
-     * the assistant's request for tool calling and the response from tool.
      * @param {Response} resp
      * @param {string} apiEP
      * @param {HTMLDivElement} elDiv
@@ -944,7 +949,7 @@ class SimpleChat {
     async handle_response(resp, apiEP, elDiv) {
         let theResp = null;
         try {
-            if (this.me.chatProps.stream) {
+            if (this.handshakeProps.chatPropsStream) {
                 theResp = await this.handle_response_multipart(resp, apiEP, elDiv);
                 this.latestResponse.clear();
             } else {
@@ -973,9 +978,10 @@ class SimpleChat {
      * Handle the chat handshake with the ai server
      * @param {string} baseURL
      * @param {string} apiEP
+     * @param {SCHandshakeProps} hsProps
      * @param {HTMLDivElement} elDivChat - used to show chat response as it is being generated/received in streaming mode
      */
-    async handle_chat_hs(baseURL, apiEP, elDivChat) {
+    async handle_chat_hs(baseURL, apiEP, hsProps, elDivChat) {
         class ChatHSError extends Error {
             constructor(/** @type {string} */message) {
                 super(message);
@@ -983,6 +989,8 @@ class SimpleChat {
             }
         }

+        this.handshakeProps.chatPropsStream = hsProps.chatPropsStream
+        this.handshakeProps.toolsEnabled = hsProps.toolsEnabled
         let theUrl = ApiEP.Url(baseURL, apiEP);
         let theBody = this.request_jsonstr(apiEP);
         console.debug(`DBUG:SimpleChat:${this.chatId}:HandleChatHS:${theUrl}:ReqBody:${theBody}`);
@@ -1742,7 +1750,7 @@ class MultiChatUI {
         this.elInUser.disabled = true;

         try {
-            let theResp = await chat.handle_chat_hs(this.me.baseURL, apiEP, this.elDivChat)
+            let theResp = await chat.handle_chat_hs(this.me.baseURL, apiEP, { chatPropsStream: this.me.chatProps.stream, toolsEnabled: this.me.tools.enabled }, this.elDivChat)
             if (chatId == this.curChatId) {
                 this.chat_uirefresh(chatId);
                 if ((theResp.trimmedContent) && (theResp.trimmedContent.length > 0)) {
@@ -1910,7 +1918,7 @@ export class Me {
      * Control how many milliseconds to wait for tool call to respond, before generating a timed out
      * error response and giving control back to end user.
      */
-    toolCallResponseTimeoutMS: 20000,
+    toolCallResponseTimeoutMS: 200*1000,
     /**
      * Control how many seconds to wait before auto triggering tool call or its response submission.
      * A value of 0 is treated as auto triggering disable.
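The timeout bump above (20000 to 200*1000 ms) matters most for external_ai, whose child session can take a while to respond. A hedged sketch of how such a timeout can be enforced with Promise.race; the helper name here is hypothetical, not from the repo.

```javascript
// Hypothetical helper: race the tool's promise against a timer, yielding
// a timed-out error string instead of blocking the user forever.
function withToolCallTimeout(toolPromise, timeoutMS) {
    let timer;
    const timedOut = new Promise((resolve) => {
        timer = setTimeout(() => resolve("Error:ToolCall:TimedOut"), timeoutMS);
    });
    // whichever settles first wins; always clear the timer afterwards
    return Promise.race([toolPromise, timedOut]).finally(() => clearTimeout(timer));
}
```

With toolCallResponseTimeoutMS at 200*1000, a slow external_ai call gets roughly 200 seconds before the error response is generated and control returns to the end user.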
@@ -23,11 +23,11 @@ let externalai_meta = {
             "type": "string",
             "description": "The system prompt to define the role and expected behavior of the external AI.",
             "required": true,
-            "example": "You are a professional summarizer. Summarize the following text in 100 words:"
+            "example": "You are a professional summarizer. Summarize the following text with up to around 500 words, or as the case may be based on the context:"
         },
         "user_message": {
             "type": "string",
-            "description": "The message to be processed by the external AI.",
+            "description": "The detailed message with all the needed context to be processed by the external AI.",
             "required": true,
             "example": "This is a long document about climate change. It discusses rising temperatures, policy responses, and future projections. The remaining part of the document is captured here..."
         },
@@ -94,7 +94,7 @@ function externalai_run(chatid, toolcallid, toolname, obj) {

     sc.add_system_anytime(obj['system_prompt'], 'TC:ExternalAI')
     sc.add(new mChatMagic.ChatMessageEx(new mChatMagic.NSChatMessage(mChatMagic.Roles.User, obj['user_message'])))
-    sc.handle_chat_hs(gMe.baseURL, mChatMagic.ApiEP.Type.Chat, gMe.multiChat.elDivChat).then((resp)=>{
+    sc.handle_chat_hs(gMe.baseURL, mChatMagic.ApiEP.Type.Chat, { chatPropsStream: gMe.chatProps.stream, toolsEnabled: false }, gMe.multiChat.elDivChat).then((resp)=>{
         gMe.toolsMgr.workers_postmessage_for_main(gMe.toolsMgr.workers.js, chatid, toolcallid, toolname, resp.content_equiv());
     }).catch((err)=>{
         gMe.toolsMgr.workers_postmessage_for_main(gMe.toolsMgr.workers.js, chatid, toolcallid, toolname, `Error:TC:ExternalAI:${err}`);
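A toy model (not the repo code) of why externalai_run above passes toolsEnabled: false to the child session: if the child could also call external_ai, the same job would keep spawning children forever.

```javascript
// Toy recursion model: a session that may delegate its job to an
// external_ai child. childToolsEnabled models what externalai_run
// passes down for the spawned session.
function runSession(job, toolsEnabled, childToolsEnabled, depth = 0) {
    if (depth > 5) {
        return "runaway recursion";  // would really loop forever
    }
    if (toolsEnabled && job === "needs-external-ai") {
        // the ai delegates the same job to a child external_ai session
        return runSession(job, childToolsEnabled, childToolsEnabled, depth + 1);
    }
    return "done";
}
```

With tools forced off for the child, the delegation bottoms out after one hop; with them left on, the same job recurses indefinitely.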