Ensures that if the url being requested has any query strings in
it, then things don't get messed up when the url to get, including
its query, is extracted from the proxy request's query string.
First identify lines which contain only whitespace and replace them
with lines containing only the newline char.
Next strip out adjacent lines if they contain only newlines.
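The intent is roughly as sketched below; the actual helper lives on
the python proxy side, so this JavaScript version and its function
name are purely illustrative.

    // Illustrative sketch: turn whitespace-only lines into empty lines,
    // then collapse runs of adjacent empty lines into a single one.
    function collapse_blank_lines(text) {
        let lines = text.split("\n").map((l) => (l.trim() === "") ? "" : l);
        let out = [];
        for (const l of lines) {
            if ((l === "") && (out.length > 0) && (out[out.length - 1] === "")) {
                continue; // skip the additional adjacent blank line
            }
            out.push(l);
        }
        return out.join("\n");
    }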
As html can be malformed, xml ElementTree's XMLParser can't handle
it properly, so switch to the HTMLParser helper class provided by
python and try extending it.
Currently a minimal skeleton just to start things out, which
captures only the body contents.
Declare the result of UrlReq as a dataclass, so that one doesn't
goof up wrt updating and accessing its members.
Duplicate UrlRaw into UrlText; extracting text from the html needs
to be added next for UrlText.
Don't forget to map members of the entity got from fetch to things
from the saved original promise, because remember that what is got
is a promise.
Also add some comments around certain decisions and needed
exploration; a possible refinement wrt trapping, if needed, is also
added as a comment.
Whether to use all or allSettled is the question. Whether to wait
for a round trip through the related event loop or not is also a
question.
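One possible shape of the allSettled option, purely as an
illustration (the array layout and names here are assumptions, not
the actual code):

    // Illustrative: reqs is assumed to be [{ url, promise }, ...],
    // where promise is the saved fetch() promise for that url.
    async function resolve_fetches(reqs) {
        let settled = await Promise.allSettled(reqs.map((r) => r.promise));
        return settled.map((got, i) => ({
            url: reqs[i].url,   // mapped back from the saved original entry
            ok: got.status === "fulfilled",
            resp: got.status === "fulfilled" ? got.value : undefined,
            err: got.status === "rejected" ? got.reason : undefined,
        }));
    }

With allSettled one gets a per-entry status even if some fetches
fail, while all would reject on the first failure.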
Also update the sliding window context size to the last 9 chat
messages, so that there is a sufficiently large context for
multi-turn tool-call based adjusting by the ai and the user, without
needing to go the whole hog, which has the issue of overflowing the
currently set context window wrt the loaded ai model.
These common helpers avoid needing to add ts-check ignore tags in
places where constructs have been used which go beyond the strict
structured js handling that ts-check tries to achieve, but which
are still valid and legal.
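For example a helper along these lines (name and element id are
illustrative) keeps the null check and the type narrowing in one
place instead of sprinkling @ts-ignore at the call sites:

    /**
     * Illustrative helper: get an element by id and fail loudly if
     * missing, so ts-check knows the result is never null.
     * @param {string} id
     * @returns {HTMLElement}
     */
    function el_get(id) {
        let el = document.getElementById(id);
        if (el == null) {
            throw new Error(`element ${id} not found`);
        }
        return el;
    }

    // Narrowing to a concrete subtype is then a single JSDoc cast.
    let elUserIn = /** @type {HTMLTextAreaElement} */(el_get("user-in"));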
Expand the xml format id, name and content in the content field of
the tool result into appropriate fields in the tool result message
sent to the genai/llm engine on the server.
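Roughly along the lines below; the xml tag names and the network
message field names shown are assumptions based on the description
above, not necessarily the exact ones used:

    // Illustrative: pull id/name/content out of the xml wrapped tool
    // result text and place them into fields of the tool role message.
    function tool_result_to_ns_msg(xmlText) {
        let doc = new DOMParser().parseFromString(xmlText, "text/xml");
        return {
            role: "tool",
            tool_call_id: doc.querySelector("id")?.textContent ?? "",
            name: doc.querySelector("name")?.textContent ?? "",
            content: doc.querySelector("content")?.textContent ?? "",
        };
    }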
Use HTMLElement's dataset to maintain the tool call id along with
the element which maintains the tool name.
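ie something along these lines (element and key names illustrative):

    // Remember the tool call id on the same element that carries the
    // tool name; dataset stores it as data-tool-call-id in the DOM.
    elToolName.dataset.toolCallId = toolCallId;
    // ... later, when building the tool call result ...
    let id = elToolName.dataset.toolCallId;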
Pass it along to the tools manager and in turn to the actual tool
calls, and through them to the web worker handling the tool call
related code, which in turn returns it back as part of the object
used to return the tool call result.
Embed the tool call id, function name and function result into the
content field of the chat message in terms of an xml structure.
Also make use of the tool role to send back the tool call result.
Do note that currently the id, name and content are all embedded
into the content field of the tool role message sent to the ai
engine on the server.
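A sketch of the embedding; the wrapper tag name is illustrative, and
a real version would also need to escape xml special chars in the
result:

    // Illustrative: wrap id, function name and result into a single
    // xml snippet, which becomes the content of the tool role message.
    function tool_result_to_xml(id, name, result) {
        return [
            "<tool_response>",
            `<id>${id}</id>`,
            `<name>${name}</name>`,
            `<content>${result}</content>`,
            "</tool_response>",
        ].join("\n");
    }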
NOTE: Use the user query entry area for showing the tool call result
in the above mentioned xml form, as well as for the user to enter
their own queries. Based on the presence of the xml format data at
the beginning, the logic will treat it as a tool result, and if not,
then as a normal user query.
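So the dispatch on submit is roughly as below (tag name and helper
names illustrative, matching the sketch above):

    // If the query area text starts with the tool response xml, send
    // it with the tool role, else treat it as a normal user query.
    let text = elUserIn.value.trim();
    if (text.startsWith("<tool_response>")) {
        chat.add_tool_response(text);   // illustrative helper
    } else {
        chat.add_user_query(text);      // illustrative helper
    }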
The css has been updated to help show tool results/msgs with a
lightyellow background.
Users of recent_chat updated to work with ChatMessageEx.
As part of the same, recent_chat_ns is also added, for the case
where the array of chat messages can be passed as is, ie in the
chat mode, provided it contains only the network handshake
representation of the messages.
Simplify Add's semantics by expecting any validation of stuff before
adding to be done by the callers of Add and not by Add itself.
Also update it to expect a ChatMessageEx object.
Update all users of Add to follow the new syntax and semantics.
Remove the old and unused AddSysPromptOnlyAtBegin helper.
GetSystemLatest and its users updated wrt ChatMessageEx.
RecentChat updated wrt ChatMessageEx. Also now, irrespective of
whether the full history is being retrieved or only a subset, both
cases refer to the ChatMessageEx instances in SimpleChat.xchat
without creating new instances of anything.
Use the equivalent update_stream added directly to ChatMessageEx.
update_stream is also more generic to some extent and is directly
implemented by the ChatMessageEx class.
Rename ChatMessage to ChatMessageEx.
Add typedefs for NSToolCall and NSChatMessage; they represent the
way the corresponding data is structured in the network handshake.
Add logic to build the ChatMessageEx from data got over the network
in streaming mode.
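The typedefs would be roughly along these lines; the field names
assume the usual OpenAI compatible chat/completions layout and are
illustrative rather than exact:

    /**
     * @typedef {Object} NSToolCall
     * @property {string} id
     * @property {string} type
     * @property {{name: string, arguments: string}} function
     */

    /**
     * @typedef {Object} NSChatMessage
     * @property {string} role
     * @property {string} content
     * @property {NSToolCall[]} [tool_calls]
     */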
As tool calling, if enabled, will need access to the last few user
queries and ai assistant responses (which will also include in them
the tool call requests and the corresponding results), so that the
model can build answers based on its tool call requests and the
responses got, and also given that most of the models these days
have sufficiently large context windows, the sliding window context
implemented by the SimpleChat logic has been increased by default
to include roughly the last 4 queries and their responses.
Had forgotten to specify type as module wrt the web worker, which is
needed to allow it to import the toolsconsole module.
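ie the worker needs to be created as an es module (the script file
name here is illustrative):

    // type: 'module' is required for the worker script to use import
    // statements, eg to pull in the toolsconsole module.
    let gToolsWorker = new Worker("toolsworker.mjs", { type: "module" });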
Had forgotten to maintain the id of the timeout handler, which is
needed to clear/stop the timeout handler from triggering, if the
tool call response is got well in time.
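ie along these lines (names and duration illustrative):

    // Keep the id returned by setTimeout, so the pending timeout can
    // be cancelled if the tool call response arrives in time.
    let toolCallTimeoutId = setTimeout(() => {
        handle_toolcall_timeout();   // illustrative
    }, 30 * 1000);                   // illustrative duration
    // ... later, in the tool call response path ...
    clearTimeout(toolCallTimeoutId);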
As I am currently reverting the console redirection at the end of
handling a tool call code in the web worker message handler, I need
to set up the redirection each time. Also I had forgotten to clear
the console.log capture data space before a new tool call code is
executed; this is also fixed by this change.
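A sketch of that per-call flow inside the worker's message handler;
the capture buffer, helper name and message fields are illustrative:

    let gCaptured = [];
    const gConsoleLogOrig = console.log;

    onmessage = (ev) => {
        gCaptured.length = 0;              // clear capture space from any earlier call
        console.log = (...args) => {       // redirect afresh for this tool call
            gCaptured.push(args.join(" "));
        };
        try {
            let result = run_toolcall_code(ev.data);   // illustrative
            postMessage({ id: ev.data.id, result: result, logs: gCaptured.slice() });
        } finally {
            console.log = gConsoleLogOrig; // revert redirection at the end
        }
    };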
TODO: Need to abort the tool call code execution in the web worker,
if possible in future, when the client / browser side times out
waiting for the tool call response, ie if the tool call code is
taking up too much time.
tools manager/module
* setup the web worker that will help execute the tool call related
  codes in a js environment that is isolated from the browser's main
  js environment
* pass the web worker to the tool call providers, for them to use
* don't wait for the result from the tool call, as it will be got
  later asynchronously through a message
* allow users of the tools manager to register a callback, which
  will be called whenever a message is got from the web worker
  containing the response wrt a previously requested tool call
  execution (see the sketch after this list)
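Put together, the manager side wiring could look roughly like the
below (module shape and all names are illustrative); the registered
callback is what the simplechat side hooks into:

    // Illustrative sketch of the tools manager wiring.
    let gWorker = new Worker("toolsworker.mjs", { type: "module" });
    let gOnToolCallResponse = undefined;

    /** Let the ui/simplechat side register for async tool call responses. */
    export function register_oncall_response(cb) {
        gOnToolCallResponse = cb;
    }

    /** Fire off a tool call; the result arrives later via a worker message. */
    export function run_tool_call(id, name, args) {
        gWorker.postMessage({ id: id, name: name, args: args });
    }

    gWorker.onmessage = (ev) => {
        if (gOnToolCallResponse !== undefined) {
            gOnToolCallResponse(ev.data);   // { id, result, logs } or similar
        }
    };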
simplechat
* decouple toolcall response handling and toolcall requesting logic
* setup a timeout to take back control if a tool call takes up too
  much time. In turn help alert the ai model that the tool call took
  up too much time and so was aborted, by placing an appropriately
  tagged tool response into the user query area.
* register a callback that will be called when a response is got
  asynchronously wrt any requested tool calls.
  In turn take care of updating the user query area with the
  response got wrt the tool call, along with the tool response tag
  around it (see the sketch after this list)
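Again as an illustrative sketch rather than the exact code (names,
tag and timeout duration are assumptions):

    // Request a tool call, arm a timeout, handle the async response.
    let gToolCallTimeoutId = undefined;

    function toolcall_request(id, name, args) {
        toolsmanager.run_tool_call(id, name, args);
        gToolCallTimeoutId = setTimeout(() => {
            // Alert the ai model via a tagged, aborted tool response.
            elUserIn.value = `<tool_response>tool call timed out, aborted</tool_response>`;
        }, 30 * 1000);
    }

    toolsmanager.register_oncall_response((resp) => {
        clearTimeout(gToolCallTimeoutId);
        // Place the tagged tool response into the user query area.
        elUserIn.value = `<tool_response>${resp.result}</tool_response>`;
    });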