Rename chatmsg_ui_refresh to chat_uirefresh.
Even in the case of the delete path, and in turn when deleting one of
the last two messages in a chat session, now use the generified chat
uirefresh logic instead of the chat_show full session refresh /
recreation of the session ui.
In turn, to make the uirefresh really generic and usable in all cases,
including the above case, take care of clearing the tool call edit /
trigger ui at the beginning, so that the last 2 messages then decide
whether to show the tool call edit/trigger ui or not, as well as the
tool call response edit / submit ui.
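A rough js sketch of the intended ordering (the helper names here are
just placeholders, not the actual implementation):

    // Sketch only: clear any tool call edit/trigger ui unconditionally at
    // the start, then let the last two messages of the session decide
    // afresh what (if anything) to show again.
    function chat_uirefresh(session, lastN = 2) {
        ui_clear_toolcall();                      // placeholder: drop existing tool call edit/trigger ui
        ui_refresh_last_msgs(session, lastN);     // placeholder: redraw the last N chat message blocks
        const msgs = session.messages;
        const last = msgs[msgs.length - 1];
        const prev = msgs[msgs.length - 2];
        if (last && last.role == 'assistant' && last.toolcall) {
            ui_show_toolcall_edit(last.toolcall); // placeholder: tool call edit / trigger ui
        } else if (prev && prev.role == 'assistant' && prev.toolcall) {
            ui_show_toolresp_edit(last);          // placeholder: tool call response edit / submit ui
        }
    }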
SC.add discards any temp role message, not just the tool temp msg.
New SC.add_smart, which either adds a new message to the chat session
or replaces the existing last message in it, based on whether the role
is the same or different, given that the chat session can't have chat
messages of the same role type occurring adjacent to each other. In
turn, rename MCUI chatmsg_add_uishow to chatmsg_addsmart_uishow and
use add_smart (a small js sketch of the idea follows below).
* helps wrt trying to rerun a tool call with modified args or so.
Rather, the previous discarding of temp role messages in SC.add is
not good enough on its own, as the uniqId will change in that case.
Helps avoid adding duplicate ToolTemp messages in the chat session ui.
* can help if loading a prev chat session which ended in a user
message without an ai response. The user can type in a new message
and continue that old chat session, with the new message replacing
the old user message, as well as initiating the handshake with the
ai server in a proper manner.
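A minimal js sketch of the add_smart idea (the message shape here is
simplified, not the actual SimpleChat code):

    // Sketch only: a chat session never keeps two adjacent messages of the
    // same role, so adding a message whose role matches the current last
    // message replaces that last message instead of appending a duplicate
    // neighbour (eg rerunning a tool call with edited args, or retyping the
    // trailing user msg of a reloaded session).
    class ChatSessionSketch {
        constructor() {
            this.messages = [];
        }
        add_smart(msg) {
            const last = this.messages[this.messages.length - 1];
            if (last && last.role == msg.role) {
                this.messages[this.messages.length - 1] = msg;  // same role: replace
                return 'replaced';
            }
            this.messages.push(msg);                            // different role: append
            return 'added';
        }
    }

    // usage sketch
    const cs = new ChatSessionSketch();
    cs.add_smart({ role: 'user', content: 'hi' });
    cs.add_smart({ role: 'user', content: 'hi again' });  // replaces the prev user msg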
Replace MCUI chatmsg_ui_updateprev_appendlast with chatmsg_ui_refresh,
which is a more generic flow that takes care of updating the ui as
needed, irrespective of whether the specified set of messages is
already in the chat session ui or not. It also allows the caller to
control how many messages at the end need to be refreshed wrt the ui.
If deleting a non-last (and non just-before-last) message, then just
directly remove the corresponding chat message block from the
ChatSession UI and be done with it.
However, if deleting the last (or just-before-last) message, then one
needs to decide whether the tool call edit/trigger ui is shown or
removed and so on, and similarly wrt the tool response edit/submit ui.
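A rough js sketch of that delete path decision (helper names are
placeholders):

    // Sketch only: a message away from the end just has its ui block removed
    // directly, while deleting either of the last two messages goes through
    // the generic chat uirefresh, so that the tool call edit/trigger and the
    // tool response edit/submit ui get re-decided.
    function chatmsg_del_uiupdate(session, index) {
        const bNearEnd = (index >= session.messages.length - 2);
        session.messages.splice(index, 1);
        if (!bNearEnd) {
            ui_remove_msg_block(index);   // placeholder: drop just that one chat message block
        } else {
            chat_uirefresh(session, 2);   // refresh last 2 msgs + tool call/resp ui (see earlier sketch)
        }
    }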
As position-area is not yet officially supported in Firefox (it's
only in nightly builds, as of now), switch to the inset block/inline
start/end css properties.
Had forgotten to move the one-shot resp handling into the try-catch
before. Fixed it. Ensure both the oneshot and the multipart resp
handling are within the try-catch.
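A hedged js sketch of the intended structure (the callback names are
made up):

    // Sketch only: keep both the one-shot and the streamed (multipart) resp
    // handling inside the same try-catch, so a failure in either path gets
    // caught and reported alike.
    async function handle_ai_response(resp, bStream, onPart, onError) {
        try {
            if (!bStream) {
                const body = await resp.json();         // one-shot resp, now also inside try-catch
                onPart(body);
            } else {
                const reader = resp.body.getReader();   // multipart / streamed resp
                const dec = new TextDecoder();
                for (;;) {
                    const { done, value } = await reader.read();
                    if (done) break;
                    onPart(dec.decode(value, { stream: true }));
                }
            }
        } catch (err) {
            onError(err);
        }
    }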
Add some todos for later.
Add a new check wrt the response being a normal one or an error
related one, ie where the content is actually an error message.
Use css conditional attribute styling to change the background color
of the user input textarea to match the tool role message block
color, when the user input textarea is in the TOOL.TEMP mode.
With this the user can know that the user input area is currently
showing and accepting tool result data for submission.
Had forgotten to update these two functions wrt the new tool response
related fields. This is fixed now.
Also show the tool-call-id and tool-name to the end user as part of
showing the chat message.
ALERT: on-disk structure change, old saves won't work, esp wrt tool
responses.
Pass a list to keep track of the numbering at different depths, as
well as to delay incrementing the numbering to the last minute.
Don't let the recursion go beyond a predefined limit.
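A small js sketch of the idea (not the actual helper; names are
illustrative):

    // Sketch only: nums[i] carries the numbering counter at depth i; it is
    // incremented only at the last minute, when an entry at that depth is
    // actually emitted, and recursion stops beyond a predefined depth limit.
    function obj_to_numbered_text(obj, nums = [], depth = 0, maxDepth = 8) {
        if (depth >= maxDepth) {
            return `${'  '.repeat(depth)}...\n`;
        }
        let out = '';
        nums[depth] = nums[depth] ?? 0;
        for (const [key, val] of Object.entries(obj)) {
            nums[depth] += 1;                           // delayed increment, per emitted entry
            const label = nums.slice(0, depth + 1).join('.');
            if (val !== null && typeof val === 'object') {
                out += `${'  '.repeat(depth)}${label} ${key}:\n`;
                nums[depth + 1] = 0;                    // restart numbering for the next depth
                out += obj_to_numbered_text(val, nums, depth + 1, maxDepth);
            } else {
                out += `${'  '.repeat(depth)}${label} ${key}: ${val}\n`;
            }
        }
        return out;
    }

    // usage sketch: prints 1 a, 2 b, 2.1 c, 2.2 d
    console.log(obj_to_numbered_text({ a: 1, b: { c: 2, d: 3 } }));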
Rather, a chat with gpt-oss generated an assistant response which
included chat-content, chat-reasoning and chat-toolcall all in the
same response. On responding to the same with a tool call result, the
server http handshake responded with a 500 Internal Server Error.
So added this to get more details in this case, as well as in
general for the future.
To make it easier for the ai model to understand that this works
mainly for html pages and not, say, xml or pdf or so. For those,
one needs to use the other explicit tool calls provided, like
fetchpdftext or fetchxmltext or so.
The server service path is renamed from urltext to htmltext.
SearchWebText is also updated to use htmltext now.
At the simpleproxy end
* Add the tag names hierarchy before the contents of a tag
* Remember to convert the tagDrops to lower case, as the HTMLParser
base class seems to do that by default.
At the client ui end
* if undefined, remember to pass an empty list wrt tagDrops (see the
sketch below this list).
* clean up the func description and also mention the possible
tagDrops for RSS feeds in the tool meta.
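A js sketch of the client side of that (the request shape shown here
is an assumption; only the default-to-empty-list and json-stringify
points are from the notes in this log):

    // Sketch only: ensure tagDrops is always a list, falling back to an
    // empty one, and json-stringify the request body sent to the simpleproxy
    // htmltext service.
    async function fetch_html_text(proxyBase, url, tagDrops) {
        const req = {
            url: url,
            tagDrops: tagDrops ?? [],   // if undefined, pass an empty list rather than nothing
        };
        const resp = await fetch(`${proxyBase}/htmltext`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(req),
        });
        return resp.text();
    }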
Me.tools.toolNames is now directly updated by the init of ToolsManager.
The two then's in the old tools.init were also unneeded even then, as
both could have been merged into a single then. However, with the new
flow, the 1st then is no longer required.
Also, the direct calling of the onmessage handler on the main thread
side, wrt an immediate result from a tool call, is now delayed by a
cycle through the event loop, by using a setTimeout.
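A tiny js sketch of that delay (the handler name is a placeholder):

    // Sketch only: when a tool call completes immediately on the main thread,
    // its result is still delivered through the same onmessage handler, but
    // only after a trip through the event loop, keeping the caller's flow
    // uniform with results that come back from the web worker.
    function deliver_immediate_result(onmessageHandler, result) {
        setTimeout(() => {
            onmessageHandler({ data: result });   // mimic the worker message event shape
        }, 0);
    }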
No longer expose the tools module through document, given that the
tools module mainly contains ToolsManager, whose only instance is
available through the global gMe.
Move the devel related exposing through the document object into a
function of its own.
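Something along these lines (the function name is made up):

    // Sketch only: keep the devel-time "poke at internals from the browser
    // console" exposure in one place, instead of scattering document.xyz
    // assignments around; the tools module itself is no longer exposed, as
    // ToolsManager is reachable through the global gMe anyway.
    function devel_expose(gMe) {
        document['gMe'] = gMe;   // eg inspect gMe.tools (the ToolsManager instance) from the console
    }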
Some ai's don't seem to prefer using this direct helper provided for
fetching a pdf as text, on their own. Instead, the ai (gpt-oss) seems
to be keen on fetching the raw pdf and extracting text et al, so now
renaming the function call to try and make its semantics more
readily obvious, hopefully.
It sometimes (not always) seems to assume that fetch_web_url_text can
convert a pdf to text and return it. Maybe I need to place the
specific fetch-pdf-as-text before the generic fetch-web-url-text
and so...
With the rename, the pdf specific fetch seems to be getting used
more.
Allow the user to clear the existing chat. The user does have the
option to load the just-cleared chat, if required.
Add icons wrt clearing the chat and settings.
Update the readme wrt searchDrops and auto settings ui creation.
Rename tools-auto to tools-autoSecs, to make it easy to realise
that the value represents seconds.
Update the initial skeleton wrt the tag drops logic
* had forgotten to convert the object to a json string at the client
end
* had confused js and python and tried accessing the dict elements
using . notation rather than [] notation in python.
* if the id-filtered tag to be dropped is found, from then on track
all other tags of the same type (independent of id), so that start
and end tags can be matched. Because the end tag callback won't
have attributes, all other tags of the same type need to be
tracked, for proper winding and unwinding to try to find the
matching end tag (see the sketch after this note).
* remember to reset the tracked drop tag type to None once the
matching end tag at the same depth is found. Should avoid some
unnecessary unwinding.
* set/fix the type wrt tagDrops explicitly to the needed depth and
ensure the dummy one and any explicitly got one are of the right
type.
Tested with the duckduckgo search engine, and now the div based
unneeded header is avoided in the returned search result.
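The real code lives in the HTMLParser subclass in simpleproxy.py; the
js sketch below (made-up names) just illustrates the winding /
unwinding idea described above:

    // Sketch only: once a tag matching a tagDrop entry (tag name + optional
    // id) is seen, every further start tag of the same type bumps a depth
    // counter and every end tag of that type decrements it, since end tag
    // callbacks carry no attributes; the drop ends when the counter unwinds
    // back to zero, at which point the tracked drop tag type is reset.
    class TagDropTracker {
        constructor(tagDrops) {
            this.tagDrops = tagDrops;   // eg [ { tag: 'div', id: 'header' } ]
            this.dropTag = null;        // tag type currently being dropped
            this.depth = 0;
        }
        on_starttag(tag, attrs) {       // returns true if this tag is to be dropped
            if (this.dropTag) {
                if (tag == this.dropTag) this.depth += 1;   // wind: same type nested inside
                return true;
            }
            const match = this.tagDrops.find(d => (d.tag == tag) && (!d.id || d.id == attrs.id));
            if (match) {
                this.dropTag = tag;
                this.depth = 1;
                return true;
            }
            return false;
        }
        on_endtag(tag) {                // returns true if this end tag is part of a dropped region
            if (!this.dropTag) return false;
            if (tag == this.dropTag) {
                this.depth -= 1;                            // unwind
                if (this.depth == 0) this.dropTag = null;   // matching end tag found, stop tracking
            }
            return true;
        }
    }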
Chances are that for ai models which don't support tool calling,
things will be such that the tool calls metadata shared will be
silently ignored without much issue.
So enabling the tool calling feature by default, so that in case one
is using an ai model with tool calling, the feature is readily
available for use.
Revert the SlidingWindow ChatHistory in Context from the last 10 to
the last 5 (2 more than the original, given the larger context support
in today's models) by default, given that tool handshakes now go
through the tools related side channel in the http handshake and
aren't morphed into the normal user-assistant channel of the
handshake.
Rename the path and tags/identifiers from Pdf2Text to PdfText.
Rename the function call to pdf_to_text; this should also help
indicate the semantics more unambiguously, just in case, especially
for smaller models.
Also move the debug dump helper to its own module.
Also remember to specify the class name in quotes, similar to
referring to a class from within a member of that class, wrt python
type checking.
As I was seeing the truncated message even for stripped plain text
web access, relooking at that initial go at truncating revealed
an oversight, which had the truncation logic trigger anytime
iResultMaxDataLength was greater than 0, irrespective of whether
the actual result was smaller than the allowed limit or not,
thus adding that truncated message to the end of the result
unnecessarily. Have fixed that oversight.
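In essence something like the below (js sketch; the truncation note
text is illustrative):

    // Sketch only: append the truncation note only when the result actually
    // exceeds the configured limit, not merely because a limit (> 0) is set.
    function maybe_truncate(result, iResultMaxDataLength) {
        if ((iResultMaxDataLength > 0) && (result.length > iResultMaxDataLength)) {
            return result.slice(0, iResultMaxDataLength) + '\n...[truncated]...';
        }
        return result;   // within the limit (or no limit set): pass through untouched
    }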
Also, the recent any-number-of-args based simpleproxy handshake
helper in toolweb seems to be working (at least for the existing
single arg based calls).
Update the descriptions of set and get to indicate the possible
corner cases, or rather the semantics in such situations.
Update the readme also a bit. The auto save and restore mentioned
there has nothing to do with the new data store mechanism.
In the eagerness of the initial skeleton, had forgotten that the
root/generic tool call router takes care of parsing the json string
into an object before calling the tool call, so no need to try
parsing it again. Fixed the same.
Hadn't converted the object based response from the data store
related calls in the db web worker into a json string before passing
it to the generic tool response callback; fixed the same.
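Roughly like this on the db web worker side (js sketch, names are
illustrative):

    // Sketch only: the object based result of a data store get/set is
    // json-stringified before being posted back, so the generic tool
    // response callback always receives a string like the other tools do.
    function post_datastore_result(toolCallId, resultObj) {
        self.postMessage({
            id: toolCallId,
            result: JSON.stringify(resultObj),   // eg { value: '...' } for get
        });
    }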
- Rather, the thought of making ChatMsgEx.createAllInOne handle
string or object is set aside for now, to keep things simple and
consistent to the greatest extent possible across different flows.
And good news - the flow is working, at least for the overall happy
path.
Need to check what corner cases are lurking; for example, calling set
on the same key more than once seemed to have some flow oddity, which
I need to check later.
Also maybe change the field name in the response to get from data to
value, to match the field name convention of set. GPT-OSS is fine
with it as is, but micro / nano / pico models may trip up in the
worst case, so better to keep things consistent.
Instead of using the shared bearer token as is, hash it with the
current year and use the hash.
Keep the /aum path out of the auth check.
In future, the bearer token could be transformed more often, as well
as with an additional nonce/dynamic token got from the server during
the initial /aum handshake, as also a running counter and so ...
NOTE: All this circus is not good enough, given that currently the
simpleproxy.py handshakes work over http. However, these skeletons
are put in place for the future, if needed.
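A js sketch of the client side hashing (uses Web Crypto, so it
assumes a secure context / localhost; the exact concatenation and
header format are assumptions):

    // Sketch only: rather than sending the shared bearer token as is, send a
    // sha-256 hash of token + current year; the server can derive and cache
    // the same hash, refreshing it when the year changes.
    async function bearer_hashed(sharedToken) {
        const year = new Date().getFullYear();
        const data = new TextEncoder().encode(`${sharedToken}${year}`);
        const digest = await crypto.subtle.digest('SHA-256', data);
        const hex = Array.from(new Uint8Array(digest))
            .map(b => b.toString(16).padStart(2, '0')).join('');
        return `Bearer ${hex}`;
    }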
TODO: There is a once in a blue moon race when the year transitions
between the client generating the request and the server handling the
req. But otherwise year transitions don't matter, because the client
always creates a fresh token, and the server checks for a year change
to generate a fresh token if required.
Moved it into Me->tools, so that the end user can modify the same as
required from the settings ui.
TODO: Currently, if the tc response is got after a tool call has
timed out and the user has submitted the default timed-out error
response, then the delayed actual response, when it is got, may
overwrite any new content in the user query box; this needs to be
tackled.