toolcall.py maintains the ToolCall, ToolManager and MCP related
types and base classes, so rename it to toolcalls.py.
Also add the bash script with curl used for testing the tools/list
MCP command.
Remove the sample function meta ref, as tools/list is working ok.
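A minimal sketch of such a curl based test; the host, port, path and
env var names here are assumptions, not the actual script:

```shell
#!/bin/sh
# Hypothetical endpoint; adjust host/port/path to the actual server setup.
URL="${MCP_URL:-http://127.0.0.1:8088/mcp}"
PAYLOAD='{"jsonrpc": "2.0", "id": "1", "method": "tools/list"}'
echo "request payload: $PAYLOAD"
# Set DO_POST=1 to actually hit the server with curl.
if [ "${DO_POST:-0}" = "1" ]; then
    curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$URL"
fi
```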
Also enforce the need for a reasonably sane Content-Length header
entry in our case. NOTE: it still allows 0 or other small content
lengths, which aren't necessarily valid.
As expected, dataclass field members with mutable default values need
default_factory.
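The standard python idiom being referred to; the class and field names
here are just for illustration:

```python
from dataclasses import dataclass, field

# A bare mutable default like `tags: list = []` makes dataclass raise
# ValueError at class creation time; default_factory is the fix.
@dataclass
class ToolMeta:
    name: str = ""
    tags: list[str] = field(default_factory=list)
```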
Don't forget to return after sending an error response.
The TypeAlias type hinting flow seems to go beyond TYPE_CHECKING.
Also email.message.Message[str, str] is not accepted, so keep things
simple wrt HttpHeaders for now.
By default the bearer based auth check is always done, whether in
https or http mode. However, by updating the sec.bAuthAlways config
entry to false, the bearer auth check will be carried out only in
https mode.
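The rule above reduces to a one-liner; the function name here is made
up, only sec.bAuthAlways comes from this commit:

```python
def need_bearer_check(is_https: bool, b_auth_always: bool) -> bool:
    # bAuthAlways=True (the default): check bearer auth in both http and https.
    # bAuthAlways=False: check only when running in https mode.
    return b_auth_always or is_https
```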
Given that there could be other service paths beyond /mcp exposed
in future, and that their post body need not contain json data,
move the conversion to json into the mcp_run handler.
Retaining the reading of the body in the generic do_POST ensures
that the read size limit is implicitly enforced, whether for /mcp now
or for any other path in future.
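The split described above might look roughly like this; the helper
names and the limit value are assumptions:

```python
import json

MAX_BODY = 1 << 20  # assumed read size limit, enforced for every path

def read_body(headers: dict, rfile) -> bytes:
    # generic do_POST side: read (and implicitly cap) the raw body,
    # without assuming anything about its format
    length = min(int(headers.get("Content-Length", "0")), MAX_BODY)
    return rfile.read(length)

def mcp_run(body: bytes) -> dict:
    # /mcp specific side: only here is the body assumed to be json
    return json.loads(body)
```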
Fix an oversight wrt ToolManager.meta, where I had created a dict
of name-keyed toolcall metas instead of a simple list of toolcall
metas. I had blindly duplicated the structure used for storing the
tool calls in tc_switch in the AnveshikaSallap client side code.
Add dataclasses to mimic the MCP tools/list response. However, wrt
the two odd differences between the MCP structure and the OpenAi
tools handshake structure, for now I have retained the OpenAi tools
hs structure.
Add a common helper send_mcp to ProxyHandler, given that mcp_toolscall
and mcp_toolslist, and even others like mcp_initialise in future,
require a common response mechanism.
With the above and a bit more, implement an initial go at the
tools/list response.
Build the list of tool calls
Trap some of the MCP post json based requests and map them to related
handlers. In turn implement the tool call execution handler.
Add some helper dataclasses wrt the expected MCP response structure.
TOTHINK: For now maintain id as a string and not an int, with the
idea to map it directly to the callid wrt the tool call handshake by
the ai model.
TOCHECK: For now shuffle the order of fields wrt jsonrpc and type
wrt the MCP response related structures, assuming the order shouldn't
matter. Need to cross check.
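The trap-and-map flow might be sketched as below; the handler bodies
are placeholders, only the method names and the id-as-string choice
come from these commits:

```python
def mcp_toolslist(req: dict) -> dict:
    return {"tools": []}  # placeholder

def mcp_toolscall(req: dict) -> dict:
    return {"content": [], "isError": False}  # placeholder

MCP_HANDLERS = {
    "tools/list": mcp_toolslist,
    "tools/call": mcp_toolscall,
}

def mcp_dispatch(req: dict) -> dict:
    handler = MCP_HANDLERS.get(req.get("method", ""))
    if handler is None:
        return {"error": {"code": -32601, "message": "method not found"}}
    # id kept as a string, per the TOTHINK note, so it can map directly
    # onto the ai model tool call handshake's callid
    return {"jsonrpc": "2.0", "id": str(req.get("id", "")),
            "result": handler(req)}
```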
Define a TypeAlias for HttpHeaders and use it wherever needed.
In turn map this to email.message.Message and dict for now.
If and when python evolves its Http Headers type into a better one,
it needs replacing in only one place.
Add a ToolManager class which
* maintains the list of tool calls and in turn allows any given
tool call to be executed and its response returned along with the
needed meta data
* generates the overall tool calls meta data
Also add ToolCallResponseEx, which maintains the full TCOutResponse
for use by tc_handle callers.
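An illustrative skeleton of the described ToolManager; everything
beyond the class name and its two responsibilities is a guess at the
shape, not the actual code:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolManager:
    tools: dict[str, Callable[[dict], str]] = field(default_factory=dict)
    metas: list[dict] = field(default_factory=list)

    def register(self, name: str, handler: Callable[[dict], str], meta: dict):
        self.tools[name] = handler
        self.metas.append(meta)

    def meta(self) -> list[dict]:
        # overall tool calls meta data: a simple list, not a name-keyed dict
        return self.metas

    def run(self, name: str, args: dict) -> str:
        # execute the given tool call and return its response
        return self.tools[name](args)
```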
Avoid duplicating the handling of some of the basic needed http
header entries.
Move the checking for any dependencies before enabling a tool call
into the respective tc??? module.
* for now this also demotes the logic from the previous fine-grained
per-tool-call dependency check to a more global dep check at the
respective module level
Will be looking at changing the handshake between the AnveshikaSallap
web tech based client logic and this tool calls server to follow
the emerging interoperable MCP standard.
Also remember to pick the tagDropREs from the passed args object and
not from the received http header.
TCHtmlText is also updated to get the tags to drop from the passed
args object and not the received http header. In turn allow the ai
to pass this optional arg as it sees fit, in co-ordination with the
user.
Instead of manually setting up rfile and wfile after switching to
ssl mode wrt a client request, now use the builtin setup provided
by the RequestHandler logic, so that these and any other needed
things will be set up as required over the new socket from the ssl
handshake, just in case new things are needed in future.
Minimal skeleton to allow dict [] style access to a dataclass based
class's attributes/fields. Also a get member function similar to
dict's. This simplifies the flow and avoids duplicating data between
the attribute and dict related name and data spaces.
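The simpler arch maps [] and get directly onto attributes, so there is
only one data space; a minimal sketch (class and field names here are
illustrative):

```python
from dataclasses import dataclass

class DictAccess:
    # dict [] style access backed directly by attributes
    def __getitem__(self, key: str):
        return getattr(self, key)

    def __setitem__(self, key: str, value):
        setattr(self, key, value)

    def get(self, key: str, default=None):
        return getattr(self, key, default)

@dataclass
class Sample(DictAccess):
    name: str = "aum"
    port: int = 8088  # illustrative value
```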
Add a helper base class to try to map a data class's attributes into
the underlying dict.
TODO: this potentially duplicates data in both the normal attribute
space as well as the dict items space, and will require additional
standard helper logics to be overridden to ensure sync between both
spaces et al. Given my distance from python internals for a long
time now, on pausing and thinking a bit, it is better to move to a
simpler arch where attributes are directly worked on for dict []
style access.
Instead of maintaining the config and some of the runtime states,
identified as gMe, as a generic literal dictionary which grows at
runtime with fields as required, try creating it as a class of
classes. In turn use the dataclass annotation to let the boilerplate
code get auto generated.
A config module is created with the above; however the remaining
part of the code is not yet updated to work with this new structure.
process_args and load_config moved into the new Config class.
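The "class of classes" shape might look like this; only sec.bAuthAlways
is taken from these commit messages, the rest is illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class SecConfig:
    bAuthAlways: bool = True  # from the bearer auth commit above

@dataclass
class Config:
    # nested dataclasses instead of a free-growing literal dict
    sec: SecConfig = field(default_factory=SecConfig)

gMe = Config()
```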
Otherwise the aum path was not handled immediately wrt exceptions.
This also ensures that any future changes wrt get request handling
get handled immediately wrt exceptions that may be missed by any
targeted exception handling.
Given that the default HTTPServer handles only one connection, and in
turn one request, at any given time, if a client opens a connection
and then doesn't do anything with it, it will block other clients by
leaving their requests in the network queue for long.
So to overcome the above issue, switch to ThreadingHTTPServer, which
starts a new thread for each request.
Given that previously the ssl wrapping was done wrt the main server
socket, even with the switch to ThreadingHTTPServer the ssl/tls
handshake still occurs in the main thread, before a child thread is
started for parallel request handling; thus the ssl handshake phase
blocks other client requests.
So now avoid wrapping the main server socket in ssl; instead wait for
ThreadingHTTPServer to start the new thread for a client request, ie
after a connection is accepted for the client, before trying to wrap
the connection in ssl. This ensures that the ssl handshake occurs in
this child (ie client request related) thread, so some rogue entity
opening a http connection and not doing the ssl handshake won't
block others.
In turn, in this case the rfile and wfile instances within the proxy
handler need to be remapped to the new ssl wrapped socket.
Implement todo noted in last commit, and bit more.
This brings in clearing of the external ai tool call special chat
session divStream during chat show, which ensures that it gets
hidden by default wrt other chat sessions and in turn only gets
enabled if the user triggers a new tool call involving the external
ai tool call.
This patch also ensures that if the ext ai tool call takes too much
time, and so the logic gives back control with a timed-out response
as a possible response to the ai wrt the tool call, then the external
ai tool call's live ai response is no longer visible in the current
chat session ui. So the user can go ahead with the timed-out response,
or some other user decided response, as the response to the tool
call, and take the chat in a different direction of their and the
ai's choosing.
Or else, if they want to, they can switch to the External Ai
specific special chat session and continue to monitor the response
from the tool call there, to understand what the final response to
that tool call would have been.
Rather, this should keep the ui flow clean.
ALERT: If the user triggers a new ext ai tool call, when the
old one is still alive in the background, then response from
both will be in a race for user visibility, so beware of it.
Always show all the info when show_info is called; in turn drop the
corresponding all-info enable flag wrt show_info as well as
chat_show.
Now chat_show gives its caller the option to enable showing of its
own chat session divStream. This is in addition to the handle
multipart response flow also calling the corresponding divStream
show.
Previously chat_show would have not only cleared the corresponding
chat session's divStream contents but would have also hidden the
divStream. Now, except for the clearChat case, in all other cases
the own divStream is unhidden when chat_show is called.
Without this, when a tool call takes too much time and in turn a
chat session times out the tool call, and the user then switches
between chat sessions, if the tool call was external_ai its related
live ai response would no longer be visible in any of the chat
sessions, including the external_ai special chat session, even if
the user had switched to that external_ai special chat session.
But now in the external_ai special chat session the live response
will be visible.
TODO: With this new semantic wrt chat_show, an end user can always
peek into a chat session's live ai stream response, if any, as long
as that chat session's ai server handshake is still active. So now,
after a tool call timeout, which allows users to switch between
sessions, it is better to disable the external ai live divStream
in other chat sessions when the user switches into them. This
ensures that
1. if the user doesn't switch out of the chat session which triggered
external_ai, for now the user can continue to see the ext ai live
response stream.
2. switching out of the chat session which triggered ext ai will
automatically disable viewing of the external ai live response from
all chat sessions except the external ai's special chat session.
Ie I need to explicitly clear not just the own divStream, but also
the external ai related divStream, which is appended to the end of
every chat session's UI.
This will tidy up the usage flow and ui, and avoid forcefully showing
the external ai tool call's live ai response in other chat sessions
which didn't trigger the ext ai tool call. Also, in the chat session
which triggered ext ai, it will stop showing if the user exits out
of that chat session. At the same time the user can always look at
the ext ai live response stream in the special chat session
corresponding to ext ai.
If the user explicitly makes a content text format selection, the
same will be used.
Else a format will be picked based on the session settings.
Now when the popover menu is shown, the current message's format
type is reflected in the popover menu.
Add format selection box to the popover.
Update the show_message logic to allow refreshing an existing message
ui element, rather than creating a new one.
Trigger refresh of the message ui element, when format selection
changes.
Move all markdown configs into a single object field.
Add an always flag; if set, all roles' message contents will be
treated as markdown, else only the ai assistant's messages will be
treated as markdown.
If lines immediately follow a list item, without the list marker
at their beginning, but with an offset matching the list item, then
these lines will be appended to that list item.
If there is an empty line between a list item and a new line with
some content, but without a list marker
* if the content offset is less than the last list item's, then
unwind the lists before such a line.
* if the content offset is larger than the last list item's, then
the line will be added as a new list item at the same level
as the last list item.
* if the content offset is the same as the last list item's, then
unwind the list by one level and then insert this line as a
new list item at this new unwound level.
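The three offset cases above, transcribed directly into a tiny
decision helper; a sketch of the rule, not the actual parser code:

```python
def list_continuation(offset: int, last_offset: int) -> str:
    """Decide what to do with a marker-less content line that follows
    an empty line after a list item."""
    if offset < last_offset:
        return "unwind-all"           # unwind the lists before this line
    if offset > last_offset:
        return "new-item-same-level"  # new item at the last item's level
    return "unwind-one-then-item"     # pop one level, then add item there
```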
Given that fetch_web_url_raw can now also fetch local files, if the
local file access scheme is enabled in simpleproxy.py, rename this
tool call by dropping web from its name, given that some ai models
were getting confused by it.
Maintain raw and sanitized versions of the line.
Make blockquote work with the raw line and not the sanitized line,
so that irrespective of whether sanitize is enabled or not, the
logic will still work. In turn re-enable HtmlSanitize.
Similar to listitem before, now also allow a para to have its long
lines split into adjacent lines; in turn the logic will take care of
merging them into a single para.
The common logic wrt both flows is moved into its own helper function.