SimpleChatTCRV:Markdown:Process headline and horizline

Also update the readme a bit, to better satisfy the md file format.
hanishkvc 2025-11-26 21:44:23 +05:30
parent 707b719f67
commit 67f971527d
3 changed files with 56 additions and 23 deletions

View File

@@ -266,6 +266,10 @@ It is attached to the document object. Some of these can also be updated using t
* NOTE: the latest user message (query/response/...) for which we need an ai response will also be counted as belonging to iRecentUserMsgCnt.
* bMarkdown - text contents in the messages are interpreted as Markdown based text and in turn converted to html form for viewing by the end user.
* bMarkdownHtmlSanitize - the text content is sanitized using the browser's dom parser, so that any html tags get converted to normal visually equivalent text representation, before processing by the markdown to html conversion logic.
* bCompletionFreshChatAlways - whether Completion mode collates complete/sliding-window history when communicating with the server or only sends the latest user query/message.
* bCompletionInsertStandardRolePrefix - whether Completion mode inserts role related prefix wrt the messages that get inserted into prompt field wrt /Completion endpoint.
@@ -280,13 +284,13 @@ It is attached to the document object. Some of these can also be updated using t
* enabled - control whether tool calling is enabled or not
remember to enable this only for GenAi/LLM models which support tool/function calling.
* remember to enable this only for GenAi/LLM models which support tool/function calling.
* proxyUrl - specify the address for the running instance of bundled local.tools/simpleproxy.py
* proxyAuthInsecure - shared token between simpleproxy.py server and client ui, for accessing service provided by it.
Shared token is currently hashed with the current year and in turn handshaked over the network. In future, if required, one could also include a dynamic token provided by the simpleproxy server during the /aum handshake, and a running counter or so, into the hashed token. ALERT: However do remember that currently the handshake occurs over http and not https, so others can snoop the network and get the token. A per-client-ui running counter and a random dynamic token can help mitigate things to some extent, if required in future.
* Shared token is currently hashed with the current year and in turn handshaked over the network. In future, if required, one could also include a dynamic token provided by the simpleproxy server during the /aum handshake, and a running counter or so, into the hashed token. ALERT: However do remember that currently the handshake occurs over http and not https, so others can snoop the network and get the token. A per-client-ui running counter and a random dynamic token can help mitigate things to some extent, if required in future.
* searchUrl - specify the search engine's search url template along with the tag SEARCHWORDS in place where the search words should be substituted at runtime.
@@ -304,9 +308,9 @@ It is attached to the document object. Some of these can also be updated using t
* autoSecs - the amount of time in seconds to wait before the tool call request is auto triggered and generated response is auto submitted back.
setting this value to 0 (the default) disables the auto logic, so that the end user can review the tool calls requested by the ai and, if needed, even modify them before triggering/executing them, as well as review and modify the results generated by the tool call before submitting them back to the ai.
* setting this value to 0 (the default) disables the auto logic, so that the end user can review the tool calls requested by the ai and, if needed, even modify them before triggering/executing them, as well as review and modify the results generated by the tool call before submitting them back to the ai.
this is specified in seconds so that, by default, users will normally not overload any website through the proxy server.
* this is specified in seconds so that, by default, users will normally not overload any website through the proxy server.
the builtin tools' metadata is sent to the ai model in the requests sent to it.
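The autoSecs behaviour above amounts to a delayed trigger. A minimal sketch, assuming a hypothetical helper name (the real implementation is not part of this diff):

```javascript
// Hypothetical sketch: auto trigger a tool call after autoSecs seconds.
// 0 (the default) disables the auto logic, leaving the user in control.
function scheduleAutoToolCall(autoSecs, runToolCall) {
    if (autoSecs <= 0) {
        return null; // user reviews, edits and triggers manually
    }
    return setTimeout(runToolCall, autoSecs * 1000);
}
```

The seconds-granularity delay doubles as a crude rate limit on outgoing proxy requests, matching the note above.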
@@ -316,11 +320,11 @@ It is attached to the document object. Some of these can also be updated using t
* apiRequestOptions - maintains the list of options/fields to send along with api request, irrespective of whether /chat/completions or /completions endpoint.
If you want to add additional options/fields to send to the server/ai-model, and/or remove them, for now you can do these actions manually using the browser's development-tools/console.
* If you want to add additional options/fields to send to the server/ai-model, and/or remove them, for now you can do these actions manually using the browser's development-tools/console.
For string, numeric, boolean and object fields in apiRequestOptions, including even those added by a user at runtime by directly modifying gMe.apiRequestOptions, settings ui entries will be auto created.
* For string, numeric, boolean and object fields in apiRequestOptions, including even those added by a user at runtime by directly modifying gMe.apiRequestOptions, settings ui entries will be auto created.
cache_prompt option supported by example/server is allowed to be controlled by the user, so that any caching supported wrt system-prompt and chat history, if usable, can get used. When the chat history sliding window is enabled, cache_prompt logic may or may not kick in at the backend wrt the same, based on aspects related to the model, positional encoding, attention mechanism et al. However the system prompt should ideally get the benefit of caching.
* cache_prompt option supported by tools/server is allowed to be controlled by the user, so that any caching supported wrt system-prompt and chat history, if usable, can get used. When the chat history sliding window is enabled, cache_prompt logic may or may not kick in at the backend wrt the same, based on aspects related to the model, positional encoding, attention mechanism et al. However the system prompt should ideally get the benefit of caching.
* headers - maintains the list of http headers sent when request is made to the server. By default
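Since apiRequestOptions and headers are plain objects on the document-attached gMe, the manual console editing mentioned above would look something like this. The gMe shape is mocked here and the specific field names (top_k etc) are purely illustrative, not confirmed defaults:

```javascript
// Illustrative mock of the document-attached settings object; real field
// values differ. top_k is an example field added at runtime, not a default.
const gMe = {
    apiRequestOptions: { temperature: 0.7, max_tokens: 1024, cache_prompt: true },
    headers: { 'Content-Type': 'application/json' },
};
gMe.apiRequestOptions.top_k = 40;              // add a field to send to the server
delete gMe.apiRequestOptions.max_tokens;       // stop sending a field
gMe.headers['Authorization'] = 'Bearer <key>'; // placeholder, never a real key
```

As the text notes, settings ui entries get auto created even for fields added this way.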

View File

@@ -113,8 +113,6 @@ A lightweight simple minded ai chat client with a web front-end that supports mu
- user can update the settings for auto executing these actions, if needed
- external_ai allows invoking a separate, optionally fresh-by-default, ai instance
- ai could run self modified targeted versions of itself/... using custom system prompts and user messages as needed
- user can set up an ai instance with additional compute access, which should be used only if needed
- by default in such an instance
- tool calling is kept disabled along with
- client side sliding window of 1,
@@ -123,6 +121,14 @@ A lightweight simple minded ai chat client with a web front-end that supports mu
and the default behaviour will get impacted if you modify the settings of this special chat session.
- Restarting this chat client logic will force reset things to the default behaviour,
however any other settings wrt TCExternalAi that were changed will persist across restarts.
- this instance maps to the current ai server itself by default, but can be changed by user if needed.
- could help with handling specific tasks using targeted personas or models
- ai could run self modified targeted versions of itself/... using custom system prompts and user messages as needed
- user can set up an ai instance with additional compute, which should be used only if needed, to keep costs in control
- can enable a modular pipeline with task-type and/or job-instance specific decoupling, if needed
- tasks offloaded could include
- summarising, data extraction, formatted output, translation, ...
- creative writing, task breakdown, ...
- Client side sliding window context control, using `iRecentUserMsgCnt`, helps limit the context sent to the ai model
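A sliding window keyed on recent user messages, as `iRecentUserMsgCnt` suggests, could be sketched as below. This is a guess at the semantics (keep the system prompt, plus everything from the Nth-most-recent user message onwards, with a negative count keeping the full history); the client's real code is not in this diff:

```javascript
// Hypothetical sketch of client-side sliding window context control.
// Negative iRecentUserMsgCnt keeps the full history (assumption).
function slidingWindow(messages, iRecentUserMsgCnt) {
    if (iRecentUserMsgCnt < 0) {
        return messages;
    }
    // indices of user messages, oldest to newest
    const userIdx = messages
        .map((m, i) => (m.role === 'user' ? i : -1))
        .filter((i) => i >= 0);
    const keepFrom = userIdx.length <= iRecentUserMsgCnt
        ? 0
        : userIdx[userIdx.length - iRecentUserMsgCnt];
    // system prompt always survives the window
    return messages.filter((m, i) => m.role === 'system' || i >= keepFrom);
}
```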
@@ -135,8 +141,9 @@ A lightweight simple minded ai chat client with a web front-end that supports mu
- built using plain html + css + javascript and python
- no additional dependencies that one needs to worry about and in turn keep track of
- except for pypdf, if pdf support is needed. automatically drops pdf tool call support, if pypdf is missing
- fits within ~260KB even in uncompressed source form (including simpleproxy.py)
- fits within ~50KB compressed source or ~284KB in uncompressed source form (both including simpleproxy.py)
- easily extend with more tool calls using either javascript or python, for additional functionality
as you see fit
Start exploring / experimenting with your favorite ai models and their capabilities.
@@ -149,7 +156,7 @@ One can modify the session configuration using Settings UI. All the settings and
| Group | Purpose |
|---------|---------|
| `chatProps` | ApiEndpoint, streaming, sliding window, ... |
| `chatProps` | ApiEndpoint, streaming, sliding window, markdown, ... |
| `tools` | `enabled`, `proxyUrl`, `proxyAuthInsecure`, search URL/template & drop rules, max data length, timeouts |
| `apiRequestOptions` | `temperature`, `max_tokens`, `frequency_penalty`, `presence_penalty`, `cache_prompt`, ... |
| `headers` | `Content-Type`, `Authorization`, ... |
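Put together, the grouped settings in the table above might be shaped roughly like this. Field names are drawn from the table; the values, and any field not listed there, are illustrative guesses, not the client's actual defaults:

```javascript
// Rough shape only; values are illustrative, not real defaults.
const sessionSettings = {
    chatProps: { apiEndPoint: 'http://127.0.0.1:8080', bStream: true, iRecentUserMsgCnt: -1, bMarkdown: true },
    tools: { enabled: false, proxyUrl: 'http://127.0.0.1:3128', proxyAuthInsecure: '<token>' },
    apiRequestOptions: { temperature: 0.7, max_tokens: 1024, frequency_penalty: 0, presence_penalty: 0, cache_prompt: true },
    headers: { 'Content-Type': 'application/json', Authorization: '' },
};
```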
@@ -190,7 +197,7 @@ One can modify the session configuration using Settings UI. All the settings and
- next wrt the last tool message
- set role back to `TOOL-TEMP`
- edit the response as needed
- delete the same
- or delete the same
- user will be given option to edit and retrigger the tool call
- submit the new response

View File

@@ -299,10 +299,40 @@ export class MarkDown {
this.html += `</blockquote>\n`
}
this.in.blockQuote = startTok
this.html += `<p>${lineSani}</p>`
this.html += `<p>${lineSani}</p>\n`
return true
}
/**
* Process headline.
* @param {string} line
*/
process_headline(line) {
if (line.startsWith("#")) {
this.unwind_list()
let startTok = line.split(' ', 1)[0]
let hLevel = startTok.length
this.html += `<h${hLevel}>${line.slice(hLevel)}</h${hLevel}>\n`
return true
}
return false
}
/**
* Process horizontal line.
* @param {string} line
*/
process_horizline(line) {
// 3 or more of -, _ or * making up the whole line
// some online notes seemed to indicate trailing spaces at the end are ok, so accepting same
// NOTE: alternatives are grouped so the ^ and \s*$ anchors apply to all three
if (line.match(/^([-]{3,}|[*]{3,}|[_]{3,})\s*$/) != null) {
this.unwind_list()
this.html += "<hr>\n"
return true
}
return false
}
/**
* Process a line from markdown content
* @param {string} lineRaw
@@ -316,24 +346,16 @@ export class MarkDown {
} else {
line = lineRaw
}
let lineA = line.split(' ')
if (this.process_pre_fenced(line)) {
return
}
if (this.process_table_line(line)) {
return
}
// 3 or more of --- or ___ or *** followed by space
// some online notes seemed to indicate spaces at end, so accepting same
if (line.match(/^[-]{3,}|[*]{3,}|[_]{3,}\s*$/) != null) {
this.unwind_list()
this.html += "<hr>\n"
if (this.process_horizline(line)) {
return
}
if (line.startsWith ("#")) {
this.unwind_list()
let hLevel = lineA[0].length
this.html += `<h${hLevel}>${line.slice(hLevel)}</h${hLevel}>\n`
if (this.process_headline(line)) {
return
}
if (this.process_blockquote(lineRaw, line)) {
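The two new helpers can be exercised standalone. The sketch below re-creates their core checks outside the class (dropping the unwind_list/this.html state), with the horizontal-line regex anchored as a whole line per the comment's stated intent; the helper names here are local to this sketch:

```javascript
// Standalone re-creation of the headline / horizline checks above,
// minus the MarkDown class state.
function headlineHtml(line) {
    if (!line.startsWith('#')) {
        return null;
    }
    // length of the leading run of #s gives the heading level
    const hLevel = line.split(' ', 1)[0].length;
    return `<h${hLevel}>${line.slice(hLevel)}</h${hLevel}>`;
}

function isHorizLine(line) {
    // the whole line must be 3+ of -, * or _, plus optional trailing spaces
    return /^([-]{3,}|[*]{3,}|[_]{3,})\s*$/.test(line);
}
```

Note that headlineHtml keeps the space after the #s in the emitted text, mirroring the slice-based extraction in process_headline.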