
# SimpleChat

by Humans for All.

## overview

This simple web frontend allows triggering/testing the server's /completions or /chat/completions endpoints in a simple way, with minimal code, from a common code base. It also allows maintaining a basic back-and-forth chat, to an extent.

NOTE: Given that the idea is basic minimal testing, it doesn't bother with the model's context length or with culling old messages from the chat. Also, I haven't yet added an input for a system prompt, but may add one.

NOTE: It doesn't set any parameters other than temperature for now. However, anyone who wants other parameters can update the js file as needed.
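Since the js file is where request parameters live, here is a rough sketch of how a /chat/completions request body with a temperature field might be assembled. The function and field names below are illustrative, not taken from simplechat.js; the request schema is assumed to follow the OpenAI-style layout the server endpoint accepts.

```javascript
// Hypothetical helper: build a minimal chat-completions request body.
// Only temperature is set, mirroring what the readme describes; add
// other sampling parameters here if you edit the js file.
function buildChatRequest(messages, temperature = 0.7) {
  return {
    messages: messages,       // array of { role, content } objects
    temperature: temperature, // the one sampling parameter set for now
  };
}

// Illustrative use from the browser (host/port are assumptions):
// fetch("http://127.0.0.1:8080/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest([{ role: "user", content: "Hi" }])),
// }).then((r) => r.json()).then(console.log);
```

Adding a new parameter then amounts to extending the returned object in one place.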

## usage

first run `examples/server`

- `bin/server -m path/model.gguf`

next run this web front end in `examples/server/public_simplechat`

- `./simplechat.sh`
- this uses python3's `http.server` to host this web front end
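A minimal stand-in for what simplechat.sh does is sketched below; the real script's exact flags and port may differ. python3's built-in `http.server` serves the current directory, so the browser can load simplechat.html from it.

```shell
# Serve the current directory (assumption: port 8111 is free; the real
# simplechat.sh may choose a different port).
python3 -m http.server 8111 &
SERVER_PID=$!
sleep 1
# Verify the front end is reachable; prints the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8111/
kill $SERVER_PID
```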

Open this simple web front end in your local browser, using the URL noted in the message printed when `simplechat.sh` is run.