llama.cpp/.github
Latest commit: Xuan-Son Nguyen · 6c2131773c
cli: new CLI experience (#17824)
* wip

* wip

* fix logging, add display info

* handle commands

* add args

* wip

* move old cli to llama-completion

* rm deprecation notice

* move server to a shared library

* move ci to llama-completion

* add loading animation

* add --show-timings arg

* add /read command, improve LOG_ERR

* add args for speculative decoding, enable show timings by default

* add arg --image and --audio

* fix windows build

* support reasoning_content

* fix llama2c workflow

* color default is auto

* fix merge conflicts

* properly fix color problem

Co-authored-by: bandoti <bandoti@users.noreply.github.com>

* better loading spinner

* make sure to clean color on force-exit

* also clear input files on "/clear"

* simplify common_log_flush

* add warning in mtmd-cli

* implement console writer

* fix data race

* add attribute

* fix llama-completion and mtmd-cli

* add some notes about console::log

* fix compilation

---------

Co-authored-by: bandoti <bandoti@users.noreply.github.com>
2025-12-10 15:28:59 +01:00
ISSUE_TEMPLATE/            ggml: initial IBM zDNN backend (#14975)                        2025-08-15 21:11:22 +08:00
actions/                   ci : add windows-cuda 13.1 release (#17839)                    2025-12-07 14:02:04 +01:00
workflows/                 cli: new CLI experience (#17824)                               2025-12-10 15:28:59 +01:00
copilot-instructions.md    readme : add RVV,ZVFH,ZFH,ZICBOP support for RISC-V (#17259)   2025-11-14 09:12:56 +02:00
labeler.yml                ci : apply model label to models (#16994)                      2025-11-04 12:29:39 +01:00
pull_request_template.md   repo : update links to new url (#11886)                        2025-02-15 16:40:57 +02:00