llama.cpp/tools
hanishkvc 13c1c9d285 SimpleChatTCRV:Vision: DataUrl helpers
Move all dataUrl handling into helper functions.

This keeps dataUrl manipulation in one controlled place, so that future
changes to its semantics can be carried out by updating the helper
functions suitably and, in turn, updating the callers as needed.

For now, avoid push and pop and work with the 0th index directly, given
that the logic is currently set up to handle only a single image with
the AI model. This keeps things simple and can easily be changed later
if required, as sketched below.
2025-12-04 19:41:39 +05:30
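
The actual client code lives under tools/server (public_simplechat); the sketch below only illustrates the helper-function approach described in the commit message, using hypothetical names (ImageDataUrlStore, set/get/clear/has) rather than the tool's real identifiers. It shows dataUrl access funneled through helpers that operate on index 0 of an internal array, so multi-image support could later be added by changing only the helpers and the callers that need it.

```typescript
// Hypothetical sketch of the helper-function approach; the names here are
// illustrative, not the actual SimpleChat identifiers.

/** Holds the dataUrl of the image to send with the next chat message. */
class ImageDataUrlStore {
    // Kept as an array so multi-image support can be added later,
    // but for now only index 0 is used (single image per request).
    private urls: string[] = [];

    /** Set the single image dataUrl; overwrites any previous value. */
    set(dataUrl: string): void {
        this.urls[0] = dataUrl;
    }

    /** Get the current image dataUrl, or undefined if none is set. */
    get(): string | undefined {
        return this.urls[0];
    }

    /** Clear the stored dataUrl once it has been sent to the model. */
    clear(): void {
        this.urls.length = 0;
    }

    /** True if an image is queued for the next message. */
    has(): boolean {
        return this.urls.length > 0;
    }
}

// Usage sketch: callers go through the helpers instead of touching the
// array directly, so a future semantic change (e.g. multiple images)
// only needs updates in the helpers and at the affected call sites.
const imageStore = new ImageDataUrlStore();
imageStore.set("data:image/png;base64,iVBORw0KGgo...");
if (imageStore.has()) {
    const url = imageStore.get();
    // ...attach url to the outgoing chat request...
    imageStore.clear();
}
```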
| Name | Last commit | Last updated |
|---|---|---|
| batched-bench | batched-bench : add "separate text gen" mode (#17103) | 2025-11-10 12:59:29 +02:00 |
| cvector-generator | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| export-lora | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| gguf-split | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00 |
| imatrix | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00 |
| llama-bench | bench : cache the llama_context state at computed depth (#16944) | 2025-11-07 21:23:11 +02:00 |
| main | cli: add migration warning (#17620) | 2025-11-30 15:32:43 +01:00 |
| mtmd | mtmd: fix --no-warmup (#17695) | 2025-12-02 22:48:08 +01:00 |
| perplexity | perplexity : show more kl-divergence data (#16321) | 2025-09-29 09:30:45 +03:00 |
| quantize | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00 |
| rpc | Install rpc-server when GGML_RPC is ON. (#17149) | 2025-11-11 10:53:59 +00:00 |
| run | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00 |
| server | SimpleChatTCRV:Vision: DataUrl helpers | 2025-12-04 19:41:39 +05:30 |
| tokenize | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| tts | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |