llama.cpp/tools

Latest commit: b8eb3b3501 "wip fix tests" by Georgi Gerganov, 2025-12-06 16:13:27 +02:00
| Name              | Last commit                                                          | Date                      |
| ----------------- | -------------------------------------------------------------------- | ------------------------- |
| batched-bench     | batched-bench : add "separate text gen" mode (#17103)                | 2025-11-10 12:59:29 +02:00 |
| cvector-generator | refactor : simplify and improve memory management                    | 2025-11-28 16:09:42 +02:00 |
| export-lora       | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| gguf-split        | ci : use smaller model (#16168)                                      | 2025-09-22 09:11:39 +03:00 |
| imatrix           | refactor : simplify and improve memory management                    | 2025-11-28 16:09:42 +02:00 |
| llama-bench       | bench : cache the llama_context state at computed depth (#16944)     | 2025-11-07 21:23:11 +02:00 |
| main              | Merge branch 'master' into HEAD                                      | 2025-12-01 14:47:50 +02:00 |
| mtmd              | Merge remote-tracking branch 'upstream/master' into backend-sampling | 2025-12-04 08:17:50 +01:00 |
| perplexity        | refactor : simplify and improve memory management                    | 2025-11-28 16:09:42 +02:00 |
| quantize          | ci : use smaller model (#16168)                                      | 2025-09-22 09:11:39 +03:00 |
| rpc               | Install rpc-server when GGML_RPC is ON. (#17149)                     | 2025-11-11 10:53:59 +00:00 |
| run               | Manually link -lbsd to resolve flock symbol on AIX (#16610)          | 2025-10-23 19:37:31 +08:00 |
| server            | wip fix tests                                                        | 2025-12-06 16:13:27 +02:00 |
| tokenize          | cmake : Do not install tools on iOS targets (#15903)                 | 2025-09-16 09:54:44 +07:00 |
| tts               | refactor : simplify and improve memory management                    | 2025-11-28 16:09:42 +02:00 |
| CMakeLists.txt    | mtmd : rename llava directory to mtmd (#13311)                       | 2025-05-05 16:02:55 +02:00 |