| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched-bench | batched-bench : add "separate text gen" mode (#17103) | 2025-11-10 12:59:29 +02:00 |
| cvector-generator | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| export-lora | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| gguf-split | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00 |
| imatrix | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00 |
| llama-bench | bench : cache the llama_context state at computed depth (#16944) | 2025-11-07 21:23:11 +02:00 |
| main | common : more accurate sampling timing (#17382) | 2025-11-20 13:40:10 +02:00 |
| mtmd | mtmd-cli: Avoid logging to stdout for model loading messages in mtmd-cli (#17277) | 2025-11-15 12:41:16 +01:00 |
| perplexity | perplexity : show more kl-divergence data (#16321) | 2025-09-29 09:30:45 +03:00 |
| quantize | ci : use smaller model (#16168) | 2025-09-22 09:11:39 +03:00 |
| rpc | Install rpc-server when GGML_RPC is ON. (#17149) | 2025-11-11 10:53:59 +00:00 |
| run | Manually link -lbsd to resolve flock symbol on AIX (#16610) | 2025-10-23 19:37:31 +08:00 |
| server | webui: Add a "Continue" Action for Assistant Message (#16971) | 2025-11-19 14:39:50 +01:00 |
| tokenize | cmake : Do not install tools on iOS targets (#15903) | 2025-09-16 09:54:44 +07:00 |
| tts | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00 |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |