llama.cpp/tools
Latest commit: fbef0fad7a "server: higher timeout for tests (#15621)" by Johannes Gäßler, 2025-08-27 20:58:09 +02:00
Name | Last commit | Last commit date
batched-bench | metal : optimize FA vec for large sequences and BS <= 8 (#15566) | 2025-08-26 14:22:14 +03:00
cvector-generator | llama : deprecate llama_kv_self_ API (#14030) | 2025-06-06 14:11:15 +03:00
export-lora | mtmd : fix 32-bit narrowing issue in export-lora and mtmd clip (#14503) | 2025-07-25 13:08:04 +02:00
gguf-split | scripts : make the shell scripts cross-platform (#14341) | 2025-06-30 10:17:18 +02:00
imatrix | imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076) | 2025-08-04 23:26:52 +02:00
llama-bench | llama : remove KV cache defragmentation logic (#15473) | 2025-08-22 12:22:13 +03:00
main | chat : include kwargs in template example (#15309) | 2025-08-14 10:28:29 -07:00
mtmd | mtmd : fix mtmd ios build (#15579) | 2025-08-26 20:05:50 +02:00
perplexity | perplexity : give more information about constraints on failure (#15303) | 2025-08-14 09:16:32 +03:00
quantize | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
rpc | rpc : Fix build on OpenBSD (#13541) | 2025-05-25 15:35:53 +03:00
run | cmake : do not search for curl libraries by ourselves (#14613) | 2025-07-10 15:29:05 +03:00
server | server : higher timeout for tests (#15621) | 2025-08-27 20:58:09 +02:00
tokenize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00
tts | server : disable context shift by default (#15416) | 2025-08-19 16:46:37 +03:00
CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00