# llama.cpp/docs

Latest commit: `e4ae383317` by Yuichiro Utsumi, "docs: use port 8080 in Docker examples (#17903)", 2025-12-11 17:12:07 +08:00

| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| backend/ | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| development/ | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| multimodal/ | model : support MiniCPM-V 4.5 (#15575) | 2025-08-26 10:05:55 +02:00 |
| ops/ | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00 |
| android.md | | |
| build-riscv64-spacemit.md | ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784) | 2025-12-08 10:41:34 +02:00 |
| build-s390x.md | ggml-zdnn: fix #15414, activate FP16 and BF16 acceleration and incorrect zTensor free (#15839) | 2025-09-13 02:39:52 +08:00 |
| build.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| docker.md | docs: use port 8080 in Docker examples (#17903) | 2025-12-11 17:12:07 +08:00 |
| function-calling.md | server : add documentation for `parallel_tool_calls` param (#15647) | 2025-08-29 20:25:40 +03:00 |
| install.md | | |
| llguidance.md | | |
| multimodal.md | mtmd : add support for Voxtral (#14862) | 2025-07-28 15:01:48 +02:00 |
| ops.md | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00 |