llama.cpp/docs
| Name | Last commit | Date |
|------|-------------|------|
| backend/ | clarify which steps | 2025-12-14 09:43:25 -05:00 |
| development/ | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| multimodal/ | model : support MiniCPM-V 4.5 (#15575) | 2025-08-26 10:05:55 +02:00 |
| ops/ | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00 |
| android.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| build-riscv64-spacemit.md | ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784) | 2025-12-08 10:41:34 +02:00 |
| build-s390x.md | ggml-zdnn: fix #15414, activate FP16 and BF16 acceleration and incorrect zTensor free (#15839) | 2025-09-13 02:39:52 +08:00 |
| build.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| docker.md | docs: use port 8080 in Docker examples (#17903) | 2025-12-11 17:12:07 +08:00 |
| function-calling.md | server : add documentation for `parallel_tool_calls` param (#15647) | 2025-08-29 20:25:40 +03:00 |
| install.md | docs : add "Quick start" section for new users (#13862) | 2025-06-03 13:09:36 +02:00 |
| llguidance.md | llguidance build fixes for Windows (#11664) | 2025-02-14 12:46:08 -08:00 |
| multimodal.md | mtmd : add support for Voxtral (#14862) | 2025-07-28 15:01:48 +02:00 |
| ops.md | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00 |