llama.cpp/docs
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| backend | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| development | common : introduce composable PEG parser combinators for chat parsing (#17136) | 2025-12-03 12:45:32 +02:00 |
| multimodal | model : support MiniCPM-V 4.5 (#15575) | 2025-08-26 10:05:55 +02:00 |
| ops | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00 |
| android.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| build-riscv64-spacemit.md | ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784) | 2025-12-08 10:41:34 +02:00 |
| build-s390x.md | ggml-zdnn: fix #15414, activate FP16 and BF16 acceleration and incorrect zTensor free (#15839) | 2025-09-13 02:39:52 +08:00 |
| build.md | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| docker.md | devops: fix failing s390x docker build (#16918) | 2025-11-02 08:48:46 +08:00 |
| function-calling.md | server : add documentation for `parallel_tool_calls` param (#15647) | 2025-08-29 20:25:40 +03:00 |
| install.md | docs : add "Quick start" section for new users (#13862) | 2025-06-03 13:09:36 +02:00 |
| llguidance.md | llguidance build fixes for Windows (#11664) | 2025-02-14 12:46:08 -08:00 |
| multimodal.md | mtmd : add support for Voxtral (#14862) | 2025-07-28 15:01:48 +02:00 |
| ops.md | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00 |