llama.cpp/docs/ops
Latest commit: c7358ddf64 by RachelMantel, 2026-01-30 12:00:49 +08:00
sycl: implement GGML_OP_TRI (#19089)
* sycl: implement GGML_OP_TRI
* docs: update ops.md for SYCL TRI
* docs: regenerate ops.md
* docs: update SYCL support for GGML_OP_TRI
File | Last commit | Date
BLAS.csv | docs(ggml): update backend ops (#18734) | 2026-01-10 18:48:17 +08:00
CANN.csv | docs : update ops.md for CANN backend (#18654) | 2026-01-16 13:32:17 +01:00
CPU.csv | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00
CUDA.csv | docs : update cpu and cuda ops (#17890) | 2025-12-09 23:31:29 +01:00
Metal.csv | metal : add count_equal op (#18314) | 2025-12-31 10:39:48 +02:00
OpenCL.csv | docs : update opencl ops (#17904) | 2025-12-10 15:20:00 +01:00
SYCL.csv | sycl: implement GGML_OP_TRI (#19089) | 2026-01-30 12:00:49 +08:00
Vulkan.csv | ops.md: update vulkan support (#17661) | 2025-12-01 15:26:21 -06:00
WebGPU.csv | ggml webgpu: support for backend sampling (#18880) | 2026-01-16 16:12:43 -08:00
ZenDNN.csv | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00
zDNN.csv | docs(ggml): update backend ops (#18734) | 2026-01-10 18:48:17 +08:00
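The "docs: regenerate ops.md" note in the commit above suggests these per-backend CSV files are the data behind the generated docs/ops.md support table. As a minimal sketch of how you might inspect one of them locally, the Python snippet below filters rows by operation name; the column names op_name and supported are assumptions about the CSV header (not a documented schema), so check the first line of the actual file and adjust them as needed.

```python
#!/usr/bin/env python3
"""Sketch: look up rows for one op in a docs/ops backend CSV.

Assumption: the CSV has header columns named "op_name" and "supported".
If the real files use different names, change the two strings below.
"""
import csv
import sys


def op_rows(csv_path: str, op: str):
    """Yield rows whose op-name column matches `op` (e.g. "TRI")."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed column name; missing columns just yield nothing.
            if row.get("op_name", "").strip().strip('"') == op:
                yield row


if __name__ == "__main__":
    # Hypothetical usage: python3 check_op.py docs/ops/SYCL.csv TRI
    path, op = sys.argv[1], sys.argv[2]
    for row in op_rows(path, op):
        print(row.get("supported", "?"), row)
```

This only reads the CSV as plain tabular data, so it stays valid even if extra columns are added, as long as the assumed column names exist.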