llama.cpp/ggml/src/ggml-opencl

Last commit: 046d5fd44e by Aaron Teo, "llama: use host memory if device reports 0 memory (#18587)", 2026-01-09 05:34:56 +08:00

kernels/          opencl: add FILL op support (#18682)                          2026-01-07 22:04:50 -08:00
CMakeLists.txt    opencl: add FILL op support (#18682)                          2026-01-07 22:04:50 -08:00
ggml-opencl.cpp   llama: use host memory if device reports 0 memory (#18587)    2026-01-09 05:34:56 +08:00