llama.cpp/.devops
Latest commit: ae9f8df778
fix(docker): add missing libglvnd libraries to Vulkan image (#18664)
Add libglvnd0, libgl1, libglx0, libegl1, libgles2 to the Vulkan
Dockerfile base image. These libraries are required by mesa-vulkan-drivers
to properly initialize the Vulkan ICD and detect GPU devices.

Without these libraries, vkEnumeratePhysicalDevices() returns an empty
list, resulting in a "ggml_vulkan: No devices found" error.

Fixes #17761
2026-01-07 16:57:42 +01:00
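The fix described above amounts to installing the libglvnd dispatch libraries alongside mesa-vulkan-drivers in the image. A minimal sketch of the relevant Dockerfile fragment (the package names come from the commit message; the base image and surrounding build steps are assumptions, not the actual vulkan.Dockerfile contents):

```dockerfile
# Hypothetical base image; the real Vulkan image may use a different one.
FROM ubuntu:24.04

# mesa-vulkan-drivers relies on the libglvnd dispatch libraries to
# initialize the Vulkan ICD; without them, vkEnumeratePhysicalDevices()
# returns an empty list and ggml_vulkan reports "No devices found".
RUN apt-get update && apt-get install -y --no-install-recommends \
        mesa-vulkan-drivers \
        libglvnd0 libgl1 libglx0 libegl1 libgles2 \
    && rm -rf /var/lib/apt/lists/*
```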
nix | Install rpc-server when GGML_RPC is ON. (#17149) | 2025-11-11 10:53:59 +00:00
cann.Dockerfile | CLI: fixed adding cli and completion into docker containers, improved docs (#18003) | 2025-12-16 11:52:23 +01:00
cpu.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
cuda-new.Dockerfile | docker : add CUDA 13.1 image build (#18441) | 2025-12-30 22:28:53 +01:00
cuda.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
intel.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
llama-cli-cann.Dockerfile | CLI: fixed adding cli and completion into docker containers, improved docs (#18003) | 2025-12-16 11:52:23 +01:00
llama-cpp-cuda.srpm.spec | CLI: fixed adding cli and completion into docker containers, improved docs (#18003) | 2025-12-16 11:52:23 +01:00
llama-cpp.srpm.spec | CLI: fixed adding cli and completion into docker containers, improved docs (#18003) | 2025-12-16 11:52:23 +01:00
musa.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
rocm.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
s390x.Dockerfile | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
tools.sh | docker : include legacy llama-completion binary (#17964) | 2025-12-12 19:39:23 +01:00
vulkan.Dockerfile | fix(docker): add missing libglvnd libraries to Vulkan image (#18664) | 2026-01-07 16:57:42 +01:00