llama.cpp/ggml/src
Latest commit 1eeb523c3e by Giuseppe Scrivano
vulkan: optimize UMA buffer operations and fix driver hangs (#16059)
* vulkan: optimize UMA buffer operations and fix driver hangs

The previous implementation blocked the GPU for extended periods,
causing the i915 driver's hangcheck protection to reset the context.

[32628.443070] i915 0000:00:02.0: [drm] GPU HANG: ecode 12:1:85dffffb, in llama-server [194114]
[32628.443091] i915 0000:00:02.0: [drm] llama-server[194114] context reset due to GPU hang

* vulkan: implement deferred_memset on UMA

---------

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
2025-09-21 08:31:55 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ggml-blas | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-cann | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-cpu | ggml : refactor forward_dup for cpu backend (#16062) | 2025-09-19 06:31:56 +02:00 |
| ggml-cuda | CUDA : conditionally add cuda architectures (ggml/1341) | 2025-09-20 13:02:14 +03:00 |
| ggml-hip | HIP: bump requirement to rocm 6.1 (#15296) | 2025-08-13 20:44:30 +02:00 |
| ggml-metal | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-musa | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) | 2025-08-20 16:58:49 +02:00 |
| ggml-opencl | opencl: optimize mxfp4 kernels (#16037) | 2025-09-18 12:03:34 -07:00 |
| ggml-rpc | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-sycl | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-vulkan | vulkan: optimize UMA buffer operations and fix driver hangs (#16059) | 2025-09-21 08:31:55 +02:00 |
| ggml-webgpu | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-zdnn | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| CMakeLists.txt | cmake : fix static linking for OpenMP on Unix-like systems (#16031) | 2025-09-18 23:07:18 +02:00 |
| ggml-alloc.c | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-backend-impl.h | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-backend-reg.cpp | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) | 2025-09-11 22:47:38 +02:00 |
| ggml-backend.cpp | rename optimize_graph to graph_optimize (#16082) | 2025-09-18 13:46:17 -05:00 |
| ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379) | 2025-08-18 09:23:56 +02:00 |
| ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml : fix padding in timestep embedding kernels (#15932) | 2025-09-16 15:25:57 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | gguf: gguf_writer refactor (#15691) | 2025-09-05 11:34:28 +02:00 |