llama.cpp/ggml/src
Matthew Michel 9de9672adb
sycl: use async memory allocation to fix crashes during graph recording (#16644)
* sycl: use async memory allocation to fix graph recording failures

Setting GGML_SYCL_DISABLE_GRAPHS=0 (i.e. enabling SYCL graphs) causes crashes because:
  - Host waits are currently unsupported in graph recording mode.
  - SYCL malloc / free calls are unsupported in graph recording mode.

The following changes are made to fix SYCL graph functionality:
  - When graphs are enabled, use the SYCL async memory extension for temp
    buffers, which is supported during graph recording (see the sketch
    after this commit message).
  - For compiler versions that do not support this extension, skip graph
    recording when the affected op is present.
  - Switch from USM shared to device memory, as the async extension
    currently supports only device allocations.

* Address reviewer feedback

* Use global async variable to decide path in sycl_ext_[malloc_device|free]
2025-10-23 09:05:15 +08:00
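
A minimal sketch of the allocation path described in the commit message above, assuming the experimental sycl_ext_oneapi_async_memory_alloc extension (feature-test macro SYCL_EXT_ONEAPI_ASYNC_MEMORY_ALLOC, namespace sycl::ext::oneapi::experimental). The global flag g_async_alloc below is a hypothetical stand-in for the "global async variable" mentioned in the last bullet; the actual helpers in ggml-sycl may be structured differently.

```cpp
// Sketch only: temp-buffer allocation that stays legal during SYCL graph recording.
// Assumes the SYCL_EXT_ONEAPI_ASYNC_MEMORY_ALLOC experimental extension is
// available via <sycl/sycl.hpp>; g_async_alloc is a hypothetical global flag.
#include <sycl/sycl.hpp>

static bool g_async_alloc = false;  // set to true when SYCL graphs are enabled

static void * sycl_ext_malloc_device(sycl::queue & q, size_t size) {
#ifdef SYCL_EXT_ONEAPI_ASYNC_MEMORY_ALLOC
    if (g_async_alloc) {
        // Async device allocation is permitted while a graph is being recorded.
        // The extension currently supports only usm::alloc::device.
        return sycl::ext::oneapi::experimental::async_malloc(
            q, sycl::usm::alloc::device, size);
    }
#endif
    // Fallback: plain USM device allocation (not usable during graph recording,
    // so graph capture is skipped on this path).
    return sycl::malloc_device(size, q);
}

static void sycl_ext_free(sycl::queue & q, void * ptr) {
#ifdef SYCL_EXT_ONEAPI_ASYNC_MEMORY_ALLOC
    if (g_async_alloc) {
        sycl::ext::oneapi::experimental::async_free(q, ptr);
        return;
    }
#endif
    sycl::free(ptr, q);
}
```

When the extension is not available at compile time, or the flag is off, the regular USM path is taken, which is why graph recording has to be skipped for ops that allocate temp buffers in that case.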
ggml-blas sync : whisper.cpp (ggml/1359) 2025-09-29 17:43:58 +03:00
ggml-cann CANN: format code using .clang-format (#15863) 2025-10-16 16:41:11 +08:00
ggml-cpu Revert "ggml : Leverage the existing GGML_F32_VEC helpers to vectorize ggml_v…" (#16723) 2025-10-22 20:20:55 +02:00
ggml-cuda CUDA: fix bug in topk-moe softmax (#16711) 2025-10-22 12:33:08 +08:00
ggml-hexagon Add experimental ggml-hexagon backend for the Hexagon NPU (#16547) 2025-10-22 13:47:09 -07:00
ggml-hip HIP: fix GPU_TARGETS (#16642) 2025-10-18 14:47:32 +02:00
ggml-metal metal : add `CONV_TRANSPOSE_2D` (#16542) 2025-10-17 09:33:58 +03:00
ggml-musa CUDA: faster tile FA, add oob checks, more HSs (#16492) 2025-10-11 20:54:32 +02:00
ggml-opencl opencl: fix warnings and clean up profiling (#16688) 2025-10-20 22:26:17 -07:00
ggml-rpc rpc : report actual free memory (#16616) 2025-10-17 18:02:52 +03:00
ggml-sycl sycl: use async memory allocation to fix crashes during graph recording (#16644) 2025-10-23 09:05:15 +08:00
ggml-vulkan vulkan: Handle FA with all -inf mask values (#16447) 2025-10-20 22:16:08 -05:00
ggml-webgpu ggml webgpu: profiling, CI updates, reworking of command submission (#16452) 2025-10-07 13:48:56 -07:00
ggml-zdnn zdnn: refactor codebase + add docs (#16178) 2025-09-23 14:53:05 +08:00
CMakeLists.txt Add experimental ggml-hexagon backend for the Hexagon NPU (#16547) 2025-10-22 13:47:09 -07:00
ggml-alloc.c ggml-alloc : fix leak when reusing a tensor with a larger size (#16679) 2025-10-20 14:53:50 +02:00
ggml-backend-impl.h rpc : add support for multiple devices (#16276) 2025-10-04 12:49:16 +03:00
ggml-backend-reg.cpp Add experimental ggml-hexagon backend for the Hexagon NPU (#16547) 2025-10-22 13:47:09 -07:00
ggml-backend.cpp llama: print memory breakdown on exit (#15860) 2025-09-24 16:53:48 +02:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h ggml: add ggml_can_fuse_subgraph (#16662) 2025-10-21 16:43:14 +08:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) 2025-09-23 10:25:20 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml: add ggml_can_fuse_subgraph (#16662) 2025-10-21 16:43:14 +08:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp gguf: gguf_writer refactor (#15691) 2025-09-05 11:34:28 +02:00