llama.cpp/ggml/src
Acly f2a789e334
ggml : split graph allocations according to backend max buffer size (#15815)
* ggml : make gallocr respect the backend's max buffer size

* if the graph requires more memory than can fit into a single allocation, split it into multiple backend buffers (see the sketch below)
* vulkan: report the actual max allocation size in the buffer type interface
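
As a rough illustration of the splitting idea (plan_chunks and MAX_CHUNKS are made-up names, not the ggml-alloc.c implementation; in ggml the size limit would come from ggml_backend_buft_get_max_size()):

```c
// Standalone sketch: split a graph's planned memory into chunks that each fit
// the backend's max buffer size. Names are illustrative only.
#include <stddef.h>
#include <stdio.h>

#define MAX_CHUNKS 16

// Returns the number of chunks written to sizes[], or -1 if MAX_CHUNKS is exceeded.
static int plan_chunks(size_t total, size_t max_buffer_size, size_t sizes[MAX_CHUNKS]) {
    int n = 0;
    while (total > 0) {
        if (n == MAX_CHUNKS) {
            return -1;
        }
        size_t sz = total < max_buffer_size ? total : max_buffer_size;
        sizes[n++] = sz;
        total     -= sz;
    }
    return n;
}

int main(void) {
    size_t sizes[MAX_CHUNKS];
    // e.g. a 10 GiB graph against a 4 GiB max buffer size -> 4 + 4 + 2 GiB
    int n = plan_chunks(10ull << 30, 4ull << 30, sizes);
    for (int i = 0; i < n; i++) {
        printf("chunk %d: %zu bytes\n", i, sizes[i]);
    }
    return 0;
}
```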

* fix missing newline, apple-clang warning

* track the size of individual chunks in ggml_dyn_tallocr and raise the max number of chunks.
revert to using suballocation_block_size as the max chunk size for vulkan.

* track (chunk, offset) pairs instead of "global" offsets through gallocr (illustrated below).

* simpler: no loops needed to map between local and global offsets
* but touches more code
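
A minimal illustration of the (chunk, offset) addressing (struct and function names here are assumptions, not the actual ggml-alloc.c definitions):

```c
// Illustrative only: address a tensor by the chunk it lives in plus a local
// offset, instead of one global offset spanning all chunks.
#include <stddef.h>

struct buffer_address {
    int    chunk;  // index of the backend buffer (chunk) holding the tensor
    size_t offset; // byte offset within that chunk
};

// With a global offset the owning buffer has to be found by walking the chunk
// sizes; with (chunk, offset) the lookup is a direct index:
static void * resolve(void * chunk_base[], struct buffer_address addr) {
    return (char *) chunk_base[addr.chunk] + addr.offset;
}
```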

* fix dyn_tallocr_max_size and initialization

* fix a memory leak when buffers are reused because the same buffer type appears multiple times

* make vbuffer allocation follow the same logic as backend_buffer did before

* continue to use leftover unallocated space of previous chunks after a new one has been created

* treat free blocks of each chunk as a separate list (see the layout sketched below)
* they're still allocated together, but the start/end of each chunk is tracked, and allocate/free iterate over sub-ranges
* exhaust freed blocks of all chunks before considering their last blocks with unallocated space
* start with 0 chunks/blocks and create chunks as needed
* allow the last chunk to grow beyond max size
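
A rough sketch of the per-chunk bookkeeping described in the bullets above (field names and limits are assumptions, not the real ggml_dyn_tallocr layout):

```c
// Rough sketch: each chunk owns its free-block list and a high-water mark;
// the allocator starts with zero chunks and creates them on demand.
#include <stddef.h>

#define MAX_FREE_BLOCKS 256
#define MAX_CHUNKS      16

struct free_block {
    size_t offset; // start of the free range within its chunk
    size_t size;   // length of the free range
};

struct tallocr_chunk {
    struct free_block free_blocks[MAX_FREE_BLOCKS]; // freed/unused ranges of this chunk
    int    n_free_blocks;
    size_t max_size; // high-water mark: how much of this chunk was actually used
};

struct dyn_tallocr {
    struct tallocr_chunk * chunks[MAX_CHUNKS]; // allocated individually, created as needed
    int    n_chunks;                           // starts at 0
    size_t max_chunk_size;                     // backend's max buffer size (last chunk may exceed it)
};
```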

* refactor: move adding new free block and new chunk into separate functions

* allocate chunks individually with a separate free-blocks list for each one (a condensed allocation sketch follows below)

* needs a bit more memory, allocations, and indirection, but the code is simpler
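
A condensed sketch of the allocation path over those per-chunk lists, building on the structs sketched above (helper behavior simplified; again not the actual ggml-alloc.c code): freed blocks of existing chunks are tried first, and a new chunk is only created when nothing fits.

```c
// Hypothetical allocation flow: reuse freed blocks in any existing chunk
// before creating a new chunk.
#include <stdbool.h>
#include <stdlib.h>

// First-fit within one chunk's free blocks.
static bool chunk_try_alloc(struct tallocr_chunk * chunk, size_t size, size_t * offset) {
    for (int i = 0; i < chunk->n_free_blocks; i++) {
        struct free_block * b = &chunk->free_blocks[i];
        if (b->size >= size) {
            *offset    = b->offset;
            b->offset += size;
            b->size   -= size;
            if (chunk->max_size < b->offset) {
                chunk->max_size = b->offset; // track this chunk's high-water mark
            }
            return true;
        }
    }
    return false;
}

static struct buffer_address dyn_alloc(struct dyn_tallocr * alloc, size_t size) {
    struct buffer_address addr = { -1, 0 }; // chunk == -1 signals failure
    // 1) exhaust the free blocks of all existing chunks
    for (int i = 0; i < alloc->n_chunks; i++) {
        if (chunk_try_alloc(alloc->chunks[i], size, &addr.offset)) {
            addr.chunk = i;
            return addr;
        }
    }
    // 2) otherwise create a new chunk; it may exceed max_chunk_size
    //    if a single tensor needs more than that
    if (alloc->n_chunks < MAX_CHUNKS) {
        struct tallocr_chunk * c = calloc(1, sizeof(*c));
        if (c != NULL) {
            size_t cap = size > alloc->max_chunk_size ? size : alloc->max_chunk_size;
            c->free_blocks[0] = (struct free_block) { 0, cap };
            c->n_free_blocks  = 1;
            alloc->chunks[alloc->n_chunks++] = c;
            if (chunk_try_alloc(c, size, &addr.offset)) {
                addr.chunk = alloc->n_chunks - 1;
            }
        }
    }
    return addr;
}
```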

* fix warnings (missing static) & debug checks
2025-09-24 16:17:49 +02:00
ggml-blas rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-cann rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-cpu ggml-cpu: Respect cpumask settings (#16164) 2025-09-23 11:58:12 +03:00
ggml-cuda ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-hip HIP: bump requirement to rocm 6.1 (#15296) 2025-08-13 20:44:30 +02:00
ggml-metal ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-musa CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) 2025-08-20 16:58:49 +02:00
ggml-opencl ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-rpc rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-sycl ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-vulkan ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-webgpu ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml-zdnn zdnn: refactor codebase + add docs (#16178) 2025-09-23 14:53:05 +08:00
CMakeLists.txt cmake : fix static linking for OpenMP on Unix-like systems (#16031) 2025-09-18 23:07:18 +02:00
ggml-alloc.c ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
ggml-backend-impl.h rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-backend-reg.cpp ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797) 2025-09-11 22:47:38 +02:00
ggml-backend.cpp rename optimize_graph to graph_optimize (#16082) 2025-09-18 13:46:17 -05:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h ggml : split graph allocations according to backend max buffer size (#15815) 2025-09-24 16:17:49 +02:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) 2025-09-23 10:25:20 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml : implement set_rows with i32 index (#16159) 2025-09-22 19:13:00 +02:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp gguf: gguf_writer refactor (#15691) 2025-09-05 11:34:28 +02:00