llama.cpp/ggml/include
Latest commit: b1f3a6e5db by Johannes Gäßler, 2025-12-15 09:24:59 +01:00
llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653)

* llama: automatically fit args to free memory
  llama-fit-params tool
* fix CI
* hints for bug reports, ensure no reallocation
* fix segfault with Vulkan
* add llama-fit-params to CI
* fix CI
* fix CI
* fix CI
* minor adjustments
* fix assignment of 1 dense layer
* fix logger not being reset on model load failure
* remove --n-gpu-layer hint on model load failure
* fix llama-fit-params verbosity
* fix edge case
* fix typo [no ci]
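Based on the commit log above, here is a minimal C sketch of the llama.h loading pattern this change affects: fields the caller leaves at the library defaults become candidates for auto-fitting to free device memory. The model path is a placeholder, and treating `n_gpu_layers` as one of the fitted fields is an assumption inferred from the "--n-gpu-layer hint" bullet, not a confirmed list.

```c
#include "llama.h"
#include <stdio.h>

int main(void) {
    llama_backend_init();

    // Library defaults: per the commit above, parameters the user does not
    // override are auto-set to maximize GPU utilization (assumption: the
    // GPU layer count, mparams.n_gpu_layers, is among them).
    struct llama_model_params mparams = llama_model_default_params();

    // "model.gguf" is a placeholder path.
    struct llama_model * model = llama_model_load_from_file("model.gguf", mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        llama_backend_free();
        return 1;
    }

    struct llama_context_params cparams = llama_context_default_params();
    struct llama_context * ctx = llama_init_from_model(model, cparams);

    // ... run inference ...

    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```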
| File | Last commit | Date |
| --- | --- | --- |
| ggml-alloc.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| ggml-backend.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| ggml-blas.h | | |
| ggml-cann.h | | |
| ggml-cpp.h | | |
| ggml-cpu.h | ggml-cpu : fix RISC-V Q4_0 repack select and RVV feature reporting (#17951) | 2025-12-12 16:26:03 +02:00 |
| ggml-cuda.h | | |
| ggml-hexagon.h | Add experimental ggml-hexagon backend for the Hexagon NPU (#16547) | 2025-10-22 13:47:09 -07:00 |
| ggml-metal.h | | |
| ggml-opencl.h | | |
| ggml-opt.h | | |
| ggml-rpc.h | rpc : fix alloc size logic (#17116) | 2025-12-05 19:39:04 +02:00 |
| ggml-sycl.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-vulkan.h | | |
| ggml-webgpu.h | | |
| ggml-zdnn.h | zdnn: refactor codebase + add docs (#16178) | 2025-09-23 14:53:05 +08:00 |
| ggml-zendnn.h | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| ggml.h | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| gguf.h | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
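The headers listed here form the public ggml API: ggml.h defines tensors and compute graphs, ggml-alloc.h the graph allocator, ggml-backend.h the backend interface, and the per-backend headers (ggml-cpu.h, ggml-cuda.h, ggml-metal.h, ...) expose each backend's init function. A minimal sketch of how they compose, using the CPU backend; buffer sizes and tensor values are arbitrary illustration choices.

```c
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include "ggml-cpu.h"

int main(void) {
    // Context holds tensor/graph metadata only; with no_alloc the actual
    // data is placed in a backend buffer by the graph allocator below.
    struct ggml_init_params ip = {
        /*.mem_size   =*/ ggml_tensor_overhead() * 8 + ggml_graph_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true,
    };
    struct ggml_context * ctx = ggml_init(ip);

    // Build a tiny graph: c = a + b.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // CPU backend from ggml-cpu.h; graph allocator from ggml-alloc.h.
    ggml_backend_t backend = ggml_backend_cpu_init();
    ggml_gallocr_t galloc  = ggml_gallocr_new(ggml_backend_get_default_buffer_type(backend));
    ggml_gallocr_alloc_graph(galloc, gf);

    const float av[4] = { 1,  2,  3,  4};
    const float bv[4] = {10, 20, 30, 40};
    ggml_backend_tensor_set(a, av, 0, sizeof(av));
    ggml_backend_tensor_set(b, bv, 0, sizeof(bv));

    ggml_backend_graph_compute(backend, gf);

    float cv[4];
    ggml_backend_tensor_get(c, cv, 0, sizeof(cv)); // {11, 22, 33, 44}

    ggml_gallocr_free(galloc);
    ggml_backend_free(backend);
    ggml_free(ctx);
    return 0;
}
```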