llama.cpp/ggml
Latest commit: b617cfd289 by Diego Devesa, 2025-10-20 14:53:50 +02:00
ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)
Name            Last commit message                                                            Last commit date
cmake           ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (#15094)      2025-08-07 13:45:41 +02:00
include         rpc : report actual free memory (#16616)                                       2025-10-17 18:02:52 +03:00
src             ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)       2025-10-20 14:53:50 +02:00
.gitignore      vulkan : cmake integration (#8119)                                             2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml webgpu: profiling, CI updates, reworking of command submission (#16452)  2025-10-07 13:48:56 -07:00