llama.cpp/.github
Daniel Bevenius ff02caf9ee
ci : cache ROCm installation in windows-latest-cmake-hip (#15887)
This commit adds caching of the ROCm installation for the windows-latest-cmake-hip job. 

The motivation is that the installation can sometimes hang or fail to complete, leaving a broken installation that later causes the build to fail. By caching a known-good installation, the job can restore it from the cache and skip the installation step entirely.

Refs: https://github.com/ggml-org/llama.cpp/pull/15365
2025-09-10 05:23:19 +02:00
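The actual change lives in the workflows directory below; as a rough illustration of the approach described in the commit message, a cache-then-install pattern built on actions/cache could look like the sketch that follows. This is a hedged sketch, not the workflow from #15887: the install path (C:\Program Files\AMD\ROCm), the cache key, and the step names are assumptions.

```yaml
# Hypothetical sketch of caching a ROCm installation in a GitHub Actions job.
# Path and key are illustrative, not taken from the real windows-latest-cmake-hip job.
- name: Cache ROCm installation
  id: cache-rocm
  uses: actions/cache@v4
  with:
    path: C:\Program Files\AMD\ROCm   # assumed HIP SDK install location on the Windows runner
    key: rocm-hip-sdk-windows-v1      # bump the suffix to force a fresh install

- name: Install ROCm
  if: steps.cache-rocm.outputs.cache-hit != 'true'
  shell: pwsh
  run: |
    # Run the HIP SDK installer only when nothing was restored from the cache.
    # (Installer download and invocation omitted in this sketch.)
    Write-Host "Installing ROCm HIP SDK..."
```

The key point is the cache-hit guard: on a cache hit the installer step is skipped, so a previously good installation is reused instead of re-running the flaky install.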
Name                      Last commit message                                                Last commit date
ISSUE_TEMPLATE            ggml: initial IBM zDNN backend (#14975)                            2025-08-15 21:11:22 +08:00
actions                   releases : use arm version of curl for arm releases (#13592)      2025-05-16 19:36:51 +02:00
workflows                 ci : cache ROCm installation in windows-latest-cmake-hip (#15887)  2025-09-10 05:23:19 +02:00
copilot-instructions.md   ci : add copilot-instructions.md (#15286)                          2025-08-21 11:47:52 +02:00
labeler.yml               ggml: initial IBM zDNN backend (#14975)                            2025-08-15 21:11:22 +08:00
pull_request_template.md  repo : update links to new url (#11886)                            2025-02-15 16:40:57 +02:00