llama.cpp/.github
Daniel Bevenius 77475530b8
ci : use macos-latest for arm64 webgpu build (#16029)
This commit updates the runs-on field for the macOS arm64 webgpu build
job to use macos-latest instead of just latest.

The motivation is that this job can sometimes wait over 7 hours for a
runner to pick it up. This is an attempt to see whether the change
reduces the wait time.

Refs: https://github.com/ggml-org/llama.cpp/actions/runs/17754163447/job/50454257570?pr=16004
2025-09-16 15:27:52 +02:00
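As a rough sketch, the change described above amounts to a one-line edit in the workflow file under .github/workflows. The job id and surrounding keys below are assumptions for illustration, not taken from the repository:

```yaml
# Hypothetical excerpt of a workflow in .github/workflows
# (job id and steps are assumptions, not the actual file contents).
jobs:
  macos-arm64-webgpu:       # hypothetical job id
    runs-on: macos-latest   # was a bare "latest" label, per the commit message
    steps:
      - uses: actions/checkout@v4
```

GitHub-hosted runner labels such as macos-latest resolve to a concrete runner image; an unrecognized or overly narrow label can leave a job queued until a matching runner becomes available, which is consistent with the long wait times the commit message describes.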
Name                      Last commit message                                            Last commit date
ISSUE_TEMPLATE            ggml: initial IBM zDNN backend (#14975)                        2025-08-15 21:11:22 +08:00
actions                   releases : use arm version of curl for arm releases (#13592)   2025-05-16 19:36:51 +02:00
workflows                 ci : use macos-latest for arm64 webgpu build (#16029)          2025-09-16 15:27:52 +02:00
copilot-instructions.md   ci : add copilot-instructions.md (#15286)                      2025-08-21 11:47:52 +02:00
labeler.yml               ggml: initial IBM zDNN backend (#14975)                        2025-08-15 21:11:22 +08:00
pull_request_template.md  repo : update links to new url (#11886)                        2025-02-15 16:40:57 +02:00