llama.cpp/.github
Latest commit 28baac9c9f by Georgi Gerganov (2025-09-21 16:50:45 +03:00):
ci : migrate ggml ci to self-hosted runners (#16116)

* ci : migrate ggml ci to self-hosted runners
* ci : add T4 runner
* ci : add instructions for adding self-hosted runners
* ci : disable test-backend-ops in debug builds due to slowness
* ci : add AMD V710 runner (vulkan)
* cont : add ROCm workflow
* ci : switch to the Qwen3 0.6B model
* cont : fix the context size
Name                        Last commit                                                    Date
ISSUE_TEMPLATE/             ggml: initial IBM zDNN backend (#14975)                        2025-08-15 21:11:22 +08:00
actions/                    releases : use arm version of curl for arm releases (#13592)   2025-05-16 19:36:51 +02:00
workflows/                  ci : migrate ggml ci to self-hosted runners (#16116)           2025-09-21 16:50:45 +03:00
copilot-instructions.md     ci : add copilot-instructions.md (#15286)                      2025-08-21 11:47:52 +02:00
labeler.yml                 ggml: initial IBM zDNN backend (#14975)                        2025-08-15 21:11:22 +08:00
pull_request_template.md    repo : update links to new url (#11886)                        2025-02-15 16:40:57 +02:00