CI

This CI implements heavy-duty workflows that run on self-hosted runners. Typically, the purpose of these workflows is to cover hardware configurations that are not available from GitHub-hosted runners and/or require more computational resources than normally available.

It is good practice to execute the full CI locally on your machine before publishing changes. For example:

mkdir tmp

# CPU-only build
bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with CUDA support
GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with SYCL support
source /opt/intel/oneapi/setvars.sh
GG_BUILD_SYCL=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with MUSA support
GG_BUILD_MUSA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# etc.
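
Here, the first argument is the directory where build results and logs are written, and the second is a mount directory used to cache downloaded models and test data between runs. The sketch below shows how flags can be combined; it assumes GG_BUILD_LOW_PERF is still honored by ci/run.sh to skip the heaviest tests:

# reuse the same mount directory across runs so models are downloaded only once
mkdir -p ./tmp/results ./tmp/mnt

# combine flags; GG_BUILD_LOW_PERF (assumed) skips the most expensive tests
GG_BUILD_CUDA=1 GG_BUILD_LOW_PERF=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt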

Adding self-hosted runners

  • Add a self-hosted ggml-ci workflow to .github/workflows/build.yml with an appropriate label
  • Request a runner token from ggml-org (for example, via a comment in the PR or email)
  • Set up a machine using the received token (docs)
  • Optionally, update ci/run.sh to build and run on the target platform by gating the implementation with a GG_BUILD_... environment variable (see the sketch after this list)
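
A minimal sketch of this gating pattern, following the style of the existing checks in ci/run.sh; the GG_BUILD_MYBACKEND variable and GGML_MYBACKEND CMake option are hypothetical placeholders for your platform:

# hypothetical guard for a new backend; replace MYBACKEND with your platform
if [ ! -z ${GG_BUILD_MYBACKEND} ]; then
    CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_MYBACKEND=ON"
fi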