llama.cpp/.github
Tim Neumann 382808c14b
ci : re-enable rocm build on amd64 (#18439)
This was disabled in #9340 due to a compiler crash, but it seems to build now, as confirmed by the latest comments in #11913.

I've also managed to build the image with `docker build -f .devops/rocm.Dockerfile .` for all three stages: `full`, `server`, and `light`.
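
For reference, a minimal sketch of building each stage individually, assuming the three stage names above are build targets in `.devops/rocm.Dockerfile` (selected with Docker's standard `--target` flag; the `llama-rocm:*` tags are hypothetical and only for illustration):

```sh
# Build each stage of the ROCm image separately via its Dockerfile target.
# Assumes the stage names match the build targets in .devops/rocm.Dockerfile.
docker build --target full   -f .devops/rocm.Dockerfile -t llama-rocm:full   .
docker build --target server -f .devops/rocm.Dockerfile -t llama-rocm:server .
docker build --target light  -f .devops/rocm.Dockerfile -t llama-rocm:light  .
```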

A quick attempt at building an arm64 image failed. Since none of the other images are built for arm, I have only enabled the amd64 one.

The `runs_on` option was added to match the other entries.
2025-12-29 00:29:23 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| ISSUE_TEMPLATE | github: update issue templates [no ci] (#18410) | 2025-12-28 10:50:56 +01:00 |
| actions | ci : add windows-cuda 13.1 release (#17839) | 2025-12-07 14:02:04 +01:00 |
| workflows | ci : re-enable rocm build on amd64 (#18439) | 2025-12-29 00:29:23 +01:00 |
| labeler.yml | ci : apply model label to models (#16994) | 2025-11-04 12:29:39 +01:00 |
| pull_request_template.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |