llama.cpp/.github/workflows
Latest commit: Sigbjørn Skjæret (4849661d98)
docker : add CUDA 13.1 image build (#18441)

* add updated cuda-new.Dockerfile for Ubuntu 24.04 compatibility
* add cuda13 build

2025-12-30 22:28:53 +01:00
File                           Last commit                                                                      Date
bench.yml.disabled
build-cache.yml
build-cmake-pkg.yml
build-linux-cross.yml          ggml-cpu: add ggml_thread_cpu_relax with Zihintpause support (#17784)            2025-12-08 10:41:34 +02:00
build.yml                      ci : only save ccache on master (#18207)                                         2025-12-19 22:29:37 +01:00
check-vendor.yml
close-issue.yml
copilot-setup-steps.yml
docker.yml                     docker : add CUDA 13.1 image build (#18441)                                      2025-12-30 22:28:53 +01:00
editorconfig.yml
gguf-publish.yml
labeler.yml
pre-tokenizer-hashes.yml
python-check-requirements.yml
python-lint.yml
python-type-check.yml
release.yml                    release: update release workflow to store XCFramework as Zip file (#18284)       2025-12-22 20:11:46 +08:00
server-webui.yml               ci : clean up webui jobs (#18116)                                                2025-12-17 10:45:40 +01:00
server.yml                     ci : separate webui from server (#18072)                                         2025-12-16 08:17:26 +01:00
update-ops-docs.yml
winget.yml