llama.cpp/.github/workflows
Latest commit: 554c247caf by Georgi Gerganov, "ggml : remove OpenCL (#7735)" (ggml-ci), 2024-06-04 21:23:20 +03:00
File                            Last commit                                                           Last updated
bench.yml                       Disable benchmark on forked repo (#7034)                              2024-05-05 13:38:55 +02:00
build.yml                       ggml : remove OpenCL (#7735)                                          2024-06-04 21:23:20 +03:00
close-issue.yml                 ci : exempt confirmed bugs from being tagged as stale (#7014)         2024-05-01 08:13:59 +03:00
code-coverage.yml
docker.yml                      [SYCL] fix intel docker (#7630)                                       2024-05-30 16:19:08 +10:00
editorconfig.yml
gguf-publish.yml
labeler.yml                     labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363)   2024-05-19 20:51:03 +10:00
nix-ci-aarch64.yml
nix-ci.yml
nix-flake-update.yml
nix-publish-flake.yml
python-check-requirements.yml
python-lint.yml                 convert.py : add python logging instead of print() (#6511)           2024-05-03 22:36:41 +03:00
server.yml                      server : fix temperature + disable some tests (#7409)                2024-05-20 22:10:03 +10:00
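Several of these workflows are gated on the repository they run in; bench.yml, for example, was changed so the benchmark does not run on forked repos (#7034). The snippet below is a minimal sketch of that pattern, not the actual bench.yml: it assumes the guard is a job-level `if:` on the `github.repository` context, and the job body is a placeholder.

```yaml
# Hypothetical sketch (not the real bench.yml): restrict a workflow so its job
# is skipped when the workflow runs in a fork of the upstream repository.
name: Benchmark

on:
  push:
    branches: [master]
  pull_request:

jobs:
  bench:
    # `github.repository` resolves to "<owner>/<repo>" of the repo the run
    # belongs to, so the job is skipped on forks.
    if: github.repository == 'ggerganov/llama.cpp'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmark (placeholder step)
        run: echo "benchmark would run here"
```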