llama.cpp/.github

Latest commit 1da013c66e by Oliver Simons (2025-12-19 16:10:51 +01:00): "Build with CCCL 3.2 for CUDA backends". Gives best perf for backend-sampling on CUDA; the flag can be removed once CCCL 3.2 is bundled within the CTK and that CTK version is used in llama.cpp.
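For orientation, a minimal sketch of how a CUDA-enabled build of llama.cpp is typically configured, assuming the documented `GGML_CUDA` CMake option. The CCCL 3.2 pin mentioned in the commit above is wired up in the CI workflow files under this directory; its exact flag is not shown in this listing, so it is deliberately not reproduced here.

```sh
# Sketch: configure and build llama.cpp with the CUDA backend enabled.
# GGML_CUDA is the documented CMake switch for the CUDA backend; the
# CCCL 3.2 pin from the commit above lives in the workflow scripts and
# its exact flag is an unknown here, so it is omitted.
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```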
| Name | Last commit | Last commit date |
| --- | --- | --- |
| ISSUE_TEMPLATE | Github: ask for -v logs for params_fit [no ci] (#18128) | 2025-12-17 13:46:48 +01:00 |
| actions | ci : add windows-cuda 13.1 release (#17839) | 2025-12-07 14:02:04 +01:00 |
| workflows | Build with CCCL 3.2 for CUDA backends | 2025-12-19 16:10:51 +01:00 |
| copilot-instructions.md | readme : add RVV,ZVFH,ZFH,ZICBOP support for RISC-V (#17259) | 2025-11-14 09:12:56 +02:00 |
| labeler.yml | ci : apply model label to models (#16994) | 2025-11-04 12:29:39 +01:00 |
| pull_request_template.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |