llama.cpp/.github
Masato Nakasaka e439700992
ci: Add Windows Vulkan backend testing on Intel (#21292)
* Experimented with CI

* Experimented with a CI fix for MinGW

* Experimented with CI on Windows

* Modified script for integration with Visual Studio

* Added proxy handling

* Added Python version for Windows execution

* Fixed iterator::end() dereference

* Fixed proxy handling

* Fixed errors occurring on Windows

* Fixed CI script

* Reverted to master

* Stripped test items to simplify Windows testing

* Adjusted script for Windows testing

* Changed shell

* Fixed shell

* Fixed shell

* Fixed CI setting

* Fixed CI setting

* Fixed CI setting

* Experimented with CI fix

* Experimented with CI fix

* Experimented with CI fix

* Experimented with CI fix

* Experimented with fix for unit test error

* Changed to use BUILD_LOW_PERF to skip Python tests

* Fixed CI

* Added option to specify the Ninja generator

* Reverted proxy-related changes
2026-04-03 20:16:44 +03:00
| Path | Last commit | Date |
| --- | --- | --- |
| ISSUE_TEMPLATE | issues: add openvino backends (#20932) | 2026-03-24 14:41:10 +08:00 |
| actions | ggml : add OpenVINO backend (#15307) | 2026-03-14 07:56:55 +02:00 |
| workflows | ci: Add Windows Vulkan backend testing on Intel (#21292) | 2026-04-03 20:16:44 +03:00 |
| labeler.yml | ci : add AMD ZenDNN label to PR labeler (#21345) | 2026-04-03 10:35:15 +08:00 |
| pull_request_template.md | contrib: add "Requirements" section to PR template (#20841) | 2026-03-23 16:59:02 +01:00 |