llama.cpp/example/sycl

This example provides tools for running llama.cpp with SYCL on Intel GPUs.

Tool

|Tool Name|Function|Status|
|-|-|-|
|llama-ls-sycl-device|List all SYCL devices with ID, compute capability, max work group size, etc.|Supported|

llama-ls-sycl-device

List all SYCL devices with ID, compute capability, max work group size, etc.

1. Build llama.cpp for SYCL for the specified target (set via `GGML_SYCL_TARGET`).

2. Enable the oneAPI runtime environment (needed if `GGML_SYCL_TARGET` is `INTEL`, the default):

   ```sh
   source /opt/intel/oneapi/setvars.sh
   ```

3. Execute:

   ```sh
   ./build/bin/llama-ls-sycl-device
   ```

Check the device IDs in the startup log, for example:

```
found 2 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                Intel Arc A770 Graphics|    1.3|    512|    1024|   32| 16225M|            1.3.29138|
| 1| [level_zero:gpu:1]|                 Intel UHD Graphics 750|    1.3|     32|     512|   32| 62631M|            1.3.29138|
```
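If you need the device IDs in a script, they can be extracted from this output with standard tools; the following is a minimal sketch (assuming the exact column layout shown above, with the ID as the second `|`-separated field):

```shell
# Sample lines copied from the llama-ls-sycl-device startup log above
log='| 0| [level_zero:gpu:0]|                Intel Arc A770 Graphics|    1.3|    512|    1024|   32| 16225M|            1.3.29138|
| 1| [level_zero:gpu:1]|                 Intel UHD Graphics 750|    1.3|     32|     512|   32| 62631M|            1.3.29138|'

# Keep only level_zero GPU rows, take the ID column, and strip padding spaces
ids=$(printf '%s\n' "$log" | awk -F'|' '/level_zero:gpu/ { gsub(/ /, "", $2); print $2 }')
echo "$ids"   # one device ID per line
```

The same filter can be piped directly from a live run, e.g. `./build/bin/llama-ls-sycl-device | awk -F'|' '/level_zero:gpu/ { gsub(/ /, "", $2); print $2 }'`.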