Francisco Herrera 2026-02-06 16:18:11 +00:00 committed by GitHub
commit d58a1c0779
1 changed file with 2 additions and 0 deletions


@@ -355,6 +355,8 @@ You can download it from your Linux distro's package manager or from here: [ROCm
&& cmake --build build -- -j 16
```
If llama.cpp fails to compile with a "target not supported" or similar error, the ROCm compiler does not support your GPU's architecture, even though it is an RDNA GPU. This commonly happens when targeting an integrated GPU. In that case, build for Vulkan instead to use the integrated GPU.
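  A minimal sketch of that Vulkan fallback build (the `GGML_VULKAN` option name and job count are assumptions here; see the Vulkan section of this guide for the authoritative steps):
  ```bash
  # Assumes the Vulkan SDK and drivers are already installed
  cmake -B build -DGGML_VULKAN=ON \
    && cmake --build build -- -j 16
  ```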
- Using `CMake` for Windows (using x64 Native Tools Command Prompt for VS, and assuming a gfx1100-compatible AMD GPU):
```bash
set PATH=%HIP_PATH%\bin;%PATH%