Clarify
parent 6740b1c2f1
commit 1f0d90c3dd
@@ -349,7 +349,7 @@ You can download it from your Linux distro's package manager or from here: [ROCm
&& cmake --build build -- -j 16
```
-If it fails to compile, with a "target not supported" or similar error, it means your GPU does not support ROCm due to missing compiler support, even through it is an RDNA GPU. This can happen if you are trying to use an integrated GPU. In this case, build for Vulkan instead to use the GPU.
+If llama.cpp fails to compile, with a "target not supported" or similar error, it means your GPU does not support ROCm due to missing compiler support, even though it is an RDNA GPU. This can happen if you are trying to use an integrated GPU. In this case, build for Vulkan instead to use the integrated GPU.
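For reference, a minimal sketch of such a Vulkan build, assuming a llama.cpp tree where the backend is enabled through the `GGML_VULKAN` CMake option (older trees use `LLAMA_VULKAN`) and the Vulkan SDK is installed:

```bash
# Sketch: configure and build the Vulkan backend instead of ROCm/HIP.
# GGML_VULKAN is an assumption; older llama.cpp versions use LLAMA_VULKAN.
cmake -S . -B build -DGGML_VULKAN=ON
cmake --build build -- -j 16
```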
- Using `CMake` for Windows (using x64 Native Tools Command Prompt for VS, and assuming a gfx1100-compatible AMD GPU):
```bash
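# Sketch of a typical llama.cpp HIP configure/build on Windows; option names are
# assumptions and vary by version (GGML_HIP here, older trees use GGML_HIPBLAS or
# LLAMA_HIPBLAS). Assumes the HIP SDK is installed with its clang/clang++ on PATH,
# and that gfx1100 matches the GPU named above.
set PATH=%HIP_PATH%\bin;%PATH%
cmake -S . -B build -G Ninja -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DAMDGPU_TARGETS=gfx1100 -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -- -j 16
```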