diff --git a/docs/build.md b/docs/build.md
index f9c9e4d2b4..4e4f4a9964 100644
--- a/docs/build.md
+++ b/docs/build.md
@@ -349,7 +349,7 @@ You can download it from your Linux distro's package manager or from here: [ROCm
     && cmake --build build -- -j 16
   ```
 
-  If it fails to compile, with a "target not supported" or similar error, it means your GPU does not support ROCm due to missing compiler support, even through it is an RDNA GPU. This can happen if you are trying to use an integrated GPU. In this case, build for Vulkan instead to use the GPU.
+  If llama.cpp fails to compile, with a "target not supported" or similar error, it means your GPU does not support ROCm due to missing compiler support, even though it is an RDNA GPU. This can happen if you are trying to use an integrated GPU. In this case, build for Vulkan instead to use the integrated GPU.
 
 - Using `CMake` for Windows (using x64 Native Tools Command Prompt for VS, and assuming a gfx1100-compatible AMD GPU):
   ```bash