From e99f1083a0c093f510a1914809056735370cea6f Mon Sep 17 00:00:00 2001
From: Maciej Lisowski <39798354+MaciejDromin@users.noreply.github.com>
Date: Wed, 18 Feb 2026 16:50:23 +0100
Subject: [PATCH] docs: Fix broken links for preparing models in Backends
 (#19684)

---
 docs/backend/CANN.md | 2 +-
 docs/backend/SYCL.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/backend/CANN.md b/docs/backend/CANN.md
index b03c2a122c..23b6a62763 100755
--- a/docs/backend/CANN.md
+++ b/docs/backend/CANN.md
@@ -246,7 +246,7 @@ cmake --build build --config release
 
 1. **Retrieve and prepare model**
 
-    You can refer to the general [*Prepare and Quantize*](../../README.md#prepare-and-quantize) guide for model prepration.
+    You can refer to the general [*Obtaining and quantizing models*](../../README.md#obtaining-and-quantizing-models) guide for model preparation.
 
     **Notes**:
 
diff --git a/docs/backend/SYCL.md b/docs/backend/SYCL.md
index b3cff96604..07c68be5cb 100644
--- a/docs/backend/SYCL.md
+++ b/docs/backend/SYCL.md
@@ -281,7 +281,7 @@ as `-cl-fp32-correctly-rounded-divide-sqrt`
 
 #### Retrieve and prepare model
 
-You can refer to the general [*Prepare and Quantize*](README.md#prepare-and-quantize) guide for model preparation, or download an already quantized model like [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf?download=true) or [Meta-Llama-3-8B-Instruct-Q4_0.gguf](https://huggingface.co/aptha/Meta-Llama-3-8B-Instruct-Q4_0-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q4_0.gguf).
+You can refer to the general [*Obtaining and quantizing models*](../../README.md#obtaining-and-quantizing-models) guide for model preparation, or download an already quantized model like [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf?download=true) or [Meta-Llama-3-8B-Instruct-Q4_0.gguf](https://huggingface.co/aptha/Meta-Llama-3-8B-Instruct-Q4_0-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q4_0.gguf).
 
 ##### Check device
 
@@ -569,7 +569,7 @@ Once it is completed, final results will be in **build/Release/bin**
 
 #### Retrieve and prepare model
 
-You can refer to the general [*Prepare and Quantize*](README.md#prepare-and-quantize) guide for model preparation, or download an already quantized model like [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) or [Meta-Llama-3-8B-Instruct-Q4_0.gguf](https://huggingface.co/aptha/Meta-Llama-3-8B-Instruct-Q4_0-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q4_0.gguf).
+You can refer to the general [*Obtaining and quantizing models*](../../README.md#obtaining-and-quantizing-models) guide for model preparation, or download an already quantized model like [llama-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q4_0.gguf) or [Meta-Llama-3-8B-Instruct-Q4_0.gguf](https://huggingface.co/aptha/Meta-Llama-3-8B-Instruct-Q4_0-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q4_0.gguf).
 
 ##### Check device