llama.cpp/ggml/src
hipudding 1bdd8ae19f
[CANN] Add Ascend NPU backend (#6035)
* [CANN] Add Ascend NPU backend

Ascend is a full-stack AI computing infrastructure for industry
applications and services based on Huawei Ascend processors and
software.

CANN (Compute Architecture for Neural Networks), developed by
Huawei, is a heterogeneous computing architecture for AI.
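
As a rough illustration of how a ggml backend like this one is
consumed, the sketch below initializes the CANN backend and falls
back to the CPU backend when no NPU is available. It is a minimal
sketch assuming a build with the CANN backend enabled and the entry
points ggml_backend_cann_init and ggml_backend_cann_get_device_count
introduced by this change; treat the exact signatures as approximate.

    // Minimal sketch (C++), not the PR's actual code.
    #include "ggml-backend.h"
    #include "ggml-cann.h" // header assumed to be added by this change
    #include <cstdio>

    int main() {
        ggml_backend_t backend = nullptr;

        // Prefer the Ascend NPU; device 0 is an assumption for this sketch.
        if (ggml_backend_cann_get_device_count() > 0) {
            backend = ggml_backend_cann_init(0);
        }
        if (backend == nullptr) {
            fprintf(stderr, "CANN backend unavailable, falling back to CPU\n");
            backend = ggml_backend_cpu_init();
        }

        // ... build a ggml graph and run it via ggml_backend_graph_compute ...

        ggml_backend_free(backend);
        return 0;
    }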

Co-authored-by: wangshuai09 <391746016@qq.com>

* Delete trailing whitespace

* Modify the code based on review comments

* Rename LLAMA_CANN to GGML_CANN

* Make ggml-common.h private

* Add ggml_cann prefix for ACL functions (see the sketch after this list)

* Add logging for CANN backend

* Delete trailing whitespace
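
To make the ggml_cann prefix item concrete, the sketch below shows
the naming convention in spirit: helpers that wrap ACL (Ascend
Computing Language) runtime calls carry a ggml_cann_ prefix so
backend-internal symbols are easy to spot. The wrapper is a
hypothetical stand-in, not code from this change; aclrtSetDevice is
the ACL runtime call for selecting a device, but treat the error
handling as approximate.

    // Hypothetical example of the naming convention, not the PR's code.
    #include <acl/acl.h>
    #include <cstdio>
    #include <cstdlib>

    // The ggml_cann_ prefix marks this as a CANN-backend helper wrapping ACL.
    static void ggml_cann_set_device(int32_t device) {
        aclError err = aclrtSetDevice(device);
        if (err != ACL_SUCCESS) {
            fprintf(stderr, "ggml_cann: aclrtSetDevice(%d) failed (err %d)\n",
                    (int) device, (int) err);
            abort();
        }
    }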

---------

Co-authored-by: wangshuai09 <391746016@qq.com>
2024-07-17 14:23:50 +03:00
Name                 Last commit message                                            Last commit date
ggml-cann/           [CANN] Add Ascend NPU backend (#6035)                          2024-07-17 14:23:50 +03:00
ggml-cuda/           cuda : suppress 'noreturn' warn in no_device_code (#8414)     2024-07-11 17:53:42 +02:00
ggml-sycl/           [SYCL] add concat through dim 1/2 (#8483)                      2024-07-15 19:32:15 +08:00
kompute@4565194ed7   llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
kompute-shaders/     llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
llamafile/           ggml : move sgemm sources to llamafile subfolder (#8394)      2024-07-10 15:23:29 +03:00
vulkan-shaders/      Vulkan MMQ Fix (#8479)                                         2024-07-15 09:38:52 +02:00
CMakeLists.txt       [CANN] Add Ascend NPU backend (#6035)                          2024-07-17 14:23:50 +03:00
ggml-aarch64.c       ggml : suppress unknown pragma 'GCC' on windows (#8460)       2024-07-15 15:48:17 +03:00
ggml-aarch64.h       ggml : minor naming changes (#8433)                            2024-07-12 10:46:02 +03:00
ggml-alloc.c         llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
ggml-backend-impl.h  llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
ggml-backend.c       [CANN] Add Ascend NPU backend (#6035)                          2024-07-17 14:23:50 +03:00
ggml-blas.cpp        ggml : add NVPL BLAS support (#8329) (#8425)                   2024-07-11 18:49:15 +02:00
ggml-cann.cpp        [CANN] Add Ascend NPU backend (#6035)                          2024-07-17 14:23:50 +03:00
ggml-common.h        ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780)  2024-07-10 15:14:51 +03:00
ggml-cuda.cu         Refactor lora adapter support (#8332)                          2024-07-15 20:50:47 +02:00
ggml-impl.h          ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780)  2024-07-10 15:14:51 +03:00
ggml-kompute.cpp     llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
ggml-metal.m         metal : template-ify some of the kernels (#8447)               2024-07-13 18:32:33 +03:00
ggml-metal.metal     metal : template-ify some of the kernels (#8447)               2024-07-13 18:32:33 +03:00
ggml-quants.c        ggml : minor naming changes (#8433)                            2024-07-12 10:46:02 +03:00
ggml-quants.h        ggml : minor naming changes (#8433)                            2024-07-12 10:46:02 +03:00
ggml-rpc.cpp         llama : reorganize source code + improve CMake (#8006)        2024-06-26 18:33:02 +03:00
ggml-sycl.cpp        [SYCL] add concat through dim 1/2 (#8483)                      2024-07-15 19:32:15 +08:00
ggml-vulkan.cpp      Vulkan MMQ Fix (#8479)                                         2024-07-15 09:38:52 +02:00
ggml.c               [CANN] Add Ascend NPU backend (#6035)                          2024-07-17 14:23:50 +03:00