llama.cpp/ggml/src
Latest commit: 5f3b1ae3b0 by hongruichen, "fix: try fix graph cache with append the tensors name", 2024-07-20 16:39:06 +08:00
| Name | Last commit message | Commit date |
| --- | --- | --- |
| ggml-cuda | cuda : suppress 'noreturn' warn in no_device_code (#8414) | 2024-07-11 17:53:42 +02:00 |
| ggml-qnn | fix: try fix graph cache with append the tensors name | 2024-07-20 16:39:06 +08:00 |
| ggml-sycl | [SYCL] add concat through dim 1/2 (#8483) | 2024-07-15 19:32:15 +08:00 |
| kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| kompute-shaders | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| llamafile | ggml : move sgemm sources to llamafile subfolder (#8394) | 2024-07-10 15:23:29 +03:00 |
| vulkan-shaders | Vulkan MMQ Fix (#8479) | 2024-07-15 09:38:52 +02:00 |
| CMakeLists.txt | add build step of QNN backend at ggml | 2024-07-17 19:43:01 +08:00 |
| ggml-aarch64.c | ggml : suppress unknown pragma 'GCC' on windows (#8460) | 2024-07-15 15:48:17 +03:00 |
| ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-alloc.c | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-backend-impl.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-backend.c | register qnn backend | 2024-07-17 21:25:55 +08:00 |
| ggml-blas.cpp | ggml : add NVPL BLAS support (#8329) (#8425) | 2024-07-11 18:49:15 +02:00 |
| ggml-common.h | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) | 2024-07-10 15:14:51 +03:00 |
| ggml-cuda.cu | Refactor lora adapter support (#8332) | 2024-07-15 20:50:47 +02:00 |
| ggml-impl.h | ggml : add AArch64 optimized GEMV and GEMM Q4 kernels (#5780) | 2024-07-10 15:14:51 +03:00 |
| ggml-kompute.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-metal.m | metal : template-ify some of the kernels (#8447) | 2024-07-13 18:32:33 +03:00 |
| ggml-metal.metal | metal : template-ify some of the kernels (#8447) | 2024-07-13 18:32:33 +03:00 |
| ggml-qnn.cpp | fix: try fix graph cache with append the tensors name | 2024-07-20 16:39:06 +08:00 |
| ggml-quants.c | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-quants.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| ggml-rpc.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| ggml-sycl.cpp | [SYCL] add concat through dim 1/2 (#8483) | 2024-07-15 19:32:15 +08:00 |
| ggml-vulkan.cpp | Vulkan MMQ Fix (#8479) | 2024-07-15 09:38:52 +02:00 |
| ggml.c | Refactor lora adapter support (#8332) | 2024-07-15 20:50:47 +02:00 |