llama.cpp/ggml/src
Latest commit 33983057d0 by R0CKSTAR:
musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647)
* musa: fix build warning (unused parameter)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: upgrade MUSA SDK version to rc4.0.1

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* musa: use mudnn::Unary::IDENTITY op to accelerate D2D memory copy

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* Update ggml/src/ggml-cuda/cpy.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* musa: remove MUDNN_CHECK_GEN and use CUDA_CHECK_GEN instead in MUDNN_CHECK

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

---------

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-05-21 09:58:49 +08:00
Name | Last commit message | Last commit date
ggml-blas ggml : add support for dynamic loading of backends (#10469) 2024-11-25 15:13:39 +01:00
ggml-cann CANN: Support MOE Model MUL_MAT_ID (#13042) 2025-05-19 14:21:17 +08:00
ggml-cpu arm64: optimize q6_k_q8_k kernel with i8mm (#13519) 2025-05-14 21:53:52 +02:00
ggml-cuda musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647) 2025-05-21 09:58:49 +08:00
ggml-hip CUDA/HIP: Share the same unified memory allocation logic. (#12934) 2025-04-15 11:20:38 +02:00
ggml-kompute llama : add Qwen2VL support + multimodal RoPE (#10361) 2024-12-14 14:43:46 +02:00
ggml-metal metal : fix typo in FA kernel comments (#13651) 2025-05-20 10:41:40 +03:00
ggml-musa musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647) 2025-05-21 09:58:49 +08:00
ggml-opencl opencl: remove unnecessary assert for `add` (#13257) 2025-05-12 13:13:49 -07:00
ggml-rpc rpc : add rpc_msg_set_tensor_hash_req (#13353) 2025-05-09 10:31:07 +03:00
ggml-sycl sycl: disable reorder for sycl mulmat (#13536) 2025-05-20 11:34:15 +02:00
ggml-vulkan vulkan: fix warnings (#13626) 2025-05-20 21:35:16 +00:00
CMakeLists.txt cmake : removed stdc++fs (whisper/3097) 2025-05-07 17:28:36 +03:00
ggml-alloc.c ggml: Don't assert fail when tensor data changes (#13222) 2025-05-01 22:46:10 +02:00
ggml-backend-impl.h ggml : upgrade init_tensor API to return a ggml_status (#11854) 2025-02-28 14:41:47 +01:00
ggml-backend-reg.cpp ggml-backend : fix backend search path (#12330) 2025-03-11 14:25:17 +01:00
ggml-backend.cpp llama/ggml: add LLM training support (#10544) 2025-05-12 14:44:49 +02:00
ggml-common.h musa: fix all warnings, re-enable `-DLLAMA_FATAL_WARNINGS=ON` in ci and update doc (#12611) 2025-03-30 10:59:38 +02:00
ggml-impl.h ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187) 2025-04-11 00:17:47 +03:00
ggml-opt.cpp mnist: fix segmentation fault (ggml/1227) 2025-05-19 13:29:56 +03:00
ggml-quants.c whisper: remove MSVC warnings pragmas (whisper/3090) 2025-05-07 17:28:36 +03:00
ggml-quants.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml : fix apple OS check in ggml_print_backtrace (ggml/1229) 2025-05-19 13:29:56 +03:00
gguf.cpp gguf : use ggml log system (#13571) 2025-05-15 19:13:11 +02:00