llama.cpp/ggml/src

Latest commit: 5783ae4359 — Georgi Gerganov, 2025-06-26 15:50:15 +03:00

metal : batch rows copy in a single threadgroup (#14384)

* metal : batch rows copy in a single threadgroup

  ggml-ci

* metal : handle some edge cases when threadgroup size is not a power of 2

  ggml-ci
| Name | Last commit | Date |
| --- | --- | --- |
| ggml-blas | cmake : Fix broken CMake error messages (ggml/1252) | 2025-06-01 13:43:57 +03:00 |
| ggml-cann | CANN: Simplify the environment variable setting(#13104) | 2025-06-09 19:47:39 +08:00 |
| ggml-cpu | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317) | 2025-06-25 23:49:04 +02:00 |
| ggml-cuda | musa: enable fp16 mma (all) and cublas on qy2 (#13842) | 2025-06-26 12:11:59 +08:00 |
| ggml-hip | HIP: disable rocwmma on gfx12 by default until rocm 7.0 (#14202) | 2025-06-16 13:47:38 +02:00 |
| ggml-kompute | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| ggml-metal | metal : batch rows copy in a single threadgroup (#14384) | 2025-06-26 15:50:15 +03:00 |
| ggml-musa | musa: enable fp16 mma (all) and cublas on qy2 (#13842) | 2025-06-26 12:11:59 +08:00 |
| ggml-opencl | opencl: ref count `ggml_backend_opencl_context` and refactor profiling (#14254) | 2025-06-24 11:46:25 -07:00 |
| ggml-rpc | rpc : nicer error messages for RPC server crash (#14076) | 2025-06-10 09:41:01 +03:00 |
| ggml-sycl | sycl: GGML_SYCL_DISABLE_OPT on by default for all Intel Devices (#13973) | 2025-06-25 18:09:55 +02:00 |
| ggml-vulkan | Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (#13792) | 2025-06-21 08:17:12 +02:00 |
| CMakeLists.txt | Implement GGML_CPU_ALL_VARIANTS for PowerPC (#14286) | 2025-06-20 14:17:32 +02:00 |
| ggml-alloc.c | ggml: Don't assert fail when tensor data changes (#13222) | 2025-05-01 22:46:10 +02:00 |
| ggml-backend-impl.h | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00 |
| ggml-backend-reg.cpp | build : suppress gcc15 compile warnings (#14261) | 2025-06-19 14:49:48 +02:00 |
| ggml-backend.cpp | sched : avoid changing cur_copy when a graph is already allocated (#13922) | 2025-05-30 18:56:19 +02:00 |
| ggml-common.h | ggml-cpu : split arch-specific implementations (#13892) | 2025-06-09 16:47:13 +02:00 |
| ggml-impl.h | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317) | 2025-06-25 23:49:04 +02:00 |
| ggml-opt.cpp | mnist: fix segmentation fault (ggml/1227) | 2025-05-19 13:29:56 +03:00 |
| ggml-quants.c | ggml-cpu : split arch-specific implementations (#13892) | 2025-06-09 16:47:13 +02:00 |
| ggml-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317) | 2025-06-25 23:49:04 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | ggml : do not output unprintable characters on GGUF load failure (#14381) | 2025-06-25 23:26:51 +02:00 |