llama.cpp/ggml/src
Latest commit f4ce81c45e by Sam/Samuel, 2025-10-15 17:05:56 +03:00
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

metal: optimise `GGML_OP_SUM` (#16559)

* optimise GGML_OP_SUM
* add non-contiguous tests by permuting the input
* change tests to require full contiguity of OP_SUM
* cuda : add check GGML_OP_SUM
Name | Last commit message | Last commit date
ggml-blas | sync : whisper.cpp (ggml/1359) | 2025-09-29 17:43:58 +03:00
ggml-cann | CANN: fix CPU memory leak in CANN backend (#16549) | 2025-10-13 17:01:24 +08:00
ggml-cpu | ggml : fix build broken with -march=armv9-a on MacOS (#16520) | 2025-10-13 15:48:47 +03:00
ggml-cuda | metal: optimise `GGML_OP_SUM` (#16559) | 2025-10-15 17:05:56 +03:00
ggml-hip | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00
ggml-metal | metal: optimise `GGML_OP_SUM` (#16559) | 2025-10-15 17:05:56 +03:00
ggml-musa | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00
ggml-opencl | CUDA + openCL: fix bug in accessing rms_norm->src while doing fusion (#16577) | 2025-10-14 07:48:08 -07:00
ggml-rpc | rpc : check src buffer when copying tensor (#16421) | 2025-10-04 16:22:45 +03:00
ggml-sycl | [SYCL] fix UT fault cases: count-equal, argsort, pad OPs (#16521) | 2025-10-12 21:53:35 +08:00
ggml-vulkan | vulkan: Add ACC_TYPE_VEC2 implementation (#16203) | 2025-10-14 19:18:05 +02:00
ggml-webgpu | ggml webgpu: profiling, CI updates, reworking of command submission (#16452) | 2025-10-07 13:48:56 -07:00
ggml-zdnn | zdnn: refactor codebase + add docs (#16178) | 2025-09-23 14:53:05 +08:00
CMakeLists.txt | cmake : Dont define XOPENSOURCE on AIX (#16481) | 2025-10-10 11:15:46 +03:00
ggml-alloc.c | ggml : fix graph reallocation with multiple chunks (#16396) | 2025-10-03 13:49:08 +02:00
ggml-backend-impl.h | rpc : add support for multiple devices (#16276) | 2025-10-04 12:49:16 +03:00
ggml-backend-reg.cpp | ggml-backend : add root cause in error message if loading backend library fails (#16172) | 2025-09-29 13:17:09 +02:00
ggml-backend.cpp | llama: print memory breakdown on exit (#15860) | 2025-09-24 16:53:48 +02:00
ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-impl.h | model : Apertus model implementation (#15852) | 2025-10-02 20:43:22 +03:00
ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00
ggml-quants.c | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00
ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00
ggml.c | ggml webgpu: add support for soft_max, optimize rms_norm (#16357) | 2025-10-02 11:00:31 -07:00
ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00
gguf.cpp | gguf: gguf_writer refactor (#15691) | 2025-09-05 11:34:28 +02:00