llama.cpp/ggml/src
Latest commit: 728f365497 by Piotr Wilkin, "Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run" (2026-04-09 00:04:35 +02:00)
ggml-blas/             Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run  2026-04-09 00:04:35 +02:00
ggml-cann/             Fix more missing backend stuff (and Python errors)  2026-04-09 00:04:34 +02:00
ggml-cpu/              Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run  2026-04-09 00:04:35 +02:00
ggml-cuda/             Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run  2026-04-09 00:04:35 +02:00
ggml-hexagon/          Fix more missing backend stuff (and Python errors)  2026-04-09 00:04:34 +02:00
ggml-hip/              ggml-cuda: native bf16 flash attention for vec kernel (#20525)  2026-03-22 11:05:51 +01:00
ggml-metal/            fix builds, integrate vulkan profiler, fix copy events, fix export  2026-04-09 00:04:34 +02:00
ggml-musa/             ggml-cuda: native bf16 flash attention for vec kernel (#20525)  2026-03-22 11:05:51 +01:00
ggml-opencl/           add second dimension to reported tensors, fix Mac build, add missing initializer to all backends  2026-04-09 00:04:34 +02:00
ggml-openvino/         Fix more missing backend stuff (and Python errors)  2026-04-09 00:04:34 +02:00
ggml-rpc/              add second dimension to reported tensors, fix Mac build, add missing initializer to all backends  2026-04-09 00:04:34 +02:00
ggml-sycl/             add second dimension to reported tensors, fix Mac build, add missing initializer to all backends  2026-04-09 00:04:34 +02:00
ggml-virtgpu/          Fix more missing backend stuff (and Python errors)  2026-04-09 00:04:34 +02:00
ggml-vulkan/           Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run  2026-04-09 00:04:35 +02:00
ggml-webgpu/           fix builds, integrate vulkan profiler, fix copy events, fix export  2026-04-09 00:04:34 +02:00
ggml-zdnn/             fix builds, integrate vulkan profiler, fix copy events, fix export  2026-04-09 00:04:34 +02:00
ggml-zendnn/           add second dimension to reported tensors, fix Mac build, add missing initializer to all backends  2026-04-09 00:04:34 +02:00
CMakeLists.txt         feat: cool profiler thingy  2026-04-09 00:04:34 +02:00
ggml-alloc.c           ggml : make `ggml_is_view` as API (#19539)  2026-02-16 17:43:34 +02:00
ggml-backend-dl.cpp    hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150)  2026-01-29 12:33:21 -08:00
ggml-backend-dl.h      hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150)  2026-01-29 12:33:21 -08:00
ggml-backend-impl.h    feat: cool profiler thingy  2026-04-09 00:04:34 +02:00
ggml-backend-reg.cpp   ggml : add OpenVINO backend (#15307)  2026-03-14 07:56:55 +02:00
ggml-backend.cpp       Add missing op parameters to the profiler; add support for test-backend-ops to run performance tests with exactly the tensor shapes from the run  2026-04-09 00:04:35 +02:00
ggml-common.h          ggml: add Q1_0 1-bit quantization support (CPU) (#21273)  2026-04-06 20:55:21 +02:00
ggml-impl.h            llama: fix llama-model-saver (#20503)  2026-03-25 12:53:16 +02:00
ggml-opt.cpp           fix: free ctx_copy in ggml_opt_free to plug per-training-session leak (#21592)  2026-04-08 17:40:15 +02:00
ggml-profiler.cpp      add second dimension to reported tensors, fix Mac build, add missing initializer to all backends  2026-04-09 00:04:34 +02:00
ggml-quants.c          ggml: add Q1_0 1-bit quantization support (CPU) (#21273)  2026-04-06 20:55:21 +02:00
ggml-quants.h          ggml: add Q1_0 1-bit quantization support (CPU) (#21273)  2026-04-06 20:55:21 +02:00
ggml-threading.cpp     ggml : build backends as libraries (#10256)  2024-11-14 18:04:35 +01:00
ggml-threading.h       remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)  2024-12-12 19:02:49 +01:00
ggml.c                 ggml: add Q1_0 1-bit quantization support (CPU) (#21273)  2026-04-06 20:55:21 +02:00
ggml.cpp               ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)  2025-06-01 13:43:57 +03:00
gguf.cpp               llama: fix llama-model-saver (#20503)  2026-03-25 12:53:16 +02:00