| Name                 | Last commit message                                                               | Last commit date          |
|----------------------|-----------------------------------------------------------------------------------|---------------------------|
| ggml-blas            | vulkan: sort graph to allow more parallel execution (#15850)                      | 2025-09-09 02:10:07 +08:00 |
| ggml-cann            | CANN: Disable acl_graph for prefill stage (#15933)                                | 2025-09-11 15:59:37 +08:00 |
| ggml-cpu             | ggml-cpu : add check for ARM MATMUL_INT8/i8mm support (#15922)                    | 2025-09-11 14:39:12 +01:00 |
| ggml-cuda            | CUDA: some micro-optimizations in mmf.cuh for mul_mat_id (#15926)                 | 2025-09-15 17:35:11 +08:00 |
| ggml-hip             | HIP: bump requirement to rocm 6.1 (#15296)                                        | 2025-08-13 20:44:30 +02:00 |
| ggml-metal           | metal : remove memory pools (#15966)                                              | 2025-09-14 22:02:32 +03:00 |
| ggml-musa            | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433)                        | 2025-08-20 16:58:49 +02:00 |
| ggml-opencl          | vulkan: sort graph to allow more parallel execution (#15850)                      | 2025-09-09 02:10:07 +08:00 |
| ggml-rpc             | vulkan: sort graph to allow more parallel execution (#15850)                      | 2025-09-09 02:10:07 +08:00 |
| ggml-sycl            | SYCL: Add COUNT_EQUAL operator support (#15991)                                   | 2025-09-15 18:51:35 +02:00 |
| ggml-vulkan          | Vulkan: Clean up mul_mm shader (#15987)                                           | 2025-09-14 16:56:28 +02:00 |
| ggml-webgpu          | vulkan: sort graph to allow more parallel execution (#15850)                      | 2025-09-09 02:10:07 +08:00 |
| ggml-zdnn            | ggml-zdnn: rm user mapped buffers (#15965)                                        | 2025-09-14 13:37:03 +08:00 |
| CMakeLists.txt       | ggml: initial IBM zDNN backend (#14975)                                           | 2025-08-15 21:11:22 +08:00 |
| ggml-alloc.c         | llama : add gpt-oss (#15091)                                                      | 2025-08-05 22:10:36 +03:00 |
| ggml-backend-impl.h  | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797)             | 2025-09-11 22:47:38 +02:00 |
| ggml-backend-reg.cpp | ggml-backend : add GGML_BACKEND_DEVICE_TYPE_IGPU device type (#15797)             | 2025-09-11 22:47:38 +02:00 |
| ggml-backend.cpp     | vulkan: sort graph to allow more parallel execution (#15850)                      | 2025-09-09 02:10:07 +08:00 |
| ggml-common.h        | llama : add gpt-oss (#15091)                                                      | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h          | llama : add gpt-oss (#15091)                                                      | 2025-08-05 22:10:36 +03:00 |
| ggml-opt.cpp         | finetune: SGD optimizer, more CLI args (#13873)                                   | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c        | ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379)           | 2025-08-18 09:23:56 +02:00 |
| ggml-quants.h        | llama : add gpt-oss (#15091)                                                      | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp   | ggml : build backends as libraries (#10256)                                       | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h     | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)                                  | 2024-12-12 19:02:49 +01:00 |
| ggml.c               | cuda : fix supports_op condition for get_rows when number of blocks is too large (#15868) | 2025-09-08 13:56:51 +03:00 |
| ggml.cpp             | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232)                     | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp             | gguf: gguf_writer refactor (#15691)                                               | 2025-09-05 11:34:28 +02:00 |