ggml-blas | ggml : fix field name when new ggml_backend (#14944) | 2025-08-08 14:37:22 +02:00
ggml-cann | CANN: Stream sync between devices for acl_graph (#15809) | 2025-09-08 10:03:29 +08:00
ggml-cpu | ggml: allow casting between f32 and i32 (#15783) | 2025-09-08 12:33:01 +02:00
ggml-cuda | cuda : fix supports_op condition for get_rows when number of blocks is too large (#15868) | 2025-09-08 13:56:51 +03:00
ggml-hip | HIP: bump requirement to rocm 6.1 (#15296) | 2025-08-13 20:44:30 +02:00
ggml-metal | metal : refactor + optimize (#15857) | 2025-09-08 13:34:56 +03:00
ggml-musa | CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) | 2025-08-20 16:58:49 +02:00
ggml-opencl | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00
ggml-rpc | ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (#15188) | 2025-08-13 08:54:30 +03:00
ggml-sycl | ggml: add ops for WAN video model (cuda && cpu) (#15669) | 2025-09-04 10:38:49 +02:00
ggml-vulkan | ggml: allow casting between f32 and i32 (#15783) | 2025-09-08 12:33:01 +02:00
ggml-webgpu | ggml WebGPU: remove userdata from request adapter callback (#15527) | 2025-09-07 11:19:45 +03:00
ggml-zdnn | ggml: initial IBM zDNN backend (#14975) | 2025-08-15 21:11:22 +08:00
CMakeLists.txt | ggml: initial IBM zDNN backend (#14975) | 2025-08-15 21:11:22 +08:00
ggml-alloc.c | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-backend-impl.h | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00
ggml-backend-reg.cpp | ggml: initial IBM zDNN backend (#14975) | 2025-08-15 21:11:22 +08:00
ggml-backend.cpp | ggml-backend: raise GGML_MAX_SPLIT_INPUTS (#15722) | 2025-09-01 16:14:55 -07:00
ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-impl.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00
ggml-quants.c | ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379) | 2025-08-18 09:23:56 +02:00
ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00
ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00
ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00
ggml.c | cuda : fix supports_op condition for get_rows when number of blocks is too large (#15868) | 2025-09-08 13:56:51 +03:00
ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00
gguf.cpp | gguf: gguf_writer refactor (#15691) | 2025-09-05 11:34:28 +02:00