llama.cpp/ggml/src
Latest commit aa8b62105c by Gaurav Garg (2026-02-16 15:39:26 +05:30):
Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA. Fix compilation errors.
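The pinned-memory distinction in this head commit is the whole point of having host buffer types: CUDA can only perform genuinely asynchronous host-to-device transfers out of page-locked (pinned) memory, while pageable memory forces the driver through an extra staging copy. Below is a minimal sketch of that difference using the standard CUDA runtime API; the helper function, its parameters, and the stream handling are illustrative, not part of ggml.

```c
// Sketch: pinned vs. pageable host memory for CUDA transfers.
// cudaMallocHost returns page-locked memory, so cudaMemcpyAsync can
// DMA directly and overlap with compute on the stream; the same call
// from a plain malloc() buffer is effectively synchronous.
#include <cuda_runtime.h>
#include <string.h>

int copy_weights_to_device(const float *src, float *d_dst, size_t n, cudaStream_t stream) {
    float  *h_pinned = NULL;
    size_t  bytes    = n * sizeof(float);

    if (cudaMallocHost((void **) &h_pinned, bytes) != cudaSuccess) {
        return -1; // real code would fall back to pageable memory
    }
    memcpy(h_pinned, src, bytes);

    // True async copy: only possible because h_pinned is page-locked.
    cudaMemcpyAsync(d_dst, h_pinned, bytes, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaFreeHost(h_pinned);
    return 0;
}
```

A device-specific host buffer type lets the allocator hand out such pinned buffers transparently, which is why the commit enables this only when all underlying backends expose the same type.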
| Directory | Last commit | Date |
| --- | --- | --- |
| ggml-blas | Remove shfl and AllReduce from backend interface | 2026-02-11 14:51:37 +01:00 |
| ggml-cann | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-cpu | Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA. | 2026-02-16 15:39:26 +05:30 |
| ggml-cuda | fix compilation | 2026-02-13 15:13:40 +01:00 |
| ggml-hexagon | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-hip | GGML: HIP: add RCCL support | 2026-02-11 14:51:33 +01:00 |
| ggml-metal | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-musa | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00 |
| ggml-opencl | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-rpc | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-sycl | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-virtgpu | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-vulkan | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-webgpu | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-zdnn | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-zendnn | Remove shfl and AllReduce from backend interface | 2026-02-11 14:51:37 +01:00 |
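Nearly every backend directory above carries the same "2d tensor set/get support" commit, which extends tensor transfers from flat byte ranges to strided 2D regions. A hypothetical sketch of what such a strided set reduces to; the function name and signature are assumptions for illustration, not the actual ggml-backend interface:

```c
// Hypothetical strided 2D tensor set: copy `rows` rows of `row_bytes`
// bytes each, where source and destination may use different row
// strides (e.g. a sub-matrix of a padded buffer). Not the real ggml API.
#include <stdint.h>
#include <string.h>

static void tensor_set_2d(uint8_t *dst, size_t dst_stride,
                          const uint8_t *src, size_t src_stride,
                          size_t row_bytes, size_t rows) {
    if (dst_stride == row_bytes && src_stride == row_bytes) {
        memcpy(dst, src, row_bytes * rows); // contiguous: 1D fast path
        return;
    }
    for (size_t i = 0; i < rows; i++) {
        memcpy(dst + i * dst_stride, src + i * src_stride, row_bytes);
    }
}
```

The win over a 1D-only interface is that a backend can replace the row loop with a single strided transfer (e.g. cudaMemcpy2D on CUDA) instead of issuing one call per row.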
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | ggml: backend-agnostic tensor parallelism | 2026-02-11 14:12:33 +01:00 |
| ggml-alloc.c | move allocation workaround out of ggml-alloc.c | 2026-02-11 15:31:48 +01:00 |
| ggml-backend-dl.cpp | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-dl.h | hexagon: enable offloading to Hexagon on Windows on Snapdragon (#19150) | 2026-01-29 12:33:21 -08:00 |
| ggml-backend-impl.h | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-backend-meta.cpp | Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA. | 2026-02-16 15:39:26 +05:30 |
| ggml-backend-reg.cpp | ggml : use noexcept overload for is_regular_file in backend registration (#19452) | 2026-02-10 10:57:48 +01:00 |
| ggml-backend.cpp | 2d tensor set/get support | 2026-02-11 19:56:35 +01:00 |
| ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h | ggml : add ggml_build_forward_select (#18550) | 2026-01-19 20:03:19 +02:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00 |
| ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | ggml: added cleanups in ggml_quantize_free (#19278) | 2026-02-03 08:43:39 +02:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | GGUF: check that tensor size is representable (#19072) | 2026-01-24 21:57:51 +01:00 |
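The gguf.cpp entry, "GGUF: check that tensor size is representable (#19072)", is about rejecting tensor shapes whose byte size overflows before anything is allocated. A minimal sketch of that kind of guard, with illustrative names rather than the actual gguf.cpp code:

```c
// Sketch: compute a tensor's byte size from its dimensions, failing
// cleanly if the product would overflow size_t. Names are illustrative.
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool tensor_nbytes_checked(const uint64_t ne[4], size_t type_size, size_t *out) {
    size_t n = type_size;
    for (int i = 0; i < 4; i++) {
        // If ne[i] > SIZE_MAX, then SIZE_MAX / ne[i] == 0 and this also fails.
        if (ne[i] != 0 && n > SIZE_MAX / ne[i]) {
            return false; // n * ne[i] is not representable in size_t
        }
        n *= (size_t) ne[i];
    }
    *out = n;
    return true;
}
```

Since GGUF dimensions come straight from an untrusted file, a check like this has to run before any allocation or offset arithmetic uses the size.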