llama.cpp/ggml/src/ggml-cpu

Latest commit: bd38ddea01 by Jeff Bolz, 2025-01-16 22:47:10 +01:00
vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (#11166)

Commit message:

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl
  (shaders are based on cpy.cu)
* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32
* ggml: copy q->f32 assumes some contiguity in the destination
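For reference, copies between f32 and a quantized type like those in this commit amount to per-block (de)quantization. Below is a minimal C sketch of the q8_0 case, assuming the standard ggml q8_0 layout (blocks of 32 values with one scale per block; ggml stores the scale as fp16, a plain float is used here for simplicity). This is an illustrative sketch, not the Vulkan shader code or the actual ggml implementation.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QK8_0 32

// Assumed block layout: one scale plus 32 int8 quants per block.
typedef struct {
    float  d;           // per-block scale (fp16 in ggml proper)
    int8_t qs[QK8_0];   // quantized values
} block_q8_0;

// f32 -> q8_0: scale each block so the largest magnitude maps to +/-127.
static void quantize_block_q8_0(const float *x, block_q8_0 *b) {
    float amax = 0.0f;
    for (int i = 0; i < QK8_0; i++) {
        const float ax = fabsf(x[i]);
        if (ax > amax) amax = ax;
    }
    const float d  = amax / 127.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    b->d = d;
    for (int i = 0; i < QK8_0; i++) {
        b->qs[i] = (int8_t) roundf(x[i] * id);
    }
}

// q8_0 -> f32: multiply each quant by the block scale. Note the output is
// written contiguously, matching the contiguity assumption noted above.
static void dequantize_block_q8_0(const block_q8_0 *b, float *y) {
    for (int i = 0; i < QK8_0; i++) {
        y[i] = b->qs[i] * b->d;
    }
}

int main(void) {
    float x[QK8_0], y[QK8_0];
    for (int i = 0; i < QK8_0; i++) x[i] = sinf((float) i);
    block_q8_0 b;
    quantize_block_q8_0(x, &b);
    dequantize_block_q8_0(&b, y);
    printf("x[3] = %f, round-trip y[3] = %f\n", x[3], y[3]);
    return 0;
}
```

The other formats in the commit (q4_0/q4_1/q5_0/q5_1/iq4_nl) follow the same block-wise pattern but pack narrower quants, so the round trip is lossier than q8_0.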
| Name                 | Last commit                                                             | Date                      |
|----------------------|-------------------------------------------------------------------------|---------------------------|
| amx                  | amx: remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)                    | 2024-12-12 19:02:49 +01:00 |
| cmake                | ggml : build backends as libraries (#10256)                              | 2024-11-14 18:04:35 +01:00 |
| llamafile            | llamafile : ppc64le MMA INT8 implementation (#10912)                     | 2025-01-08 12:54:19 +02:00 |
| CMakeLists.txt       | ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)    | 2024-12-31 15:23:33 +01:00 |
| cpu-feats-x86.cpp    | ggml : add predefined list of CPU backend variants to build (#10626)     | 2024-12-04 14:45:40 +01:00 |
| ggml-cpu-aarch64.cpp | ggml-backend : only offload from host buffers (fix) (#11124)             | 2025-01-07 16:11:57 +01:00 |
| ggml-cpu-aarch64.h   | ggml : refactor online repacking (#10446)                                | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.cpp     | ggml : refactor online repacking (#10446)                                | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.h       | ggml : refactor online repacking (#10446)                                | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-impl.h      | ggml : move AMX to the CPU backend (#10570)                              | 2024-11-29 21:54:58 +01:00 |
| ggml-cpu-quants.c    | ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (#11227)   | 2025-01-16 11:11:49 +02:00 |
| ggml-cpu-quants.h    | ggml : build backends as libraries (#10256)                              | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu-traits.cpp  | ggml : refactor online repacking (#10446)                                | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-traits.h    | ggml : refactor online repacking (#10446)                                | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu.c           | vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (#11166) | 2025-01-16 22:47:10 +01:00 |
| ggml-cpu.cpp         | CUDA: backwards pass for misc. ops, add tests (#11257)                   | 2025-01-16 16:43:38 +01:00 |