llama.cpp/ggml/src/ggml-cpu
Latest commit d6ae2fa061 by vmobilis, 2025-03-07 14:49:44 +02:00:
ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118)

* ggml_compute_forward_concat() for arbitrary tensor type
* Check that tensors' types match
* ggml-cpu.c: check type of source tensors
* ggml-cpu.c: move tensor type check to ggml_compute_forward_concat()
* ggml.c: check concatenated tensor type
* Remove tensor type check from ggml_compute_forward_concat() in ggml-cpu.c, as it was moved to ggml.c
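The commit above generalizes the CPU concat path so that ggml_concat() works for tensor types beyond F32, provided both sources share the same type; the type check itself lives in ggml.c at graph-construction time rather than in the CPU forward pass. Below is a minimal sketch of what that enables, not code from the commit: it assumes the usual public ggml API (ggml_concat, ggml_graph_compute_with_ctx in ggml-cpu.h after the backend split), and the F16 type and tensor shapes are purely illustrative.

```c
// Sketch: concatenate two same-type (here F16) tensors along dim 1.
#include <stdio.h>
#include "ggml.h"
#include "ggml-cpu.h"   // assumed location of ggml_graph_compute_with_ctx()

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Both sources must have the same type; with this change the check is
    // done in ggml.c when the op is created, not in ggml-cpu.c at compute time.
    struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 4, 2);
    struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 4, 3);

    // Concatenate along dim 1 -> result is 4 x 5, same type as the inputs.
    struct ggml_tensor * c = ggml_concat(ctx, a, b, 1);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/ 1);

    printf("concat result: type=%s, ne=[%lld, %lld]\n",
           ggml_type_name(c->type), (long long) c->ne[0], (long long) c->ne[1]);

    ggml_free(ctx);
    return 0;
}
```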
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| amx | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00 |
| cmake | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| kleidiai | ggml : fix kleidiai build (#12159) | 2025-03-03 13:54:08 +01:00 |
| llamafile | llamafile: use member variable instead of constant for iq4nlt (#11780) | 2025-02-13 18:05:04 +01:00 |
| CMakeLists.txt | ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154) | 2025-03-06 02:26:10 +01:00 |
| cpu-feats-x86.cpp | ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154) | 2025-03-06 02:26:10 +01:00 |
| ggml-cpu-aarch64.cpp | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00 |
| ggml-cpu-aarch64.h | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.cpp | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.h | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-impl.h | ggml-cpu: Support s390x SIMD Instruction Set (#12019) | 2025-02-22 21:39:24 +00:00 |
| ggml-cpu-quants.c | ggml-cpu: faster AVX2 variant for IQ1_M (#12216) | 2025-03-07 13:54:22 +02:00 |
| ggml-cpu-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu-traits.cpp | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-traits.h | ggml : refactor online repacking (#10446) | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu.c | ggml : ggml_compute_forward_concat() for arbitrary tensor type (ggml/1118) | 2025-03-07 14:49:44 +02:00 |
| ggml-cpu.cpp | ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (#12154) | 2025-03-06 02:26:10 +01:00 |