llama.cpp/ggml
Latest commit: 0a5a3b5cdf by Aman Gupta, 2025-06-30 23:57:04 +08:00
Add Conv2d for CPU (#14388)

* Conv2D: Add CPU version
* Half decent
* Tiled approach for F32
* remove file
* Fix tests
* Support F16 operations
* add assert about size
* Review: further formatting fixes, add assert and use CPU version of fp32->fp16
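
The "Tiled approach for F32" commit refers to evaluating the convolution directly over small output tiles instead of materializing a large im2col buffer. The actual kernel lives under src/ in this tree; the snippet below is only a minimal, self-contained sketch of that general idea under assumed conditions (single image, NCHW-style layout, stride 1, no padding), and the names such as conv2d_f32_tile, TILE_H and TILE_W are hypothetical, not taken from the ggml source.

```c
#include <stddef.h>

/* Minimal illustration of a direct, output-tiled F32 conv2d.
 * Hypothetical sketch; not the actual ggml kernel. */

#define TILE_H 4
#define TILE_W 4

static void conv2d_f32_tile(
        const float * src,   /* [IC][IH][IW]     input   */
        const float * ker,   /* [OC][IC][KH][KW] kernel  */
        float       * dst,   /* [OC][OH][OW]     output  */
        int IC, int IH, int IW,
        int OC, int KH, int KW,
        int OH, int OW) {
    for (int oc = 0; oc < OC; ++oc) {
        /* walk the output in TILE_H x TILE_W blocks so the kernel weights and
         * the touched input rows stay hot in cache while each block is filled */
        for (int oy0 = 0; oy0 < OH; oy0 += TILE_H) {
            for (int ox0 = 0; ox0 < OW; ox0 += TILE_W) {
                const int oy1 = oy0 + TILE_H < OH ? oy0 + TILE_H : OH;
                const int ox1 = ox0 + TILE_W < OW ? ox0 + TILE_W : OW;
                for (int oy = oy0; oy < oy1; ++oy) {
                    for (int ox = ox0; ox < ox1; ++ox) {
                        float acc = 0.0f;
                        for (int ic = 0; ic < IC; ++ic) {
                            for (int ky = 0; ky < KH; ++ky) {
                                for (int kx = 0; kx < KW; ++kx) {
                                    /* stride 1, no padding: reads input pixel (oy+ky, ox+kx) */
                                    acc += src[(ic*IH + oy + ky)*IW + ox + kx] *
                                           ker[((oc*IC + ic)*KH + ky)*KW + kx];
                                }
                            }
                        }
                        dst[(oc*OH + oy)*OW + ox] = acc;
                    }
                }
            }
        }
    }
}
```

The "Support F16 operations" and "use CPU version of fp32->fp16" notes presumably cover the half-precision variant of the same kernel, converting between F16 and F32 with the CPU conversion helpers rather than a separate code path.
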
Name            Last commit                                            Last updated
cmake           ggml-cpu : rework weak alias on apple targets (#14146) 2025-06-16 13:54:15 +08:00
include         Add Conv2d for CPU (#14388)                            2025-06-30 23:57:04 +08:00
src             Add Conv2d for CPU (#14388)                            2025-06-30 23:57:04 +08:00
.gitignore      vulkan : cmake integration (#8119)                     2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: enable IBM NNPA Vector Intrinsics (#14317)   2025-06-25 23:49:04 +02:00