llama.cpp/ggml/src
Daniel Bevenius 2f7d0ac015
ggml : add CPU backend reference implementation
This commit introduces a CPU reference implementation for GGML,
designed primarily for testing and validation purposes.

The motivation for this addition is to have a pure C CPU backend
implementation that does not use any hardware-specific optimizations
or intrinsics. This allows the CPU backend variants to be tested
against the reference implementation to ensure correctness.
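
To illustrate what "pure C" means in this context, the sketch below
implements an elementwise f32 add as a plain scalar loop, with no
intrinsics and no architecture-specific code paths. It is an
illustrative example, not code taken from this commit:
```c
#include <stddef.h>

// Reference-style elementwise add: plain scalar C that any compiler
// for any target can build. An optimized variant would dispatch to
// SSE/AVX/NEON intrinsics instead; the reference backend deliberately
// does not. Illustrative sketch only, not the actual commit code.
static void vec_add_f32_ref(size_t n, float * dst,
                            const float * a, const float * b) {
    for (size_t i = 0; i < n; i++) {
        dst[i] = a[i] + b[i];
    }
}
```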

Building:
```console
$ cmake -B build \
    -DGGML_CPU_REF_BACKEND=ON \
    -DGGML_BACKEND_DL=ON \
    -DGGML_CPU_ALL_VARIANTS=ON
```

List available CPU architectures/variants:
```console
$ ./build/bin/test-backend-ops cpu-variants --list
CPU variants:
  CPU-haswell     - 12th Gen Intel(R) Core(TM) i7-1260P
  CPU-sse42       - 12th Gen Intel(R) Core(TM) i7-1260P
  CPU-x64         - 12th Gen Intel(R) Core(TM) i7-1260P
  CPU-alderlake   - 12th Gen Intel(R) Core(TM) i7-1260P
  CPU-sandybridge - 12th Gen Intel(R) Core(TM) i7-1260P
```
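
The registered backends can also be enumerated programmatically through
the public ggml-backend API. A minimal sketch, assuming a build with
GGML_BACKEND_DL=ON so that ggml_backend_load_all() picks up the
dynamically built backend libraries (whether each CPU variant is
surfaced as its own device here is an assumption):
```c
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // Load all dynamically built backend libraries (GGML_BACKEND_DL=ON).
    ggml_backend_load_all();

    // Walk the registered devices; for CPU backends the description
    // is typically the host CPU model string, as in the listing above.
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("%-16s - %s\n",
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }
    return 0;
}
```
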
Run tests:
```console
$ ./build/bin/test-backend-ops cpu-variants --variant CPU-alderlake -o ADD
CPU-ref features:
  SSE2 = 1
CPU-alderlake features:
  SSE2 = 1
  SSE3 = 1
  SSSE3 = 1
  AVX = 1
  AVX_VNNI = 1
  AVX2 = 1
  F16C = 1
  FMA = 1
  BMI2 = 1
  LLAMAFILE = 1
  OPENMP = 1
  REPACK = 1
Testing CPU variant 'CPU-alderlake' against 'CPU-ref' backend...

 ADD(type=f16,ne=[1,1,8,1],nr=[1,1,1,1],nf=1): OK
 ADD(type=f16,ne=[1,1,1,1],nr=[32,1,1,1],nf=1): OK
 ...
```
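
test-backend-ops runs each op on both backends and compares the outputs
numerically against a small error threshold. A sketch of a
normalized-error comparison in that style (the exact metric and
threshold used by the test harness are assumptions here):
```c
#include <stddef.h>

// Normalized mean squared error between a variant's output and the
// reference output: sum((out-ref)^2) / sum(ref^2). Sketch only; the
// metric in test-backend-ops may differ in detail.
static double nmse(const float * out, const float * ref, size_t n) {
    double err = 0.0, norm = 0.0;
    for (size_t i = 0; i < n; i++) {
        double d = (double) out[i] - (double) ref[i];
        err  += d * d;
        norm += (double) ref[i] * (double) ref[i];
    }
    return norm > 0.0 ? err / norm : err;
}

// A test would pass when the error is below a small threshold, e.g.:
//   nmse(variant_out, ref_out, n) < 1e-7   (threshold is an assumption)
```
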
2026-01-02 11:50:31 +01:00
| Name | Latest commit | Date |
| --- | --- | --- |
| ggml-blas | sync : whisper.cpp (ggml/1359) | 2025-09-29 17:43:58 +03:00 |
| ggml-cann | CANN: implement the SSM_CONV operator (#17737) | 2025-12-26 09:12:04 +08:00 |
| ggml-cpu | ggml : add CPU backend reference implementation | 2026-01-02 11:50:31 +01:00 |
| ggml-cuda | cuda : fix copy of large tensors (ggml_nbytes <= INT_MAX assertion) (#18433) | 2026-01-02 00:24:20 +01:00 |
| ggml-hexagon | ggml-hexagon: create generalized functions for cpu side op (#17500) | 2025-12-22 23:13:24 -08:00 |
| ggml-hip | HIP: fix AMDGPU_TARGETS, update documentation (#16803) | 2025-10-27 21:39:49 +01:00 |
| ggml-metal | metal : add count_equal op (#18314) | 2025-12-31 10:39:48 +02:00 |
| ggml-musa | CUDA: faster tile FA, add oob checks, more HSs (#16492) | 2025-10-11 20:54:32 +02:00 |
| ggml-opencl | opencl: allow resizing transpose buffers (#18384) | 2025-12-27 15:51:14 -08:00 |
| ggml-rpc | rpc : use unordered_map::reserve and emplace (#18513) | 2026-01-02 12:09:36 +02:00 |
| ggml-sycl | sycl: add newline at the end of CMakeLists.txt (#18503) | 2025-12-31 14:23:44 +08:00 |
| ggml-vulkan | vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron (#18295) | 2026-01-01 08:58:27 +01:00 |
| ggml-webgpu | ggml webgpu: unary op suppport, code refactoring, ops support (#17764) | 2025-12-05 12:25:51 -08:00 |
| ggml-zdnn | zdnn: refactor codebase + add docs (#16178) | 2025-09-23 14:53:05 +08:00 |
| ggml-zendnn | ggml-zendnn : add ZenDNN backend for AMD CPUs (#17690) | 2025-12-07 00:13:33 +08:00 |
| CMakeLists.txt | ggml : add CPU backend reference implementation | 2026-01-02 11:50:31 +01:00 |
| ggml-alloc.c | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| ggml-backend-impl.h | rpc : add support for multiple devices (#16276) | 2025-10-04 12:49:16 +03:00 |
| ggml-backend-reg.cpp | ggml : add CPU backend reference implementation | 2026-01-02 11:50:31 +01:00 |
| ggml-backend.cpp | vulkan: extend topk_moe to handle sigmoid w/exp_probs_b for nemotron (#18295) | 2026-01-01 08:58:27 +01:00 |
| ggml-common.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-impl.h | cmake: Added more x86_64 CPU backends when building with `GGML_CPU_ALL_VARIANTS=On` (#18186) | 2025-12-28 09:33:29 +02:00 |
| ggml-opt.cpp | finetune: SGD optimizer, more CLI args (#13873) | 2025-08-14 12:03:57 +02:00 |
| ggml-quants.c | ggml : fix uninitialized is_on_grid in quantize_row_iq3_xxs_impl (#15928) | 2025-09-23 10:25:20 +02:00 |
| ggml-quants.h | llama : add gpt-oss (#15091) | 2025-08-05 22:10:36 +03:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | llama: automatically set parameters not set by the user in such a way that maximizes GPU utilization (#16653) | 2025-12-15 09:24:59 +01:00 |
| ggml.cpp | ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) | 2025-06-01 13:43:57 +03:00 |
| gguf.cpp | ggml, llama : use defaulted constructors/destructors (#17649) | 2025-12-03 07:12:18 +01:00 |