llama.cpp/ggml/src
Chenguang Li 10d8b2b6b0
CANN: Add ROPE sin/cos cache for reuse (#15912)
* CANN: Add ROPE sin/cos cache for reuse

Introduce a sin/cos caching mechanism in ROPE to avoid redundant
computation across layers. The cache is built by the first layer
on each device and reused by subsequent layers when the parameters match.

- Added sin_cache / cos_cache pointers and position_length tracking
- Introduced cache validity flags and the parameters the cache is keyed on:
  ext_factor, theta_scale, freq_scale, attn_factor, is_neox
- Accelerates ROPE by eliminating repeated sin/cos generation

This change reduces overhead in multi-layer scenarios while
preserving correctness by verifying parameter consistency; a minimal
sketch of the idea follows this entry.

Co-authored-by: hipudding <huafengchun@gmail.com>

* fix typo

Signed-off-by: noemotiovon <757486878@qq.com>

---------

Signed-off-by: noemotiovon <757486878@qq.com>
Co-authored-by: hipudding <huafengchun@gmail.com>
2025-09-10 18:42:00 +08:00
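
A minimal sketch of the caching pattern the commit describes: the first layer that runs ROPE fills the sin/cos tables, and every later layer reuses them as long as the cached parameters match. All names here (RopeParams, RopeCache, get_or_build) are illustrative and do not correspond to the actual ggml-cann code, and the trigonometry is simplified: YaRN extrapolation via ext_factor and the neox layout are kept in the cache key but omitted from the math.

```cpp
// Hypothetical sketch only -- not the ggml-cann implementation.
#include <cmath>
#include <cstdint>
#include <vector>

struct RopeParams {
    int64_t position_length;  // number of positions covered by the tables
    float   ext_factor;
    float   theta_scale;
    float   freq_scale;
    float   attn_factor;
    bool    is_neox;

    bool operator==(const RopeParams & o) const {
        return position_length == o.position_length &&
               ext_factor == o.ext_factor && theta_scale == o.theta_scale &&
               freq_scale == o.freq_scale && attn_factor == o.attn_factor &&
               is_neox == o.is_neox;
    }
};

struct RopeCache {
    std::vector<float> sin_cache;
    std::vector<float> cos_cache;
    RopeParams params{};
    bool valid = false;

    // Build the tables on the first call (or when parameters change);
    // later layers with matching parameters reuse them for free.
    void get_or_build(const RopeParams & p, int n_dims) {
        if (valid && params == p) {
            return;  // cache hit: layers 2..N skip the sin/cos generation
        }
        const int half = n_dims / 2;
        sin_cache.assign((size_t) p.position_length * half, 0.0f);
        cos_cache.assign((size_t) p.position_length * half, 0.0f);
        for (int64_t pos = 0; pos < p.position_length; ++pos) {
            // theta starts at pos * freq_scale and decays by theta_scale
            // per dimension pair, as in the standard ROPE formulation.
            float theta = (float) pos * p.freq_scale;
            for (int d = 0; d < half; ++d) {
                const size_t i = (size_t) pos * half + d;
                sin_cache[i] = std::sin(theta) * p.attn_factor;
                cos_cache[i] = std::cos(theta) * p.attn_factor;
                theta *= p.theta_scale;
            }
        }
        params = p;
        valid  = true;
    }
};
```

The correctness argument in the commit message maps to the equality check on the full parameter set: a layer whose ROPE parameters differ in any field simply rebuilds the tables rather than reusing stale ones, so only genuinely identical layers share the cache.
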
ggml-blas vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-cann CANN: Add ROPE sin/cos cache for reuse (#15912) 2025-09-10 18:42:00 +08:00
ggml-cpu vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-cuda HIP: use v_dot2_f32_f16 instruction for FA (#15884) 2025-09-09 14:04:43 +02:00
ggml-hip HIP: bump requirement to rocm 6.1 (#15296) 2025-08-13 20:44:30 +02:00
ggml-metal vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-musa CUDA: replace GGML_CUDA_F16 with CUDA arch checks (#15433) 2025-08-20 16:58:49 +02:00
ggml-opencl vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-rpc vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-sycl vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-vulkan vulkan: throw the oom error instead of no memory type found (#15905) 2025-09-09 22:26:03 +02:00
ggml-webgpu vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-zdnn vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
CMakeLists.txt ggml: initial IBM zDNN backend (#14975) 2025-08-15 21:11:22 +08:00
ggml-alloc.c llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-backend-impl.h vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-backend-reg.cpp ggml: initial IBM zDNN backend (#14975) 2025-08-15 21:11:22 +08:00
ggml-backend.cpp vulkan: sort graph to allow more parallel execution (#15850) 2025-09-09 02:10:07 +08:00
ggml-common.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-impl.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-opt.cpp finetune: SGD optimizer, more CLI args (#13873) 2025-08-14 12:03:57 +02:00
ggml-quants.c ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (#15379) 2025-08-18 09:23:56 +02:00
ggml-quants.h llama : add gpt-oss (#15091) 2025-08-05 22:10:36 +03:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c cuda : fix supports_op condition for get_rows when number of blocks is too large (#15868) 2025-09-08 13:56:51 +03:00
ggml.cpp ggml : Print backtrace on uncaught C++ exceptions (ggml/1232) 2025-06-01 13:43:57 +03:00
gguf.cpp gguf: gguf_writer refactor (#15691) 2025-09-05 11:34:28 +02:00