happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
llama.cpp/ggml/src/ggml-virtgpu at commit b12a56351d

Latest commit: 3fdd0b7a6e "2d tensor set/get support" by Johannes Gäßler, 2026-02-11 19:56:35 +01:00
backend/
include/
CMakeLists.txt
apir_cs_ggml-rpc-front.cpp
ggml-backend-buffer-type.cpp
ggml-backend-buffer.cpp          (2d tensor set/get support, 2026-02-11 19:56:35 +01:00)
ggml-backend-device.cpp
ggml-backend-reg.cpp
ggml-backend.cpp                 (Remove shfl and AllReduce from backend interface, 2026-02-11 14:51:37 +01:00)
ggml-remoting.h
ggmlremoting_functions.yaml
regenerate_remoting.py
virtgpu-apir.h
virtgpu-forward-backend.cpp
virtgpu-forward-buffer-type.cpp
virtgpu-forward-buffer.cpp
virtgpu-forward-device.cpp
virtgpu-forward-impl.h
virtgpu-forward.gen.h
virtgpu-shm.cpp
virtgpu-shm.h
virtgpu-utils.cpp
virtgpu-utils.h
virtgpu.cpp
virtgpu.h