happyz/llama.cpp (mirror of https://github.com/ggerganov/llama.cpp.git)
Directory listing: llama.cpp/gguf-py/gguf at commit f153e7e7c0
Latest commit: 57b93282a2, "Merge branch 'master' into multiple-chat-templates" (Sigbjørn Skjæret, 2024-04-18 12:00:51 +02:00)
File               Last commit                                                                           Date
__init__.py        gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
constants.py       Merge branch 'master' into multiple-chat-templates                                    2024-04-18 12:00:51 +02:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
gguf_reader.py     gguf : add support for I64 and F64 arrays (#6062)                                     2024-03-15 10:46:51 +02:00
gguf_writer.py     Merge branch 'master' into multiple-chat-templates                                    2024-04-18 12:00:51 +02:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)   2023-08-30 11:25:50 +03:00
tensor_mapping.py  llama : add qwen2moe (#6074)                                                          2024-04-16 18:40:48 +03:00
vocab.py           Support converting models with multiple chat templates                                2024-04-10 15:31:05 +02:00
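The files above (notably gguf_reader.py and gguf_writer.py) implement reading and writing of GGUF container files. As a rough, standalone illustration of the fixed-size header these modules handle, here is a minimal sketch using only the standard library; the two helper functions are hypothetical (not part of the gguf package), while the field layout follows the GGUF format: a 4-byte magic, a uint32 version, a uint64 tensor count, and a uint64 metadata key/value count, all little-endian.

```python
import struct

GGUF_MAGIC = b"GGUF"  # magic bytes at the start of every GGUF file


def write_minimal_header(path, version=3, n_tensors=0, n_kv=0):
    """Hypothetical helper: write only the fixed-size GGUF header."""
    with open(path, "wb") as f:
        f.write(GGUF_MAGIC)
        # version: uint32, tensor count: uint64, metadata KV count: uint64
        f.write(struct.pack("<IQQ", version, n_tensors, n_kv))


def read_minimal_header(path):
    """Hypothetical helper: parse the fixed-size GGUF header back."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != GGUF_MAGIC:
            raise ValueError("not a GGUF file")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```

For real files, the metadata key/value pairs and tensor infos that follow this header are what GGUFReader and GGUFWriter in this directory parse and emit.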