llama.cpp/scripts

Latest commit 3b169441df by Georgi Gerganov: sync : ggml (#5452)
* ggml-alloc : v3 (ggml/727)

* ggml-alloc v3

ggml-ci

* fix ci

ggml-ci

* whisper : check for backend buffer allocation failures

* whisper : avoid leaks when initialization fails

* cleanup

ggml-ci

* style fixes

ggml-ci

* sync : ggml

* update llama.cpp, clip.cpp, export-lora.cpp

* update finetune.cpp, train-text-from-scratch.cpp

ggml-ci

* ggml-backend : reduce alignment to 32 to match gguf and fix mmap

---------

Co-authored-by: slaren <slarengh@gmail.com>
Date: 2024-02-12 09:16:06 +02:00
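
For background on the "whisper : check for backend buffer allocation failures" and "whisper : avoid leaks when initialization fails" items above: with ggml-alloc v3, model tensors are allocated into a backend buffer with ggml_backend_alloc_ctx_tensors(), which returns NULL when the backend cannot satisfy the request. A minimal sketch of the resulting error-handling pattern, using a hypothetical init_model_tensors() helper rather than the actual whisper.cpp code:

    // Hypothetical helper illustrating the pattern; not the actual whisper.cpp code.
    #include <stdbool.h>
    #include <stdio.h>

    #include "ggml.h"
    #include "ggml-alloc.h"
    #include "ggml-backend.h"

    static bool init_model_tensors(ggml_backend_t backend) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ ggml_tensor_overhead() * 16, // metadata only
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ true, // tensor data lives in the backend buffer
        };

        struct ggml_context * ctx = ggml_init(params);
        if (!ctx) {
            return false;
        }

        // ... create tensors with ggml_new_tensor_*() here ...

        // ggml-alloc v3: allocate all tensors of the context in one backend buffer;
        // returns NULL if the backend cannot satisfy the allocation
        ggml_backend_buffer_t buf = ggml_backend_alloc_ctx_tensors(ctx, backend);
        if (buf == NULL) {
            fprintf(stderr, "%s: failed to allocate backend buffer\n", __func__);
            ggml_free(ctx); // free the context so a failed init does not leak
            return false;
        }

        // ... load weights, run the model, then release both resources ...
        ggml_backend_buffer_free(buf);
        ggml_free(ctx);
        return true;
    }

The key points are checking the returned buffer for NULL and freeing the ggml context on the failure path so nothing leaks.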
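The "reduce alignment to 32 to match gguf and fix mmap" item refers to GGUF's default data alignment of 32 bytes: if the backend buffer requires stricter alignment than the file format guarantees, tensor data mapped directly from a GGUF file with mmap can appear misaligned to the backend. A sketch of the padding rule, following the align_offset formula from the GGUF spec (GGUF_DEFAULT_ALIGNMENT is defined to 32 in ggml):

    #include <stddef.h>

    #define GGUF_DEFAULT_ALIGNMENT 32 // mirrors ggml's GGUF_DEFAULT_ALIGNMENT

    // Pad an offset up to the next multiple of `alignment` (the GGUF spec's
    // align_offset). The backend buffer alignment must not exceed this value,
    // or tensor data mmap-ed from a GGUF file would violate it.
    static size_t align_offset(size_t offset, size_t alignment) {
        return offset + (alignment - offset % alignment) % alignment;
    }
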
Files:
LlamaConfig.cmake.in
build-info.cmake
build-info.sh
check-requirements.sh
ci-run.sh
compare-llama-bench.py
convert-gg.sh
gen-build-info-cpp.cmake
get-flags.mk
get-hellaswag.sh
get-pg.sh
get-wikitext-2.sh
get-winogrande.sh
install-oneapi.bat
qnt-all.sh
run-all-perf.sh
run-all-ppl.sh
run-with-preset.py
server-llm.sh
sync-ggml-am.sh
sync-ggml.last
sync-ggml.sh
verify-checksum-models.py