llama.cpp/src
Daniel Bevenius a057897ad4
llama : add xcframework build script (#11996)
* llama : add xcframework build script

This commit adds a script to build an XCFramework for the Apple
iOS, macOS, visionOS, and tvOS platforms.

The generated XCFramework can then be added to a project and used in
the same way as a regular framework. The llama.swiftui example project
has been updated to use the XCFramework and can be started using the
following command:
```console
$ open examples/llama.swiftui/llama.swiftui.xcodeproj/
```

Refs: https://github.com/ggml-org/llama.cpp/issues/10747

* examples : remove llama.cpp (source dir ref) from project.pbxproj

This commit removes the reference to llama.cpp from the project.pbxproj
file since Package.swift has been removed.

* ci : updated build.yml to use build-xcframework.sh

* ci : add xcframework build to github releases

This commit adds the ability to create a GitHub release with the
xcframework build artifact.

* scripts : add apple app validation scripts

This commit adds scripts that can validate iOS, macOS, tvOS, and
visionOS applications. The scripts create a simple test app project,
copy the llama.xcframework to the test project, build and archive the
app, create an IPA from the archive, and validate the IPA using altool.

The motivation for this is to provide some basic validation and
hopefully avoid having to manually validate apps in Xcode.
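The archive → IPA → altool flow described above can be sketched roughly as
follows. This is a minimal illustration, not the actual script: the project
name, scheme, paths, and the `API_KEY`/`API_ISSUER` variables are all
assumptions. `DRY_RUN=1` (the default here) prints each command instead of
executing it, since the real flow requires Xcode and App Store Connect
credentials.

```shell
#!/bin/sh
# DRY_RUN=1 echoes commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. Build and archive a test app that embeds llama.xcframework.
run xcodebuild -project TestApp.xcodeproj -scheme TestApp \
    -sdk iphoneos archive -archivePath build/TestApp.xcarchive

# 2. Export an IPA from the archive.
run xcodebuild -exportArchive -archivePath build/TestApp.xcarchive \
    -exportOptionsPlist ExportOptions.plist -exportPath build

# 3. Validate the IPA with altool.
run xcrun altool --validate-app -f build/TestApp.ipa -t ios \
    --apiKey "$API_KEY" --apiIssuer "$API_ISSUER"
```

`xcodebuild archive`, `xcodebuild -exportArchive`, and `xcrun altool
--validate-app` are the standard Apple tooling for each step; the wrapper
function only exists so the sketch can be read (or dry-run) without a signing
identity.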

* llama : remove Package.swift

This commit removes the Package.swift file, as we are now building an
XCFramework for the project.

* llama : remove Sources and spm-headers directories

* llama : use TargetConditionals.h for visionOS/tvOS
2025-03-05 06:30:31 +01:00
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00 |
| llama-adapter.cpp | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-adapter.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-arch.cpp | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| llama-arch.h | Add Jinja template support (#11016) | 2025-01-21 13:18:51 +00:00 |
| llama-batch.cpp | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-batch.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-chat.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| llama-chat.h | llama : add support for GLM-Edge and GLM-Edge-V series models (#10573) | 2025-02-02 09:48:46 +02:00 |
| llama-context.cpp | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-context.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-cparams.cpp | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-cparams.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-grammar.cpp | llama : fix indentation in llama-grammar [no ci] (#11943) | 2025-02-19 06:16:23 +01:00 |
| llama-grammar.h | llama : fix typo in llama-grammar.h [no ci] (#11816) | 2025-02-12 09:40:01 +02:00 |
| llama-hparams.cpp | llama: add support for QRWKV6 model architecture (#11001) | 2025-01-10 09:58:08 +08:00 |
| llama-hparams.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-impl.cpp | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| llama-impl.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| llama-kv-cache.cpp | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-kv-cache.h | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| llama-mmap.cpp | llama : add xcframework build script (#11996) | 2025-03-05 06:30:31 +01:00 |
| llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00 |
| llama-model-loader.cpp | llama : minor fixes for up llama load model speed (#11448) | 2025-01-27 14:42:09 +01:00 |
| llama-model-loader.h | llama : add `llama_model_load_from_splits` (#11255) | 2025-01-16 13:54:08 +01:00 |
| llama-model.cpp | llama : add Phi-4-mini support (supersede #12099) (#12108) | 2025-02-28 12:44:11 +01:00 |
| llama-model.h | rpc : early register backend devices (#11262) | 2025-01-17 10:57:09 +02:00 |
| llama-quant.cpp | llama : add `llama_model_load_from_splits` (#11255) | 2025-01-16 13:54:08 +01:00 |
| llama-quant.h | llama : refactor `src/llama.cpp` (#10902) | 2025-01-03 10:18:53 +02:00 |
| llama-sampling.cpp | sampling: add Top-nσ sampler (#11223) | 2025-02-13 08:45:57 +02:00 |
| llama-sampling.h | llama : add `llama_vocab`, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| llama-vocab.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| llama-vocab.h | llama : remove notion of CLS token (#11064) | 2025-01-12 12:15:53 +02:00 |
| llama.cpp | cont : fix mmap flag print (#11699) | 2025-02-08 16:49:38 +02:00 |
| unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00 |
| unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00 |
| unicode.cpp | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| unicode.h | unicode : improve naming style (#10838) | 2024-12-16 12:31:45 +02:00 |