nullname
beff5c4b78
feat: op perf opt ( #38 )
...
* add op define xml
* copy qnn libs in cmake
* fix htp skel path
* add windows copy file list
* wip
* add generated package
* remove unused params
* add cmake list
* set qnn sdk and hexagon sdk path
* wip
* wip
* fix tools version
* fix compiling error
* fix dims calc
* wip
* add mulmat 2d
* wip
* reduction
* wip
* wip
* fix compiling error in x64
* wip
* fix device description in emulator
* wip
* add flag
* copy necessary libs
* wip
* load HtpPrepare first for emulator
* enable custom op for 2d matrix
* verify op config before add to node
* Revert "verify op config before add to node"
This reverts commit 206dec826e560625e053c4c78e023994f993526e.
* wip
* wip
* wip
* revert tool version change
* use hexagon sdk version 5.5.0
https://docs.qualcomm.com/bundle/publicresource/topics/80-77512-2/release-notes-wrapper.html?product=1601111740010422#5.5.0
* wip
* move to sub dir
* add hexagon npu device and server lib
* fix npu lib build
* refactoring: rename QNNBackend enum
* fix compiling error
* wip
* remove qnn/backend.hpp
* add hexagon dsp host layer
* extract rpc_mem from qnn submodule
* fix dsp compiling error
* wip
* wip
* open and close npu device
* split objects into separate files
* fix linking error
* add npu_tensor
* add host graph
* map rpc buffer before usage
* fix some todos
* add shared module
* split rpc_interface from rpc_mem
* get dsp arch from device
* wip
* rename host classes
* fix hexagon sdk arch getter
* fix device open
* fix linking error
* fix crash
* use tensor_data_type
* fix npu lib crash
* fix debug log print
* skip empty graph
* wip
* add log
* fix unmap fail
* fix tensor set
* remove some logs
* flush memory back after finishing
* fix nb
* wip
* wip
* add helper function
* impl add op
* fix some add ops in test-backend-ops
* add elt wise sub and mul
* fix crash on some inplace op
* wip
* fix elt wise op calc
* wip
* split mul_mat into file
* add caps array
* wip
* wip
* print supported/unsupported ops
* copy lldb-server for newer android sdk
* add tensor_spec
* add assert
* fix crash when loading model
* rename cmake option
* fix name
* fix device memory and description
* fix compiling error on qnn only build
* fix some potential UBs
* fix comments
2025-04-21 12:06:16 +08:00
hongruichen
9e41f79403
fix compiling error after merge master
2025-04-16 11:16:26 +08:00
hongruichen
a004951bb9
Merge branch 'master' into dev-refactoring
2025-04-16 00:39:25 +08:00
Georgi Gerganov
f8f820cc4d
metal : add FA-vec kernels for head size 96 ( #12952 )
...
ggml-ci
2025-04-15 14:45:05 +03:00
hipudding
54a7272043
CANN: Add x86 build ci ( #12950 )
...
* CANN: Add x86 build ci
* CANN: fix code format
2025-04-15 12:08:55 +01:00
David Huang
84778e9770
CUDA/HIP: Share the same unified memory allocation logic. ( #12934 )
...
Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.
2025-04-15 11:20:38 +02:00
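The change above swaps a compile-time switch for a runtime one. As a rough illustration only (the helper names here are made up, not the actual ggml-cuda code), an environment-variable gate for unified memory might look like this:

```cpp
// Hypothetical sketch: gating unified-memory allocation on an env var at
// runtime instead of a compile-time flag, so one binary can serve both
// integrated and dedicated GPUs.
#include <cstdlib>
#include <cuda_runtime.h>

static bool unified_memory_enabled() {
    const char * env = std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY");
    return env != nullptr && env[0] == '1';
}

static cudaError_t device_alloc(void ** ptr, size_t size) {
    if (unified_memory_enabled()) {
        return cudaMallocManaged(ptr, size); // migratable host/device memory
    }
    return cudaMalloc(ptr, size);            // plain device allocation
}
```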
Akarshan Biswas
510676475f
SYCL: Add ROPE vision kernel ( #12887 )
...
* SYCL: Add ROPE vision kernel
* Add comment about rope mode
2025-04-15 10:37:42 +02:00
Juk Armstrong
daa422881a
llama : DeepSeek V2/V3 MLA implementation ( #12801 )
...
* Merged using squash to remove all noise commit messages
* Force flash attention off for `LLM_ARCH_DEEPSEEK2` - embedding too large
* Removed 3 conts (2x RoPE and 1x RMS-norm)
* Changed to use `<cmath>` instead of `<math.h>`
* Reverted removal of the 3 conts
* Used `reshape` in `llm_graph_context::build_attn_mha()`
* Use `k_pe = ggml_reshape`
* Removed the 3 conts again
* Removed the 3D views of `wk_b` and `wv_b`, and just save them as 3D in GGUF
* Removed MQA optimisation from `build_attn_mha()` as no gains now
* Simplified `is_mla` branch in `llm_build_deepseek2()`
* Removed `build_attn_mla` and added `nullptr` to all `build_attn` calls
* Fixed call to `build_attn` in `llm_build_t5_enc`
2025-04-15 09:49:57 +03:00
Srihari-mcw
eccc7a1602
ggml : Add AVX512 implementation of GEMM - Q4_Kx8 ( #12829 )
...
* Add AVX512 implementation of GEMM - q4kx8
* Update changes to remove unnecessary whitespaces
2025-04-15 09:22:36 +03:00
Chenguang Li
0019279bb5
CANN: Opt ROPE optimization ( #12865 )
...
* [CANN]Opt ROPE optimization
* [CANN]Codestyle adjustment
* [CANN]Fix the ROPE precision issue
* [CANN]codestyle fix
* [CANN]add rope unsupport case
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-15 10:09:35 +08:00
Xinpeng Dou
b0c75ac9f9
CANN: Optimize CANN buffer pool memory management ( #12875 )
...
Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the VMM pool is available and GGML_CANN_DISABLE_VMM_POOL
is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
2025-04-15 10:04:24 +08:00
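The three-way pool selection described in the commit above can be pictured with a small sketch. The enum and function below are illustrative, not the actual CANN backend code; only the two preprocessor flags come from the commit message:

```cpp
// Hypothetical sketch of the pool selection described above.
enum class pool_kind { vmm, buf_prio, legacy };

static pool_kind select_cann_pool(bool vmm_available) {
    (void) vmm_available; // may go unused depending on the flags below
#ifndef GGML_CANN_DISABLE_VMM_POOL
    if (vmm_available) {
        return pool_kind::vmm;  // 1. VMM pool by default when usable
    }
#endif
#ifdef GGML_CANN_ENABLE_BUF_PRIO_POOL
    return pool_kind::buf_prio; // 2. priority queue-based pool if enabled
#else
    return pool_kind::legacy;   // 3. fall back to the traditional pool
#endif
}
```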
Russyyds
d6d2c2ab8c
Add performance print for gemma3 in example ( #12929 )
2025-04-14 19:18:20 +02:00
Akarshan Biswas
75afa0ae31
SYCL: Fix im2col ( #12910 )
...
* SYCL: Fix im2col
* restore local workgroup size adjustments for large inputs
* restore format
2025-04-14 14:23:53 +02:00
Radoslav Gerganov
c772d54926
rpc : use ggml_context_ptr ( #12938 )
2025-04-14 13:59:34 +03:00
Neo Zhang Jianyu
81c7e64fc2
disable curl lib check; this action was missed by commit bd3f59f812 ( #12761 ) ( #12937 )
2025-04-14 18:19:07 +08:00
Georgi Gerganov
526739b879
sync : ggml
...
ggml-ci
2025-04-14 09:26:15 +03:00
cmdr2
a25355e264
cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190)
2025-04-14 09:26:15 +03:00
SXX
e959d32b1c
ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register ( #12773 )
...
* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register
* simplifies the codebase by removing redundant functions
2025-04-14 08:47:55 +03:00
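For context on the commit above: on AVX512-VNNI, `_mm512_dpbusd_epi32` fuses the unsigned-by-signed 8-bit multiply and the 32-bit accumulation into a single instruction. A minimal before/after sketch (function names are illustrative, not the ggml code):

```cpp
#include <immintrin.h>

// Pre-VNNI sequence: widen to 16-bit products, pair-sum to 32-bit, then add.
__m512i dot_u8s8_legacy(__m512i acc, __m512i a, __m512i b) {
    __m512i p16 = _mm512_maddubs_epi16(a, b);                   // u8*s8 -> s16 pairs
    __m512i p32 = _mm512_madd_epi16(p16, _mm512_set1_epi16(1)); // s16 pairs -> s32
    return _mm512_add_epi32(acc, p32);
}

// VNNI: accumulate directly into the result register in one instruction.
__m512i dot_u8s8_vnni(__m512i acc, __m512i a, __m512i b) {
    return _mm512_dpbusd_epi32(acc, a, b);
}
```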
Alan Gray
307bfa253d
ggml: disable CUDA graphs for unsupported DUP and CONT node types ( #12891 )
...
Fixes #12798
2025-04-13 23:12:21 +02:00
Ed Addario
71e90e8813
quantize: Handle user-defined quantization levels for additional tensors ( #12511 )
...
* Add llama_model_quantize_params parameters
* Add new quantize parameters parsing and validation
* Update usage
* Add new parameters defaults
* Add new quantization parameters logic
* Minor refactoring as per the contributors' coding guidelines
* Update descriptions to match existing style
* Implement general --tensor-type instead of tensor-specific command option
* Fix implied type bug
* Restore missing #includes
* Add regex capability for tensor selection
* Refactor function name and update ALLOWED_TENSOR_TYPE
* Add missing #include
* Handle edge case when tensor name is cls.output
* Minor logging improvement
2025-04-13 21:29:28 +03:00
Prajwal B Mehendarkar
bc091a4dc5
common : Define cache directory on AIX ( #12915 )
2025-04-12 17:33:39 +02:00
Jeff Bolz
a4837577aa
vulkan: use aligned loads for flash attention mask ( #12853 )
...
Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.
2025-04-12 10:44:48 +02:00
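A one-line illustration of the alignment idea behind the commit above; the helper and the constants are hypothetical, not the shader's actual stride computation:

```cpp
#include <cstddef>

// Round a row stride up to a vector-friendly boundary so each row starts on
// an aligned address and wide (aligned) loads can be used.
constexpr size_t align_up(size_t n, size_t alignment) {
    return (n + alignment - 1) / alignment * alignment;
}

// e.g. a 100-element mask row padded so every row begins at a multiple of 16
static_assert(align_up(100, 16) == 112, "stride rounded to next multiple");
```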
Matt Clayton
e59ea539b8
llava: Fix cpu-only clip image encoding segfault ( #12907 )
...
* llava: Fix cpu-only clip image encoding
* clip : no smart ptr for ggml_backend_t
* Fix for backend_ptr push_back
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2025-04-12 07:29:03 +02:00
Georgi Gerganov
c94085df28
server : add VSCode's Github Copilot Chat support ( #12896 )
...
* server : add VSCode's Github Copilot Chat support
* cont : update handler name
2025-04-11 23:37:41 +03:00
yuri@FreeBSD
e8a62631b3
rpc : Set cache directory in rpc-server.cpp on FreeBSD ( #12903 )
2025-04-11 22:04:14 +02:00
Olivier Chafik
b6930ebc42
`tool-call`: fix non-tool-calling grammar crashes w/ Qwen / Hermes 2 templates ( #12900 )
...
* `tool-call`: don't call common_chat_params_init_hermes_2_pro when there aren't tools (or when there's a schema)
* test all chat formats w/o tools
2025-04-11 21:47:52 +02:00
yuri@FreeBSD
68b08f36d0
common : Define cache directory on FreeBSD ( #12892 )
2025-04-11 21:45:44 +02:00
Ewan Crawford
578754b315
sycl: Support sycl_ext_oneapi_limited_graph ( #12873 )
...
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support graph update
2025-04-11 15:32:14 +02:00
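A hedged sketch of what such an aspect check can look like, assuming the `ext_oneapi_graph` and `ext_oneapi_limited_graph` aspects defined by the oneAPI SYCL-Graph extension; treat this as illustrative rather than the backend's exact code:

```cpp
#include <sycl/sycl.hpp>

// Accept devices exposing either the full or the limited SYCL-Graph aspect;
// limited-graph devices can record and replay but not update graphs.
bool device_supports_graphs(const sycl::device & dev) {
    return dev.has(sycl::aspect::ext_oneapi_graph) ||
           dev.has(sycl::aspect::ext_oneapi_limited_graph);
}
```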
tastelikefeet
b2034c2b55
contrib: support modelscope community ( #12664 )
...
* support download from modelscope
* support login
* remove comments
* add arguments
* fix code
* fix win32
* test passed
* fix readme
* revert readme
* change to MODEL_ENDPOINT
* revert tail line
* fix readme
* refactor model endpoint
* remove blank line
* fix header
* fix as comments
* update comment
* update readme
---------
Co-authored-by: tastelikefeet <yuze.zyz@alibaba-inc.com>
2025-04-11 14:01:56 +02:00
Yuxuan Zhang
06bb53ad9b
llama-model : add Glm4Model implementation for GLM-4-0414 ( #12867 )
...
* GLM-4-0414
* use original one
* Using with tensor map
* fix bug
* change order
* change order
* format with flake8
2025-04-11 12:10:10 +02:00
Xuan-Son Nguyen
0c50923944
clip : use smart pointer ( ⚠️ breaking change) ( #12869 )
...
* clip : use smart pointers
* fix warmup
* add forward declaration
* missing include
* fix include (2)
* composite
* simplify batch ptr
* fix conflict
2025-04-11 12:09:39 +02:00
Akarshan Biswas
fccf9cae83
SYCL: Add fp16 type support to unary op kernels ( #12788 )
...
* SYCL: Add fp16 support to some elementwise OP kernels
* remove comment
ggml-ci
* Use static_cast directly
* remove not needed cast from tanh
* Use static cast and remove unneeded castings
* Adjust device_support_op for unary OPs
* Use cast_data and typed_data struct to deduplicate casting code
2025-04-11 16:03:50 +08:00
Daniel Han
ec6c09d0fa
convert : Llama4 RoPE fix ( #12889 )
2025-04-11 09:49:09 +02:00
R0CKSTAR
8ac9f5d765
ci : Replace freediskspace with free_disk_space in docker.yml ( #12861 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-11 09:26:17 +02:00
Daniel Bevenius
12e9158f25
xcf : add check for visionos build version ( #12854 )
...
This commit adds a check for the visionos build version used with vtool
in build-xcframework.sh. The script now checks the Xcode version and
determines whether to use "xros" or "visionos" for the build version.
This commit also invokes vtool via xcrun so that the version of vtool
in the Xcode command line tools is used instead of the one in the system
path.
Refs: https://github.com/ggml-org/whisper.cpp/pull/2994#issuecomment-2773292223
2025-04-11 09:24:34 +02:00
Xuan-Son Nguyen
5b1f13cb64
convert : proper tensor name mapping for llama4 ( #12870 )
...
* Llama-4 mapping
* remove hacky renaming
---------
Co-authored-by: Daniel Han <danielhanchen@gmail.com>
2025-04-11 09:23:37 +02:00
Xuan-Son Nguyen
8b91d5355a
llama : correct rms norm for llama 4 ( #12882 )
2025-04-11 08:49:50 +02:00
Aaron Teo
0fed24c347
ggml: fix compilation error s390x ( #12848 )
...
* ggml: fixes #12846 compilation error
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: add documentation for code change
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: refactor to type-cast and update documentation
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
* ggml: update documentation to provide full issue link
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
---------
Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
2025-04-11 08:20:07 +03:00
Georgi Gerganov
47ba87d0a4
sync : ggml
2025-04-11 00:17:47 +03:00
Georgi Gerganov
1d2b613445
tests : fix init order ( #0 )
...
ggml-ci
2025-04-11 00:17:47 +03:00
Georgi Gerganov
eb420e1148
sync : ggml
...
ggml-ci
2025-04-11 00:17:47 +03:00
cmdr2
cb79c2e7fa
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
...
fix #1186
2025-04-11 00:17:47 +03:00
Diego Devesa
fe92821ea9
ggml : add bilinear upscale support (ggml/1185)
2025-04-11 00:17:47 +03:00
Diego Devesa
459895c326
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
...
* ggml : add more generic ggml_custom op
* ggml : remove deprecated custom ops
2025-04-11 00:17:47 +03:00
Georgi Gerganov
e4bf72d631
scripts : fix sync-ggml-am.sh
2025-04-11 00:17:47 +03:00
Xuan-Son Nguyen
8b9cc7cdd8
llava : introduce libmtmd ( #12849 )
...
* wip llava2
* migrated gemma3 to llava2
* add timings
* correct pre/postfix
* fix missing include
* fix compilation unused var warn
* update llava2_tokenize
* change name llava2 --> mtmd
* improve api
* refine helpers
* Update examples/llava/mtmd.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-04-10 22:57:16 +02:00
Xuan-Son Nguyen
64eda5deb9
convert : ability to lazy-load safetensors remotely without downloading to disk ( #12820 )
...
* gguf util : add SafetensorRemote
* fix style
* convert: add --remote option
* convert : allow using lazy remote tensors
It's a bit slow for now since everything is blocking and single-threaded.
* correct metadata.name
* small style fix
* support HF_TOKEN
* convert : use writeable buffer for remote lazy tensors
* convert : fix flake8 lint regarding lambda assignment
* multithreaded download
* multithread: print debug
* fix style
* Revert "multithreaded download"
This reverts commit 42fc895ace.
* bring back _get_request_headers
---------
Co-authored-by: Francis Couture-Harpin <git@compilade.net>
2025-04-10 17:24:44 +02:00
Chenguang Li
fe5b78c896
CANN: Support more ops ( #12841 )
...
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D
* [CANN]Support COUNT_EQUAL && STEP && SGN
* [CANN]codestyle adjustment
* [CANN]codestyle adjustment
---------
Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-10 08:51:52 +08:00
Prajwal B Mehendarkar
11d07e1e69
Fixes #12823 ( #12830 )
...
* Including limits file on AIX
* Fixes #12823
2025-04-10 01:18:01 +02:00
Rudi Servo
b0091ecc1e
docker : added all CPU to GPU images ( #12749 )
2025-04-10 01:17:12 +02:00