Ed Addario
89051cda35
Update README.md
2025-08-09 14:49:44 +01:00
Ed Addario
dcac206f8e
Add --activation-statistics logic to avoid doubling the imatrix size by default
2025-08-09 14:49:25 +01:00
Ed Addario
6fe51e12f1
Fix typo in ECS formula
2025-08-09 09:12:23 +01:00
Ed Addario
59af5034f7
Update README.md
2025-08-09 01:26:23 +01:00
Ed Addario
c5ecdaa1a1
Add Euclidean–Cosine Score (ECS)
2025-08-07 19:04:49 +01:00
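The log names the metric but not its formula (the README updates above carry it). As a purely illustrative sketch, here is one plausible way to combine the two measures over a pair of activation vectors; the weighting below is an assumption, not the ECS definition from this commit:

```python
import numpy as np

# Illustrative only: blends cosine similarity with a normalized Euclidean
# distance. The actual ECS formula is defined in the branch's README, not here.
def euclidean_cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    # ||a - b|| <= ||a|| + ||b||, so this distance term stays within [0, 1]
    euc = float(np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-12))
    return cos * (1.0 - euc)  # high when directions agree and magnitudes are close
```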
Ed Addario
5bb2def02d
Add --activation-statistics parameter
2025-08-07 17:41:21 +01:00
Ed Addario
dadd90ef73
Rename report heading
2025-08-07 14:07:48 +01:00
Ed Addario
e0d6471340
Reverse conditional logic to match convention
2025-08-07 12:04:52 +01:00
Ed Addario
3e9d53c61e
Refactor variable names
2025-08-07 12:03:24 +01:00
Ed Addario
c7959edff5
Merge branch 'master' into imatrix
2025-08-07 11:51:33 +01:00
Daniel Bevenius
36d3f00e14
requirements : fix PyTorch uint64 compatibility (#15134)
This commit addresses an issue with the convert_hf_to_gguf script
which is currently failing with:
```console
AttributeError: module 'torch' has no attribute 'uint64'
```
This occurred because safetensors expects torch.uint64 to be available
in the public API, but PyTorch 2.2.x provides only limited support for
unsigned types beyond uint8. The torch.uint64 dtype exists but
is not exposed in the standard torch namespace
(see pytorch/pytorch#58734).
PyTorch 2.4.0 properly exposes torch.uint64 in the public API, resolving
the compatibility issue with safetensors. This also required torchvision
to be updated to 0.19.0 for compatibility.
Refs: https://huggingface.co/spaces/ggml-org/gguf-my-repo/discussions/186#68938de803e47d990aa087fb
Refs: https://github.com/pytorch/pytorch/issues/58734
2025-08-07 05:31:48 +02:00
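A minimal pre-flight check capturing the constraint described above; this guard is a sketch, not code from the commit (the actual fix pins versions in the requirements files):

```python
import torch

# torch.uint64 exists internally in 2.2.x but is only part of the public API
# from PyTorch 2.4.0 onward; safetensors assumes the public attribute exists.
if not hasattr(torch, "uint64"):
    raise SystemExit(
        f"PyTorch {torch.__version__} does not expose torch.uint64; "
        "upgrade to torch >= 2.4.0 before running convert_hf_to_gguf.py"
    )
```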
Juk Armstrong
476aa3fd57
Fixed name `-override-tensors` to `-override-tensor` (#15129)
2025-08-06 17:28:48 +01:00
Ed Addario
030ed3c909
Merge branch 'master' into imatrix
2025-08-05 21:58:00 +01:00
Georgi Gerganov
fd1234cb46
llama : add gpt-oss (#15091)
* oai moe
* compat with new checkpoint
* add attn sink impl
* add rope scaling yarn
* logits match with latest transformers code
* wip chat template
* rm trailing space
* use ggml_scale_bias
* rm redundant is_swa_all
* convert interleaved gate_up
* graph : fix activation function to match reference (#7)
* vocab : handle o200k_harmony special tokens
* ggml : add attention sinks support (#1)
* llama : add attn sinks
* ggml : add attn sinks
* cuda : add attn sinks
* vulkan : add support for sinks in softmax
remove unnecessary return
* ggml : add fused swiglu_oai op (#11)
* ggml : add fused swiglu_oai op
* Update ggml/src/ggml-cpu/ops.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* update CUDA impl
* cont : metal impl
* add vulkan impl
* test-backend-ops : more test cases, clean up
* llama : remove unfused impl
* remove extra lines
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: slaren <slarengh@gmail.com>
* repack mxfp4 upon conversion
* clean up a bit
* enable thinking
* add quick hack to render only some special tokens
* fix bf16 conversion
* remove vocab hack
* webui ok
* support chat parsing for gpt-oss
* fix webui
* direct mapping mxfp4, FINALLY
* force using mxfp4
* properly use lazy tensor
* ggml : add mxfp4
ggml : use e8m0 conversion instead of powf
Co-authored-by: Diego Devesa <slarengh@gmail.com>
change kvalues_mxfp4 table to match e2m1 (#6)
metal : remove quantization for now (not used)
cuda : fix disabled CUDA graphs due to ffn moe bias
vulkan : add support for mxfp4
cont : add cm2 dequant
* ggml : add ggml_add_id (#13)
* ggml : add ggml_add_id
* add cuda impl
* llama : add weight support check for add_id
* perf opt
* add vulkan impl
* rename cuda files
* add metal impl
* allow in-place ggml_add_id
* llama : keep biases on CPU with --cpu-moe
* llama : fix compile error
ggml-ci
* cuda : add fallback for __nv_cvt_e8m0_to_bf16raw
ggml-ci
* cleanup
ggml-ci
* sycl : fix supports_op for MXFP4
ggml-ci
* fix Unknown reasoning format
* ggml-cpu : fix AVX build
ggml-ci
* fix hip build
ggml-ci
* cuda : add mxfp4 dequantization support for cuBLAS
ggml-ci
* ggml-cpu : fix mxfp4 fallback definitions for some architectures
ggml-ci
* cuda : fix version required for __nv_cvt_e8m0_to_bf16raw
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Co-authored-by: slaren <slarengh@gmail.com>
2025-08-05 22:10:36 +03:00
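One sub-commit above reads "ggml : use e8m0 conversion instead of powf". e8m0 is an 8-bit exponent-only scale format used by mxfp4; the following is a hedged sketch of the idea, with the special values omitted (the details in ggml may differ):

```python
import struct

# Decode an e8m0 scale (biased exponent only, no mantissa) by placing the
# byte directly into the float32 exponent field instead of computing
# 2.0 ** (e - 127) with powf. e == 0 and e == 255 would need special
# handling (subnormal / NaN) and are omitted in this sketch.
def e8m0_to_float(e: int) -> float:
    bits = (e & 0xFF) << 23
    return struct.unpack("<f", struct.pack("<I", bits))[0]

assert e8m0_to_float(127) == 1.0  # 2 ** (127 - 127)
```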
Ed Addario
88854c9179
Refactor legacy mode
2025-08-05 14:16:45 +01:00
Ed Addario
4c3fea89d6
Update report layout
2025-08-05 13:32:59 +01:00
Ed Addario
49996a19da
Refactor variable names
2025-08-05 13:32:46 +01:00
Ed Addario
aea9b31db5
Make ZD Score two-tailed
2025-08-05 12:57:13 +01:00
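"Two-tailed" here presumably means flagging deviations in both directions; a sketch under that assumption, with illustrative names (the commit's actual variables are not shown in the log):

```python
def is_zd_outlier(value: float, mean: float, std: float, threshold: float = 3.0) -> bool:
    # Two-tailed: both unusually large and unusually small values count.
    z = (value - mean) / std if std > 0.0 else 0.0
    return abs(z) > threshold
```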
Alex Wu
22f060c9c4
webui: fix markdown table (#15081)
* webui: fix markdown table
* webui: fix table display with themes
2025-08-05 13:56:44 +02:00
Ed Addario
906548a00a
Update aggregated sum of squared activations per layer
2025-08-05 12:06:19 +01:00
Ed Addario
b37393423d
Compute aggregated (per layer) l2 norm
2025-08-05 08:54:57 +01:00
Ed Addario
5e40cf4f1c
Do not resize if in_sum is null
2025-08-05 00:18:53 +01:00
compilade
19f68fa5a4
imatrix : warn when GGUF imatrix is saved without .gguf suffix (#15076)
* imatrix : add warning when suffix is not .gguf for GGUF imatrix
* imatrix : only warn about suffix when output format is unspecified
2025-08-04 23:26:52 +02:00
Ed Addario
adbff66394
Merge branch 'master' into imatrix
2025-08-04 22:16:10 +01:00
Ed Addario
c39c4e2a33
Refactor variable name
2025-08-04 22:15:50 +01:00
Sigbjørn Skjæret
2721257e3e
quantize : fix confusing error message if ftype is invalid (#15071)
2025-08-04 18:11:02 +02:00
compilade
d31192b4ee
imatrix : use GGUF by default (#14842)
* imatrix : use GGUF by default
* imatrix : use GGUF regardless of the output filename
The legacy format can only be produced with --output-format dat
2025-08-03 22:00:05 +02:00
compilade
0a2f5496be
imatrix : fix 3d activation handling for hybrid and recurrent models (#14994)
* imatrix : use a single count for dense 3d tensors
* imatrix : fix 3d activations when model tensor is 2d
* imatrix : fix 3d tensor counts
2025-08-03 21:49:13 +02:00
Ed Addario
f1c2a4ca3f
Fix printing l2 norm when calc_mode = 1
2025-08-03 17:14:46 +01:00
Ed Addario
90cb1be99d
Minor cosmetic changes
2025-08-03 16:57:27 +01:00
Ed Addario
2117c4e54b
Update aggregated statistic report layout
2025-08-03 16:38:02 +01:00
Ed Addario
a6155a8125
Add compute_layer_statistics() function
2025-08-03 16:35:03 +01:00
Ed Addario
be60469f25
Refactor function names
2025-08-03 15:10:17 +01:00
Ed Addario
fce05aac9e
Refactor lambda into compute_tensor_averages() function
2025-08-03 13:03:21 +01:00
Ed Addario
5324558132
Update table layout
2025-08-03 10:28:47 +01:00
Ed Addario
4d1325e1eb
Refactor variables
2025-08-03 10:28:23 +01:00
Ed Addario
a32a2ecbed
Reformat report layout
2025-08-03 00:51:33 +01:00
Ed Addario
4c01f51ae1
Remove inactive
2025-08-03 00:51:12 +01:00
Ed Addario
fc8f92596f
Update table display
2025-08-02 16:46:27 +01:00
Ed Addario
ee2509f563
Adjust threshold
2025-08-02 16:45:56 +01:00
Ed Addario
9b841eb696
Compute l2 norm
2025-08-02 16:45:09 +01:00
Ed Addario
b7fb362d8e
Compute cosine similarity based on activations
2025-08-02 16:43:49 +01:00
Ed Addario
cce514a392
Compute entropy for activations
2025-08-02 16:40:40 +01:00
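The three commits above (l2 norm, cosine similarity, entropy) each add a statistic over activations. A hedged sketch of what such computations typically look like; the normalization, and what is compared against what, are assumptions rather than details read from the code:

```python
import numpy as np

def activation_statistics(a: np.ndarray, b: np.ndarray) -> dict[str, float]:
    l2 = float(np.linalg.norm(a))  # Euclidean (l2) norm of the activations
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    p = np.square(a) / (np.square(a).sum() + 1e-12)   # energies as a distribution
    entropy = float(-(p * np.log2(p + 1e-12)).sum())  # Shannon entropy, in bits
    return {"l2": l2, "cosine": cosine, "entropy": entropy}
```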
Ed Addario
9744a4a1c6
Determine calculation mode
2025-08-02 16:36:12 +01:00
Ed Addario
78ddb475de
Fix problem when GGUF does not have in_sum
2025-08-02 16:31:21 +01:00
R0CKSTAR
3025b621d1
llama-bench: rename DB table name from test to llama_bench (#15003)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-08-02 17:20:40 +08:00
Johannes Gäßler
f906275537
server: enable token array inputs for OAI API (#15001)
2025-08-02 10:12:41 +02:00
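A hedged usage example for the change above: an OpenAI-style completion request whose prompt is an array of token ids rather than a string. The endpoint and port follow common llama-server defaults; the token ids are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={"prompt": [1, 15043, 3186], "max_tokens": 16},  # token ids, not text
)
print(resp.json())
```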
Ed Addario
2097f038b0
Refactor variable names
2025-07-31 20:46:40 +01:00
tc-mb
952a47f455
mtmd : support MiniCPM-V 4.0 (#14983)
* support minicpm-v 4
* add md
* support MiniCPM-o 4.0
* add default location
* temp rm MiniCPM-o 4.0
* fix code
* fix "minicpmv_projector" default path
2025-07-31 17:22:17 +02:00
g2mt
94933c8c2e
server : implement universal assisted decoding (#12635)
* llama-server : implement universal assisted decoding
* Erase prompt tail for kv-cache
* set vocab_dft_compatible in common_speculative
* rename ctx_main to ctx_tgt
* move vocab_dft_compatible to spec struct
* clear mem_dft, remove mem
* detokenize id_last for incompatible models
* update comment
* add --spec-replace flag
* accept special tokens when translating between draft/main models
* Escape spec-replace
* clamp draft result to size to params.n_draft
* fix comment
* clean up code
* restore old example
* log common_speculative_are_compatible in speculative example
* fix
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Update common/speculative.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-07-31 14:25:23 +02:00
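The bullet list above outlines the mechanism: when the draft model's vocabulary is incompatible with the target's, the draft output is detokenized to text and re-tokenized for the target model to verify. A conceptual sketch; every name and method here is hypothetical:

```python
def propose_draft_tokens(draft_model, target_tokenizer, prompt_text: str, n_draft: int) -> list[int]:
    # Draft in the draft model's own vocabulary, then bridge through text
    # (cf. "detokenize id_last for incompatible models" above).
    draft_ids = draft_model.generate(prompt_text, max_new_tokens=n_draft)
    draft_text = draft_model.detokenize(draft_ids)
    return target_tokenizer.encode(draft_text)  # caller clamps the result to n_draft
```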