mirror of https://github.com/google/gemma.cpp.git
Supports converting all weight/activation formats to native MulT (bf16/f32).

Also:
- ConstMat/MutableMat for const correctness
- Move RowVectorBatch to allocator.h so it can be used from Matmul
- Add matmul.h so MatMulEnv can be used from Activations
- Remove kMaxThreads, detect from PerClusterPools
- Build fix: -inl.h files must be textual_hdrs, and highway.h should precede -inl.h

```
zen4 new
64, 24576, 3072, add=0, MatTA=bf16, MatTB=sfp: 616.6 GFLOPS.
64, 3072, 24576, add=0, MatTA=bf16, MatTB=sfp: 460.7 GFLOPS.
64, 24576, 3072, add=0, MatTA=f32, MatTB=sfp: 598.6 GFLOPS.
64, 3072, 24576, add=0, MatTA=f32, MatTB=sfp: 435.6 GFLOPS.

zen4 old
64, 24576, 3072, add=0, MatTA=f32, MatTB=sfp: 257.5 GFLOPS.
64, 3072, 24576, add=0, MatTA=f32, MatTB=sfp: 231.9 GFLOPS.
```

PiperOrigin-RevId: 663729812
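To illustrate the "ConstMat/MutableMat for const correctness" item, here is a minimal, hypothetical sketch of the pattern: a read-only matrix view and a writable matrix view over the same row-major storage, where the mutable view converts implicitly to the const view. The names, members, and `At` accessor below are assumptions for illustration only and are not the actual gemma.cpp definitions.

```cpp
#include <cstddef>

// Hypothetical read-only view: callers taking ConstMat cannot modify elements.
template <typename T>
struct ConstMat {
  const T* ptr;
  size_t rows, cols;
  const T& At(size_t r, size_t c) const { return ptr[r * cols + c]; }
};

// Hypothetical writable view over the same row-major layout.
template <typename T>
struct MutableMat {
  T* ptr;
  size_t rows, cols;
  T& At(size_t r, size_t c) const { return ptr[r * cols + c]; }
  // A MutableMat can always be passed where a ConstMat is expected,
  // but not the other way around, which enforces const correctness.
  operator ConstMat<T>() const { return ConstMat<T>{ptr, rows, cols}; }
};
```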
| Name |
|---|
| python |
| BUILD |
| analyze.h |
| blob_store.cc |
| blob_store.h |
| compress-inl.h |
| compress.h |
| compress_weights.cc |
| convert_weights.py |
| distortion.h |
| distortion_test.cc |
| io.cc |
| io.h |
| io_win.cc |
| nuq-inl.h |
| nuq.h |
| nuq_test.cc |
| sfp-inl.h |
| sfp.h |
| sfp_test.cc |
| test_util.h |
| weights_raw.h |