mirror of https://github.com/google/gemma.cpp.git
We use MatVec instead of MatVecLoop for the per-head dense layers,
because there is more parallelism to exploit across the rows of the
matrix than across the number of heads. This will become even more
efficient once we rearrange the weights so that a single MatVec
operation can cover all heads.
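The parallelism argument above can be sketched as follows. The `MatVecLoop` and `MatVec` names mirror the commit message, but the signatures, the row-stacked weight layout, and the `std::thread`-based partitioning here are illustrative assumptions, not gemma.cpp's actual implementation:

```cpp
// Hedged sketch, not gemma.cpp's real API: shows why fusing per-head
// mat-vecs into one row-parallel MatVec exposes more parallel work.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Per-head loop: the natural task granularity is one head, so the
// available parallelism is capped at `heads` tasks.
void MatVecLoop(const std::vector<float>& w, const std::vector<float>& x,
                std::size_t heads, std::size_t rows_per_head,
                std::size_t cols, std::vector<float>& y) {
  for (std::size_t h = 0; h < heads; ++h) {
    for (std::size_t r = 0; r < rows_per_head; ++r) {
      const std::size_t row = h * rows_per_head + r;
      float sum = 0.0f;
      for (std::size_t c = 0; c < cols; ++c) sum += w[row * cols + c] * x[c];
      y[row] = sum;
    }
  }
}

// Fused MatVec: all heads' weight rows form one stacked matrix, so the
// work splits into heads * rows_per_head row tasks - usually far more
// than the number of heads, and easy to spread over many threads.
void MatVec(const std::vector<float>& w, const std::vector<float>& x,
            std::size_t total_rows, std::size_t cols,
            std::size_t num_threads, std::vector<float>& y) {
  std::vector<std::thread> pool;
  const std::size_t chunk = (total_rows + num_threads - 1) / num_threads;
  for (std::size_t t = 0; t < num_threads; ++t) {
    pool.emplace_back([&, t] {
      const std::size_t begin = t * chunk;
      const std::size_t end = std::min(total_rows, begin + chunk);
      for (std::size_t r = begin; r < end; ++r) {
        float sum = 0.0f;
        for (std::size_t c = 0; c < cols; ++c) sum += w[r * cols + c] * x[c];
        y[r] = sum;
      }
    });
  }
  for (auto& th : pool) th.join();
}
```

Both routines compute the same result; the fused version simply hands each thread a contiguous slice of rows from the stacked matrix, which is what makes the planned weight rearrangement pay off.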
Benchmark results (summarization with 1600 tokens for prefill
and essay writing with 500 tokens for generation):
| Num threads | Prefill BEFORE | Prefill AFTER | Generation BEFORE | Generation AFTER |
|---|---|---|---|---|
| 32 | 58.24 t/s | 61.79 t/s | 32.11 t/s | 32.62 t/s |
| 64 | 83.62 t/s | 92.00 t/s | 41.10 t/s | 41.80 t/s |
Files in this directory: `benchmark.cc`, `compress_weights.cc`, `configs.h`, `gemma.cc`, `gemma.h`, `gemma_test.cc`, `ops.h`, `ops_test.cc`, `run.cc`