Jan Wassenberg
b831fa8482
1.3x prefill, 0.95x decode: matmul replacing last matvec
...
Before: 38.28 prefill, 9.17 decode (with profiler enabled, prompt = 330 tok)
```
Gen.FFW : 15414 x 4692352 = 24.166318
Gen.Attention.SumHeads : 15414 x 1394804 = 7.183451 !!
Gen.Embedding : 361 x 49961894 = 6.026297
Gen.Attention.QKV : 15414 x 1005125 = 5.176546
Gen.Attention.DotSoftmax : 15414 x 885480 = 4.560357
RopeAndMulBy : 696528 x 11867 = 2.761818
```
After: 49.80 prefill, 8.68 decode
```
Gen.FFW : 14448 x 5312783 = 25.646868
Gen.Embedding : 338 x 63044815 = 7.119845
Gen.Attention.QKV : 14448 x 1115003 = 5.382557
Gen.Attention.DotSoftmax : 14448 x 897577 = 4.332957
RopeAndMulBy : 673344 x 11886 = 2.674156
Gen.Attention.SumHeads : 14448 x 518291 = 2.501993 !!
```
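The speedup comes from replacing the last remaining per-token matvec (the attention head summation flagged as Gen.Attention.SumHeads above) with a single matmul over all prefill tokens, so the weight matrix is read once per batch instead of once per token. A minimal scalar sketch of the difference; the names, layouts and loops are illustrative assumptions, not the gemma.cpp kernels:
```
// Sketch only; names, row-major layouts and the scalar loops are assumptions,
// not the gemma.cpp kernels. W is [rows x cols], X holds `batch` input vectors
// of length `cols`, Y receives `batch` output vectors of length `rows`.
#include <cstddef>

// Before: one matrix*vector product per prefill token, so every element of W
// is re-read from memory for each token.
void MatVec(const float* W, const float* x, float* y, size_t rows,
            size_t cols) {
  for (size_t r = 0; r < rows; ++r) {
    float sum = 0.0f;
    for (size_t c = 0; c < cols; ++c) sum += W[r * cols + c] * x[c];
    y[r] = sum;
  }
}

// After: a single matrix*matrix product over the whole token batch. A real
// matmul kernel tiles this loop nest so each block of W is loaded once and
// reused for every token, which is the expected source of the prefill speedup.
void MatMul(const float* W, const float* X, float* Y, size_t batch,
            size_t rows, size_t cols) {
  for (size_t r = 0; r < rows; ++r) {
    for (size_t b = 0; b < batch; ++b) {
      float sum = 0.0f;
      for (size_t c = 0; c < cols; ++c) sum += W[r * cols + c] * X[b * cols + c];
      Y[b * rows + r] = sum;
    }
  }
}
```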
PiperOrigin-RevId: 662024085
2024-08-12 03:36:01 -07:00
Daniel Keysers
33334ad454
Fix msan uninitialized scale in optimize_test
...
PiperOrigin-RevId: 654817460
2024-07-22 10:50:25 -07:00
Jan Wassenberg
5844e6a1e5
Cleanup: add wrapper functions and rename vars to interleaved
...
Simplifies the TransformerLayer function.
Use interleaved* instead of _and_queries.
PiperOrigin-RevId: 653929449
2024-07-19 02:04:11 -07:00
Jan Wassenberg
3fe79b3876
Fix msan uninitialized scale
...
PiperOrigin-RevId: 653655471
2024-07-18 09:42:31 -07:00
Kan Wu
f519ab6693
Refactor configurables.
...
PiperOrigin-RevId: 651259154
2024-07-10 21:30:58 -07:00
RangerUFO
f7855251ea
Fix compilation errors in clang
...
The compilation errors occur on the `ubuntu-latest` runner in GitHub Actions.
2024-06-21 13:40:40 +08:00
Jan Wassenberg
704d936764
Further simplification to ForEachTensor, thanks I.K.
...
PiperOrigin-RevId: 643996210
2024-06-17 07:12:26 -07:00
Jan Wassenberg
7d0720675f
Move raw_weights into separate header, used mainly by compress_weights.
...
Fix warnings in backprop/* (include)
PiperOrigin-RevId: 643983136
2024-06-17 06:17:02 -07:00
The gemma.cpp Authors
7dbfa44794
Refactor CompressedWeights.
...
PiperOrigin-RevId: 643934198
2024-06-17 02:54:54 -07:00
Zoltan Szabadka
a3a75b77f9
Use CompressedWeights<TConfig<float>> in backpropagation.
...
kWeightsAreCompressed is removed and LoadRawWeights is moved
to compress_weights.cc
2024-06-10 14:34:24 +00:00
Jan Wassenberg
f9b390b134
Support all weight types in a single binary.
...
This changes the command-line flags, but the default values retain the previous behavior.
Also add a CreateGemma helper to enable extra args without interface changes.
PiperOrigin-RevId: 641266411
2024-06-07 09:04:45 -07:00
Copybara-Service
f7ac7092d6
Merge pull request #212 from szabadka:adam2
...
PiperOrigin-RevId: 641182573
2024-06-07 02:25:18 -07:00
Zoltan Szabadka
c004799cdc
Add Adam optimizer.
...
Drive-by: Fix compilation errors and tests for backprop functions.
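For context, a standard Adam step maintains per-parameter first and second moment estimates and applies a bias-corrected, scaled gradient. A minimal sketch of that textbook update, with assumed names and default hyperparameters (not the gemma.cpp implementation):
```
// Sketch of a textbook Adam update (assumed names; not the gemma.cpp code).
// m and v are per-parameter first/second moment accumulators; t is the
// 1-based step count used for bias correction.
#include <cmath>
#include <cstddef>

void AdamStep(float* w, const float* grad, float* m, float* v, size_t n,
              int t, float lr = 1e-3f, float beta1 = 0.9f,
              float beta2 = 0.999f, float eps = 1e-8f) {
  const float bias1 = 1.0f - std::pow(beta1, static_cast<float>(t));
  const float bias2 = 1.0f - std::pow(beta2, static_cast<float>(t));
  for (size_t i = 0; i < n; ++i) {
    m[i] = beta1 * m[i] + (1.0f - beta1) * grad[i];
    v[i] = beta2 * v[i] + (1.0f - beta2) * grad[i] * grad[i];
    const float m_hat = m[i] / bias1;
    const float v_hat = v[i] / bias2;
    w[i] -= lr * m_hat / (std::sqrt(v_hat) + eps);
  }
}
```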
2024-06-06 18:41:36 +00:00
Jan Wassenberg
12707ade80
Toward only using compressed weights:
...
CompressedLayer members should all be f32 when the weights are f32.
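One way to express that constraint is to make the stored element type collapse to plain float whenever the weight type is float. A hypothetical illustration of the idea; PackedWeight and the member names are invented, not the gemma.cpp types:
```
// Hypothetical illustration (PackedWeight and the member names are invented):
// the stored element type collapses to plain float whenever the weight type
// itself is float, so an f32 model keeps f32 tensors end to end.
#include <type_traits>
#include <vector>

struct PackedWeight {};  // stand-in for a compressed weight encoding

template <typename Weight>
using StorageT =
    std::conditional_t<std::is_same_v<Weight, float>, float, PackedWeight>;

template <typename Weight>
struct CompressedLayerSketch {
  std::vector<StorageT<Weight>> attn_weights;  // f32 when Weight == float
  std::vector<StorageT<Weight>> ffw_weights;
};
```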
PiperOrigin-RevId: 640954519
2024-06-06 11:00:23 -07:00
Jan Wassenberg
57c2cd8b52
Simplifications: remove GemmaInterface and GemmaImpl
...
Split common and weights into separate lib
Remove common-inl (does not have to be SIMD code), activations.cc
Centralize switch(Model) to avoid duplication (see the dispatch sketch below)
Move CompressWeightsT to compress_weights.cc
Move LoadWeights to weights.cc
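A hypothetical sketch of such centralized dispatch: a single helper owns the switch over the Model enum and forwards to a functor templated on the per-model config type, so call sites never repeat the switch. All names here are assumptions:
```
// Hypothetical sketch of centralized model dispatch (all names assumed).
// One helper owns the switch; callers pass a functor whose call operator is
// templated on the per-model config type.
#include <stdexcept>
#include <utility>

enum class Model { kGemma2B, kGemma7B };

struct Config2B { static constexpr int kLayers = 18; };
struct Config7B { static constexpr int kLayers = 28; };

template <class Func, class... Args>
decltype(auto) CallForModel(Model model, Func&& func, Args&&... args) {
  switch (model) {
    case Model::kGemma2B:
      return func.template operator()<Config2B>(std::forward<Args>(args)...);
    case Model::kGemma7B:
      return func.template operator()<Config7B>(std::forward<Args>(args)...);
  }
  throw std::invalid_argument("unknown model");
}

// Example functor: reads a config constant without any switch at the call site.
struct NumLayers {
  template <class TConfig>
  int operator()() const { return TConfig::kLayers; }
};
// Usage: int layers = CallForModel(Model::kGemma7B, NumLayers{});
```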
PiperOrigin-RevId: 640869202
2024-06-06 05:54:21 -07:00
Zoltan Szabadka
8567978541
Address review comments
2024-06-04 08:37:54 +00:00
Zoltan Szabadka
36e4d8bbfe
Add first version of backpropagation support.
...
This is still in progress / experimental: it is currently only
implemented for the standard Gemma MQA attention layers, and the
backward pass is not yet parallelized.
Since we need to remember the activations of all layers, the
forward pass was also reimplemented with a new activation data
structure.
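A minimal sketch of the "remember all activations" idea: the forward pass fills one activation record per layer instead of overwriting a single scratch buffer, so the backward pass can revisit them. The struct and member names are invented, not the actual data structure:
```
// Hypothetical sketch (struct and member names invented): retain one
// activation record per layer during the forward pass so the backward pass
// can revisit them, instead of reusing a single scratch buffer.
#include <cstddef>
#include <vector>

struct LayerActivations {
  std::vector<float> attention_out;  // saved after the attention block
  std::vector<float> ffw_out;        // saved after the feed-forward block
};

struct ForwardPassActivations {
  explicit ForwardPassActivations(size_t num_layers) : layers(num_layers) {}
  std::vector<LayerActivations> layers;  // indexed by layer, kept for backprop
};
```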
2024-06-04 08:37:49 +00:00