HappyZ (happyz)
happyz synced new reference test_626055913 to happyz/gemma.cpp from mirror 2024-04-18 11:13:13 -07:00
happyz synced and deleted reference refs/tags/refs/pull/147/merge at happyz/gemma.cpp from mirror 2024-04-18 11:13:12 -07:00
happyz synced and deleted reference refs/tags/test_625738549 at happyz/gemma.cpp from mirror 2024-04-18 11:13:12 -07:00
happyz synced new reference refs/tags/b2692 to happyz/llama.cpp from mirror 2024-04-18 11:13:11 -07:00
happyz synced commits to refs/tags/b2694 at happyz/llama.cpp from mirror 2024-04-18 11:13:11 -07:00
happyz synced new reference refs/tags/b2694 to happyz/llama.cpp from mirror 2024-04-18 11:13:11 -07:00
happyz synced commits to refs/pull/6688/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
79bbf42495 Add test script
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
(6 commits total)
happyz synced commits to refs/pull/6688/head at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
79bbf42495 Add test script
happyz synced commits to refs/pull/6661/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/tags/b2692 at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
happyz synced new reference refs/tags/b2691 to happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
happyz synced commits to refs/tags/b2691 at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
happyz synced commits to refs/pull/6721/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/pull/6707/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:10 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/pull/6638/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/pull/6602/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/pull/6588/head at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
f153e7e7c0 flake--
57b93282a2 Merge branch 'master' into multiple-chat-templates
980bb1637f Add files via upload
a0782056c5 New script to add/modify/remove metadata
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(52 commits total)
happyz synced commits to refs/pull/6640/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
happyz synced commits to refs/pull/6644/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
5691020e21 Merge 1b988855dca2ced3850dbe40812707e639b1dbd6 into 0d56246f4b
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(6 commits total)
happyz synced commits to refs/pull/6648/merge at happyz/llama.cpp from mirror 2024-04-18 11:13:09 -07:00
0d56246f4b ggml : group all experts in a single ggml_mul_mat_id (#6505)
03c0946d73 convert : support models with multiple chat templates (#6588)
e11b2e6e1e Qwen2 : assume tied weights if lm_head/output weights is missing (#6738)
c71bfd736e llama : fix compatibility with old 2 expert models (#6735)
(5 commits total)
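The events in this feed (new references appearing, deleted references pruned, commits synced to existing refs) are what a mirror produces when it fetches from its upstream. A minimal local sketch of that behavior, using throwaway repositories and hypothetical tag names:

```shell
# Sketch only: simulate a pull mirror syncing tags from its upstream.
# All paths and tag names here are illustrative, not from the feed above.
set -e
tmp=$(mktemp -d)

# Create a tiny "upstream" repo with one commit and one tag.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "init"
git -C "$tmp/upstream" tag b2691

# A mirror clone copies every ref (refspec +refs/*:refs/*).
git clone -q --mirror "$tmp/upstream" "$tmp/mirror"

# Upstream gains a tag and loses one, then the mirror re-syncs:
git -C "$tmp/upstream" tag b2692
git -C "$tmp/upstream" tag -d b2691

# --prune deletes mirror refs whose upstream counterpart is gone,
# matching the "synced and deleted reference" events in the feed.
git -C "$tmp/mirror" fetch -q --prune origin

git -C "$tmp/mirror" tag
```

After the fetch, the mirror lists only `b2692`: the new tag was synced in and the deleted one pruned, which is exactly the pair of event types logged above.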