They obviously changed from using ChatGPT synthetic data to Gemini synthetic data.
The reason is simple: o3 is too expensive, while Gemini 2.5 is more reasonably priced.
But this over-reliance on synthetic data for pre-training
could be the major cause of its dramatically high hallucination rate (still over 10%).
駐連燈首席美軍 2025-06-02 13:01:18
Moron.
So gemma 3n and llama just don't exist to you?
debugger; 2025-06-02 15:23:10
When was the last time Google or FB released a SOTA open-source model?
debugger; 2025-06-02 15:24:36
Gemini is making money hand over fist, while gemma releases top out at 27b no matter what; use your brain and you'll see why.
Facebook's llama4 was dead on arrival.
ニジュー 2025-06-02 15:27:27
Don't even want to talk about it.
來自篤精的你 2025-06-02 15:29:36
Its answers miss the question, and after a couple more follow-ups it turns into an idiot.
駐連燈首席美軍 2025-06-02 15:30:09
These two don't count?
gemma 3n isn't just small enough to run on a phone; it can also understand images, and it actually gets a lot done in practice.
Disclosure: on my own machine I use ollama run gemma as a coding agent.
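For reference, a minimal sketch of that setup using Ollama's Python client instead of the interactive ollama run shell session; the model tag gemma3 and the prompt are illustrative assumptions, since the post only says "gemma":

```python
# Minimal sketch: ask a locally served Gemma model a coding question
# through the ollama Python client (pip install ollama).
# Assumes the Ollama daemon is running and a Gemma model has been pulled,
# e.g. `ollama pull gemma3` (the exact model tag is an assumption).
import ollama

response = ollama.chat(
    model="gemma3",  # hypothetical tag; use whatever `ollama list` shows
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that reverses a linked list.",
        },
    ],
)
print(response["message"]["content"])
```

The same model also answers one-off prompts straight from the shell, e.g. ollama run gemma3 "your prompt here", which is closer to what the poster describes.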