Researchers trained an OpenAI rival in half an hour for less than $50

To do this, researchers at Stanford and the University of Washington used a method known as distillation — which allows smaller models to draw from the answers produced by larger ones — to refine s1 using answers from Google’s AI reasoning model, Gemini 2.0 Flash Thinking Experimental. Google’s terms of service note that you can’t use Gemini’s API to “develop…
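
In practice, this flavor of distillation often amounts to supervised fine-tuning: the smaller "student" model is trained on prompt–answer pairs where the answers come from the larger "teacher" model. The snippet below is a minimal sketch of that idea using Hugging Face libraries; the student model name, the toy prompt–answer pairs, and the training settings are illustrative placeholders, not the s1 authors' actual pipeline or data.

```python
# Sketch: distillation as supervised fine-tuning of a small "student"
# model on answers generated by a larger "teacher" model.
# Model name and dataset below are illustrative, not the s1 setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

STUDENT = "gpt2"  # stand-in for a small open base model

# Prompt/answer pairs; in practice the answers would be collected
# from the larger reasoning model rather than hard-coded here.
pairs = [
    {"prompt": "What is 17 * 24?",
     "answer": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408."},
    {"prompt": "Name the capital of France.",
     "answer": "The capital of France is Paris."},
]

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT)

def to_text(example):
    # Concatenate prompt and teacher answer into one training sequence.
    return {"text": example["prompt"] + "\n" + example["answer"]
            + tokenizer.eos_token}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = (Dataset.from_list(pairs)
           .map(to_text)
           .map(tokenize, remove_columns=["prompt", "answer", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="student-distilled",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

With only a small curated set of teacher answers, a run like this can finish quickly on modest hardware, which is consistent with the low cost and short training time reported in the article.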
