Google CEO Sundar Pichai has lauded DeepSeek in the wake of the firm’s bombshell AI model release, noting that the Chinese company has shaken up the industry by showcasing gains in efficiency and accessibility.
Last month, DeepSeek released its R1 model amid claims it was trained using a fraction of the resources, and therefore costs, of rival models from OpenAI and Google, and that it could operate more efficiently, too.
R1 was hailed as a step change in model efficiency, and with distilled versions small enough to run on-device, DeepSeek’s app swiftly soared in popularity.
That sent shares in key AI companies tumbling, but since the initial panic subsided, American firms have been keen to praise DeepSeek’s methods.
“I think the DeepSeek team has done very, very good work,” Pichai said, speaking at the World Governments Summit in Dubai last week.
“But stepping back, I think it shows how global this AI development is right now — it’s happening all around the world.”
Google is driving AI efficiency
Pichai said the key aspects of DeepSeek that caught attention were its open source design and the fact that it was built with efficiency in mind, an area Google and other industry players have been focused on improving in recent months.
“I think what caught people’s attention with DeepSeek was that you could have an efficient model and open source and something everyone can immediately access, and I think that creates a lot of excitement,” Pichai said.
However, he claimed Google was also working on similar ideas: “But it’s something obviously we are focused on as well. We’ve always realized the models which you serve people around the world, it has to be very efficient.”
Pichai suggested Google’s optimized Gemini models, known as Flash, show the company had already been prioritizing efficiency and accessibility, though the announcement of those models didn’t spark a stock sell-off or wider concerns about rival AI development.
Google unveiled Gemini Flash last year with variants optimized for cost efficiency, followed by an experimental version of Gemini 2.0 Flash in December and further updates earlier this month.
In a blog post, the company said its new model tier, Gemini 2.0 Flash-Lite, was Google’s “most cost-efficient model yet”, offering better quality than 1.5 Flash at the same speed and cost.
“Like 2.0 Flash, it has a 1 million token context window and multimodal input,” the blog post noted of 2.0 Flash-Lite. “For example, it can generate a relevant one-line caption for around 40,000 unique photos, costing less than a dollar in Google AI Studio’s paid tier.”
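For a sense of what that workflow looks like in practice, below is a minimal sketch of captioning a single photo with Gemini 2.0 Flash-Lite through the Gemini API. It assumes the google-generativeai Python SDK and a model identifier along the lines of "gemini-2.0-flash-lite"; the exact model name, availability, and pricing tier should be confirmed in Google AI Studio.

```python
# Minimal sketch: one-line photo captioning with Gemini 2.0 Flash-Lite
# using the google-generativeai Python SDK. The model identifier is an
# assumption; check Google AI Studio for the exact name on your account.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # API key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-lite")  # assumed model ID

photo = Image.open("photo.jpg")
response = model.generate_content(
    [photo, "Write a relevant one-line caption for this photo."]
)
print(response.text)
```

Repeating a call like this across tens of thousands of images is the sort of batch workload the blog post’s sub-dollar figure refers to.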
AI accessibility for all
In his talk at the Summit, Pichai added that efficiency was important because it makes AI more widely accessible.
“Artificial intelligence is going to be widely available and it’s going to be at everyone’s fingertips and it’s going to impact the world profoundly,” Pichai said. “I think the DeepSeek innovation reinforces that point.”
This point was also picked up by Microsoft CEO Satya Nadella in the wake of the DeepSeek news last month. Following the model release, Nadella took to LinkedIn to highlight an economic concept known as the ‘Jevons paradox’.
The Microsoft chief noted that as AI “gets more efficient and accessible, we will see its use skyrocket”.
Though DeepSeek has been touted as a win for open source AI development and efficiency, which is good news for both the environment and costs, security analysts have warned about critical safety flaws and the company has been accused of using rivals’ models to build its own.