Gemini 3.0: Google’s most capable AI yet marks the start of a new era for multimodal intelligence

We’re hoping to try it out soon, so stay tuned for our thoughts!

Photo: Google

Note: This article was first published on 19 November 2025.

Google is taking its most decisive step yet into the future of artificial intelligence with the launch of Gemini 3.0, a model that the company describes as its most intelligent, most capable, and most deeply integrated system to date. Unveiled by leaders from Google DeepMind and Search, Gemini 3.0 represents more than an incremental upgrade. It is Google’s clearest vision of what an AI-native ecosystem looks like, one where reasoning, multimodality, and agentic behaviour converge across billions of devices and users.

At the core of the announcement is a simple but sweeping idea. Gemini 3.0 is designed to help people think, build, and create across every Google surface: Search, the Gemini app, Workspace, and the broader Google Cloud and developer stack. It launches immediately across all these platforms, reflecting Google’s confidence that this model is not experimental, but foundational.

A full-stack breakthrough two years in the making

For Koray Kavukcuoglu, CTO of Google DeepMind and Google’s Chief AI Architect, Gemini 3.0 is the culmination of years of work across the company’s full-stack AI infrastructure. From custom TPUs to tightly networked data centres, from multimodal research to real-world product integration, Koray describes Gemini 3.0 as the result of a uniquely integrated approach.

He added that it is a deliberate step toward an AI system that understands and reasons across text, images, audio, code, and complex multimodal contexts simultaneously. Gemini 3.0 is designed not only to answer questions but to break down academic papers, generate working interactive visualisations, build apps, and assist with complex decision-making.

“It’s the world’s best model for multimodal understanding,” Koray says, adding that Gemini 3.0 represents Google’s strongest advances in coding, reasoning, and agentic behaviour.

In a blog announcing the model, Google states Gemini 3 is its most capable foundation model yet and will be available across the Gemini app, AI Studio, the API, and enterprise platforms like Vertex AI and Gemini Enterprise.

Benchmarks that signal a new performance tier

According to Tulsi Doshi, who leads product for Gemini models at DeepMind, the numbers tell a clear story. Gemini 3.0 outperforms its predecessor, Gemini 2.5 Pro, across virtually every major benchmark.

Among its standout scores:

  • A new high of 1,501 Elo on the LMArena leaderboard
  • 91.9% on GPQA Diamond, a notoriously difficult scientific reasoning benchmark
  • 37.5% on Humanity’s Last Exam, without external tool use

These results reflect improvements not only in raw intelligence but in reliability, consistency, and the model’s ability to generalise across tasks.

But Tulsi says the most compelling proof of progress comes from what people can do with the model. One of her favourite moments during testing was watching Gemini transform a dense DeepMind research paper into a complete interactive tutorial app, complete with 3D visualisations and step-by-step explanations. “This is where reasoning meets multimodality meets coding,” she says. “It’s where the model really shines.”

Search + Gemini 3.0: A transformational upgrade

Photo: Google

Google Search, used by more than 2 billion people, is arguably the biggest stage for Gemini 3.0. And Robby Stein, VP of product for Search, calls this the most significant model upgrade to Search in years.

One of the biggest shifts with Gemini 3 is its immediate integration into Google Search. For the first time, a frontier model is embedded on day one into Search’s AI Mode. A blog post explains that Search now uses a technique called Query Fan-Out, where Gemini creates many sub-queries under the hood to fetch more accurate, deeper, and more relevant results. 
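
Google hasn’t published the internals of Query Fan-Out, but the description maps onto a familiar retrieval pattern: decompose the query, search in parallel, merge the results. Here is a minimal sketch in Python under those assumptions; generate_subqueries and search are hypothetical placeholders, not Google interfaces.

    from concurrent.futures import ThreadPoolExecutor

    # Minimal sketch of a query fan-out pipeline. Both helpers below are
    # hypothetical stand-ins, not Google's actual interfaces.

    def generate_subqueries(query: str) -> list[str]:
        # In the real system the model would decompose the query;
        # here a few fixed angles illustrate the idea.
        return [
            f"{query} overview",
            f"{query} recent developments",
            f"{query} comparisons",
        ]

    def search(subquery: str) -> list[str]:
        # Placeholder for a call to a search index.
        return [f"result for: {subquery}"]

    def fan_out(query: str) -> list[str]:
        subqueries = generate_subqueries(query)
        # Issue the sub-queries in parallel, then merge and
        # de-duplicate while preserving order.
        with ThreadPoolExecutor() as pool:
            result_lists = list(pool.map(search, subqueries))
        seen, merged = set(), []
        for results in result_lists:
            for r in results:
                if r not in seen:
                    seen.add(r)
                    merged.append(r)
        return merged

    print(fan_out("best mirrorless cameras"))

The breadth comes from running the sub-queries concurrently rather than one at a time, which is what lets a single question pull in deeper and more varied results.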

Search users will also benefit from Generative Interfaces: interactive layouts, simulations, and tools built in real time based on the user’s query. Want to explore the three-body problem? Gemini 3 in AI Mode can generate a working simulation for you. Want to compare mortgages? It can build a custom calculator on the fly.
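
To make the calculator example concrete: a tool like the one described boils down to the standard fixed-rate amortisation formula. A minimal sketch of what such a generated tool computes (ours, not Google’s generated code):

    def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
        # Standard amortisation formula: M = P * r * (1+r)^n / ((1+r)^n - 1),
        # where r is the monthly rate and n the number of monthly payments.
        r = annual_rate / 12
        n = years * 12
        if r == 0:
            return principal / n
        growth = (1 + r) ** n
        return principal * r * growth / (growth - 1)

    # A $500,000 loan at 4.5% over 30 years:
    print(f"${monthly_payment(500_000, 0.045, 30):,.2f} per month")  # ~ $2,533.43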

“We want Search to feel effortless,” Stein says. “Gemini 3.0 gets us closer.”

The developer revolution: Gemini 3.0 + Antigravity

Photo: Google

While Search upgrades matter for billions, developers are getting a major boost via the new Antigravity environment powered by Gemini 3 Pro. This agentic development platform embeds an agent across the editor, terminal, and browser, capable of taking high-level instructions, breaking them down, executing steps, checking results, and confirming actions with the developer.

Another blog post, covering Gemini in the enterprise, emphasises how Gemini 3 enables agentic coding and front-end creation: a single prompt can prototype full interfaces, build logic, and incorporate multimodal data such as images, video, and code.

If Gemini 3.0 improves what developers can imagine, Antigravity changes how they build: the agentic environment works directly inside an IDE.

The system can:

  • break down high-level coding tasks
  • run commands in the terminal
  • check its own work
  • browse the web for context
  • and ask for confirmation before major changes

It’s a collaborative assistant rather than a passive tool, one that learns a developer’s style and can handle multi-step builds with surprising autonomy.
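
Google hasn’t documented Antigravity’s internals, but the behaviour described above maps onto a plan/act/verify/confirm agent loop. A minimal sketch of that loop, with hypothetical placeholders standing in for the model and its safety policies:

    import subprocess

    def plan_steps(task: str) -> list[str]:
        # The model would decompose the task; a hard-coded plan
        # stands in for that call here.
        return ["python --version", "echo build finished"]

    def needs_confirmation(command: str) -> bool:
        # Treat anything outside a read-only allowlist as a major change.
        return not command.startswith(("python --version", "echo", "git status"))

    def run_agent(task: str) -> None:
        for command in plan_steps(task):
            if needs_confirmation(command):
                if input(f"Run '{command}'? [y/N] ").strip().lower() != "y":
                    print(f"skipped: {command}")
                    continue
            # Execute the step, then check its own work via the exit code.
            result = subprocess.run(command, shell=True,
                                    capture_output=True, text=True)
            if result.returncode != 0:
                print(f"step failed, stopping: {result.stderr.strip()}")
                break
            print(f"ok: {command} -> {result.stdout.strip()}")

    run_agent("verify the toolchain and finish the build")

The confirmation gate before state-changing commands is the piece Google highlights: the agent acts autonomously on routine steps but hands control back to the developer for anything consequential.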

Antigravity enters public preview tomorrow for Mac, Windows, and Linux, and Google expects it to become the default for AI-powered software development.

Gemini 3.0 in the app: A step toward everyday AI

On the consumer side, Gemini 3.0 arrives in an ecosystem already buzzing: the Gemini app has surpassed 650 million monthly active users. For these users, Gemini 3.0 promises more helpful, better-formatted responses, stronger coding abilities, and support for more global languages, including Malay and Filipino.

But the biggest unlock is Google’s push toward agentic behaviour. Gemini 3.0 introduces:

  • Generative Interfaces, which create explorable lists, itineraries, and visual layouts
  • Gemini Agents, which handle tasks like managing email, prioritising mornings, or searching for tickets

Together, these shift the AI from being a chatbot toward being a personal operator.

Gemini 3.0 marks a turning point

During a Q&A with the media, Kavukcuoglu pushed back on the popular narrative that large models are plateauing. He argues that the industry now needs more nuanced benchmarks for progress. The biggest leaps are increasingly in reasoning, alignment, safety, and real-world performance rather than raw scale.

Google argues the improvements are subtle but meaningful: deeper reasoning, better alignment, fewer “hallucinations,” and stronger real-world performance.

Tulsi adds that Gemini 3.0’s development was full of “aha moments,” from unexpected 3D games to polished web apps appearing in a single run. “The depth is improving,” she says. “Not just the width.”

What makes Gemini 3.0 stand out is not just its intelligence, but its integration. It is not being released as a lab model. It is arriving inside:

  • Search
  • The Gemini app
  • Workspace
  • Vertex AI
  • AI Studio
  • Antigravity

The model becomes the connective tissue across Google’s consumer and enterprise ecosystem. It is the reasoning engine behind agents, the creative force behind generative interfaces, the backbone of multimodal education, and the brain inside next-generation developer workflows.

Gemini 3.0 is the version of the model that demonstrates what Google believes AI should become: a system that helps people think more clearly, express ideas more fully, and build more ambitiously, whether they are students, developers, organisations, or everyday users.
