Beyond AGI: How Google DeepMind is Solving Science’s Most Impossible Problems
A conversation with Pushmeet Kohli reveals the remarkable framework behind AlphaFold, AlphaEvolve, and the future of AI-powered scientific breakthroughs
While the tech world debates the arrival of artificial general intelligence, Google DeepMind is already using AI to solve problems that seemed impossible just years ago. From predicting protein structures in seconds to optimizing data centers and democratizing scientific research, the team’s Science and Strategic Initiatives unit is turning AI’s theoretical promise into tangible impact today.
In a revealing conversation on the “Release Notes” podcast, Pushmeet Kohli, who heads DeepMind’s Science and Strategic Initiatives Team, shared the philosophy and framework behind some of AI’s most transformative achievements—and offered a glimpse into an even more ambitious future.
The Three Types of Intelligence
Kohli starts by breaking down intelligence into three categories that fundamentally shape how we think about AI’s potential:
Level 1: Universal Human Competencies – Tasks most humans can do, like recognizing objects in images or reading handwriting.
Level 2: Expert-Level Intelligence – Specialized skills requiring years of training, like medical diagnosis or writing complex software.
Level 3: Beyond Human Capability – Problems that no single human, regardless of expertise, can solve alone.
It’s this third category where DeepMind’s science team focuses their efforts. “If you give a sequence of amino acids, which is a protein, to any human—one of the smartest human beings—and say, can you figure out what is its 3D structure?” Kohli explains, “We will not be able to do that just by reasoning about it.”
Before AlphaFold, determining a single protein structure could cost a million dollars and take multiple years. AlphaFold can now do it in seconds for mere cents. This isn’t incremental improvement—it’s a fundamental transformation of what’s possible.
The Three-Part Framework for Choosing Impossible Problems
What makes DeepMind’s approach particularly fascinating is their rigorous framework for selecting which “impossible” problems to tackle. Kohli outlines three strict requirements:
1. Transformative and Feasible Impact
“We are not looking for something which is an incremental improvement,” Kohli emphasizes. The problem must promise transformative impact—whether scientific, commercial, or social—and the broader community must believe it’s eventually solvable. “We don’t want to work on time travel. We want to work on things which are actually going to happen.”
2. Consensus Impossibility
There should be near-universal agreement that no one will solve it in the next 5-10 years. If someone thinks they’ll crack it in six months, it’s not for this team.
3. Audacious Timeline
The team believes they can do it in one-third to one-half the time the experts predict.
If all three conditions are met, they take it on.
Three Flavors of Impact: Scientific, Commercial, and Social
DeepMind’s recent launches illustrate how AI can create different types of transformative impact:
Scientific Impact: AlphaFold’s Nobel Prize
AlphaFold stands as the exemplar of scientific impact. Not only did it solve a 50-year-old grand challenge in biology, but it was also recognized as one of the most cited scientific papers in recent years and earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry.
But the impact extends far beyond accolades. Researchers around the world—including those in resource-limited settings working on neglected tropical diseases—can now access structural predictions for proteins they could never afford to characterize experimentally. “That completely democratizes AI,” Kohli notes.
Commercial Impact: AlphaEvolve’s Billion-Dollar Savings
AlphaEvolve, a Gemini-powered coding agent, demonstrates AI’s commercial potential by solving optimization problems that had stumped top computer scientists for decades.
The results? A 0.7% recovery of Google’s fleet-wide compute capacity—which translates to potentially billions of dollars in savings—and faster Gemini training. But here’s what makes it remarkable: AlphaEvolve also achieved scientific breakthroughs, matching the best known solutions on roughly 75% of a set of open mathematical optimization problems and surpassing them on 20%.
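To make the scale concrete, here is a hedged back-of-envelope calculation. The annual compute spend below is a purely hypothetical placeholder for illustration, not a disclosed Google figure; only the 0.7% rate comes from the conversation.

```python
# Back-of-envelope: value of a 0.7% fleet-wide compute recovery.
# The spend figure is a hypothetical placeholder, NOT a disclosed Google number.
hypothetical_annual_compute_spend = 50e9  # USD/year, illustrative assumption
recovery_rate = 0.007                     # the 0.7% figure cited for AlphaEvolve

annual = hypothetical_annual_compute_spend * recovery_rate
print(f"~${annual / 1e6:.0f}M per year")             # 350M under this assumption
print(f"~${annual * 5 / 1e9:.2f}B over five years")  # compounding ignored
```

Under any spend assumption in the tens of billions, a fraction of a percent compounds into billions over a multi-year horizon—which is why a seemingly tiny 0.7% is headline-worthy.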
Social Impact: SynthID’s Watermarking Revolution
As generative AI becomes more sophisticated, distinguishing synthetic from authentic content becomes critical for maintaining information integrity. SynthID addresses this with imperceptible watermarking technology that works across all modalities—text, images, and video.
Google now watermarks content from its generative AI tools with SynthID, creating accountability in an era when synthetic content can be indistinguishable from reality. It’s a proactive approach to one of AI’s most pressing societal challenges.
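SynthID’s actual algorithm is more sophisticated and spans multiple modalities, but the general idea behind statistical text watermarking can be sketched in a few lines: nudge word choices toward a secret, key-derived “green list” during generation, then detect the watermark by measuring how far a text’s green fraction sits above the ~50% expected by chance. Everything below (vocabulary, key, thresholds) is a toy illustration, not SynthID itself.

```python
import random

# Toy statistical text watermark -- illustrates detection-by-bias only.
# This is NOT SynthID's actual algorithm.

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot", "golf",
         "hotel", "india", "juliet", "kilo", "lima", "mike", "november",
         "oscar", "papa"]

# A secret key deterministically splits the vocabulary into green/red halves.
GREEN = set(random.Random("secret-key").sample(VOCAB, len(VOCAB) // 2))

def watermarked_choice(candidates: list[str], rng: random.Random) -> str:
    """Generator: prefer green-listed words whenever any candidate qualifies."""
    green = [w for w in candidates if w in GREEN]
    return rng.choice(green) if green else rng.choice(candidates)

def green_fraction(text: str) -> float:
    """Detector: watermarked text shows a green fraction well above 0.5."""
    words = text.split()
    return sum(w in GREEN for w in words) / len(words)

rng = random.Random(0)
steps = [rng.sample(VOCAB, 4) for _ in range(300)]  # candidate words per step
text = " ".join(watermarked_choice(step, rng) for step in steps)
print(f"green fraction: {green_fraction(text):.2f}")
```

Unwatermarked text drawn from the same vocabulary would hover near 0.5; the biased generator pushes the fraction far higher, and the gap is what a detector tests for—without the secret key, the bias is invisible.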
From AlphaProof to Deep Think: The IMO Gold Medal Journey
Perhaps no achievement better illustrates the rapid evolution of AI capabilities than DeepMind’s progression from silver to gold at the International Mathematical Olympiad (IMO).
In 2024, the team used two specialized models—AlphaProof and AlphaGeometry—that operated in a formal mathematical language called Lean. These models could generate verifiable proofs, creating a powerful training data generation system: when AlphaProof solved a problem, the team knew the solution was correct.
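The key property of Lean is that correctness is checked by the compiler, not by a human. As a toy illustration (not one of the IMO problems), here is what a machine-checkable Lean 4 statement looks like:

```lean
-- A minimal Lean 4 example: a statement and its proof.
-- If this file compiles, the proof is machine-verified; no human review needed.
-- That property is what made solved problems usable as trusted training data.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because a compiling proof is guaranteed correct, every problem the system solved became a verified training example—a self-reinforcing data pipeline.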
2025 brought a revolutionary shift. The achievement moved from specialized models to Deep Think, built on Gemini 2.5 Pro—a generally available model that anyone can use. The advancement includes:
- Natural language input – No need to translate problems into a formal proof language
- Broader accessibility – The capability is now available to users worldwide, not locked in a research lab
- Generalized intelligence – The same model that excels at math can handle diverse tasks
“Not only did we move from silver to gold,” Kohli explains, “but we did that using natural language specification… It is almost a model that can be used by anyone on the planet.”
The AI Co-Scientist: Democratizing Discovery
Looking ahead, one of the most ambitious projects might be the “AI co-scientist”—a multi-agent system where Gemini plays multiple roles in the scientific process: hypothesis generator, reviewer, critic, and editor.
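The roles described above can be sketched as a simple orchestration loop. The `ask_model` function and the role prompts below are hypothetical stand-ins for illustration—this is not a real Gemini API or DeepMind’s actual architecture.

```python
# Minimal sketch of a multi-role hypothesis loop, loosely inspired by the
# co-scientist idea. `ask_model` is a hypothetical placeholder, not a real API.

def ask_model(role: str, prompt: str) -> str:
    """Placeholder for a call to a language model acting in a given role."""
    return f"[{role} response to: {prompt[:40]}...]"

def co_scientist_round(research_goal: str) -> dict:
    # 1. Generate a candidate hypothesis for the stated goal.
    hypothesis = ask_model("generator", f"Propose a hypothesis for: {research_goal}")
    # 2. Review it for plausibility and prior art.
    review = ask_model("reviewer", f"Assess this hypothesis: {hypothesis}")
    # 3. Attack it: look for flaws, confounds, and untestable claims.
    critique = ask_model("critic", f"Find weaknesses in: {hypothesis}")
    # 4. Edit the surviving idea into a refined, testable statement.
    refined = ask_model("editor",
                        f"Refine {hypothesis} given {review} and {critique}")
    return {"hypothesis": hypothesis, "review": review,
            "critique": critique, "refined": refined}

result = co_scientist_round("mechanisms of antimicrobial resistance transfer")
print(result["refined"])
```

The design point is the separation of roles: having the same model generate, then adversarially review its own output tends to surface weaknesses a single generation pass would miss.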
The results have been startling. In one case, a professor at Imperial College London submitted a problem to test the system, expecting conventional solutions. The first hypothesis the AI generated matched research his team had been working on for years and had just submitted to a journal. His initial reaction? Suspicion that Google had somehow accessed his unpublished paper.
“When we convinced him that we did not get this paper,” Kohli recounts, “and in fact, look at the other ideas that we have shared, he said, oh yeah, the other ideas also look extremely novel, I should try working on them.”
This points to a future where AI doesn’t just assist scientists—it actively participates in the creative process of discovery, generating novel hypotheses that push research in unexpected directions.
The AGI Question: Intelligence as a Tool, Not an End
When asked about AGI, Kohli reframes the question in a way that cuts through the hype: “Yes, we will have this amazingly powerful general intelligence. The question for our team is, what will we use it for?”
This perspective shifts the conversation from when AGI arrives to how we deploy intelligence—artificial or otherwise—to solve humanity’s most pressing challenges. As AI models become more powerful and general, DeepMind’s role is to “leverage all that progress, add to it to solve the next impossible thing for the benefit of humanity.”
An API for Science?
The interview concludes with a provocative question: Will there be an “API for science” that democratizes scientific discovery the way coding tools have democratized software development?
Kohli sees the parallel but highlights a critical challenge: specification. Just as developers need to clearly define what a program should do, future scientists will need intuitive interfaces to communicate complex research goals to AI systems.
The work of making AI accessible—of building the right interfaces and feedback loops—becomes as crucial as the AI itself. It’s not enough to build powerful models; they must be wielded effectively by people with diverse backgrounds and expertise.
The Proof Is in the Pudding
What sets DeepMind apart in the current AI landscape is something Logan Kilpatrick emphasizes in the conversation: “So much of the narrative about AI right now is that in the future, it will be used for scientific impact, and I feel like DeepMind historically is the only place where it is actually, today, being used for scientific impact across the ecosystem.”
From AlphaFold’s database of predicted protein structures available to any researcher, to AlphaEarth’s geospatial intelligence, to tools that are actively saving compute costs and accelerating model training—these aren’t promises about what AI might do someday. They’re demonstrations of what’s happening now.
As we debate the theoretical implications of AGI, DeepMind reminds us that the more important question might be: How will we use the intelligence we already have to solve the problems that matter most?
*The full conversation reveals even more insights into DeepMind’s collaborative culture, the technical details of projects like AlphaEvolve and AlphaGenome, and the vision for making advanced AI tools accessible to researchers worldwide. For leaders, scientists, and technologists thinking about AI’s real-world impact, it’s essential viewing.*

📖 Read the full article → https://blog.google/technology/google-deepmind/ai-release-notes-science/