Why SandboxAQ thinks large quantitative models have immense enterprise potential

A new type of AI, known as large quantitative models (LQMs), is set to eclipse large language models (LLMs) in areas such as finance, healthcare, energy, and life sciences — according to SandboxAQ, an Alphabet spin-out company now valued at $5 billion.
Established in 2016 at Google’s Palo Alto headquarters, SandboxAQ began life as Alphabet’s ‘Sandbox’ division, focused on developing use cases for quantum technology. In 2022 it was spun out as a separate company, with Eric Schmidt as chairman, and today its primary focus is generating proprietary data using physics-based methods to train its LQMs.
“When you think about a new medicine, we don’t need an AI that’s as good as humans. We need an AI that’s better than humans. We need an AI that takes us to the next step, beyond what we can do today,” said SandboxAQ CEO Jack Hidary in a recent interview on Fox Business.
“We have to go beyond large language models. They’re very useful for when you want to cut costs in a big company. But if you want to think about a new molecule or a new chemical, we don’t want to use a model trained on Reddit and cat pictures,” explains Hidary. “We want a model trained on chemistry, on pharmacology, and on the biology of these diseases. That’s large quantitative models.”
And it’s not just about physics-based problems. The way LQMs are trained means they can also be applied to other areas where the challenge is handling large numerical datasets. ITPro spoke to Stefan Leichenauer, VP of engineering and lead scientist at SandboxAQ, to find out more.
“You can also take the same principles and apply them to other quantitative domains, which are not about the physical world,” explains Leichenauer. “So in areas like the world of finance, where there’s a lot of data, and a lot of imperfect models in relation to how the economy moves, we can use quantitative finance to improve them.”
What makes LQMs better than LLMs?
Large language models from companies such as OpenAI, Anthropic, Google, Microsoft, and DeepSeek are trained on massive amounts of data, which they use to generate human-like text and visual outputs in response to prompts. And whilst they can work well in some cases, they are not universally reliable.
“The problems that we’re trying to tackle are problems like how we find the cure for Alzheimer’s disease or Parkinson’s disease. Or how do we figure out how to build the next generation of batteries. These are very important questions, but these are not questions of language,” says Leichenauer.
“You could read the whole Internet and the answer is not going to be in there,” Leichenauer tells ITPro. “The answer to how to build the next generation of battery technology is not contained within Internet message boards.”
At the end of 2024, SandboxAQ secured a total of $800 million in funding at a valuation of $5.3 billion. It has already developed a number of real-world use cases in areas including navigation technology, medical imaging, and drug discovery.
Working with the Prusiner Lab, at the University of California San Francisco’s Institute for Neurodegenerative Diseases, SandboxAQ screened 5.5 million compounds in just one month (versus a typical year-long process), with a hit rate 30 times higher than traditional methods.
“We can generate new data based on our equations,” explains Leichenauer. “And you use that dataset to train an AI model like a neural network, for actually solving the equation. The [high-performance computing (HPC)] is always checking what the neural network is doing, and allowing the neural network to become more and more reliable over time.”
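The loop Leichenauer describes — an equation solver generating training data, and the solver continually checking the network and feeding it more data where it is least reliable — can be sketched in miniature. Everything below (the toy analytic “solver”, the network size, and the sampling strategy) is an illustrative assumption for this article, not SandboxAQ’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def hpc_solver(x):
    # Stand-in for an expensive physics-based solver: here just the
    # analytic solution of a damped oscillator, used as ground truth.
    return np.exp(-0.5 * x) * np.cos(3.0 * x)

# A tiny one-hidden-layer network, trained by plain gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)

def predict(x):
    h = np.tanh(x[:, None] @ W1 + b1)
    return (h @ W2 + b2).ravel()

def train(x, y, lr=0.05, steps=4000):
    global W1, b1, W2, b2
    X = x[:, None]
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                  # (n, 16) activations
        g = (h @ W2 + b2 - y[:, None]) / len(x)   # d(MSE/2)/d(prediction)
        gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
        W2 -= lr * (h.T @ g);  b2 -= lr * g.sum(axis=0)
        W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

# Solver-in-the-loop: train a surrogate, let the solver check it on a
# grid, then add new training points where the surrogate is least reliable.
x_train = rng.uniform(0.0, 4.0, 20)
y_train = hpc_solver(x_train)
for _ in range(3):
    train(x_train, y_train)
    x_check = np.linspace(0.0, 4.0, 200)
    gap = np.abs(predict(x_check) - hpc_solver(x_check))
    worst = x_check[np.argsort(gap)[-10:]]        # least-reliable inputs
    x_train = np.concatenate([x_train, worst])
    y_train = np.concatenate([y_train, hpc_solver(worst)])
```

The pay-off of the pattern is that the slow solver is only consulted on a handful of points per round, while the fast surrogate answers everything else — which is what makes a month-long screen of millions of compounds plausible.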
Elsewhere, SandboxAQ is developing quantum-based navigation technology as an alternative to GPS, using the Earth’s natural magnetic field as a “fingerprint” for positioning. And it is also applying similar technology to CardiAQ, a magnetocardiography (MCG) device with the potential for faster, more accurate diagnosis of cardiovascular disease.
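The “fingerprint” idea can be illustrated with a toy one-dimensional sketch: match a short window of magnetometer readings against a pre-surveyed anomaly profile and take the best-fitting offset as the position. The synthetic map, noise levels, and brute-force search below are illustrative assumptions, not how SandboxAQ’s navigation system actually works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-surveyed magnetic anomaly profile along a 1 km track,
# sampled every metre (synthetic data: a wandering anomaly in nanotesla).
track = np.cumsum(rng.normal(0.0, 5.0, 1000))

def locate(readings, map_profile):
    """Slide the window of readings along the map and return the
    offset (in metres) with the smallest mean-squared mismatch."""
    n = len(readings)
    best, best_err = 0, np.inf
    for off in range(len(map_profile) - n + 1):
        err = np.mean((map_profile[off:off + n] - readings) ** 2)
        if err < best_err:
            best, best_err = off, err
    return best

# Simulate a vehicle at the 420 m mark taking 50 noisy readings.
true_pos = 420
readings = track[true_pos:true_pos + 50] + rng.normal(0.0, 1.0, 50)
est = locate(readings, track)
print(est)  # should land at, or very near, 420
```

Because the anomaly profile is effectively unique along the track, the correct offset fits the readings far better than any other — the same reason a magnetic fingerprint can substitute for GPS when satellite signals are jammed or unavailable.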
The company is also exploring applications in quantitative finance and is particularly excited about developing “agentic LQMs”, which could automate complex processes across various domains.
“You can take the same principles and apply them to other quantitative domains that are not about the physical world, such as quantitative finance,” says Leichenauer. “If you want to try and predict interest rates and unemployment rates at a macro scale, there are models, but those models are imperfect. And our quantitative AI could lead to interesting results in those kinds of domains.”
Going beyond AI with quantum
Beyond AI, SandboxAQ also focuses on quantum technology (in the company’s name, the ‘A’ stands for AI, and the ‘Q’ stands for quantum). And whilst LQMs can achieve good results in certain fields, there are others that will require a quantum solution.
“For some problems you never need quantum computing. But there are some areas you’ll find yourselves where the HPC is not reliable,” Leichenauer tells ITPro. “Sometimes it’s just too hard of a problem for any classical computing method to be able to get the right answer. And in those cases, when you come across a case like that, that’s when you would call upon the quantum computer.”
Leichenauer believes that HPC will continue to play an integral role in the problems the company is trying to solve, but that quantum computing will be vital for the problems classical computers can’t handle. Quantum hardware, however, hasn’t reached that point yet.
“Of course, today’s quantum computers are not yet at a level of scale and fidelity that they’re really helpful in this respect for real world problems,” he says. “But we hope and we’re optimistic that in the coming years they will be, and they plug in very naturally to this kind of process.”
Exactly when that point will arrive has been hotly debated in recent months. At the start of 2025, Nvidia CEO Jensen Huang claimed that a “very useful quantum computer” was at least two decades away. Huang later walked his comments back and, indeed, Leichenauer believes we’re much closer than that.
“I think 20 years is a bit pessimistic,” Leichenauer says. “The quantum computing resources required to do one kind of algorithm are very different from the resources required to do another kind of algorithm. The kind of computing that SandboxAQ would use for drug discovery, where what we’re really trying to do is model quantum physics processes, is one of the easier things that a quantum computer can do. And so I think it’ll be closer to five years when we get quantum computers capable of doing this kind of molecular modelling.”