Multiverse Computing: Why We Invested

Damien Henault

June 12, 2025

I’m excited to announce Forgepoint Capital International’s investment in Multiverse Computing as part of the company’s €189 million ($215 million) Series B round, alongside Bullhound Capital. Additional investors in the round include HP Tech Ventures, Santander Climate VC, Toshiba, Sociedad Española para la Transformación Tecnológica (SETT), CDP VC, and Capital Riesgo de Euskadi – Grupo SPRI.

We are thrilled to partner with the team at Multiverse Computing, global leaders in AI optimization making Large Language Models (LLMs) practical at scale.

Rising AI costs, energy demands, and environmental impacts

As AI technology rapidly advances, LLMs use ever-increasing amounts of training data and parameters (the values that determine how models learn and process data) to enable more sophisticated, accurate outcomes. While this unlocks new possibilities for AI-enabled innovation, it is also driving dramatic growth in computational and energy demands and negative environmental impacts, not to mention training and inference costs: training a frontier model can cost tens of millions of dollars, and training costs have increased 2.4x per year since 2016.

Existing AI model compression techniques, which aim to shrink model sizes, energy demands, and costs, significantly degrade model performance and undermine the utility of AI. Adding to the challenge, there is an ongoing shortage of the advanced computer chips essential for LLM deployments. These unsolved obstacles have limited the application of LLMs at scale.

A paradigm shift is required to solve this problem. That’s why we’re backing Multiverse Computing, innovators of a quantum computing-inspired approach to AI model compression that makes LLMs more accessible, sustainable, and scalable.

Optimized LLMs for the enterprise, the IoT, the edge, and beyond

Multiverse Computing solves the AI optimization problem with CompactifAI, an AI model compressor that makes popular open-source LLMs, including LLaMa, DeepSeek, and Mistral, smaller, faster, cheaper to train and run, and more portable thanks to unprecedented improvements: reducing AI model sizes by up to 95%, cutting inference costs and power consumption by 50-80%, and boosting model speeds by 4-12x, all with just 2-3% precision loss.

At the heart of Multiverse Computing's innovation are tensor networks, a quantum-inspired methodology that simplifies complex neural networks by reducing AI model parameters and limiting spurious correlations (model decisions based on irrelevant information). Multiverse Computing's tensorized AI models train faster, use less data, and require less energy to operate without sacrificing performance.
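To build intuition for how factorizing a layer can cut parameters, here is a minimal sketch (not CompactifAI's actual method) using a truncated SVD, the simplest relative of the tensor-network decompositions described above: a dense weight matrix is replaced by two thin factors that keep only the dominant correlations.

```python
import numpy as np

# Toy illustration: compress a dense layer's weight matrix by keeping
# only its top singular components. Tensor-network methods generalize
# this idea to higher-order factorizations; the sizes and rank here
# are arbitrary choices for demonstration.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))      # stand-in for a layer's weights

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64                                 # truncation rank (hyperparameter)
W_approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

params_full = W.size
params_low = U[:, :rank].size + rank + Vt[:rank, :].size
print(f"parameters: {params_full} -> {params_low} "
      f"({100 * (1 - params_low / params_full):.0f}% smaller)")
```

For a trained layer whose weights really are dominated by a few directions, a low truncation rank preserves most of the layer's behavior while storing a fraction of the parameters; choosing the rank per layer is where the accuracy-versus-size trade-off is made.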

“Multiverse Computing’s CompactifAI model compressor is a game changer for AI processing, optimizing AI adoption by reducing model size and resource consumption without sacrificing performance. This innovative technology enables faster, more efficient AI model deployments and use cases across sectors, ultimately making AI more accessible, sustainable, and scalable.”

Damien Henault, Managing Director, Forgepoint Capital International

Multiverse Computing enables rapid LLM deployments with its Singularity platform which delivers CompactifAI-compressed LLMs in the cloud or on-premises, including directly on devices like enterprise workstations, phones, autonomous vehicles, and drones. Today, enterprise customers including HP, Bosch, and Iberdrola are unlocking high-impact LLM use cases for the Internet of Things (IoT), the edge, and beyond, from AI-powered cybersecurity threat detection and automated securities trading to renewable energy network stabilization and green transition technologies.

Leading the way in European AI innovation

As an ardent supporter of AI, cybersecurity, and quantum computing innovation, I have closely followed Multiverse Computing's evolution since its founding in 2019. I first met the three founders, CEO Dr. Enrique Lizaso; Chief Science Officer Prof. Román Orús, a founding father of tensor networks; and CTO Dr. Samuel Mugel, after attending an inspiring European quantum computing conference in Paris where Multiverse Computing presented its breakthrough quantum-inspired AI algorithms. My interest in the company quickly grew as I learned about the team and technology.

At Forgepoint Capital, we look for companies with a set of rare and valuable advantages. Many deep tech startups and scaleups have solid expertise and strong execution, yet few achieve transformative intellectual property (IP). Rarer still is the combination of innovative ideas and technology, visionary founders, and world-class talent needed to break through with the potential to scale globally. That is precisely what Multiverse Computing has done to drastically reduce spiraling LLM training and inference costs.

Collectively, Enrique, Román, and Samuel bring decades of experience at the forefront of both academic research and commercial application in quantum computing and tensor networks, machine learning, and AI. Positioned at the leading edge of AI innovation in Europe, Multiverse Computing has attracted a 150+ person team of top-tier AI, quantum, R&D, and product talent, an impressive 40% of whom hold a PhD, with representation from 27+ nationalities.

The company now plays a powerful role as an emerging leader in European AI innovation. Headquartered in San Sebastián, Spain, with offices in Paris and Berlin along with a new office in San Francisco, Multiverse Computing is bringing disruptive AI processing to the growing European market while also expanding globally to the U.S. and other key geographies.

Shaping the future of AI and LLMs

We believe that Multiverse Computing has a massive opportunity to disrupt the booming $100+ billion AI inference market. Propelled by valuable IP, a differentiated approach, innovative technology, and unmatched expertise, the company is addressing a huge market gap and has the potential to fundamentally change AI hardware requirements and the proliferation of LLMs. We look forward to working with the team, along with their ecosystem of strategic global partners, as they optimize and shape the future of LLMs and AI.