I’m excited to announce Forgepoint Capital International’s investment in Multiverse Computing as part of the company’s €189 million ($215 million) Series B round, alongside Bullhound Capital. Additional investors in the round include HP Tech Ventures, Santander Climate VC, Toshiba, Sociedad Española para la Transformación Tecnológica (SETT), CDP VC, and Capital Riesgo de Euskadi – Grupo SPRI.
We are thrilled to partner with the team at Multiverse Computing, a global leader in AI optimization making Large Language Models (LLMs) practical at scale.
Rising AI costs, energy demands, and environmental impacts
As AI technology rapidly advances, LLMs use ever-larger amounts of training data and ever more parameters (the values that determine how models learn and process data) to produce more sophisticated, accurate outcomes. While this unlocks new possibilities for AI-enabled innovation, it also drives dramatic growth in computational and energy demands and negative environmental impacts, not to mention training and inference costs: training a frontier model can cost tens of millions of dollars, and training costs have increased 2.4x per year since 2016.
Existing AI model compression techniques, which seek to reduce model sizes, energy demands, and costs, significantly degrade model performance and undermine the utility of AI. Adding to the challenge, there is an ongoing shortage of the advanced computer chips essential for LLM deployments. These unsolved obstacles have limited the application of LLMs at scale.
A paradigm shift is required to solve this problem. That’s why we’re backing Multiverse Computing, innovators of a quantum computing-inspired approach to AI model compression that makes LLMs more accessible, sustainable, and scalable.
Optimized LLMs for the enterprise, the IoT, the edge, and beyond
Multiverse Computing solves the AI optimization problem with CompactifAI, an AI model compressor that makes popular open-source LLMs, including LLaMa, DeepSeek, and Mistral, smaller, faster, cheaper to train and run, and more portable thanks to unprecedented improvements: reducing AI model sizes by up to 95%, cutting inference costs and power consumption by 50-80%, and boosting model speeds by 4-12x, all with just 2-3% precision loss.
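To give a feel for how large size reductions can coexist with small precision loss, here is a minimal illustrative sketch. It is not Multiverse's method (CompactifAI is built on proprietary, quantum-inspired tensor-network techniques); it uses truncated SVD, a much simpler low-rank compression, on a synthetic weight matrix standing in for one layer of a model.

```python
import numpy as np

# Illustrative only: a stand-in for CompactifAI's tensor-network compression.
# We compress one synthetic "weight matrix" with truncated SVD to show how
# parameter count can drop sharply while reconstruction error stays small.

rng = np.random.default_rng(0)

# A 1024x1024 matrix with underlying low-rank structure plus a little noise,
# mimicking the redundancy often found in trained model weights.
W = rng.standard_normal((1024, 32)) @ rng.standard_normal((32, 1024))
W += 0.01 * rng.standard_normal(W.shape)

# Keep only the top-k singular values/vectors.
k = 32
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
W_approx = U_k @ np.diag(s_k) @ Vt_k

# Storage: two thin factors (plus k singular values) instead of one dense matrix.
original_params = W.size
compressed_params = U_k.size + s_k.size + Vt_k.size
compression = 1 - compressed_params / original_params

# Relative reconstruction error, a rough proxy for "precision loss".
rel_error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)

print(f"size reduction: {compression:.0%}")
print(f"relative error: {rel_error:.2%}")
```

On this toy example the factorized form stores roughly 94% fewer parameters with a sub-percent reconstruction error; real LLM layers are far less cleanly low-rank, which is why more sophisticated techniques are needed to reach comparable ratios in practice.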