MAGIC Research, a collaborative research program, virtual incubator, and service center for artificial intelligence (AI) and advanced technologies, has officially announced the launch of Fabric Hypergrid, billed as the industry’s first private, generative AI distributed computing hypergrid.
The platform is described as multi-model, multimodal, and hardware-agnostic, designed to cut AI computing costs by as much as 90% while giving businesses full control over their AI capabilities and data.
To understand the significance of this development, consider that traditional cloud and AI providers rely on expensive, centralized, and energy-intensive data centers and GPUs to power large language models. That reality creates barriers for businesses focused on cost efficiency, data security, and privacy.
Making matters worse, an estimated 80% of AI requests do not even require these resource-heavy models.
In response, Fabric Hypergrid optimizes AI workloads by strategically deploying tasks across a mix of state-of-the-art and legacy GPUs, CPUs, and accelerators, reducing costs while maximizing efficiency. This setup lets organizations turn their existing hardware into an enterprise-grade AI supercomputer at a fraction of the cost, without compromising speed, security, or scalability.
Fabric Hypergrid powers text, image, audio, and video generation, as well as complex multimodal workflows such as course creation and enterprise automation. The platform also supports complex business operations and research initiatives, including molecular research, protein folding, material physics, mathematical computation, and computational chemistry.
“We created Fabric Hypergrid to fill a critical gap in the market for a scalable, high-performance infrastructure capable of harnessing generative AI and advanced computational intelligence for research. Our customers needed a solution adaptable enough to keep pace with their evolving demands,” said Humberto Farias, founder of MAGIC Research. “In searching for the right AI partner, we found existing platforms were prohibitively expensive, lacked robust privacy features, and weren’t flexible enough. That’s why we built our own affordable, distributed AI infrastructure to ensure organizations of all sizes can access the cost, privacy, and flexibility they need.”
Fabric Hypergrid is built on five core technologies that work together to maximize efficiency and minimize cost at every level.
The first of these is the router, which dynamically selects the best model for each prompt to ensure high-quality outputs at the lowest cost.
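MAGIC Research has not published implementation details for the router, but the behavior it describes resembles cost-aware model selection. The minimal Python sketch below illustrates that general idea; the model catalog, pricing figures, and complexity heuristic are invented for the example and are not part of Fabric Hypergrid.

```python
# Hypothetical sketch of cost-aware model routing. Model names, prices,
# and the complexity heuristic are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # assumed relative cost, not real pricing
    capability: int            # 1 = small/cheap, 3 = large/expensive

CATALOG = [
    Model("small-slm", 0.02, 1),
    Model("mid-llm", 0.20, 2),
    Model("frontier-llm", 1.50, 3),
]

def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a learned difficulty classifier."""
    if len(prompt.split()) > 200 or "step by step" in prompt.lower():
        return 3
    if any(k in prompt.lower() for k in ("analyze", "summarize", "code")):
        return 2
    return 1

def route(prompt: str) -> Model:
    """Pick the cheapest model whose capability meets the estimated need."""
    need = estimate_complexity(prompt)
    eligible = [m for m in CATALOG if m.capability >= need]
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

if __name__ == "__main__":
    print(route("What time zone is Orlando in?").name)    # -> small-slm
    print(route("Analyze this contract clause...").name)  # -> mid-llm
```

The point of such a scheme is that routine prompts, the bulk of traffic, never touch the most expensive model, which is where the claimed cost savings would come from.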
Next is the orchestrator, which intelligently distributes tasks between state-of-the-art and legacy GPUs and CPUs to reduce dependency on expensive cloud infrastructure.
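No public API for the orchestrator is available either, but its stated role maps onto a familiar placement problem: send each task to the cheapest hardware that can still meet its service target. The rough sketch below works under that assumption; the device names, throughputs, and costs are illustrative only.

```python
# Heterogeneity-aware task placement sketch, assuming the orchestrator
# prefers the cheapest device that can finish a task within its deadline.
# All figures below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Device:
    label: str
    tokens_per_sec: float  # assumed sustained throughput
    cost_per_hour: float   # assumed relative operating cost

DEVICES = [
    Device("CPU node", 6.0, 0.10),
    Device("legacy V100", 30.0, 0.60),
    Device("H100", 90.0, 2.50),
]

def place(task_tokens: int, deadline_s: float) -> Device:
    """Return the cheapest device that can finish the task within its
    deadline; fall back to the fastest device if none can."""
    viable = [d for d in DEVICES if task_tokens / d.tokens_per_sec <= deadline_s]
    if viable:
        return min(viable, key=lambda d: d.cost_per_hour)
    return max(DEVICES, key=lambda d: d.tokens_per_sec)

if __name__ == "__main__":
    for name, tokens, deadline in [("chat reply", 300, 60),
                                   ("report draft", 4000, 180),
                                   ("bulk summarization", 40000, 300)]:
        print(f"{name:>18} -> {place(tokens, deadline).label}")
```

Under these assumed numbers, light chat traffic lands on CPUs, mid-sized jobs on legacy GPUs, and only the heaviest work on the newest accelerator, which is the dependency reduction the company describes.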
Then there is the accelerator, which speeds up the execution of AI models by almost 20x to maximize model performance and tokens per second.
Joining these is MAGIC Research’s diffuser technology, which allows Fabric Hypergrid to split a single model across a global network of distributed computational resources, eliminating the need to run large models on expensive, centralized infrastructure, data centers, or heterogeneous systems.
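Functionally, this description resembles model or pipeline parallelism, in which a single network’s layers are partitioned across machines so that no one node has to hold the entire model. The sketch below shows one such partitioning scheme under assumed node memory capacities; it is not Fabric Hypergrid’s actual mechanism.

```python
# Conceptual sketch of splitting one model across many nodes, in the
# spirit of the diffuser description. The partitioning rule and node
# list are assumptions made for illustration.

def partition_layers(num_layers, node_capacities):
    """Split `num_layers` transformer blocks across nodes in proportion
    to each node's memory capacity (GB)."""
    total = sum(node_capacities.values())
    shares = {n: max(1, round(num_layers * c / total))
              for n, c in node_capacities.items()}
    # Correct rounding drift so the shares sum to num_layers.
    drift = num_layers - sum(shares.values())
    largest = max(shares, key=shares.get)
    shares[largest] += drift
    assignment, start = {}, 0
    for node, count in shares.items():
        assignment[node] = list(range(start, start + count))
        start += count
    return assignment

if __name__ == "__main__":
    nodes = {"office-gpu-1": 24, "office-gpu-2": 12, "lab-workstation": 8}
    for node, layers in partition_layers(32, nodes).items():
        print(f"{node}: layers {layers[0]}-{layers[-1]}")
```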
Rounding out the list is the adaptive runtime, which enables organizations to bring their existing hardware, spanning multiple operating systems and legacy GPUs, and combine it into a single generative AI supercomputer.
Beyond its promise to cut costs by almost 90%, the platform also delivers a privacy-first network, smart energy usage and waste reduction, customization and scalability, and flexible deployment options.
“Getting started with enterprise AI can be daunting, but Fabric Hypergrid simplifies the process with a plug-and-play design, ready-to-use models, and easy orchestration. By enabling companies to use their own hardware and private SLM/LLMs, we ensure data remains secure and in-house, making AI more accessible, efficient, and affordable for all organizations – not just the biggest players,” said Farias.