CoreWeave: NVIDIA Grace Blackwell GPUs Power AI

CoreWeave has emerged as a leader in cloud computing, offering unparalleled access to NVIDIA GB200 NVL72 systems. Leading artificial intelligence (AI) organizations, including Cohere, IBM, and Mistral AI, are already leveraging these advanced resources to refine their AI models and applications.

As the first cloud provider to make NVIDIA Grace Blackwell generally available, CoreWeave has demonstrated impressive MLPerf benchmark results using the NVIDIA GB200 NVL72. This powerful platform, designed for reasoning and AI agents, is now available to CoreWeave’s customers, providing access to thousands of NVIDIA Blackwell GPUs.

‘We collaborate closely with NVIDIA to ensure our customers have the most advanced solutions for AI model training and inference,’ stated Mike Intrator, CEO of CoreWeave. ‘With the new Grace Blackwell rack-scale systems, our customers are among the first to experience the performance benefits of AI innovation at scale.’

The deployment of these thousands of NVIDIA Blackwell GPUs accelerates the conversion of raw data into actionable intelligence, and further expansions are already planned to meet the growing demand.

Companies that utilize cloud providers like CoreWeave are actively integrating systems built on NVIDIA Grace Blackwell. These innovative systems are poised to transform traditional data centers into sophisticated AI factories, enabling the production of intelligence at scale and converting raw data into valuable insights with enhanced speed, accuracy, and efficiency.

Global AI leaders are actively harnessing the advanced capabilities of the GB200 NVL72 for diverse applications, ranging from AI agents to the development of cutting-edge AI models.

Personalized AI Agents

Cohere is utilizing Grace Blackwell Superchips to enhance the development of secure enterprise AI applications, leveraging advanced research and model development methodologies. Its enterprise AI platform, North, empowers teams to create personalized AI agents for the secure automation of enterprise workflows and the delivery of real-time insights.

By leveraging NVIDIA GB200 NVL72 on CoreWeave, Cohere has realized a performance increase of up to threefold in training 100 billion-parameter models compared to the previous generation of NVIDIA Hopper GPUs, and this performance boost was achieved even before implementing Blackwell-specific optimizations.

Further optimizations that take advantage of GB200 NVL72’s unified memory, FP4 precision, and 72-GPU NVIDIA NVLink domain promise to raise throughput even higher. With every GPU operating in concert, the system achieves greater throughput and shorter time to first and subsequent tokens, making inference both faster and more cost-effective.
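To put the precision point in rough numbers, here is a simple, illustrative calculation (not from the source) of how much weight memory a 100-billion-parameter model, the scale cited above, occupies at 16-, 8-, and 4-bit precision:

```python
# Back-of-the-envelope memory math for a 100-billion-parameter model.
# Illustrative only: real deployments also need memory for activations,
# KV cache, and framework overhead, and FP4 support depends on the stack.

PARAMS = 100e9  # 100B parameters, the model scale cited above


def weight_memory_gb(bits_per_param: float) -> float:
    """Return weight memory in gigabytes at a given precision."""
    return PARAMS * bits_per_param / 8 / 1e9


for name, bits in [("FP16/BF16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name:>9}: ~{weight_memory_gb(bits):,.0f} GB of weights")
```

Halving the bits per weight roughly halves the bytes that must move for every token, which is one reason lower-precision formats such as FP4 can improve inference throughput when the hardware supports them natively.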

‘With access to some of the first NVIDIA GB200 NVL72 systems in the cloud, we are pleased with how easily our workloads port to the NVIDIA Grace Blackwell architecture,’ said Autumn Moulder, vice president of engineering at Cohere. ‘This unlocks incredible performance efficiency across our stack — from our vertically integrated North application running on a single Blackwell GPU to scaling training jobs across thousands of them. We’re looking forward to achieving even greater performance with additional optimizations soon.’

AI Models for Enterprise

IBM is utilizing one of the initial NVIDIA GB200 NVL72 system deployments, scaling to thousands of Blackwell GPUs on CoreWeave, to train its next-generation Granite models. These open-source, enterprise-ready AI models deliver state-of-the-art performance while ensuring safety, speed, and cost-effectiveness. The Granite model family is supported by a robust partner ecosystem, including leading software companies embedding large language models into their technologies.

Granite models serve as the foundation for solutions like IBM watsonx Orchestrate, which enables enterprises to develop and deploy AI agents that automate and accelerate workflows.

CoreWeave’s NVIDIA GB200 NVL72 deployment for IBM also utilizes the IBM Storage Scale System, delivering high-performance storage for AI. CoreWeave customers can access the IBM Storage platform within CoreWeave’s dedicated environments and AI cloud platform.

‘We are excited to see the acceleration that NVIDIA GB200 NVL72 can bring to training our Granite family of models,’ said Sriram Raghavan, vice president of AI at IBM Research. ‘This collaboration with CoreWeave will augment IBM’s capabilities to help build advanced, high-performance and cost-efficient models for powering enterprise and agentic AI applications with IBM watsonx.’

Compute Resources at Scale

Mistral AI is now receiving its first thousand Blackwell GPUs to build the next generation of open-source AI models.

Mistral AI, a Paris-based leader in open-source AI, is leveraging CoreWeave’s infrastructure, now equipped with GB200 NVL72, to expedite the development of its language models. With models such as Mistral Large delivering robust reasoning capabilities, Mistral requires fast computing resources at scale.

To train and deploy these models effectively, Mistral AI needs a cloud provider that offers large, high-performance GPU clusters with NVIDIA Quantum InfiniBand networking and reliable infrastructure management. CoreWeave’s expertise in deploying NVIDIA GPUs at scale, combined with industry-leading reliability and resiliency through tools like CoreWeave Mission Control, meets these needs.

‘Right out of the box and without any further optimizations, we saw a 2x improvement in performance for dense model training,’ said Timothée Lacroix, cofounder and chief technology officer at Mistral AI. ‘What’s exciting about NVIDIA GB200 NVL72 is the new possibilities it opens up for model development and inference.’

Broadening Availability of Blackwell Instances

CoreWeave not only offers long-term customer solutions but also provides instances with rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking.

These instances, accelerated by the NVIDIA GB200 NVL72 rack-scale accelerated computing platform, deliver the scale and performance required to develop and deploy the next wave of AI reasoning models and agents.
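As a rough illustration of how a customer might drive such multi-GPU instances, here is a minimal data-parallel training sketch assuming a standard PyTorch/NCCL stack; the launcher flags, node counts, endpoint placeholder, and toy model are illustrative rather than CoreWeave-specific.

```python
# Minimal data-parallel training sketch, assuming a standard PyTorch/NCCL
# stack. Node counts, the rendezvous endpoint placeholder, and the toy model
# are illustrative only, not CoreWeave-specific settings.
#
# Hypothetical launch across two 8-GPU nodes:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node-ip>:29500 train.py

import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):  # toy training loop with random data
        x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced across every GPU here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```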

Deep Dive into CoreWeave’s Technological Infrastructure

CoreWeave has established itself as a pivotal player in the cloud computing landscape, largely through its commitment to cutting-edge hardware and an infrastructure tailored to the demands of AI and machine learning workloads. The integration of NVIDIA GB200 NVL72 systems underscores that commitment: these systems represent a major leap in computational power and efficiency, enabling organizations to tackle complex problems that were previously out of reach.

The architecture of the NVIDIA GB200 NVL72 is designed to maximize performance across a broad spectrum of AI applications. By pairing 72 NVIDIA Blackwell GPUs with 36 NVIDIA Grace CPUs, the platform creates a balanced compute environment for workloads that demand both intensive computation and substantial data processing. NVIDIA NVLink ties the rack together with high-bandwidth, low-latency GPU-to-GPU communication, keeping all 72 GPUs working in concert and maximizing throughput.
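For readers who want to see what GPU-to-GPU connectivity looks like from software, the following minimal sketch (assuming a PyTorch environment; it reports peer access in general, not NVLink specifically) prints which visible GPUs can reach each other directly:

```python
# Minimal sketch for inspecting GPU-to-GPU peer connectivity from PyTorch.
# It reports whether direct peer access is possible between visible devices;
# whether a given link is NVLink or PCIe depends on the machine it runs on.

import torch


def peer_access_matrix():
    n = torch.cuda.device_count()
    print(f"Visible GPUs: {n}")
    for i in range(n):
        peers = [j for j in range(n)
                 if j != i and torch.cuda.can_device_access_peer(i, j)]
        print(f"GPU {i} ({torch.cuda.get_device_name(i)}): "
              f"direct peer access to {peers if peers else 'none'}")


if __name__ == "__main__":
    if torch.cuda.is_available():
        peer_access_matrix()
    else:
        print("No CUDA devices visible.")
```

On a live system, running `nvidia-smi topo -m` gives a more detailed view of which links are NVLink versus PCIe.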

CoreWeave’s infrastructure is also built for scale. The ability to grow to 110,000 GPUs over NVIDIA Quantum-2 InfiniBand networking lets the platform support even the most demanding, resource-intensive AI projects. Scalability is about more than raw compute: the network fabric must also carry the massive data flows generated by large-scale training and inference, and Quantum-2 InfiniBand supplies the bandwidth and low latency needed to maintain performance as clusters grow.
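One way to get a feel for why interconnect bandwidth matters at this scale is a simple all-reduce probe. The sketch below assumes the same torchrun/NCCL launch convention as the training example earlier and uses an arbitrary 1 GiB buffer; dedicated tools such as NVIDIA’s nccl-tests suite are the more rigorous option:

```python
# Rough all-reduce probe, assuming the same torchrun/NCCL launch convention
# as the training sketch earlier. Buffer size and timing are illustrative;
# NVIDIA's nccl-tests suite is the more rigorous way to benchmark a fabric.

import os
import time

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    world = dist.get_world_size()

    numel = 256 * 1024 * 1024  # 1 GiB of float32 per rank
    buf = torch.randn(numel, device=f"cuda:{local_rank}")

    for _ in range(5):  # warm-up iterations
        dist.all_reduce(buf)
        buf.div_(world)  # keep values bounded across iterations
    torch.cuda.synchronize()

    iters = 20
    t0 = time.perf_counter()
    for _ in range(iters):
        dist.all_reduce(buf)
        buf.div_(world)
    torch.cuda.synchronize()
    per_iter = (time.perf_counter() - t0) / iters

    if dist.get_rank() == 0:
        gib = buf.numel() * buf.element_size() / 2**30
        print(f"all_reduce of {gib:.1f} GiB: {per_iter * 1e3:.1f} ms per iteration")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```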

The Strategic Importance of Early Adoption

CoreWeave’s forward-looking approach to adopting new technologies such as the NVIDIA Grace Blackwell GPUs has positioned the company as a trusted partner for organizations at the forefront of AI innovation. By being among the first cloud providers to offer these resources, CoreWeave gives its customers an early competitive advantage: they can experiment with new models, optimize existing workflows, and shorten their time to market.

The benefits of early adoption extend beyond access to advanced hardware. Working closely with technology providers like NVIDIA allows CoreWeave to tune its infrastructure and software stack to fully exploit the new hardware’s capabilities, and that optimization translates into better performance and lower costs for CoreWeave’s customers.

Moreover, CoreWeave’s early adoption strategy fosters a vibrant and dynamic culture of innovation within the company. By consistently pushing the boundaries of what is technologically feasible within the realm of cloud computing, CoreWeave attracts top talent and establishes itself as a recognized leader in the industry. This proactive approach, in turn, reinforces its ability to provide cutting-edge solutions and steadfastly maintain its competitive advantage in the rapidly evolving technological landscape.

Implications for AI Model Development

The deployment of NVIDIA Grace Blackwell GPUs on CoreWeave’s platform has significant implications for AI model development. The added computational power and efficiency let researchers and engineers train larger, more complex models in far less time than on previous hardware generations, and that faster training cycle is critical for staying competitive in a rapidly evolving field.

Furthermore, the innovative NVIDIA GB200 NVL72 systems facilitate the development of more sophisticated AI models that can perform a wider range of complex tasks with greater precision and efficiency. These systems are particularly well-suited for training models that require extensive reasoning capabilities, such as those utilized in natural language processing and computer vision applications. The exceptional ability to process vast quantities of data and perform intricate calculations enables these models to achieve enhanced accuracy, reliability, and the capability to effectively handle real-world scenarios.

The impact on specific applications is substantial and far-reaching. In the domain of natural language processing, the new hardware empowers the creation of more powerful language models that can understand and generate human-like text with greater fluency and coherence. This leads to significant improvements in applications such as sophisticated chatbots, intelligent virtual assistants, and advanced machine translation systems. In the field of computer vision, the enhanced computational power allows for the development of more accurate and robust object recognition systems, which are essential for applications like autonomous vehicles, medical imaging, and advanced surveillance systems.

The Role of CoreWeave in Democratizing AI

CoreWeave’s ongoing efforts to make advanced computing resources readily accessible to a wider audience play a pivotal role in the democratization of AI. By providing cost-effective access to cutting-edge hardware resources, CoreWeave empowers smaller companies and research institutions to effectively compete with larger organizations that have traditionally dominated the AI landscape. This democratization of AI fosters innovation and promotes a more diverse range of perspectives in the development and implementation of AI technologies.

The widespread availability of powerful cloud-based resources also lowers the barrier to entry for individuals and startups exploring AI. By eliminating the need for large upfront hardware investments, CoreWeave lets aspiring AI developers focus on their ideas rather than on infrastructure, which can lead to novel applications and solutions that might not otherwise have been possible.

Moreover, CoreWeave’s steadfast commitment to providing a user-friendly platform and comprehensive support services further contributes to the democratization of AI. By making it easier for users to seamlessly access and utilize advanced computing resources, CoreWeave empowers them to achieve their goals and actively contribute to the continued advancement of AI technologies.

Transforming Industries with AI

The significant advancements enabled by CoreWeave’s deployment of NVIDIA Grace Blackwell GPUs are poised to transform a wide range of industries. The enhanced computational power and efficiency of these advanced systems will drive innovation and create new opportunities across diverse sectors, from healthcare to finance and beyond.

In the healthcare industry, AI is being used to develop more accurate and reliable diagnostic tools, personalize treatment plans to individual patient needs, and accelerate drug discovery. Ready access to advanced computing resources lets researchers analyze vast amounts of medical data and identify subtle patterns that would be nearly impossible to detect manually, which can lead to breakthroughs in the treatment of disease and significantly better patient outcomes.

In the finance sector, AI is being leveraged to detect fraudulent activities, proactively manage financial risk, and automate complex trading processes. The ability to rapidly process large volumes of financial data in real-time enables companies to make more informed decisions and respond quickly to rapidly changing market conditions. This can lead to increased efficiency, reduced operational costs, and improved profitability.

Other industries that are likely to be fundamentally transformed by the widespread adoption of AI include manufacturing, transportation, and retail. In the manufacturing sector, AI is being used to optimize production processes, improve quality control measures, and reduce waste, leading to increased efficiency and reduced costs. In transportation, AI is enabling the development of autonomous vehicles and more efficient logistics systems, revolutionizing the way goods and people are moved. In the retail industry, AI is being used to personalize customer experiences, optimize pricing strategies, and improve supply chain management, leading to increased customer satisfaction and improved profitability.

CoreWeave’s Vision for the Future

CoreWeave’s deployment of NVIDIA Grace Blackwell GPUs is not a one-off milestone; it is part of a broader vision for the future of cloud computing and AI. The company is committed to continued investment in emerging technologies and to expanding its infrastructure to meet the evolving needs of its customers, including exploring new architectures, developing more efficient software, and deepening collaborations with leading technology providers.

CoreWeave’s overarching vision extends beyond simply providing advanced computing resources. It also encompasses creating a vibrant and collaborative ecosystem of developers, researchers, and companies that are actively working to push the boundaries of AI innovation. By fostering innovation and promoting collaboration, CoreWeave aims to accelerate the development and widespread adoption of AI technologies across a multitude of industries.

Sustainability is also a key component of the company’s long-term vision. CoreWeave is working to minimize its environmental impact by using renewable energy sources and energy-efficient technologies across its operations, reflecting the growing importance of sustainability in the technology industry and a commitment to a more environmentally responsible future.