Dell Technologies, in collaboration with NVIDIA, has unveiled a groundbreaking suite of enterprise AI solutions poised to revolutionize artificial intelligence adoption and deployment on a global scale. This strategic alliance marks a significant leap forward in empowering organizations to harness the transformative potential of AI, driving innovation and efficiency across diverse industries.
Dell AI Factory with NVIDIA: A Comprehensive Ecosystem for AI Innovation
The cornerstone of this collaboration is the Dell AI Factory with NVIDIA, an ecosystem that gives organizations the infrastructure, solutions, and managed services needed to scale AI operations. The platform combines Dell hardware with NVIDIA AI software to support every stage of the AI lifecycle, from initial experimentation and model development through large-scale deployment and ongoing management. It includes access to a library of pre-trained models, optimized software libraries, and expert support, helping organizations accelerate AI initiatives and deliver tangible results. The guiding principle behind the platform is to democratize AI, making it accessible to organizations of any size or level of technical expertise: by simplifying deployment and management, it lets businesses focus on applying AI to real-world problems rather than on infrastructure. The collaboration reflects Dell and NVIDIA's shared vision of AI integrated into every aspect of business.
PowerEdge Servers: Unleashing Unprecedented AI Performance
At the heart of Dell’s new AI solutions are its next-generation PowerEdge servers, engineered for AI workloads. More than upgraded versions of existing models, they are redesigned around the demands of modern AI applications, with support for the latest NVIDIA GPUs, high-bandwidth memory, and advanced networking. The servers are optimized for a wide range of workloads, including deep learning, natural language processing, and computer vision, whether that means training complex models, running inference on large datasets, or deploying AI-powered applications at the edge. A modular design lets organizations tailor configurations to their specific workloads, so they can maximize both performance and return on their infrastructure investment.
- Air-cooled Dell PowerEdge XE9780 and XE9785 servers: Designed for seamless integration into existing enterprise data centers, these models operate efficiently in standard environments without specialized cooling or power infrastructure. That simplifies deployment, reduces total cost of ownership, and gives organizations a straightforward path to AI adoption without major infrastructure overhauls, while balancing performance, efficiency, and scalability across a variety of AI workloads.
- Liquid-cooled Dell PowerEdge XE9780L and XE9785L models: Engineered to accelerate rack-scale deployment, these servers rely on liquid cooling, which removes heat more efficiently than air and so permits higher densities and higher sustained clock speeds. They target organizations that need the highest levels of performance and scalability, such as for training large language models, running complex simulations, or powering other computationally intensive applications.
The new server range supports up to 192 NVIDIA Blackwell Ultra GPUs with direct-to-chip liquid cooling, and can be customized with up to 256 NVIDIA Blackwell Ultra GPUs per Dell IR7000 rack. Direct-to-chip liquid cooling keeps the GPUs running at maximum performance without overheating, while the IR7000 rack provides a high-density, scalable footprint for deploying AI infrastructure at this scale.
Compared with Dell’s PowerEdge XE9680, these next-generation servers deliver up to four times faster large language model training with the 8-way NVIDIA HGX B300. The Dell PowerEdge XE9712, featuring NVIDIA GB300 NVL72, stands out for rack-scale training efficiency, offering up to fifty times more AI reasoning inference output and a fivefold improvement in throughput. These gains reflect advances in both hardware and software, letting organizations deploy AI infrastructure at scale without sacrificing performance on complex AI tasks.
Dell has also incorporated its PowerCool technology to improve power efficiency within these platforms. PowerCool optimizes cooling performance, lowering energy consumption and operating costs, which matters most in large-scale AI deployments where energy use is a significant concern, and which supports Dell’s sustainability commitments by helping organizations shrink their carbon footprint.
Expanding the Server Portfolio: Catering to Diverse AI Use Cases
Dell’s expanding server portfolio reflects its recognition that AI is not a one-size-fits-all solution: different AI applications have different requirements, and organizations need infrastructure tailored to their use case. From small-scale deployments to large data centers, the portfolio offers a server option to match.
- Dell PowerEdge XE7745: Scheduled for release in July 2025, this platform will be available with NVIDIA RTX Pro 6000 Blackwell Server Edition GPUs and is supported within the NVIDIA Enterprise AI Factory validated design. Supporting up to eight GPUs in a compact 4U chassis that deploys easily in a variety of environments, including the edge, the XE7745 is ideal for physical and agentic AI applications such as robotics, digital twins, and multi-modal AI, which demand real-time processing of sensor data and integration with physical systems.
Dell’s support for the NVIDIA Vera CPU and the NVIDIA Vera Rubin platform further underscores its commitment to staying at the forefront of AI technology. A new PowerEdge XE server, designed for use within Dell Integrated Rack Scalable Systems, is planned to support these platforms, giving customers a scalable, reliable foundation for the next generation of NVIDIA silicon.
Connectivity and Networking: Ensuring Seamless Data Flow
To address the growing bandwidth demands of AI applications, which routinely move large volumes of data between servers and storage, Dell has expanded its connectivity lineup with the PowerSwitch SN5600 and SN2201 Ethernet switches, both part of the NVIDIA Spectrum-X Ethernet networking platform, and introduced NVIDIA Quantum-X800 InfiniBand switches. All are designed to work seamlessly with Dell’s servers and storage solutions, delivering the high bandwidth and low latency these workloads require.
These high-performance switches deliver up to 800 gigabits per second of throughput and are backed by Dell’s ProSupport and Deployment Services, which cover installation, configuration, and ongoing troubleshooting to ensure seamless integration and reliable operation.
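To put the 800 Gb/s figure in perspective, a quick back-of-envelope calculation shows how fast a hypothetical training dataset could traverse such a link. The dataset size below is an illustrative assumption, not a Dell benchmark, and the sketch ignores protocol overhead:

```python
# Back-of-envelope sketch: time to move an assumed 1 TB dataset over a single
# 800 Gb/s link at full line rate (decimal units, overhead ignored).
link_gbps = 800                      # quoted per-switch throughput, gigabits/s
dataset_tb = 1.0                     # hypothetical dataset size, terabytes

dataset_gigabits = dataset_tb * 1000 * 8  # TB -> GB -> gigabits
seconds = dataset_gigabits / link_gbps
print(seconds)  # 10.0
```

Even under these idealized assumptions, a terabyte-scale dataset moves in seconds rather than minutes, which is why link speed directly shapes how quickly GPUs can be fed.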
NVIDIA Enterprise AI Factory Validated Design: A Holistic Approach to AI Deployment
The Dell AI Factory with NVIDIA solutions are built to support the NVIDIA Enterprise AI Factory validated design, which combines Dell and NVIDIA compute, networking, and storage with NVIDIA AI Enterprise software. The validated design serves as a deployment blueprint, encoding best practices for configuring hardware and software so that all components work together seamlessly. By following it, enterprises get a fully integrated AI solution that streamlines deployment and is optimized for performance and reliability from the start.
Dell AI Data Platform: Empowering Data-Driven AI Applications
Recognizing that AI models can only learn as well as the data they are fed, Dell has enhanced its AI Data Platform to give applications always-on access to high-quality data. The platform provides a centralized repository for storing and managing data, along with data governance and data quality tooling to keep that data accurate and reliable.
Dell ObjectScale, a software-defined storage solution optimized for large-scale AI deployments, now supports those deployments with a denser design aimed at reducing costs and data center footprint. It offers a cost-effective way to store and manage the massive datasets AI applications require while saving space and energy.
Integrations with NVIDIA BlueField-3 and Spectrum-4 networking components further boost performance and scalability, providing high-speed connectivity between servers and storage that minimizes latency and maximizes throughput for AI workloads.
High-Performance Solution for Large-Scale Inference Workloads
Dell has introduced a new high-performance solution that combines Dell PowerScale, Dell Project Lightning, and PowerEdge XE servers to support large-scale distributed inference, the process of using a trained model to make predictions on new data. The solution employs KV caching and NVIDIA’s NIXL Libraries: PowerScale supplies the storage capacity and performance to hold and retrieve large datasets, Project Lightning provides the acceleration to run inference models efficiently, and the PowerEdge XE servers deliver the processing power and memory these workloads demand.
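The KV cache mentioned above is a standard technique in large-model inference. As a toy sketch, not Dell’s or NVIDIA’s implementation, the idea is that each generated token’s attention keys and values are computed once and cached, so every subsequent decode step only processes the newest token instead of recomputing the whole sequence:

```python
# Toy illustration of KV caching in autoregressive inference: keys/values for
# past tokens are cached, so each decode step only projects the new token.
import numpy as np

d_model = 8
rng = np.random.default_rng(0)
W_k = rng.standard_normal((d_model, d_model))  # key projection (random stand-in)
W_v = rng.standard_normal((d_model, d_model))  # value projection

k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x):
    """Project only the new token, then attend over all cached keys/values."""
    k_cache.append(x @ W_k)
    v_cache.append(x @ W_v)
    keys = np.stack(k_cache)             # (seq_len, d_model)
    values = np.stack(v_cache)           # (seq_len, d_model)
    scores = keys @ x                    # similarity of x to every cached key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over the sequence
    return weights @ values              # attention output for the new token

for _ in range(4):                       # four decode steps; cache grows each time
    out = decode_step(rng.standard_normal(d_model))
```

At production scale the cache can outgrow GPU memory, which is why pairing fast storage (PowerScale) with efficient data movement (NIXL) matters for distributed inference.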
In addition, Dell ObjectScale will support S3 over RDMA, which transfers data directly from storage to memory, bypassing the CPU. Dell claims the approach delivers up to 230% higher throughput, up to 80% lower latency, and 98% reduced CPU load compared with traditional S3, improving GPU utilization and letting organizations derive insights from their data more quickly and efficiently.
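Since "percent higher" and "percent lower" figures are easy to misread, a small arithmetic sketch of Dell's quoted best-case numbers, taken relative to an arbitrary traditional-S3 baseline of 1.0, makes the multipliers explicit:

```python
# Dell's quoted best-case S3-over-RDMA gains, as multipliers on a 1.0 baseline.
baseline = 1.0

rdma_throughput = baseline * (1 + 2.30)  # "230% higher" = 3.3x the baseline
rdma_latency    = baseline * (1 - 0.80)  # "80% lower"   = 20% of baseline latency
rdma_cpu_load   = baseline * (1 - 0.98)  # "98% reduced" = 2% of baseline CPU load

print(round(rdma_throughput, 2), round(rdma_latency, 2), round(rdma_cpu_load, 2))
# 3.3 0.2 0.02
```

In other words, the headline claim is up to 3.3 times the throughput of traditional S3, at a fifth of the latency and a fiftieth of the CPU cost.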
Integrated Offering with NVIDIA AI Data Platform: Accelerating Curated Insights
Dell has announced an integrated offering that incorporates the NVIDIA AI Data Platform, targeted at accelerating curated insights from data for agentic AI applications and tools. By combining the NVIDIA platform’s data preparation, analysis, and visualization tooling with Dell’s hardware and software, the offering streamlines the path from raw data to insight and helps organizations unlock the full potential of their data assets.
NVIDIA AI Enterprise Platform: Simplifying AI Development and Deployment
The NVIDIA AI Enterprise platform is available directly from Dell and includes NVIDIA NIM, which provides optimized inference; NVIDIA NeMo microservices, which provide pre-built AI models; NVIDIA Blueprints, which provide templates for building AI applications; NVIDIA NeMo Retriever, which enables retrieval-augmented generation (RAG); and NVIDIA Llama Nemotron reasoning models. Together, these tools empower organizations to develop agentic workflows and shorten the time to achieve AI outcomes.
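NIM microservices expose an OpenAI-compatible HTTP API, so a deployed endpoint can typically be queried with a plain chat-completions request. The URL and model name below are placeholders for illustration, not details of any documented deployment, and the network call is left commented out:

```python
# Hedged sketch: querying a NIM microservice via its OpenAI-compatible API.
# Endpoint URL and model id are assumed placeholders, not a real deployment.
import json
import urllib.request

NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example id; depends on what is deployed
    "messages": [{"role": "user", "content": "Summarize our Q3 support tickets."}],
    "max_tokens": 128,
}

request = urllib.request.Request(
    NIM_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a live endpoint, the response follows the OpenAI response shape:
# response = urllib.request.urlopen(request)
# print(json.load(response)["choices"][0]["message"]["content"])
```

Because the interface mirrors the OpenAI API, existing client code and SDKs can usually be pointed at a NIM endpoint with little more than a base-URL change.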
Streamlined Deployment and Management: Red Hat OpenShift Support and Managed Services
To simplify deployment and management, Dell will offer Red Hat OpenShift support on the Dell AI Factory with NVIDIA; OpenShift’s container orchestration streamlines deploying and operating AI applications. The company has also launched Dell Managed Services for the AI Factory, providing management across the entire NVIDIA AI solutions stack, including ongoing monitoring, reporting, version upgrades, and patching. Together, these services let organizations focus on leveraging AI to drive business value without being burdened by the complexities of infrastructure management.
Executive Perspectives: A Vision for the Future of AI
Michael Dell, Chairman and Chief Executive Officer at Dell Technologies, emphasized the company’s commitment to democratizing AI, stating, "We’re on a mission to bring AI to millions of customers around the world. Our job is to make AI more accessible. With the Dell AI Factory with NVIDIA, enterprises can manage the entire AI lifecycle across use cases, from training to deployment, at any scale."
Jensen Huang, Founder and Chief Executive Officer at NVIDIA, echoed this sentiment, highlighting the transformative potential of AI factories: "AI factories are the infrastructure of modern industry, generating intelligence to power work across healthcare, finance and manufacturing. With Dell Technologies, we’re offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge."
Availability: Embracing the Future of AI
The new solutions and managed services will become available across 2025 in line with server platform rollouts and future NVIDIA integration support. This staged approach ensures that organizations can seamlessly integrate these cutting-edge technologies into their existing infrastructure.
Dell’s strategic alliance with NVIDIA represents a significant shift in the enterprise AI landscape. By combining their respective strengths, the two companies are empowering organizations to embrace AI, driving innovation and efficiency across diverse industries. With its comprehensive ecosystem, cutting-edge hardware, and robust software, the Dell AI Factory with NVIDIA aims to make AI accessible, manageable, and impactful for businesses of all sizes, and the implications of the partnership promise to shape the future of numerous industries.