Up to four dual-processor Dell EMC PowerEdge C6420 server nodes can be installed in the 2U Dell EMC PowerEdge C6400 chassis. This system is designed for data center customers who need an easily scalable solution that combines powerful performance with low energy costs.
The shared infrastructure of the Dell EMC PowerEdge C6400 chassis provides the power, cooling, and storage, while the server nodes provide the processing power, memory, and I/O. Each of the four nodes supports one or two processors, with each processor supporting up to 8 DIMM slots, for 16 DIMMs total with dual processors. The chassis features front-mounted storage that is equally distributed among the server nodes. An optional air and direct-contact liquid cooling solution helps reduce operational costs for data centers by reducing the heat generated by the server. This hyperscale system is well suited to high-performance computing, web data analytics, cloud computing, hyper-converged infrastructure, and big data analytics.
Each of the four server nodes can support one or two Intel Xeon Scalable processors (SP) from the Bronze, Silver, Gold, or Platinum families, including processors with up to 28 cores, for up to 56 cores total with two processors. The sled can also be outfitted with Intel's Omni-Path Architecture (OPA) fabric processors, either mixed with a non-fabric processor or in a dual fabric-processor configuration. The Omni-Path fabric is an end-to-end, high-bandwidth, low-latency solution that optimizes performance while simplifying the deployment of HPC clusters. Latency is reduced by eliminating the need for a PCIe card to support the fabric interface; instead, the CPU features an integrated fabric connection that is part of the CPU architecture, enhancing data transmission and security. The host fabric interface also provides four 10Gb/s Ethernet ports for software-defined storage solutions and NVMe-over-fabric solutions. These fabric processors are easily identified by the "F" suffix on the model number and the x16 connector on one side of the processor. The additional connector increases the supported PCIe lanes from the 48 found on the other Scalable processors to 64 PCIe 3.0 lanes, so with the integrated Omni-Path fabric, all 48 standard PCIe lanes remain available for other options. The connector on the end of the processor attaches to a cable held in place by a metal retention clip integrated into the CPU socket on motherboards that support OPA, and not all do. Most processors from the Scalable family are supported, but there are a few restrictions. Each Scalable processor supports six memory channels, for 12 channels total with both processors installed.
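As a quick sanity check on those totals, here is a small illustrative Python sketch. The per-CPU figures come from the text above; treating the 28-core part as the top-bin SKU in the example is an assumption for illustration:

```python
# Illustrative arithmetic for a fully populated dual-socket C6420 sled.
CORES_PER_CPU = 28        # assumed top-bin Xeon Scalable SKU for this example
MEM_CHANNELS_PER_CPU = 6  # six DDR4 channels per Scalable processor
PCIE_LANES_STD = 48       # standard Scalable SKU
PCIE_LANES_FABRIC = 64    # "F" fabric SKU: 48 general-purpose lanes + x16 for OPA

sockets = 2
print("Total cores:        ", sockets * CORES_PER_CPU)          # 56
print("Memory channels:    ", sockets * MEM_CHANNELS_PER_CPU)   # 12
print("PCIe lanes (std):   ", sockets * PCIE_LANES_STD)         # 96
print("PCIe lanes (fabric):", sockets * PCIE_LANES_FABRIC)      # 128
```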
There are 8 DDR4 DIMM slots allocated to each processor, for a total of 16 active DIMM slots with dual processors. You can choose either registered (RDIMM) or load-reduced (LRDIMM) modules operating at speeds of up to 2666MHz when paired with a processor that supports that frequency. NVDIMM modules are not supported on this system.
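For a rough sense of what those slots and speeds add up to, the back-of-envelope sketch below computes peak theoretical memory bandwidth and an example capacity. The 64GB DIMM size is a hypothetical example, not a stated maximum for this system:

```python
# Back-of-envelope DDR4 numbers for one dual-processor C6420 sled.
MT_PER_S = 2666           # top supported DIMM speed from the text
BYTES_PER_TRANSFER = 8    # 64-bit DDR4 data bus
CHANNELS = 6 * 2          # six channels per CPU, two CPUs
DIMM_SLOTS = 8 * 2        # eight slots per CPU, two CPUs
DIMM_GB = 64              # hypothetical DIMM size for the example

peak_bw_gbs = MT_PER_S * 1e6 * BYTES_PER_TRANSFER * CHANNELS / 1e9
print(f"Peak memory bandwidth: {peak_bw_gbs:.0f} GB/s")             # ~256 GB/s
print(f"Capacity at {DIMM_GB} GB/DIMM: {DIMM_SLOTS * DIMM_GB} GB")  # 1024 GB
```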
The PowerEdge C6400 chassis is available in a variety of configurations. The 2.5-inch chassis supports up to 24 SAS, SATA, or nearline SAS 2.5-inch drives; each server node is allocated six of the front-mounted drives, two of which can be swapped for NVMe drives for even greater performance. The NVMe version requires a specific backplane to support NVMe drives and PCIe add-in adapter cards. Another variation of the 2.5-inch chassis supports only two server nodes, with 12 drives allocated to each via a specific expander backplane. A second C6400 chassis configuration features 12x 3.5-inch SAS or SATA drives, with three drives allocated to each server node through a direct backplane that divides the drives equally between the nodes. Lastly, there is a no-backplane option with no external drives. You can also install an M.2 SATA drive with a maximum capacity of 120GB to support the OS, in either the x8 slot on the mezzanine riser or the M.2 SATA x16 riser, which installs in an internal PCIe slot adjacent to CPU2's memory modules. An optional Secure Digital High Capacity (SDHC) card can be installed on one of the PCIe risers in each compute sled and can also be used to boot the server node. SDHC cards share the form factor of standard SD cards but offer capacities above 4GB, which is the defining difference between the two. SD cards may be compatible with an SDHC device, but SDHC cards are not compatible with an SD-only card reader.
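If you want to verify which of those drives a given sled actually sees, one option is the iDRAC9 Redfish API mentioned later in this overview. The following is a minimal Python sketch, assuming a hypothetical iDRAC address and default credentials; System.Embedded.1 is the standard Dell iDRAC Redfish system ID:

```python
import requests

IDRAC = "https://192.0.2.10"  # hypothetical iDRAC9 address for one C6420 sled
AUTH = ("root", "calvin")     # default Dell credentials; replace with your own

# Walk the Redfish storage collection and list every drive the sled reports,
# covering front-mounted SAS/SATA/NVMe drives as well as the M.2 boot device.
base = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage"
storage = requests.get(base, auth=AUTH, verify=False).json()  # lab sketch only

for member in storage["Members"]:
    ctrl = requests.get(IDRAC + member["@odata.id"], auth=AUTH, verify=False).json()
    for drive_ref in ctrl.get("Drives", []):
        drive = requests.get(IDRAC + drive_ref["@odata.id"],
                             auth=AUTH, verify=False).json()
        gb = (drive.get("CapacityBytes") or 0) / 1e9
        print(f'{drive["Id"]}: {drive.get("MediaType", "?")} '
              f'{gb:.0f} GB ({drive.get("Protocol", "?")})')
```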
Each of the four compute sleds in the chassis features up to four PCIe Gen 3.0 slots. A variety of riser options are available depending on your needs; the x16 buried riser and a second processor are required to support the optional M.2 boot device. Video is provided by an integrated Matrox G200 graphics controller with 16MB of RAM. Redundant 1600W, 2000W, or 2400W Platinum power supply units are supported in the C6400 chassis, each of which is de-rated to a lower power setting when operating on a low-line power circuit. Optional Open Compute Project (OCP) and Omni-Path (OPA) I/O cards support network connection speeds of up to 100Gb/s and offer greater server density when used with OCP and OPA network switches.
Management of the Dell EMC C6420 is handled through the Baseboard Management Controller (BMC). Optionally, administrators can use the integrated iDRAC9 with Lifecycle Controller and the Redfish API for advanced management, both on-site and remotely. Security has also been updated with a cyber-resilient architecture that protects the server across its full life cycle.
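As a minimal example of that Redfish-based management, the Python sketch below queries a sled's model, power state, and overall health. The address and credentials are placeholders, and certificate verification is disabled only for the sake of the sketch:

```python
import requests

IDRAC = "https://192.0.2.10"  # hypothetical iDRAC9 address
AUTH = ("root", "calvin")     # default Dell credentials; change in production

# Query overall system information through the Redfish API.
resp = requests.get(f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
                    auth=AUTH, verify=False)  # verify TLS certs in production
resp.raise_for_status()
system = resp.json()

print("Model:      ", system["Model"])
print("Power state:", system["PowerState"])
print("Health:     ", system["Status"]["Health"])
```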
The Dell EMC PowerEdge C6400 chassis, along with the Dell EMC C6420 server node, delivers outstanding performance for high-performance computing applications that require low latency. When outfitted with the Intel Omni-Path fabric solution, latency is further reduced while all standard PCIe lanes are preserved for other options. OPA also increases the scalability of the system and simplifies the deployment of HPC clusters.
One 120GB M.2 SATA RI SSD boot drive
One 16/32/64GB MicroSD card
Controllers:
Non RAID:
Hard Drives:
Solid State Drives:
Read Intensive SSDs:
FIPS-140 Self Encrypting Hard Drives:
NVMe Drives:
Dimensions (HxWxD):
Weight:
Form factor: Rack (2U)
24 x 2.5” Direct Backplane with up to 6 SAS/SATA drives per C6420 sled
24 x 2.5” Expander Backplane with up to 12 SAS/SATA drives per C6420 sled and 2 C6420 sleds per C6400 chassis
24 x 2.5” NVMe Backplane with up to 2 SAS/SATA/NVMe drives and 4 SAS/SATA drives per C6420 sled
12 x 3.5” Direct Backplane with up to 3 SAS/SATA drives per C6420 sled
No Backplane option with no external drives per C6420 sled
OpenManage Connections:
Rear Ports: