While it is not itself an embedded component, the S2U 'King Cobra' rack server highlights the best in embedded system design, writes Nick Flaherty.
The S2U, developed by General Micro Systems (GMS), can replace up to 15U of equivalent server rack functions using a modular, scalable architecture that is a breakthrough in electro-mechanical design. Each subsystem is optimized for maximum performance, lowest power and the most efficient thermal profile, and is designed for modular replacement in the field. Each subsystem is also scalable and upgradable, lowering the total cost of ownership (TCO) over the life of the system.
With more and more computing resources needed in the data centre to support the Internet of Things, a compact, scalable rack that can add multiple accelerators is a key technology.
The design starts with a field-removable OpenVPX single-board computer (SBC) motherboard carrying dual 18-core Intel Xeon E5-2600 v4 CPUs and up to 1024GB (1TB) of DDR4 DRAM, supported by an entirely PCI Express-based storage subsystem with up to 48TB of solid-state disks.
The 17-inch-deep short rack includes a 24-port (22x 1GigE; 2x 10GigE) Ethernet switch subsystem, a hardware-based, enterprise-class Cisco intelligent router, two Power Supply Unit (PSU) options with an Auxiliary Power Unit (APU), and scalable add-in modules for algorithm coprocessors, sensor interfaces, additional I/O or legacy system interfaces. On-board removable smart fan trays finely manage airflow and noise using side-mounted inlets and outlets.
“This server is not what everyone else is doing. It’s a revolutionary concept in server design,” said Ben Sharfi, GMS CEO and Chief Architect. “We looked at the entire rack, identified the functions most needed in a rugged system—from the server to the switch, and from backup power in our patent-pending APU to the existing system I/O—and designed a ‘total rack’ into a 2U, 17-inch deep box. It’s a future way of thinking about a server solution to the problem.”
S2U King Cobra has been awarded or has pending a total of 12 patents. “If it wasn’t unique,” said Sharfi, “the patents wouldn’t have been issued. No server company is even close to this technology, nor will be in the near future.” The patents cover everything from the Xeon CPU sockets and PCIe interconnects, to the distributed airflow managed via 12 individual smart fans with tachometers, to the 100 percent 12VDC-only internal power buses that eliminate the inefficiencies of typical server power supplies with their standard (and often unneeded) multi-voltage power rails and up/down converters.
The OpenVPX motherboard is also designed by GMS, based on a proven GMS compute-engine design used in defence applications. The CPUs are cooled via a patented version of the company's RuggedCool technology, and the heatsinks are custom designed for maximum thermal transfer to the finely managed in-box airflow. Dual hot-swappable fan trays each contain six smart fans that blow air in and out of the system to the side, not the front and rear as in typical servers. The side-facing fans respond to real-time cooling needs using input from in-system sensors, which keeps fan noise to a minimum by running the fans at the lowest speed that still cools the system.
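To make the control scheme concrete, the sketch below shows a minimal closed-loop fan controller of the kind described: it reads every thermal sensor, takes the hottest value, and drives all fans at the lowest duty cycle that keeps that value below a target. This is an illustrative sketch only, not GMS firmware; the sensor count, temperature thresholds and linear ramp are assumptions.

    /* Hypothetical closed-loop fan control sketch: pick the lowest PWM
     * duty that keeps the hottest sensor below its target. Not GMS
     * firmware; thresholds and sensor count are illustrative assumptions. */
    #include <stdio.h>

    #define NUM_FANS     12   /* two trays x six smart fans (per the article) */
    #define NUM_SENSORS   8   /* assumed number of in-system thermal sensors  */
    #define T_TARGET   70.0   /* assumed max allowed component temperature, C */
    #define T_IDLE     40.0   /* below this, fans stay at minimum speed       */
    #define PWM_MIN      20   /* minimum duty cycle (%) to keep airflow alive */
    #define PWM_MAX     100

    double read_sensor(int id);       /* stand-ins for platform I/O: real  */
    void   set_fan_duty(int f, int d);/* firmware would talk SMBus/I2C and */
                                      /* watch each fan's tachometer       */
    static int duty_for_temp(double t_hot)
    {
        if (t_hot <= T_IDLE)   return PWM_MIN;
        if (t_hot >= T_TARGET) return PWM_MAX;
        /* Linear ramp between idle and target temperatures. */
        double frac = (t_hot - T_IDLE) / (T_TARGET - T_IDLE);
        return PWM_MIN + (int)(frac * (PWM_MAX - PWM_MIN));
    }

    void fan_control_step(void)
    {
        double t_hot = 0.0;
        for (int s = 0; s < NUM_SENSORS; s++) {
            double t = read_sensor(s);
            if (t > t_hot) t_hot = t;   /* control on the hottest reading */
        }
        int duty = duty_for_temp(t_hot);
        for (int f = 0; f < NUM_FANS; f++)
            set_fan_duty(f, duty);      /* lowest speed that still cools  */
    }

    /* Stubs so the sketch compiles and runs standalone. */
    double read_sensor(int id) { return 45.0 + id; }
    void set_fan_duty(int f, int d) { printf("fan %d -> %d%%\n", f, d); }
    int main(void) { fan_control_step(); return 0; }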
Internal PCIe buses move data to the subsystems, including the 12-tray network-attached storage array. While the system supports traditional SATA and SAS SSDs (up to 4TB per tray), the S2U is best used with PCIe-based NVMe drives, which provide a direct PCIe path between CPU and storage and eliminate the typical server drive controllers that rob performance by converting from PCIe to another interface and protocol.
The advantage of this architecture is that a hardware RAID controller isn't needed. Instead, Intel's CPUs support RSTe and RST2 software RAID to control the NVMe drives directly without a performance penalty. With 12 drive trays available, the S2U supports up to 48TB of SSD storage with performance well over 20X that of standard SAS/SATA drives.
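For context, striping software RAID of this kind divides the logical block space into chunks that rotate across the member drives, which is how a single large transfer ends up hitting all twelve NVMe drives in parallel. The sketch below shows only that RAID-0 address mapping; it is a conceptual illustration, not Intel's RSTe code, and the block and chunk sizes are assumptions.

    /* Illustrative RAID-0 address mapping: translate a logical block
     * address into (member drive, block on that drive). Chunk and block
     * sizes are assumptions; this sketches the striping concept only. */
    #include <stdio.h>
    #include <stdint.h>

    #define DRIVES        12      /* the S2U's twelve drive trays       */
    #define CHUNK_BLOCKS  32      /* assumed chunk: 32 x 4KB = 128KB    */

    typedef struct { int drive; uint64_t block; } stripe_loc;

    static stripe_loc map_lba(uint64_t lba)
    {
        uint64_t chunk  = lba / CHUNK_BLOCKS;   /* which chunk overall  */
        uint64_t offset = lba % CHUNK_BLOCKS;   /* block within chunk   */
        stripe_loc loc;
        loc.drive = (int)(chunk % DRIVES);      /* chunks rotate drives */
        loc.block = (chunk / DRIVES) * CHUNK_BLOCKS + offset;
        return loc;
    }

    int main(void)
    {
        /* A sequential run of chunks spreads evenly over all drives,
         * which is why striping multiplies per-drive NVMe throughput. */
        for (uint64_t lba = 0; lba < 16 * CHUNK_BLOCKS; lba += CHUNK_BLOCKS) {
            stripe_loc loc = map_lba(lba);
            printf("LBA %5llu -> drive %2d, block %llu\n",
                   (unsigned long long)lba, loc.drive,
                   (unsigned long long)loc.block);
        }
        return 0;
    }

Because each drive sits directly on PCIe, the CPU can issue the resulting per-drive requests itself, which is why no hardware RAID controller is needed in the path.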
Traditional servers have minimal ability to add in specialty I/O or co-processors. While there may be one or two PCIe card slots for off-the-shelf modules, this is rarely sufficient for systems interfacing with multiple legacy I/O interfaces. The S2U offers four different ways to add up to 25 I/O and co-processor modules (a rough PCIe lane-budget sketch follows the list):
1) 4x PCIe x16 card slots
2) 3x 3U OpenVPX defence-quality modules
3) GMS’s SAM I/O (miniPCIe) on each drive tray and on the 6U VPX modules
4) A defence-quality XMC card added to the OpenVPX SBC motherboard
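One way to gauge the headroom behind these options is the PCIe lane budget: each Xeon E5-2600 v4 socket provides 40 PCIe 3.0 lanes, giving the dual-socket SBC 80 lanes in total. The tally below sets this against the four x16 slots; how the remaining lanes are shared among the VPX, XMC and NVMe sites (possibly behind PCIe switches) is an assumption, as the allocation is not published here.

    /* Rough PCIe 3.0 lane budget for the dual-socket SBC. The 40 lanes
     * per E5-2600 v4 socket is Intel's spec; the split of the remainder
     * is an illustrative assumption. */
    #include <stdio.h>

    int main(void)
    {
        int lanes_total = 2 * 40;   /* two Xeon E5-2600 v4 sockets */
        int lanes_slots = 4 * 16;   /* four PCIe x16 card slots    */
        printf("total CPU lanes: %d\n", lanes_total);
        printf("x16 card slots:  %d\n", lanes_slots);
        printf("left for VPX/XMC/NVMe (likely behind switches): %d\n",
               lanes_total - lanes_slots);
        return 0;
    }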
Together, these expansion options allow the S2U to support up to 24 TFLOPS of arithmetic processing from just two Nvidia Quadro P6000 GPGPU compute-engine modules.
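That figure is easy to sanity-check: a Quadro P6000 has 3,840 CUDA cores, each capable of one fused multiply-add (two FLOPs) per clock, which at an approximately 1.56GHz boost clock works out to roughly 12 TFLOPS of single-precision throughput per module.

    /* Back-of-envelope FP32 throughput for two Quadro P6000 modules.
     * Core count and FMA rate are Nvidia's published figures; the boost
     * clock used here is approximate. */
    #include <stdio.h>

    int main(void)
    {
        double cores      = 3840;   /* CUDA cores per P6000         */
        double clock_ghz  = 1.56;   /* approximate boost clock, GHz */
        double flops_core = 2;      /* one FMA = 2 FLOPs per clock  */
        double tflops_one = cores * clock_ghz * flops_core / 1000.0;
        printf("per GPU: %.1f TFLOPS, two GPUs: %.1f TFLOPS\n",
               tflops_one, 2 * tflops_one);
        return 0;
    }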
The S2U includes two ways to power the system: via the three N+1 3U OpenVPX power supplies or via the add-in PSU/APU. The PSU/APU replaces the PCIe card cage and provides battery power to allow an orderly suspend-to-disk or shutdown. Whichever option the user selects, the power supply is optimized for maximum efficiency. All S2U internal power buses run at +12VDC, eliminating the need for multiple in-system power converters. This differs dramatically from traditional servers, which use commodity “black box” power supplies with multiple (often unneeded) voltage outputs. The S2U's power distribution system gets the energy where it is needed without superfluous power rails.
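The single-voltage argument is essentially about compounding losses: every conversion stage in the power path wastes a few percent, and stage efficiencies multiply. A quick illustration, using an assumed (typical, not measured) per-stage efficiency:

    /* Illustrative comparison of single-stage vs cascaded power
     * conversion. The 95% per-stage efficiency is an assumed typical
     * figure, not a measured S2U number. */
    #include <stdio.h>

    int main(void)
    {
        double stage  = 0.95;          /* assumed efficiency per converter  */
        double single = stage;         /* one conversion onto the 12V bus   */
        double multi  = stage * stage; /* extra in-system up/down converter */
        printf("12V-only bus:     %.1f%% end-to-end\n", 100 * single);
        printf("two-stage chain:  %.1f%% end-to-end\n", 100 * multi);
        return 0;
    }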
Finally, the S2U is designed to be more than a high-performance server with NAS and scalable I/O: there is also a 22-port intelligent managed Layer 2/3 Ethernet switch with PoE+ on all ports, plus an additional four 10GigE ports (SBC plus switch).
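As a worst-case tally, PoE+ (IEEE 802.3at) allows the switch to source up to 30W per port, so 22 fully loaded ports imply a budget of up to 660W for powered devices alone.

    /* Worst-case PoE+ budget for the 22-port switch. The 30W per-port
     * figure is the IEEE 802.3at (PoE+) sourcing limit; simultaneous
     * full load on every port is a worst-case assumption. */
    #include <stdio.h>

    int main(void)
    {
        int    ports          = 22;
        double watts_per_port = 30.0;   /* 802.3at Type 2 PSE output */
        printf("max PoE+ load: %.0f W\n", ports * watts_per_port);
        return 0;
    }

In practice the PoE power actually available will depend on the PSU option fitted.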