General Micro Systems (GMS) has launched a rugged, conduction-cooled, commercial off-the-shelf (COTS) deep learning/artificial intelligence (AI) mobile system that offers real-time data analysis and decision-making in hostile environments.
The X422 “Lightning” integrates two Nvidia Tesla V100 data centre accelerators into a fully sealed, conduction-cooled chassis. It is designed as a dual co-processor companion to GMS's Intel Xeon-based rugged air-cooled or conduction-cooled servers.
GMS claims this is an industry first for deep learning and artificial intelligence: the X422 includes no fans or moving parts, promising wide-temperature operation and massive data movement via an external PCI Express fabric in ground vehicles, tactical command posts, UAVs/UASs, and other remote locations. It uses the company's patented RuggedCool thermal technology to adapt the GPGPUs for harsh conditions, extending the operating temperature range while increasing environmental MTBF.
“No one besides GMS has done this before because we own the technology that makes it possible. The X422 not only keeps the V100s or other 250 W GPGPU cards cool on the battlefield, but our unique x16 PCIe Gen 3 FlexVPX fabric streams real-time data between the X422 and host processor/server at an astounding 32 GB/s all day long,” said Ben Sharfi, chief architect and CEO, General Micro Systems. “From sensor to deep learning co-processor to host: X422 accelerates the fastest and most complete data analysis and decision making possible.”
The X422, which measures approximately 12 x 12 inches and under 3 inches high, includes dual x16 PCIe Gen 3 slots for the GMS-ruggedized PCIe deep learning cards. Each card has 5,120 CUDA processing cores, giving the X422 a total of 10,240 GPGPU cores and in excess of 225 TFLOPS for deep learning. In addition to Nvidia GPGPU co-processors, the X422 can accommodate other co-processors, different deep learning cards, and high-performance computers (HPCs) based on FPGAs from Xilinx or Altera, or ASICs, up to a total of 250 W per slot (500 W total).
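The aggregate figures above follow from simple per-slot arithmetic. A minimal sketch, using the article's per-card numbers; the per-card TFLOPS rating is an assumption based on Nvidia's published deep-learning (tensor) figure for the V100 and varies by card variant:

```python
# Back-of-envelope aggregates for the X422's two GPGPU slots.
# From the article: 5,120 CUDA cores per V100 card, two slots, 250 W per slot.
CARDS = 2
CUDA_CORES_PER_CARD = 5_120
WATTS_PER_SLOT = 250
TFLOPS_PER_CARD = 112.5  # assumed tensor-op rating; not stated in the article

total_cores = CARDS * CUDA_CORES_PER_CARD   # 10,240 cores
total_tflops = CARDS * TFLOPS_PER_CARD      # ~225 TFLOPS
total_watts = CARDS * WATTS_PER_SLOT        # 500 W envelope

print(total_cores, total_tflops, total_watts)
```

Two cards at 5,120 cores each give 10,240 cores, matching the article's "over 10,200" claim, and the 500 W total is simply the 250 W per-slot budget doubled.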
In another industry first, I/O reaches the X422 via GMS's FlexVPX bus-extension fabric. The X422 interfaces with servers and modules from GMS and from One Stop Systems using industry-standard iPass+ HD connectors, offering x16 lanes in and x16 lanes out of PCI Express Gen 3 (8 GT/s) fabric for a total of 256 GT/s (about 32 GB/s) system throughput. X422 deep learning co-processor systems can be daisy-chained up to a theoretical limit of 16 chassis working together.
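The quoted throughput can be sanity-checked from the PCIe Gen 3 link parameters: 16 lanes in each direction at 8 GT/s per lane, with Gen 3's 128b/130b line coding. A rough calculation:

```python
# Rough check of the FlexVPX fabric throughput quoted in the article:
# x16 lanes in plus x16 lanes out of PCIe Gen 3 at 8 GT/s per lane.
LANES_EACH_WAY = 16
GT_PER_LANE = 8          # gigatransfers/s per lane, PCIe Gen 3
ENCODING = 128 / 130     # Gen 3 128b/130b line-code efficiency

# Raw signalling rate across both directions
total_gt = 2 * LANES_EACH_WAY * GT_PER_LANE   # 256 GT/s

# Usable payload bandwidth: one bit per transfer per lane, both directions
total_gbs = total_gt * ENCODING / 8           # bytes per second, in GB/s

print(total_gt)              # 256
print(round(total_gbs, 1))   # 31.5
```

The result, roughly 31.5 GB/s of aggregate payload bandwidth, is consistent with the article's "about 32 GB/s" figure for 256 GT/s of raw signalling.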
Unique to the X422, its two PCIe deep learning cards can operate independently or work together as a high-performance computer (HPC) via the user-programmable, onboard, non-blocking, low-latency PCIe switch fabric. For PCIe cards with outputs, such as the Titan V's DisplayPorts, those outputs are routed to separate A and B front-panel connectors.
www.gms4sbc.com/images/Products/Accessories/X422/2018_X422.pdf