NVIDIA has launched the second generation of its Jetson processor board to provide artificial intelligence (AI) at the network edge with lower power consumption.
The Jetson TX2 keeps the same 50mm x 87mm credit-card form factor as the first generation but updates the processor and memory bandwidth to deliver twice the performance, handling four 4K video streams, or it can run in a lower 7.5W power envelope handling two streams.
This allows the Jetson TX2 to run larger, deeper neural networks on edge devices in the Internet of Things (IoT), in embedded designs ranging from the manufacturing, industrial and retail markets to AI in drones.
“Jetson TX2 brings powerful AI capabilities at the edge, making possible a new class of intelligent machines,” said Deepu Talla, vice president and general manager of the Tegra business at NVIDIA. “These devices will enable intelligent video analytics that keep our cities smarter and safer, new kinds of robots that optimize manufacturing, and new collaboration that makes long distance work more efficient.”
The chip pairs a 256-core Pascal GPU with two of NVIDIA's customised Denver 2 64-bit ARM processors and four standard ARM Cortex-A57 cores. The board adds support for 4K x 2K 60fps encode and decode, and 12 CSI lanes supporting up to six cameras at 2.5 Gbytes/s per lane. The memory has been increased to 8 Gbytes of LPDDR4 and the bandwidth doubled to 58.3 Gbytes/s.
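As a rough illustration of how a developer might inspect these figures on the module itself, the short CUDA sketch below (an assumption for illustration, not taken from NVIDIA's documentation) queries the properties the CUDA runtime exposes for the GPU. Note that the runtime reports streaming multiprocessors rather than individual CUDA cores, and the memory and bus-width values are simply whatever the driver returns on the device at hand.

// Illustrative sketch only: query the CUDA device properties on a
// Jetson-class board. The 256-core / 8 Gbyte figures quoted above come
// from the article, not from this code.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    printf("Device:              %s\n", prop.name);
    printf("Compute capability:  %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors:     %d\n", prop.multiProcessorCount);
    printf("Global memory:       %zu MB\n", prop.totalGlobalMem >> 20);
    printf("Memory bus width:    %d bits\n", prop.memoryBusWidth);
    return 0;
}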
The board runs NVIDIA's Linux for Tegra operating system and its latest software development kit (SDK) for AI computing, JetPack 3.0. This includes TensorRT 1.0, a high-performance neural network inference engine for production deployment of deep learning applications; cuDNN 5.1, a GPU-accelerated library of primitives for deep neural networks; and VisionWorks 1.6, a software development package for computer vision and image processing. It also bundles the latest graphics drivers and APIs, including OpenGL 4.5, OpenGL ES 3.2, EGL 1.4 and Vulkan 1.0, along with CUDA 8, which turns the GPU into a general-purpose massively parallel processor.
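For readers unfamiliar with CUDA, the hedged sketch below shows the programming model at its simplest: a kernel launched across thousands of GPU threads, here adding two vectors. It is a generic CUDA 8 example of using the GPU as a general-purpose parallel processor, not code taken from JetPack, TensorRT or cuDNN.

// Minimal, self-contained CUDA example: element-wise vector addition.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    // Each thread computes one output element.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the sketch short; explicit cudaMalloc/cudaMemcpy
    // transfers would work just as well.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}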
The development board is available now, with the modules shipping in volume in Q2.