FPGA designer Xilinx has launched a suite of industry-standard resources for developing advanced embedded-vision systems based on machine learning and inference.
The reVISION stack allows design teams without deep hardware expertise to use a software-defined development flow to combine efficient machine-learning and computer-vision algorithms with Xilinx All Programmable devices, creating highly responsive systems.
For application-level development, Xilinx supports industry-standard frameworks including Caffe for machine learning and OpenVX for computer vision. The reVISION stack also includes development platforms from Xilinx and third parties, which support various sensor types.
The reVISION development flow starts with a familiar, Eclipse-based development environment; the C, C++, and/or OpenCL programming languages; and the associated compilers, all incorporated into the Xilinx SDSoC development environment. The flow also uses the Khronos Group's OpenVX framework.
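To give a sense of what OpenVX-based vision code looks like, here is a minimal sketch of a graph built with the standard Khronos OpenVX C API. It is illustrative only: the function names are from the OpenVX specification, not from any Xilinx-specific extension, and the image dimensions and single Gaussian-blur node are arbitrary choices for the example.

```c
/* Minimal OpenVX 1.x graph sketch (standard Khronos API, illustrative only). */
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* 640x480 8-bit grayscale input and output images (arbitrary sizes). */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* A single Gaussian blur node; a real pipeline would chain more nodes. */
    vxGaussian3x3Node(graph, input, output);

    /* Validate the graph once, then execute it. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("Graph verification failed\n");

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}
```

The graph model matters here: because OpenVX describes the whole pipeline up front, a tool chain can map individual nodes onto hardware accelerators rather than executing them one call at a time on the CPU.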
For machine learning, designers can use popular frameworks including Caffe to train neural networks. Within a single Xilinx Zynq SoC or Zynq UltraScale+ MPSoC, a Caffe-generated .prototxt file can be used to configure a software scheduler running on one of the device's ARM processors to drive CNN inference accelerators, pre-optimized for and instantiated in programmable logic. For computer vision and other algorithms, the code can be profiled to identify bottlenecks, and specific functions can then be designated for hardware acceleration. The Xilinx system-optimizing compiler then creates an accelerated implementation of the code, automatically including the required processor/accelerator interfaces (data movers) and software drivers.
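As a rough illustration of the second path, the sketch below shows the kind of ordinary C function that profiling might flag as a hot spot and that a designer could hand to the compiler for hardware acceleration. It is a hypothetical example: the per-pixel threshold operation and array sizes are invented for illustration, and while the PIPELINE pragma is a standard Vivado HLS directive, the exact pragmas and interfaces a given SDSoC design needs are design-specific.

```c
/* Hypothetical candidate function for hardware acceleration (illustrative only). */
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

/* Simple per-pixel threshold: the kind of loop profiling might flag as a
 * bottleneck and that could be moved into programmable logic. */
void threshold_image(const uint8_t in[WIDTH * HEIGHT],
                     uint8_t out[WIDTH * HEIGHT],
                     uint8_t level)
{
    for (int i = 0; i < WIDTH * HEIGHT; i++) {
#pragma HLS PIPELINE II=1   /* standard HLS directive: one pixel per clock cycle */
        out[i] = (in[i] > level) ? 255 : 0;
    }
}
```

The point is that the source stays plain C; the tool chain, guided by directives like the one above, generates the hardware implementation and the data movers that connect it to the ARM processors.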
Previously, embedded-vision developers used the existing Xilinx Verilog and VHDL tools to develop such systems; the reVISION stack enables a much broader set of software and systems engineers to develop intelligent, highly responsive embedded-vision systems faster and more easily, using FPGAs in new ways.
The last two years have produced more machine-learning technology than all of the advancements over the previous 45 years, and that pace isn't slowing down, says Xilinx. Many new types of neural networks for vision-guided systems have emerged, along with new techniques that make deploying these neural networks much more efficient.
Related stories:
- NVIDIA pushes artificial intelligence into embedded designs with Jetson TX2
- FPGAs moving to the edge of the IoT