4 Why FPGA?
FPGAs (Field Programmable Gate Arrays) offer a blank slate on which to build the solution that best fits the problem, rather than fitting a solution into a predefined architecture. FPGAs provide flexibility for AI system architects searching for competitive deep learning accelerators that also support differentiating customization. The ability to tune the underlying hardware architecture, including variable data precision, combined with software-defined processing, allows FPGA-based platforms to deploy state-of-the-art deep learning innovations as they emerge. Other customizations include co-processing of custom user functions adjacent to the software-defined deep neural network; typical applications are in-line image and data processing, front-end signal processing, network ingest, and I/O aggregation. This flexibility allows FPGA-based systems to adapt to new advances in deep learning, or take on other value-add tasks, without redeployment.
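To make the variable-precision point concrete, the sketch below shows one common preparation step: quantizing network weights to a narrow fixed-point format of the kind an FPGA datapath can consume. The 8-bit symmetric scheme and the quantize helper are illustrative assumptions, not the format used by the iAbra stack.

```c
/* Minimal sketch of variable data precision: quantize float weights to
 * signed n-bit fixed point with a shared scale. The 8-bit symmetric
 * scheme here is an assumption for illustration, not iAbra's format. */
#include <stdint.h>
#include <math.h>
#include <stdio.h>

static void quantize(const float *w, int8_t *q, int n, int bits) {
    float max_abs = 0.0f;
    for (int i = 0; i < n; i++)
        if (fabsf(w[i]) > max_abs) max_abs = fabsf(w[i]);
    int levels = (1 << (bits - 1)) - 1;           /* e.g. 127 for 8 bits */
    float scale = (max_abs > 0.0f) ? levels / max_abs : 1.0f;
    for (int i = 0; i < n; i++) {
        long v = lroundf(w[i] * scale);
        if (v > levels)  v = levels;              /* saturate on overflow */
        if (v < -levels) v = -levels;
        q[i] = (int8_t)v;
    }
}

int main(void) {
    float w[4] = {0.31f, -0.92f, 0.05f, 0.77f};
    int8_t q[4];
    quantize(w, q, 4, 8);
    for (int i = 0; i < 4; i++) printf("%d ", q[i]);
    printf("\n");
    return 0;
}
```

Because the bit width is a parameter rather than a property of fixed silicon, the same flow can retarget the datapath to, say, 4-bit or 16-bit arithmetic as accuracy requirements dictate.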
Figure 5a (left): Arrayed building blocks are connected via interconnect wires; Figure 5b (right): fully featured FPGAs include a variety of advanced building blocks
Figure 5b (right) illustrates the variety of building blocks available in an FPGA. The core fabric implements digital logic with look-up tables (LUTs), flip-flops (FFs), wires, and I/O pads. FPGAs today also include multiply-accumulate (MAC) blocks for DSP functions, off-chip memory controllers, high-speed serial transceivers, embedded distributed memories, phase-locked loops (PLLs), and hardened PCIe interfaces, and range from 1,000 to over 2,000,000 logic elements.
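As one example of how the MAC blocks are exercised, the sketch below expresses a dot product as plain C of the kind a high-level synthesis flow might map onto DSP blocks; the 8-bit operands and 64-tap size are assumptions for illustration.

```c
/* Minimal sketch of the multiply-accumulate (MAC) pattern that FPGA DSP
 * blocks implement in hardware. Types and vector length are assumed. */
#include <stdint.h>

#define N 64

/* Dot product of an input vector and a weight vector: one MAC per tap.
 * On an FPGA this loop can be unrolled so each iteration occupies its
 * own DSP block, trading logic area for throughput. */
int32_t dot_mac(const int8_t x[N], const int8_t w[N]) {
    int32_t acc = 0;                  /* wide accumulator avoids overflow */
    for (int i = 0; i < N; i++)
        acc += (int32_t)x[i] * (int32_t)w[i];
    return acc;
}
```

The wide 32-bit accumulator mirrors the hardened accumulators inside DSP blocks, which are deliberately wider than their multiplier inputs so long accumulation chains do not overflow.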