White Papers
Deep Learning using iAbra stack on Dell EMC PowerEdge Servers with Intel technology
2 Why iAbra?
2.1 Introduction to PathWorks
Figure: PathWorks Flow Diagram
The use of FPGAs for AI inference, both in the data center and in embedded/IoT applications, can deliver greater
efficiency in terms of silicon size, weight, and power (SWaP). However, exploiting these inherent SWaP
benefits requires that the machine-learned models to be inferred “fit” into the resources available on the
target FPGA.
To enable this, iAbra has developed a framework, PathWorks, that not only creates smaller yet equally
capable neural networks that “fit” the FPGA, but also creates novel network architectures tailored to the
training data, further optimizing the resulting model.
This is possible due to the evolutionary approach to network architecture and, most importantly, because the
reduced number of neurons have multiple (3D) connections to their neighbors. Although this enlarges the
search space for regression, the model remains tractable to regress thanks to the scale-out architecture
inherent in PathWorks, which uses FPGAs to accelerate this model training.
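The idea above can be sketched as an evolutionary search over candidate networks in which any candidate exceeding the FPGA's resource budget is rejected, so only models that “fit” survive. This is a minimal toy illustration, not iAbra's actual algorithm: the cost model, the accuracy surrogate, `FPGA_BUDGET`, and all function names are assumptions made up for this sketch.

```python
import random

# Hypothetical sketch of evolutionary architecture search under an FPGA
# resource budget. A candidate is a list of layer widths; resources() is a
# stand-in for FPGA logic/DSP usage and accuracy() is a toy surrogate for
# trained model quality (neither reflects real PathWorks internals).

FPGA_BUDGET = 100  # assumed resource units available on the target FPGA

def resources(candidate):
    # Toy cost model: resource use grows with total neuron count.
    return sum(candidate)

def accuracy(candidate):
    # Toy surrogate: wider layers help, with diminishing returns past 32.
    return sum(min(w, 32) for w in candidate) / (32.0 * len(candidate))

def fitness(candidate):
    # Candidates that exceed the FPGA budget score zero, so evolution
    # only propagates models that "fit" the target silicon.
    if resources(candidate) > FPGA_BUDGET:
        return 0.0
    return accuracy(candidate)

def mutate(candidate):
    c = list(candidate)
    i = random.randrange(len(c))
    c[i] = max(1, c[i] + random.choice([-4, 4]))  # widen/narrow a layer
    if random.random() < 0.2:                     # occasionally add depth
        c.append(random.randint(1, 16))
    return c

def evolve(generations=200, pop_size=16):
    random.seed(0)
    population = [[random.randint(1, 16)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Because over-budget candidates are culled each generation rather than penalized after the fact, the search space that matters stays bounded by the target FPGA, which is the essence of fitting the model to the silicon rather than the silicon to the model.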
The result is a framework that abstracts away the complexity of machine learning for the data citizen (rather
than relying on data scientists) while targeting highly optimized FPGA silicon for inference.