6.1.1 CPU & FPGA utilization (Average during training time)
The iAbra application has been tuned to balance the load across the CPU and FPGA, resulting in average utilizations
of 82% on the CPU (40 cores) and 94% on the FPGA.
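The white paper does not describe how these utilization figures were collected. The following is a minimal Python sketch, not the iAbra tooling, of one way average CPU utilization could be sampled over a training run; psutil is an assumed dependency, and FPGA utilization (which would come from the FPGA vendor's monitoring tools) is not shown.

# Minimal sketch (assumed tooling): sample system-wide CPU utilization in a
# background thread while a training run executes, then average the samples.
import threading
import time

import psutil


def sample_cpu(stop_event, samples, interval_s=1.0):
    # Each call blocks for interval_s and returns utilization over that window.
    while not stop_event.is_set():
        samples.append(psutil.cpu_percent(interval=interval_s))


def run_with_cpu_monitor(train_fn):
    # Run train_fn() while sampling; return its result and the average CPU %.
    samples, stop_event = [], threading.Event()
    monitor = threading.Thread(target=sample_cpu, args=(stop_event, samples), daemon=True)
    monitor.start()
    try:
        result = train_fn()  # placeholder for the actual training run
    finally:
        stop_event.set()
        monitor.join()
    return result, (sum(samples) / len(samples) if samples else 0.0)


if __name__ == "__main__":
    _, avg_cpu = run_with_cpu_monitor(lambda: time.sleep(5))  # dummy workload
    print(f"Average CPU utilization: {avg_cpu:.1f}%")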
6.1.2 Top-1 accuracy & Average individual run time
The iAbra application developed a network and weights resulting in 96.2% accuracy in 42 minutes.
The figure above shows the network convergence over time. The Y axis is the root mean squared error and the X axis is the number of iterations. As can be seen, the iAbra PathWorks
learning algorithm converges very rapidly over a small number of iterations, enabling practical evolutionary
training.
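For reference, the convergence metric plotted above is the root mean squared error. The sketch below shows how such an error-versus-iteration curve could be recorded; evaluate_iteration is a hypothetical hook, not an iAbra API.

# Minimal sketch (assumed, not PathWorks internals): record the RMSE of a
# candidate network's predictions at each iteration to build a convergence curve.
import numpy as np


def rmse(predictions, targets):
    # Root mean squared error between predictions and targets.
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.sqrt(np.mean((predictions - targets) ** 2)))


def convergence_curve(evaluate_iteration, targets, num_iterations):
    # evaluate_iteration(i) -> predictions after iteration i (hypothetical hook).
    return [rmse(evaluate_iteration(i), targets) for i in range(num_iterations)]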
6.1.3 Throughput (images/sec) vs. batch size
A throughput of 2,280 images/sec was achieved at a batch size of 2,000.
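A generic sketch of this kind of throughput-versus-batch-size measurement is shown below; infer_fn, the image shape, and the batch sizes are placeholders, not the iAbra benchmark harness or the image size used above.

# Minimal sketch (assumed harness): measure inference throughput in images/sec
# as a function of batch size using a synthetic batch and a stand-in model.
import time

import numpy as np


def measure_throughput(infer_fn, batch_size, image_shape=(128, 128, 3), repeats=10):
    # Time repeated calls to infer_fn on one synthetic batch; return images/sec.
    batch = np.random.rand(batch_size, *image_shape).astype(np.float32)
    infer_fn(batch)                                  # warm-up, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    return (batch_size * repeats) / elapsed


if __name__ == "__main__":
    dummy_infer = lambda batch: batch.mean(axis=(1, 2, 3))  # stand-in for a real model
    for bs in (1, 8, 64, 256):
        print(f"batch size {bs:4d}: {measure_throughput(dummy_infer, bs):,.0f} images/sec")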
6.1.4 Inference: images within a 7 ms window
Within the 7 ms window, 16 images (0.795 megapixels) were inferred, with the accelerator using 60 mW.
6.1.5 Megapixels per watt
This resulted in a power efficiency of 14.25 megapixels per watt, which scales linearly with image size; this is
another advantage of the iAbra neural network architecture.
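The white paper does not spell out the formula behind this figure. The sketch below uses an assumed definition, megapixels processed per second divided by average power draw in watts; the example values are hypothetical placeholders and are not taken from the measured results above.

# Minimal sketch with an assumed definition of power efficiency:
# (images/s * megapixels per image) / average power in watts.


def megapixels_per_watt(images_per_sec, megapixels_per_image, avg_power_watts):
    # Throughput-normalized power efficiency under the assumed definition.
    return (images_per_sec * megapixels_per_image) / avg_power_watts


if __name__ == "__main__":
    # Hypothetical example values, for illustration only.
    print(megapixels_per_watt(images_per_sec=100.0,
                              megapixels_per_image=0.795,
                              avg_power_watts=10.0))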