SPE (Stream Processing Engine) for transforming the data, which includes a multi-channel H.264 decoder and image resizing
DLE (Deep Learning Engine) for image classification using CNN topologies such as ResNet-50.
These engines are managed by the Arka Runtime, which exposes Accelerator Functions-as-a-Service through high-level APIs so that application frameworks such as Spark, TensorFlow, and kdb+ can integrate with no code changes.
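To make the offload model concrete, the sketch below shows in plain Python how a streaming application can keep its pipeline code unchanged while individual stages are swapped for accelerator-backed functions. All names here (sw_decode, sw_resize, sw_classify, run_pipeline) are illustrative placeholders, not part of Megh's Arka Runtime API.

```python
from typing import Iterable, List

Frame = List[List[int]]  # stand-in for a decoded video frame

# Placeholder CPU implementations of the pipeline stages. In the architecture
# described above, accelerator-backed functions (FPGA H.264 decode, SPE resize,
# DLE CNN inference) would be registered behind the same callable interface.

def sw_decode(packet: bytes) -> Frame:
    """Software stand-in for the multi-channel H.264 decoder."""
    return [list(packet)]

def sw_resize(frame: Frame) -> Frame:
    """Software stand-in for the SPE image resizer."""
    return frame

def sw_classify(frame: Frame) -> str:
    """Software stand-in for a DLE-backed CNN classifier (e.g. ResNet-50)."""
    return "unknown"

def run_pipeline(packets: Iterable[bytes],
                 decode=sw_decode, resize=sw_resize, classify=sw_classify):
    """Drive each packet through decode -> resize -> classify and yield labels."""
    for packet in packets:
        yield classify(resize(decode(packet)))

if __name__ == "__main__":
    print(list(run_pipeline([b"\x00\x01", b"\x02\x03"])))
```

Because the stages are ordinary callables, an accelerated implementation can be substituted without touching the code that drives the stream, which is the property the high-level APIs aim to preserve.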
Megh has developed the Deep Learning Engine (DLE) from the ground up for streaming inference. It consists of a library of high-performance, mixed-precision DL primitives that are drop-in replacements for TensorFlow and PyTorch. Megh provides a DLE compiler that directly parses TensorFlow and PyTorch models, creating an optimized DLE configuration for loading on the FPGA. The deployment flow for the DLE model is shown in the figure below.
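As one concrete illustration of the front half of that flow using standard tooling, the snippet below exports a stock PyTorch ResNet-50 to TorchScript with public torch/torchvision APIs; this is the kind of framework-native model description a compiler such as the DLE compiler parses. The actual compile-and-load step onto the FPGA is vendor-specific and is only indicated by a comment.

```python
import torch
import torchvision

# Export a standard ResNet-50 to TorchScript. A model compiler such as
# Megh's DLE compiler parses a trained TensorFlow or PyTorch model like
# this one and emits an FPGA configuration; the exact compiler invocation
# is vendor-specific and is not shown here.
model = torchvision.models.resnet50(weights=None).eval()
example = torch.randn(1, 3, 224, 224)      # NCHW input the model expects
traced = torch.jit.trace(model, example)   # freeze the graph for export
traced.save("resnet50_traced.pt")

# Next (vendor-specific, not shown): feed resnet50_traced.pt to the DLE
# compiler to generate the configuration loaded onto the PAC board.
```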
Dell EMC PowerEdge Server
We used Dell EMC PowerEdge R740/R740xd servers to host the PAC boards. The PowerEdge R740/R740xd is a general-purpose platform with highly expandable memory (up to 3 TB) and the I/O capability to match both read-intensive and write-intensive operations. The Dell EMC PowerEdge R740 is capable of handling demanding workloads and applications such as data warehouses, e-commerce, databases, high-performance computing (HPC), and deep learning.