
Get Started with Deep Learning FPGA Deployment on Intel Arria 10 SoC

This example shows how to create, compile, and deploy a dlhdl.Workflow object that has a handwritten character detection series network as the network object by using the Deep Learning HDL Toolbox™ Support Package for Intel FPGA and SoC. Use MATLAB® to retrieve the prediction results from the target device.

Prerequisites

  • Intel Arria™ 10 SoC development kit

  • Deep Learning HDL Toolbox™ Support Package for Intel FPGA and SoC

  • Deep Learning HDL Toolbox™

  • Deep Learning Toolbox™

Load the Pretrained SeriesNetwork

To load the pretrained network, which has been trained on the Modified National Institute of Standards and Technology (MNIST) database [1], enter:

net = getDigitsNetwork;

To view the layers of the pretrained series network, enter:

analyzeNetwork(net)
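
You can also inspect the layers at the command line. The Layers property is a standard property of the network object returned by getDigitsNetwork, so this is simply a text-only alternative to analyzeNetwork:

% Display the layer array at the command line
net.Layers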

Create Target Object

Create a target object that has a custom name for your target device and an interface to connect your target device to the host computer. Interface options are JTAG and Ethernet. To use JTAG, install Intel™ Quartus™ Prime Standard Edition 22.1. Set up the path to your installed Intel Quartus Prime executable if it is not already set up. For example, to set the toolpath, enter:

% hdlsetuptoolpath('ToolName', 'Altera Quartus II','ToolPath', 'C:\altera\22.1\quartus\bin64');
hTarget = dlhdl.Target('Intel')
hTarget = 
  Target with properties:

       Vendor: 'Intel'
    Interface: JTAG
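
If your board is connected to the host computer over Ethernet rather than JTAG, you can create the target object with the Ethernet interface instead. This is a hedged sketch; the IP address is a placeholder for the address of your board:

% Alternative (assumes an Ethernet connection to the board).
% Replace the IP address with the address of your Arria 10 SoC board.
% hTarget = dlhdl.Target('Intel', 'Interface', 'Ethernet', 'IPAddress', '192.168.1.101');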

Create Workflow Object

Create an object of the dlhdl.Workflow class. When you create the object, specify the network and the bitstream name. Specify the saved pretrained MNIST neural network, net, as the network. Make sure that the bitstream name matches the data type and the FPGA board that you are targeting. In this example, the target FPGA board is the Intel Arria 10 SoC board and the bitstream uses a single data type.

hW = dlhdl.Workflow('network', net, 'Bitstream', 'arria10soc_single','Target',hTarget);
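
If you quantize the network to int8 before deployment (for example, with the dlquantizer workflow), pair it with the matching int8 bitstream for the same board. A hedged sketch, where quantizedNet is a placeholder for a quantized network object:

% Assumed int8 workflow: match a quantized network with the int8 bitstream.
% hW = dlhdl.Workflow('Network', quantizedNet, 'Bitstream', 'arria10soc_int8', 'Target', hTarget);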

Compile the MNIST Series Network

To compile the MNIST series network, run the compile function of the dlhdl.Workflow object.

dn = hW.compile;
### Compiling network for Deep Learning FPGA prototyping ...
### Targeting FPGA bitstream arria10soc_single.
### An output layer called 'Output1_softmax' of type 'nnet.cnn.layer.RegressionOutputLayer' has been added to the provided network. This layer performs no operation during prediction and thus does not affect the output of the network.
### Optimizing network: Fused 'nnet.cnn.layer.BatchNormalizationLayer' into 'nnet.cnn.layer.Convolution2DLayer'
### Notice: The layer 'imageinput' of type 'ImageInputLayer' is split into an image input layer 'imageinput' and an addition layer 'imageinput_norm' for normalization on hardware.
### The network includes the following layers:
     1   'imageinput'        Image Input         28×28×1 images with 'zerocenter' normalization                (SW Layer)
     2   'conv_1'            2-D Convolution     8 3×3×1 convolutions with stride [1  1] and padding 'same'    (HW Layer)
     3   'relu_1'            ReLU                ReLU                                                          (HW Layer)
     4   'maxpool_1'         2-D Max Pooling     2×2 max pooling with stride [2  2] and padding [0  0  0  0]   (HW Layer)
     5   'conv_2'            2-D Convolution     16 3×3×8 convolutions with stride [1  1] and padding 'same'   (HW Layer)
     6   'relu_2'            ReLU                ReLU                                                          (HW Layer)
     7   'maxpool_2'         2-D Max Pooling     2×2 max pooling with stride [2  2] and padding [0  0  0  0]   (HW Layer)
     8   'conv_3'            2-D Convolution     32 3×3×16 convolutions with stride [1  1] and padding 'same'  (HW Layer)
     9   'relu_3'            ReLU                ReLU                                                          (HW Layer)
    10   'fc'                Fully Connected     10 fully connected layer                                      (HW Layer)
    11   'softmax'           Softmax             softmax                                                       (SW Layer)
    12   'Output1_softmax'   Regression Output   mean-squared-error                                            (SW Layer)
                                                                                                             
### Notice: The layer 'softmax' with type 'nnet.cnn.layer.SoftmaxLayer' is implemented in software.
### Notice: The layer 'Output1_softmax' with type 'nnet.cnn.layer.RegressionOutputLayer' is implemented in software.
### Compiling layer group: conv_1>>maxpool_2 ...
### Compiling layer group: conv_1>>maxpool_2 ... complete.
### Compiling layer group: conv_3>>relu_3 ...
### Compiling layer group: conv_3>>relu_3 ... complete.
### Compiling layer group: fc ...
### Compiling layer group: fc ... complete.

### Allocating external memory buffers:

          offset_name          offset_address     allocated_space 
    _______________________    ______________    _________________

    "InputDataOffset"           "0x00000000"     "368.0 kB"       
    "OutputResultOffset"        "0x0005c000"     "4.0 kB"         
    "SchedulerDataOffset"       "0x0005d000"     "220.0 kB"       
    "SystemBufferOffset"        "0x00094000"     "76.0 kB"        
    "InstructionDataOffset"     "0x000a7000"     "28.0 kB"        
    "ConvWeightDataOffset"      "0x000ae000"     "28.0 kB"        
    "FCWeightDataOffset"        "0x000b5000"     "100.0 kB"       
    "EndOffset"                 "0x000ce000"     "Total: 824.0 kB"

### Network compilation complete.
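
The compile function also accepts optional name-value arguments. For example, you can limit how many input frames the compiler reserves activation memory for; this is an optional, illustrative sketch and the value shown is arbitrary:

% Optional: reserve input activation memory for at most 15 frames.
% dn = hW.compile('InputFrameNumberLimit', 15);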

Program Bitstream onto FPGA and Download Network Weights

To deploy the network on the Intel Arria 10 SoC hardware, run the deploy function of the dlhdl.Workflow object. This function uses the output of the compile function to program the FPGA board by using the programming file. It also downloads the network weights and biases. The deploy function starts programming the FPGA device, displays progress messages, and reports the time it takes to deploy the network.

hW.deploy
### Programming FPGA Bitstream using JTAG...
### Programming the FPGA bitstream has been completed successfully.
### Loading weights to Conv Processor.
### Conv Weights loaded. Current time is 18-Jul-2024 10:54:36
### Loading weights to FC Processor.
### FC Weights loaded. Current time is 18-Jul-2024 10:54:37
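
If the bitstream is already programmed on the board from an earlier run, you can skip reprogramming and only download the network weights and biases. This sketch assumes the ProgramBitstream option of the deploy function:

% Assumed option: skip FPGA programming and only reload weights and biases.
% hW.deploy('ProgramBitstream', false);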

Run Prediction for Example Image

To load the example image and convert it into a formatted dlarray object for prediction, enter:

inputImg = imread('five_28x28.pgm');
inputImg = dlarray(single(inputImg), 'SSCB');

Run the prediction with the 'Profile' option set to 'on' to see the latency and throughput results.

[prediction, speed] = hW.predict(inputImg,'Profile','on');
### Finished writing input activations.
### Running single input activation.


              Deep Learning Processor Profiler Performance Results

                   LastFrameLatency(cycles)   LastFrameLatency(seconds)       FramesNum      Total Latency     Frames/s
                         -------------             -------------              ---------        ---------       ---------
Network                      31905                  0.00021                       1              32854           4565.7
    imageinput_norm           2913                  0.00002 
    conv_1                    6819                  0.00005 
    maxpool_1                 4493                  0.00003 
    conv_2                    5200                  0.00003 
    maxpool_2                 3549                  0.00002 
    conv_3                    6045                  0.00004 
    fc                        2854                  0.00002 
 * The clock frequency of the DL processor is: 150MHz
[val, idx] = max(prediction);
fprintf('The prediction result is %d\n', idx-1);
The prediction result is 5
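
The predict function also accepts several frames stacked along the batch dimension and returns results for each frame, as the FramesNum column in the profiler output suggests. A hedged sketch, where img1, img2, and img3 are placeholder 28-by-28 grayscale images:

% Assumed multi-frame usage: stack images along the batch (B) dimension.
% batchImg = dlarray(single(cat(4, img1, img2, img3)), 'SSCB');
% batchScores = hW.predict(batchImg);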

Bibliography

  1. LeCun, Y., C. Cortes, and C. J. C. Burges. "The MNIST Database of Handwritten Digits." http://yann.lecun.com/exdb/mnist/.
