How do I use GPU Coder generated CUDA code with the NVIDIA Docker Containers?

I'm using GPU Coder to generate code that runs on NVIDIA Jetson boards. My company has a process where we need to deploy them via docker containers. How can I do so?

Accepted Answer

Bill Chou
Bill Chou on 26 Feb 2021
Edited: Bill Chou on 15 Jul 2022
It is possible to integrate GPU Coder generated code with an NVIDIA Docker image. This workflow uses the GPU Coder Support Package for NVIDIA GPUs to remotely build the binaries on the Jetson board, so we assume a Jetson board has already been set up for use with GPU Coder. You can find instructions for setting up the board here.
There are four steps involved in this workflow:
  1. Generate CUDA code and build it on the Jetson board
  2. Create/customize a Docker file
  3. Build a Docker image
  4. Run the GPU Coder generated executable in the Docker container
Generate and build binaries on an NVIDIA Jetson
For simplicity, we will use an executable as the example for this workflow. To build the example, we need a design function (.m file) and the main file (.cu file) that calls the generated code with input.
Design function
Here is a simple design function that we will use for this example. It takes an input, multiplies it by 2, and returns the result. You can, of course, substitute a more complex function of your own:
foo.m
function output = foo(input)
output = input * 2;
end
Main file
To create an executable from the generated code, we need a wrapper function with the name 'main' that calls the generated 'foo' function. Here is an example hand-coded main file:
Note: MATLAB Coder and GPU Coder both support generating the main function automatically for you. You can find the related documentation here.
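For instance, automatic main generation can be requested through the code configuration object. The property values below follow the MATLAB Coder documentation; this is a minimal sketch, not a full configuration:

```matlab
% Ask the code generator to emit an example main instead of hand-writing one.
cfg = coder.gpuConfig('exe');
cfg.GenerateExampleMain = 'GenerateCodeAndCompile'; % or 'GenerateCodeOnly'
```

The generated example main is a starting point that you would typically review and adapt to your application's real inputs.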
main.cu
// main.cu
#include <iostream>
#include "foo.h"
using namespace std;
int main() {
double input[5] = {1, 2, 3, 4, 5};
double output[5];
foo(input, output); // call the generated function 'foo'
cout << "Input :";
for (int i = 0; i < 5; ++i) {
cout << " " << input[i];
}
cout << endl;
cout << "Output:";
for (int i = 0; i < 5; ++i) {
cout << " " << output[i];
}
cout << endl;
return 0;
}
Generate CUDA code and build the executable using the MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms (previously named GPU Coder Support Package for NVIDIA GPUs)
Note that this step assumes that you already have a Jetson board set up for GPU Coder. If not, please follow the instructions here to set up and test.
%% Create the hardware object. This step requires the Jetson board to be ready and set up.
% Syntax: hwobj = jetson(<hostname/ip address>, <username>, <password>)
% This is a one-time step. MATLAB caches the settings, so next time you can simply call jetson() to create an object with the previously cached values.
hwobj = jetson('my-xavier-board', 'ubuntu', 'ubuntu');
%% Generate code and build it
cfg = coder.gpuConfig('exe');
cfg.CustomSource = fullfile('main.cu');
% Use the Jetson hardware object (this uses the cached values from the previously created jetson(...) object)
cfg.Hardware = coder.Hardware('NVIDIA Jetson');
% Generated artifacts and the executable will be copied to the following directory on the Jetson board. Set it up as you need.
cfg.Hardware.BuildDir = '~/remoteBuildDir';
input = [1,2,3,4,5];
% Generate and build the executable
codegen -config cfg -args {input} foo
If this step is successful, you will find the generated code and executable files on both the host (where you are generating code from) and the target board. On the target, they are located inside the directory specified in cfg.Hardware.BuildDir.
Create/customize the Docker (configuration) file
Once we have all the necessary artifacts, we are ready to create the Docker image. The first step in the process is to create a Docker file, which is the build specification consumed by the Docker builder.
Here is the example Docker file, named dockerfile_foo:
dockerfile_foo
FROM nvcr.io/nvidia/l4t-base:r32.3.1
WORKDIR /work
COPY ~/remoteBuildDir/MATLAB_ws/local-ssd2/work/docker /work/
CMD export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH && ./foo.elf
Line 1: Grab the base image.
Line 3: In this example, all generated artifacts are located in '~/remoteBuildDir/MATLAB_ws/local-ssd2/work/docker', so we copy that directory's contents to the work directory. You will have to change this path based on your settings; it should be the directory that contains the executable file.
Line 4: The Docker base image contains multiple CUDA versions, and the default is CUDA 10.0. Since GPU Coder in R2020b requires CUDA 10.2, we export the CUDA 10.2 libraries through LD_LIBRARY_PATH. We also invoke the generated executable.
If you are creating the file on the host, you need to copy it to the board where you are building the Docker image. The MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms (previously named GPU Coder Support Package for NVIDIA GPUs) has a putFile method that performs a remote copy to the target. You can use it to copy the Docker file to the board with the command:
hwobj.putFile('dockerfile_foo', hwobj.workspaceDir)
If you create the file directly on the target, you can skip the copy step above.
Build the Docker image and run it
1. Log into the board using the command:
ssh -X ubuntu@gpucoder-xavier-2
2. Change to the workspace directory:
cd ~/remoteBuildDir/MATLAB_ws/R2021b/local-ssd2/work/docker/
3. Build the Docker image (note that the trailing period "." is required in the command below):
sudo docker build -f dockerfile_foo -t foo_image .
4. Run it:
sudo docker run --runtime nvidia foo_image