Radar Target Classification Using Machine Learning and Deep Learning

This example shows how to classify radar returns using feature extraction followed by a support vector machine (SVM) classifier.

In the second portion of the example, a deep learning workflow for the same data set is illustrated using a convolutional neural network and a Long Short-Term Memory (LSTM) recurrent neural network. Note that the data set used in this example does not require advanced techniques but the workflow is described because it can be used for more complex data sets.

This example requires:

  • Phased Array System Toolbox

  • Wavelet Toolbox

  • Statistics and Machine Learning Toolbox

  • Deep Learning Toolbox

Introduction

Target classification is an important function in modern radar systems. Because of the recent success of machine learning techniques for classification, there is a lot of interest in applying similar techniques to classify radar returns. This example starts with a workflow in which an SVM is used to classify radar echoes from a cylinder and a cone. Although the example uses synthesized I/Q samples, the workflow is applicable to real radar returns.

RCS Synthesis

The next section shows how to create synthesized data to train the classifiers.

The following code simulates the RCS pattern of a cylinder with a radius of 1 meter and a height of 10 meters. The operating frequency of the radar is 850 MHz.

c = 3e8;
fc = 850e6;
[cylrcs,az,el] = rcscylinder(1,1,10,c,fc);
helperTargetRCSPatternPlot(az,el,cylrcs);

The pattern can then be applied to a backscatter radar target to simulate returns from different aspect angles.

cyltgt = phased.BackscatterRadarTarget('PropagationSpeed',c,...
    'OperatingFrequency',fc,'AzimuthAngles',az,'ElevationAngles',el,'RCSPattern',cylrcs);

The following code simulates 100 returns of the cylinder over time and plots the result. It is assumed that the cylinder undergoes a motion that causes small vibrations around boresight; as a result, the aspect angle changes from one sample to the next.

rng(2017);
N = 100;
az = 2*randn(1,N);                  % model vibration with +/- 2 degrees around boresight
el = 2*randn(1,N);
cylrtn = cyltgt(ones(1,N),[az;el]); % generate target echo

clf
plot(mag2db(abs(cylrtn)));
xlabel('Time Index')
ylabel('Target Return (dB)');
title('Target Return for Cylinder');

The return of the cone can be generated similarly. To create the training set for the SVM classifier, the above process is repeated for 5 arbitrarily selected cylinder radii. In addition, for each radius, 10 motion profiles are simulated by varying the incident angle following 10 randomly generated sinusoidal curves around boresight. There are 701 samples in each motion profile, so there are 701-by-50 samples for each shape in total. Because of the long computation time, the training data is precomputed and loaded below.

load('RCSClassificationReturnsTraining');
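
For reference, the following is a minimal sketch of how one sinusoidal motion profile could be generated using the cyltgt object created earlier. The amplitude, frequency, and phase values below are illustrative, not the ones used to produce the precomputed data.

% Sketch: generate one sinusoidal motion profile for the cylinder target
% (illustrative values, not those used for the precomputed data)
Np  = 701;                                  % samples per motion profile
t   = (0:Np-1)/Np;
azp = 10*sin(2*pi*3*t + 2*pi*rand);         % sinusoidal aspect variation (deg)
elp = zeros(1,Np);
profileRtn = cyltgt(ones(1,Np),[azp;elp]);  % echo along the motion profile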

As an example, the next plots show one motion profile from each shape: how the incident aspect angle and the corresponding target return change over time.

clf;
subplot(2,2,1); plot(cylinderAspectAngle(1,:)); ylim([-90 90]);
title('Cylinder Aspect Angle vs. Time'); xlabel('Time Index'); ylabel('Aspect Angle (degrees)');
subplot(2,2,3); plot(RCSReturns.Cylinder_1); ylim([-50 50]);
title('Cylinder Return'); xlabel('Time Index'); ylabel('Target Return (dB)');
subplot(2,2,2); plot(coneAspectAngle(1,:)); ylim([-90 90]);
title('Cone Aspect Angle vs. Time'); xlabel('Time Index'); ylabel('Aspect Angle (degrees)');
subplot(2,2,4); plot(RCSReturns.Cone_1); ylim([-50 50]);
title('Cone Return'); xlabel('Time Index'); ylabel('Target Return (dB)');

Feature Extraction

To improve classification performance, learning algorithms often work on extracted features rather than on the original signal. Features make it easier for a classification algorithm to discriminate between returns from different targets. In addition, features are often much smaller than the original signal, so learning requires fewer computational resources.

There are a variety of ways to extract features for this type of data set. To choose the right features, it is often useful to look at a time-frequency view of the data, in which the frequency content due to motion varies across radar pulses. The time-frequency signature of the signal can be derived with either the Fourier transform (the spectrogram) or wavelet transforms. Specifically, this example uses the wavelet packet representation of the signal. The following plots show the wavelet packet signatures for both the cylinder and the cone. These signatures suggest that a learning algorithm will be able to distinguish between the two classes; specifically, there is separation between the frequency content of the two signatures over time.

levels = 3;
[wpt,~,F] = modwpt(RCSReturns{:,1},'fk6',levels,'TimeAlign',true);
clf;
contour(1:size(wpt,2),F,abs(wpt).^2); grid on;
xlabel('Time Index'); ylabel('Cycles per sample'); title('Wavelet Packet for Cylinder Return');

[wpt,~,F,E,RE] = modwpt(RCSReturns{:,51},'fk6',levels,'TimeAlign',true);
clf;
contour(1:size(wpt,2),F,abs(wpt).^2); grid on;
xlabel('Time Index'); ylabel('Cycles per sample'); title('Wavelet Packet for Cone Return');

The apparent frequency separation between the cylinder and cone returns suggests using a frequency-domain measure to classify the signals. This example uses the maximal overlap discrete wavelet packet transform (MODWPT) to compare relative subband energies. The MODWPT at level J partitions the signal energy into 2^J equal-width subbands in a way that preserves the total signal energy; with three levels, this gives the eight subbands shown below. To see this, you can do the following.

T = array2table([F E RE*100],'VariableNames',{'CenterFrequency','Energy','PercentEnergy'});
disp(T)
    CenterFrequency      Energy      PercentEnergy
    _______________    __________    _____________

        0.03125        1.9626e+05         42.77   
        0.09375             82923        18.071   
        0.15625             65162          14.2   
        0.21875             46401        10.112   
        0.28125             37044        8.0728   
        0.34375             20725        4.5166   
        0.40625              8952        1.9509   
        0.46875            1405.4       0.30626   

The table shows the subband center frequencies in cycles/sample, the energy in those subbands, and the percentage of the total energy in each subband. Note that MODWPT preserves the signal energy, which is an important property that is very difficult to achieve with conventional bandpass filtering. Specifically, you have a signal decomposition into subbands, which mimics an orthogonal transform. In signal classification problems where there is frequency separation between classes, this property significantly improves the ability of a classifier to accurately distinguish the classes.
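
As a quick check, you can verify that the subband energies sum to the energy of the time-domain signal. This is a sketch, assuming the cone return and the energy vector E from the modwpt call above are still in the workspace.

% Verify that MODWPT preserves energy: subband energies sum to signal energy
x = RCSReturns{:,51};           % the cone return analyzed above
sigEnergy = sum(abs(x).^2);     % energy of the time-domain signal
fprintf('Signal energy: %.5g, Sum of subband energies: %.5g\n', ...
    sigEnergy,sum(E));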

Using the wavelet transform, the extracted features consist of 8 predictors per target return. Compared to the original time-domain signal of 701 points, this is a significant reduction in the data. The number of levels for the wavelet transform can be tuned to improve the performance of the classification algorithms.

trainingData = varfun(@(x)helperWPTFeatureExtraction(x,'fk6',levels),RCSReturns);
trainingData = array2table(table2array(trainingData)');
% 50 cylinders followed by 50 cones
shapeTypes = categorical({'Cylinder';'Cone'});
trainingData.Type = shapeTypes([zeros(50,1); ones(50,1)]+1);
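
The helper function helperWPTFeatureExtraction is not listed here, but a plausible sketch of such a wavelet packet feature extractor, illustrative rather than an exact reproduction of the helper, is:

function features = wptRelativeEnergy(x,wname,level)
% Sketch of a wavelet packet feature extractor: return the relative
% energy in each of the 2^level MODWPT subbands as the feature vector.
% Illustrative only; not necessarily identical to helperWPTFeatureExtraction.
[~,~,~,~,RE] = modwpt(x,wname,level,'TimeAlign',true);
features = RE;
end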

Model Training

The Classification Learner app can be used to train the classifier. Once the training data is loaded, the app can apply different learning algorithms to the data set and report back the classification accuracy. The following picture is a snapshot of the app.

Based on the result, this example uses the SVM technique and then generates the corresponding training algorithm, helperTrainClassifier, from the app. The output of the training algorithm is a configured classifier, trainedClassifier, ready to perform the classification.

[trainedClassifier, validationAccuracy] = helperTrainClassifier(trainingData);
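
If you prefer to train programmatically rather than through the app, a minimal sketch using fitcsvm, with illustrative kernel and validation settings, looks like this:

% Sketch: train an SVM directly with fitcsvm (illustrative settings)
predictors = trainingData(:,1:end-1);   % the 8 wavelet features
svmMdl = fitcsvm(predictors,trainingData.Type, ...
    'KernelFunction','gaussian','Standardize',true);
cvMdl = crossval(svmMdl,'KFold',5);     % 5-fold cross-validation
valAcc = 1 - kfoldLoss(cvMdl)           % estimated validation accuracy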

Target Classification

Once the model is ready, the classifier can process received target returns and perform classification using the predictFcn method. The next section of the example creates a test data set using an approach similar to that used for the training data. This data is passed to the derived classifier to see whether it can correctly classify the two shapes. The test data contains 25 cylinder returns and 25 cone returns, consisting of 5 sizes for each shape and 5 motion profiles for each size. The generation process is the same as for the training data, but the specific values differ slightly because of the randomness of the size and incident angle values. The total number of samples for each shape is 701-by-25.

load('RCSClassificationReturnsTest');

testData = varfun(@(x)helperWPTFeatureExtraction(x,'fk6',levels),RCSReturnsTest);
testData = array2table(table2array(testData)');
testResponses = shapeTypes([zeros(25,1); ones(25,1)]+1); % 25 cylinders followed by 25 cones

testPredictions = trainedClassifier.predictFcn(testData);
cmat = confusionmat(testResponses, testPredictions)
cmat =

    16     9
     0    25

From the confusion matrix, it can be computed that the overall accuracy is about 82%.

classacc = sum(diag(cmat))/sum(cmat(:))
classacc =

    0.8200

It is possible to improve the performance of the classifier by increasing the quality and quantity of the training data. In addition, the feature extraction process can be improved to further distinguish characteristics of each target within the classification algorithm. Note that different features may have different optimal classification algorithms.

For more complex data sets, a deep learning workflow using a convolutional neural network and a Long Short-Term Memory (LSTM) recurrent neural network can also improve performance. For simplicity, the same data set is used to demonstrate these workflows as well.

clear all;
close all;
load RCSClassificationReturnsTraining;
load RCSClassificationReturnsTest;

Transfer Learning

AlexNet is a deep CNN designed by Alex Krizhevsky and used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Accordingly, AlexNet has been trained to recognize images in 1,000 classes. In this example, we reuse the pre-trained AlexNet to classify radar returns belonging to one of two classes. To use AlexNet, you must install the Deep Learning Toolbox™ Model for AlexNet Network support package. To do this, use the MATLAB Add-On Explorer. If you have successfully installed AlexNet, you can execute the following code to load the network and display the network layers.

anet = alexnet;
anet.Layers
ans = 

  25x1 Layer array with layers:

     1   'data'     Image Input                   227x227x3 images with 'zerocenter' normalization
     2   'conv1'    Convolution                   96 11x11x3 convolutions with stride [4  4] and padding [0  0  0  0]
     3   'relu1'    ReLU                          ReLU
     4   'norm1'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     5   'pool1'    Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
     6   'conv2'    Grouped Convolution           2 groups of 128 5x5x48 convolutions with stride [1  1] and padding [2  2  2  2]
     7   'relu2'    ReLU                          ReLU
     8   'norm2'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     9   'pool2'    Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
    10   'conv3'    Convolution                   384 3x3x256 convolutions with stride [1  1] and padding [1  1  1  1]
    11   'relu3'    ReLU                          ReLU
    12   'conv4'    Grouped Convolution           2 groups of 192 3x3x192 convolutions with stride [1  1] and padding [1  1  1  1]
    13   'relu4'    ReLU                          ReLU
    14   'conv5'    Grouped Convolution           2 groups of 128 3x3x192 convolutions with stride [1  1] and padding [1  1  1  1]
    15   'relu5'    ReLU                          ReLU
    16   'pool5'    Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
    17   'fc6'      Fully Connected               4096 fully connected layer
    18   'relu6'    ReLU                          ReLU
    19   'drop6'    Dropout                       50% dropout
    20   'fc7'      Fully Connected               4096 fully connected layer
    21   'relu7'    ReLU                          ReLU
    22   'drop7'    Dropout                       50% dropout
    23   'fc8'      Fully Connected               1000 fully connected layer
    24   'prob'     Softmax                       softmax
    25   'output'   Classification Output         crossentropyex with 'tench' and 999 other classes

You see that AlexNet consists of 25 layers. Like all DCNNs, AlexNet cascades convolutional operators followed by nonlinearities and pooling, or averaging. AlexNet expects an image input of size 227-by-227-by-3, which you can see with the following code.

anet.Layers(1)
ans = 

  ImageInputLayer with properties:

                Name: 'data'
           InputSize: [227 227 3]

   Hyperparameters
    DataAugmentation: 'none'
       Normalization: 'zerocenter'
        AverageImage: [227×227×3 single]

Additionally, AlexNet is configured to recognize 1,000 different classes, which you can see with the following code.

anet.Layers(23)
ans = 

  FullyConnectedLayer with properties:

          Name: 'fc8'

   Hyperparameters
     InputSize: 4096
    OutputSize: 1000

   Learnable Parameters
       Weights: [1000×4096 single]
          Bias: [1000×1 single]

Use the properties method to see a list of all properties.

In order to use AlexNet on our binary classification problem, we must change the fully connected and classification layers.

layers = anet.Layers;
layers(23) = fullyConnectedLayer(2);
layers(25) = classificationLayer;

Continuous Wavelet Transform

AlexNet is designed to discriminate differences in images and classify the results. Therefore, in order to use AlexNet to classify radar returns, we must transform the 1-D radar return time series into images. A common way to do this is to use a time-frequency representation (TFR). There are a number of choices for the time-frequency representation of a signal, and which one is most appropriate depends on the signal characteristics. To determine which TFR may be appropriate for this problem, randomly choose and plot a few radar returns from each class.

rng default;
idxCylinder = randperm(50,2);
idxCone = randperm(50,2)+50;
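
One simple way to plot the selected returns is sketched below; the titles and axis labels are illustrative.

% Plot the randomly selected returns from each class (sketch)
figure
subplot(2,1,1); plot(RCSReturns{:,idxCylinder}); ylim([-50 50]);
title('Cylinder Returns'); xlabel('Time Index'); ylabel('Target Return (dB)');
subplot(2,1,2); plot(RCSReturns{:,idxCone}); ylim([-50 50]);
title('Cone Returns'); xlabel('Time Index'); ylabel('Target Return (dB)');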

It is evident that the radar returns previously shown are characterized by slowly varying changes punctuated by large transient decreases, as described earlier. A wavelet transform is ideally suited to sparsely representing such signals. Wavelets shrink to localize transient phenomena with high temporal resolution and stretch to capture slowly varying signal structure. Obtain and plot the continuous wavelet transform of one of the cylinder returns.

cwt(RCSReturns{:,idxCylinder(1)},'VoicesPerOctave',8)

The CWT simultaneously captures both the slowly varying (low frequency) fluctuations and the transient phenomena. Contrast the CWT of the cylinder return with one from a cone target.

cwt(RCSReturns{:,idxCone(2)},'VoicesPerOctave',8);

Because of the apparent importance of the transients in determining whether the target return originates from a cylinder or cone target, we select the CWT as the ideal TFR to use. After obtaining the CWT for each target return, we make images from the CWT of each radar return. These images are resized to be compatible with AlexNet's input layer and we leverage AlexNet to classify the resulting images.

Image Preparation

The helper function, helpergenWaveletTFImg, obtains the CWT for each radar return, reshapes the CWT to be compatible with AlexNet, and writes the CWT as a jpeg file. To run helpergenWaveletTFImg, choose a parentDir where you have write permission. This example uses tempdir, but you may use any folder on your machine where you have write permission. The helper function creates Training and Test set folders under parentDir as well as creating Cylinder and Cone subfolders under both Training and Test. These folders are populated with jpeg images to be used as inputs to AlexNet.
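
The helper is not listed here, but the following sketch shows how one CWT image compatible with AlexNet's 227-by-227-by-3 input could be produced. The steps are illustrative, and imresize requires Image Processing Toolbox.

% Sketch: convert one radar return into a jpeg scalogram image for AlexNet
% (illustrative; the actual steps live inside helpergenWaveletTFImg)
cfs = cwt(RCSReturns{:,1},'VoicesPerOctave',8);         % CWT coefficients
im = ind2rgb(round(rescale(abs(cfs),1,256)),jet(256));  % scalogram to RGB
im = imresize(im,[227 227]);                            % match AlexNet input
imwrite(im,fullfile(tempdir,'cyl_example.jpg'));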

parentDir = tempdir;
helpergenWaveletTFImg(parentDir,RCSReturns,RCSReturnsTest)
Generating Time-Frequency Representations...Please Wait
   Creating Cylinder Time-Frequency Representations ... Done
   Creating Cone Time-Frequency Representations ... Done
   Creating Cylinder Time-Frequency Representations ... Done
   Creating Cone Time-Frequency Representations ... Done

Now use imageDatastore to manage file access from the folders in order to train AlexNet. Create datastores for both the training and test data.

trainingData = imageDatastore(fullfile(parentDir,'Training'), 'IncludeSubfolders', true,...
    'LabelSource', 'foldernames');
testData = imageDatastore(fullfile(parentDir,'Test'),'IncludeSubfolders',true,...
    'LabelSource','foldernames');

Set the options for re-training AlexNet. Set the initial learning rate to 1e-4, the maximum number of epochs to 15, and the mini-batch size to 10. Use stochastic gradient descent with momentum.

ilr = 1e-4;
mxEpochs = 15;
mbSize = 10;
opts = trainingOptions('sgdm', 'InitialLearnRate', ilr, ...
    'MaxEpochs',mxEpochs , 'MiniBatchSize',mbSize, ...
    'Plots', 'training-progress','ExecutionEnvironment','cpu');

Train the network. By default, trainNetwork uses a GPU if a compatible one is available, and training should complete in less than one minute; on a CPU, training takes around five minutes. Training times vary based on a number of factors. In this case, training is forced onto the CPU by setting the ExecutionEnvironment parameter to 'cpu'.

CWTnet = trainNetwork(trainingData,layers,opts);
Initializing input data normalization.
|========================================================================================|
|  Epoch  |  Iteration  |  Time Elapsed  |  Mini-batch  |  Mini-batch  |  Base Learning  |
|         |             |   (hh:mm:ss)   |   Accuracy   |     Loss     |      Rate       |
|========================================================================================|
|       1 |           1 |       00:00:02 |       50.00% |       1.9349 |      1.0000e-04 |
|       5 |          50 |       00:00:39 |      100.00% |   1.9074e-07 |      1.0000e-04 |
|      10 |         100 |       00:01:15 |      100.00% |   5.0068e-07 |      1.0000e-04 |
|      15 |         150 |       00:01:50 |      100.00% |   7.1526e-08 |      1.0000e-04 |
|========================================================================================|

Use the trained network to predict target returns in the held-out test set.

predictedLabels = classify(CWTnet,testData);
accuracy = sum(predictedLabels == testData.Labels)/50*100
accuracy =

   100

Plot the confusion chart along with the precision and recall. In this case, 100% of the test samples are classified correctly.

figure('Units','normalized','Position',[0.2 0.2 0.5 0.5]);
ccDCNN = confusionchart(testData.Labels,predictedLabels);
ccDCNN.Title = 'Confusion Chart';
ccDCNN.ColumnSummary = 'column-normalized';
ccDCNN.RowSummary = 'row-normalized';

LSTM

In the final section of this example, an LSTM workflow is described. First, the LSTM layers are defined:

LSTMlayers = [ ...
    sequenceInputLayer(1)
    bilstmLayer(100,'OutputMode','last')
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer
    ];
options = trainingOptions('adam', ...
    'MaxEpochs',30, ...
    'MiniBatchSize', 150, ...
    'InitialLearnRate', 0.01, ...
    'GradientThreshold', 1, ...
    'Plots','training-progress', ...
    'Verbose',false,'ExecutionEnvironment','cpu');
trainLabels = repelem(categorical({'cylinder','cone'}),[50 50]);
trainLabels = trainLabels(:);
trainData = num2cell(table2array(RCSReturns)',2);
testData = num2cell(table2array(RCSReturnsTest)',2);
testLabels = repelem(categorical({'cylinder','cone'}),[25 25]);
testLabels = testLabels(:);
RNNnet = trainNetwork(trainData,trainLabels,LSTMlayers,options);

The accuracy for this system is also computed.

predictedLabels = classify(RNNnet,testData,'ExecutionEnvironment','cpu');
accuracy = sum(predictedLabels == testLabels)/50*100
accuracy =

   100
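
As with the CNN, you can also inspect the confusion chart for the LSTM results; the following is a sketch.

% Sketch: confusion chart for the LSTM predictions
figure
ccLSTM = confusionchart(testLabels,predictedLabels);
ccLSTM.Title = 'LSTM Confusion Chart';
ccLSTM.ColumnSummary = 'column-normalized';
ccLSTM.RowSummary = 'row-normalized';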

Conclusion

This example presents a workflow for performing radar target classification using machine learning techniques. Although this example used synthesized data to do training and testing, it can be easily extended to accommodate real radar returns.

In addition, the workflows for both a deep convolutional neural network with transfer learning and an LSTM network are described. These workflows can be applied to more complex data sets. Because of the signal characteristics, we chose the continuous wavelet transform as our ideal time-frequency representation.
