Maritime Clutter Suppression with Neural Networks

This example shows how to train and evaluate neural networks to suppress maritime clutter returns from radar images using the Deep Learning Toolbox™. The Deep Learning Toolbox provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps.

Two example scenarios are shown here. In the first scenario, a denoising convolutional neural network is used to suppress clutter in a plan position indicator (PPI) image. The Simulate a Maritime Radar PPI example demonstrates how to use Radar Toolbox™ to create PPI images for a rotating radar at sea. In the second scenario, a convolutional autoencoder is used to suppress clutter in a range-time image. The Simulate a Coastal Surveillance Radar example shows how to use Radar Toolbox to create range-time images for a stationary coastal surveillance radar.

Scenario 1: Clutter Suppression for a Maritime Radar PPI

In this first scenario, a rotating radar on a tall platform in open water has been simulated to create PPI radar images with clutter and target returns. A denoising convolutional network is used to suppress the clutter returns.

Set the random seed for repeatability.

rng default

The Maritime Radar PPI Dataset

The dataset contains 84 pairs of synthetic radar images. Each pair consists of an input image, which has both sea clutter and extended target returns, and a desired response image, which includes only the target returns. The images were created using a radarScenario simulation with a radarTransceiver and a rotating uniform linear array (ULA). Each image contains two nonoverlapping cuboid targets with one representing a small container ship and the other representing a larger container ship.

The following parameters are fixed from image to image:

  • Frequency (10 GHz)

  • Pulse length (80 ns)

  • Range resolution (7.5 m)

  • PRF (1 kHz)

  • Azimuth beamwidth (1 deg)

  • Radar height (55 m)

  • Rotation rate (50 RPM)

  • Small target dimensions (120-by-18-by-22 m)

  • Large target dimensions (200-by-32-by-58 m)

  • Small target fixed RCS (30 dBsm)

  • Large target fixed RCS (40 dBsm)

The following parameters are randomized from image to image:

  • Wind speed (7 to 17 m/s)

  • Wind direction (0 to 180 deg)

  • Target position (anywhere on the surface)

  • Target heading (0 to 360 deg)

  • Target speed (4 to 19 m/s)

This variation ensures that a network trained on this data will be applicable to a fairly wide range of target profiles and sea states for this radar configuration. For more information on sea states, see the Maritime Radar Sea Clutter Modeling example.

Download the Maritime Radar PPI Images dataset and unzip the data and license file into the current working directory.

dataURL = 'https://ssd.mathworks.com/supportfiles/radar/data/MaritimeRadarPPI.zip';
unzip(dataURL)

Load the image data into a struct called imdata. This struct has fields img1 through img84 and resp1 through resp84.

imdata = load('MaritimeRadarPPI.mat');
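
If you want to confirm the layout of the loaded data, you can check the size of the first image/response pair; each should come back as a 626-by-626 matrix, matching the array dimensions used below. This check is an optional addition and is not part of the original workflow.

% Optional sanity check (illustrative addition): confirm the expected image size
size(imdata.img1)
size(imdata.resp1)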

Prepare the Data

You can use the pretrained network to run the example without having to wait for training. To perform the training steps, set the doTrain variable to true in the code below. The training for this network takes about 5 minutes on a GPU.

doTrain = false;
if ~doTrain
    load PPIDeclutterNetwork.mat
end

Image sets 1-70 are used for training and 71-80 for validation. The last 4 images (81-84) are used to evaluate the trained network.

Format the data as a 4D array for use with the network trainer and training options. The first two dimensions are considered spatial dimensions. The third dimension is for channels (such as color channels). The separate images are arranged along the 4th dimension. The cluttered inputs are simply referred to as images, and the desired output is known as the response. Single precision is used since that is native to the neural network trainer.

imgs  = zeros(626,626,1,84,'single');
resps = zeros(626,626,1,84,'single');
for ind = 1:84
   imgs(:,:,1,ind) = imdata.(sprintf('img%d',ind));
   resps(:,:,1,ind) = imdata.(sprintf('resp%d',ind));
end

After formatting, clear the loaded data struct to save RAM.

clearvars imdata

Network Architecture

A network is defined by a sequence of layer objects, including an input and output layer. An imageInputLayer is used as the input layer so that the images may be used without any reformatting. A regressionLayer is used for the output to evaluate a simple mean-squared-error (MSE) loss function. A cascade of 2D convolution layers with normalizations and nonlinear activations is used for the hidden layers.

Start by creating the input layer. Specify the spatial size of the input images along with the number of channels (1).

layers = imageInputLayer([626 626 1]);

Add 3 sets of convolution+normalization+activation. Each convolution layer consists of a set of spatial filters. The batchNormalizationLayer biases and scales each mini batch to improve numerical robustness and speed up training. The leakyReluLayer is a nonlinear activation layer that scales values below 0 while leaving values greater than 0 unmodified.

Care must be taken to ensure the spatial and channel dimensions are consistent from layer to layer and that the size and number of channels of the output from the last layer matches the size and number of channels of the desired response images. Set the Padding property of the convolution layers to 'same' so that the filtering process does not change the spatial size of the images.

layers = [layers;
          convolution2dLayer([5 5],1,Padding='same');
          batchNormalizationLayer;
          leakyReluLayer(0.2);
          convolution2dLayer([6 6],4,Padding='same');
          batchNormalizationLayer;
          leakyReluLayer(0.2);
          convolution2dLayer([5 5],1,Padding='same');
          batchNormalizationLayer;
          leakyReluLayer(0.2)];
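
As a quick check of the padding logic (an illustrative snippet, not part of the example's workflow), the output size of a stride-1 convolution along one spatial dimension is in + 2*pad - filterSize + 1. With Padding='same' the padding is chosen so the output size equals the input size; with no padding, a [5 5] filter would shrink a 626-by-626 image to 622-by-622.

% Output size along one spatial dimension for a stride-1 convolution:
%   out = in + 2*pad - filterSize + 1
inSize = 626;
filterSize = 5;
outNoPad = inSize - filterSize + 1               % 622 without padding
padSame  = (filterSize - 1)/2;                   % padding chosen by 'same' for an odd filter size
outSame  = inSize + 2*padSame - filterSize + 1   % 626, spatial size preserved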

Finally, add the output layer, which is a simple regression layer.

layers = [layers; regressionLayer];

Train the Network

Use the trainingOptions function to configure exactly how the network is trained. In addition to specifying the training method to use, this provides control over things like learn-rate scheduling and the size of mini batches. The trainingOptions can also be used to specify a validation data set, which is used to determine the running performance. Since the performance of a network may not improve monotonically with iterations, this also provides a way to return the network at whichever iteration yielded the lowest validation error.

Define the IDs of the sets to use for training and for validation.

trainSet = 1:70;
valSet = 71:80;

Now create the trainingOptions. Use the adaptive moment estimation (Adam) solver. Train for a maximum of 80 epochs with a mini batch size of 20. Set the initial learn rate to 0.1. The validation set is specified with a 1-by-2 cell array containing the validation image and response arrays. Set the ValidationFrequency to 25 to evaluate the loss for the validation set every 25 iterations. Specify OutputNetwork as 'best-validation-loss' to return the network at the iteration which had the least validation loss. Set Verbose to true to print the training progress.

opts = trainingOptions("adam", ...
    MaxEpochs=80, ...
    MiniBatchSize=20, ...
    Shuffle="every-epoch", ...
    InitialLearnRate=0.1, ...
    ValidationData={imgs(:,:,:,valSet),resps(:,:,:,valSet)}, ...
    ValidationFrequency=25, ...
    OutputNetwork='best-validation-loss', ...
    Verbose=true);

Training is initiated with the trainNetwork function. Pass in the 4-D training image and response arrays, the vector of network layers, and the training options. This step runs only if the doTrain flag is set to true.

A compatible GPU is used by default if available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, trainNetwork uses the CPU.

if doTrain
    [net,info] = trainNetwork(imgs(:,:,:,trainSet),resps(:,:,:,trainSet),layers,opts);
end
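
If you do train the network yourself, you can optionally save the trained network and training information so they can be reloaded later, in the same way as the pretrained file above. This is a suggested addition rather than part of the original workflow, and the file name below is arbitrary.

% Optional: save the newly trained network and training info for later reuse
% (hypothetical file name; choose any name that does not clash with the downloaded data).
if doTrain
    save('myPPIDeclutterNetwork.mat','net','info');
end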

Use the provided helper function to plot the training and validation loss on a log scale.

helperPlotTrainingProgress(info)

Figure: Training Progress plot of training and validation loss (dB) versus iteration.

The training and validation loss decreased steadily until an error floor was hit at around iteration 200. More training samples are needed to improve the performance of this network.

Evaluate the Network

Now that the network has been trained, use the last 4 images to evaluate the network.

evalSet = 81:84;

Use the provided helper function to plot the input images alongside the responses output by the network. The results are normalized and pixels below -60 dB are clipped for easier comparison.

helperPlotEvalResults(imgs(:,:,:,evalSet),net);

Figures: for each of the four evaluation images, the cluttered Input is shown alongside the network Output.

The network completely removes the sea clutter below a certain threshold of returned power while retaining the target signals with only a small dilation effect due to the size of the convolution filters used. The remaining high-power clutter near the center of the images could be removed by a spatially-aware layer, such as a fully-connected layer, or by preprocessing the original images to remove the range-dependent losses.
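
As one illustration of the preprocessing idea, the sketch below applies a simple range-dependent gain to a PPI image before it would be passed to the network. This is only a sketch under assumptions that are not part of the example: it assumes the radar sits at the image center and that distributed sea clutter power falls off with range roughly as R^-3, so pixel amplitudes are scaled by r^(3/2). The exponent would need to be tuned to the actual geometry and processing.

% Hypothetical preprocessing sketch (not part of the original example):
% flatten range-dependent clutter losses in a PPI image.
img = imgs(:,:,1,evalSet(1));               % one cluttered evaluation image
[nr,nc] = size(img);
[x,y] = meshgrid(1:nc,1:nr);
r = hypot(x - (nc+1)/2, y - (nr+1)/2);      % pixel range from the image center
r = max(r,1);                               % avoid a zero at the center pixel
rangeGain = (r/max(r(:))).^(3/2);           % amplitude gain for an assumed R^-3 power falloff
imgCompensated = img.*rangeGain;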

Scenario 2: Clutter Suppression for a Coastal Surveillance Radar

In this second scenario demonstrating maritime clutter suppression, a stationary radar viewing moving targets at sea has been simulated to create range-time radar images with clutter and target returns. A convolutional autoencoder is used to suppress the clutter returns.

Set the random seed for repeatability.

rng default

The Coastal Surveillance Radar Dataset

The dataset contains 10,000 pairs of synthetic range-time-intensity radar images. Each pair consists of an input image, which has both sea clutter and target returns, and a desired response image, which includes only the target returns. The images were created using a radarScenario simulation with a radarTransceiver and a uniform linear array (ULA). Each image contains a single small extended target. A fairly high range resolution and a low PRF are used so that target range migration is visible. A short pulse is used for simplicity.

The following parameters are fixed from image to image:

  • Frequency (10 GHz)

  • Pulse length (21 ns)

  • Range resolution (3.2 m)

  • PRF (100 Hz)

  • Azimuth beamwidth (4 deg)

  • Number of pulses (80)

  • Number of range gates (80)

  • Radar height above sea surface (11 m)

The following parameters are randomized from image to image:

  • Sea state (1 to 5, proportional wind speeds from 3 to 13 m/s)

  • Target position (anywhere on the surface)

  • Target heading (0 to 360 deg)

  • Target speed (5 to 25 m/s)

  • Target RCS (-14 to 6 dBsm)

  • Target dimensions (length from 10 to 30 m, with proportional beam and height)

This variation ensures that a network trained on this data will be applicable to a fairly wide range of target profiles and sea states for this radar configuration.

Download the Coastal Surveillance Radar Images dataset and unzip the data and license file into the current working directory.

dataURL = 'https://ssd.mathworks.com/supportfiles/radar/data/CoastalSurveillanceRadarImages.zip';
unzip(dataURL)

Load the image data into a struct called imdata. This struct has two fields, X and Y, where X contains the cluttered images and Y contains the ideal response images. Both are single-precision arrays of size 80-by-80-by-10,000, with each page representing one image. The images are formatted with range along the first dimension and slow time along the second dimension.

imdata = load('CoastalSurveillanceRadarImages.mat');

Prepare the Data

The data will be randomly segmented into a training set (80%), a validation set (10%), and an evaluation set (10%). Start by using helperSegmentData to get the indices for each set.

numSets = size(imdata.X,3);
[trainIdx,valIdx,evalIdx] = helperSegmentData(numSets);

Format each set as a 1-by-2 cell array of the form {inputs, responses}, which is the format expected for the training and validation data.

trainData = {imdata.X(:,:,trainIdx), imdata.Y(:,:,trainIdx)};
valData   = {imdata.X(:,:,valIdx),   imdata.Y(:,:,valIdx)};
evalData  = {imdata.X(:,:,evalIdx),  imdata.Y(:,:,evalIdx)};

The network uses the 3rd dimension for channels and the 4th for different images, so swap the 3rd and 4th array dimensions now.

trainData = cellfun(@(t) permute(t,[1 2 4 3]),trainData,'UniformOutput',false);
valData   = cellfun(@(t) permute(t,[1 2 4 3]),valData,'UniformOutput',false);
evalData  = cellfun(@(t) permute(t,[1 2 4 3]),evalData,'UniformOutput',false);

Lastly, the data should be normalized for good training performance. Perform normalization on all of the training and validation input and output images using a column norm. Only the input images from the evaluation set need to be normalized. Save the scaling factor used for each column in each cluttered evaluation image so the normalization can be reversed on the output image for better comparison to the original.

trainData{1} = normalize(trainData{1},'norm');
trainData{2} = normalize(trainData{2},'norm');
valData{1}   = normalize(valData{1},'norm');
valData{2}   = normalize(valData{2},'norm');
[evalData{1},~,evalNormScale]  = normalize(evalData{1},'norm');
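
Note that the 'norm' method scales each column by its 2-norm and does not subtract a centering term, so the normalization can be reversed simply by multiplying by the returned scale factors. The snippet below is an optional check, not part of the original workflow, and the variable name is arbitrary.

% Multiplying a normalized evaluation image by its per-column scale factors
% recovers the original amplitudes (to within floating-point error).
denorm1 = evalData{1}(:,:,1,1).*evalNormScale(1,:,1,1);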

You can use the pretrained network to run the example without having to wait for training. To perform the training steps, set the doTrain variable to true in the code below. The training in this section takes about 10-15 minutes on a GPU.

doTrain = false;
if ~doTrain
    load CoastSurvDeclutterNetwork.mat
end

After formatting, clear the loaded data struct to save RAM.

clearvars imdata

Network Architecture

An autoencoder uses input and output layers with the same data size but uses a smaller data size for the hidden layers to find compressed versions of the input. If the desired output response is set equal to the input then the compressed (hidden) data can be thought of as a latent space representation of the features that make up the input. By using clutter-free data for the desired response, the network can learn a latent space representation that omits the clutter return, thus filtering out clutter from the images.

Start by creating the input layer. The input images have size 80-by-80.

inputLayer = imageInputLayer([80 80 1]);

The autoencoder consists of a cascade of uniform "encoding layers", each a convolution followed by a nonlinear activation and a strided max-pooling, followed by a cascade of "decoding layers" that perform the inverse operation. After the decoding layers is a simple cascade of 3 noise-reducing convolution layers with nonlinear activations. All of the encoding and decoding convolution layers use filters of size 3-by-3, and the number of filters per encode/decode layer is stepped up closer to the bottleneck to provide the capacity needed to separate clutter and target signal features. Each max-pooling layer uses a stride of 2, halving the spatial size in each direction, so the final compressed image has size 10-by-10.

encodingLayers = [convolution2dLayer(3,8, Padding="same"); leakyReluLayer; maxPooling2dLayer(2,Padding="same",Stride=2);
                  convolution2dLayer(3,16,Padding="same"); leakyReluLayer; maxPooling2dLayer(2,Padding="same",Stride=2);
                  convolution2dLayer(3,32,Padding="same"); leakyReluLayer; maxPooling2dLayer(2,Padding="same",Stride=2)];

Transposed convolutional layers are used for decoding.

decodingLayers = [transposedConv2dLayer(3,32,Stride=2,Cropping="same"); leakyReluLayer;
                  transposedConv2dLayer(3,16,Stride=2,Cropping="same"); leakyReluLayer;
                  transposedConv2dLayer(3,8, Stride=2,Cropping="same"); leakyReluLayer];

A simple noise-reducing cascade can be formed with more convolution layers. Use a single filter for the last layer so there is only one output channel.

postProcessingLayers = [convolution2dLayer(3,4,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,4,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,1,Padding="same")];

Use a regression layer as the output layer.

outputLayer = regressionLayer;

Put the layers together in a single vector of layer objects.

layers = [inputLayer; encodingLayers; decodingLayers; postProcessingLayers; outputLayer];
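
To confirm that the spatial sizes work out as described, with the 80-by-80 input compressed to a 10-by-10 bottleneck and restored to 80-by-80 at the output, you can optionally inspect the layer-by-layer activation sizes with the Deep Learning Toolbox network analyzer. This is not a required step in the example.

% Optional: visualize the activation sizes of each layer.
analyzeNetwork(layers)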

Train the Network

Use the trainingOptions function to configure exactly how the network is trained. In addition to specifying the training method to use, this provides control over things like learn-rate scheduling and the size of mini batches. The trainingOptions can also be used to specify a validation data set, which is used to determine the running performance.

Now create the trainingOptions. Use the adaptive moment estimation (Adam) solver. Train for a maximum of 20 epochs with a mini batch size of 128. Start with an initial learn rate of 0.01 and drop it by 50% every 5 epochs. The validation set is specified with a 1-by-2 cell array containing the validation arrays. Set the ValidationFrequency to 50 to evaluate the loss for the validation set every 50 iterations. Set Verbose to true to print the training progress.

opts = trainingOptions('adam', ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.5, ...
    'LearnRateDropPeriod',5, ...
    'Verbose',true, ...
    'MiniBatchSize',128, ...
    'Shuffle','every-epoch', ...
    'MaxEpochs',20, ...
    'OutputNetwork','last-iteration', ...
    'ValidationData',valData, ...
    'ValidationFrequency',50);

Training is initiated with the trainNetwork function. Pass in the 4-D training image and response arrays, the vector of network layers, and the training options. This step runs only if the doTrain flag is set to true.

A compatible GPU is used by default if available. Using a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, trainNetwork uses the CPU.

if doTrain
    [net,info] = trainNetwork(trainData{1},trainData{2},layers,opts);
end

Use the provided helper function to plot the training and validation loss on a log scale.

helperPlotTrainingProgress(info)

Figure: Training Progress plot of training and validation loss (dB) versus iteration.

The training and validation losses decrease quickly at first, then level off around iteration 400.

Evaluate the Network

Now that the network is trained, view the results for 6 pre-selected evaluation images.

showEvalResultsIdx = [1 3 7 22 25 40];

For each of these evaluation images, plot the original cluttered image, the desired response image, and the network output. The images are all plotted on a log scale with 60 dB of dynamic range.

for idx = showEvalResultsIdx
   helperPlotResult(net,evalData,evalNormScale,idx)
end

Figures: for each of the six evaluation images, the Original Image, Ideal Response, and Decluttered Image are shown side by side.

The network does a good job of removing unwanted clutter signals while retaining the target signal at its true power level. In cases where the target signal drops out entirely, the network is able to extrapolate and fill in some of the missing data. Some background patches of noise are still present; these might be mitigated by a longer post-processing cascade and further training, as sketched below.
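
As a sketch of that first suggestion (an untested variant, not part of the original example), the post-processing cascade could be lengthened with additional convolution layers before reassembling and retraining the network. The final single-filter layer is kept so the output still has one channel.

% Hypothetical longer post-processing cascade (would replace the original
% postProcessingLayers definition before assembling and retraining the network).
postProcessingLayers = [convolution2dLayer(3,8,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,8,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,4,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,4,Padding="same"); leakyReluLayer;
                        convolution2dLayer(3,1,Padding="same")];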

Conclusion

In this example, you saw how to train and evaluate cascaded convolutional neural networks on PPI and range-time images to remove sea clutter while retaining target returns. You saw how to configure the input and output layers, the hidden convolution, normalization, and activation layers, and the training options.

References

[1] Vicen-Bueno, Raúl, Rubén Carrasco-Álvarez, Manuel Rosa-Zurera, and José Carlos Nieto-Borge. “Sea Clutter Reduction and Target Enhancement by Neural Networks in a Marine Radar System.” Sensors (Basel, Switzerland) 9, no. 3 (March 16, 2009): 1913–36.

[2] Zhang, Qi, Yuming Shao, Sai Guo, Lin Sun and Weidong Chen. “A Novel Method for Sea Clutter Suppression and Target Detection via Deep Convolutional Autoencoder.” (2017).

Supporting Functions

helperPlotTrainingProgress

function helperPlotTrainingProgress(info)
% Plot training progress

figure
plot(10*log10(info.TrainingLoss))
hold on
plot(10*log10(info.ValidationLoss),'*')
hold off
grid on
legend('Training','Validation')
title('Training Progress')
xlabel('Iteration')
ylabel('Loss (dB)')

end

helperPlotEvalResults

function helperPlotEvalResults(imgs,net)
% Plot original input images alongside the network output responses

for ind = 1:size(imgs,4)
   
    resp_act = predict(net,imgs(:,:,1,ind));
    resp_act(resp_act<0) = 0;
    resp_act = resp_act/max(resp_act(:));
    
    fh = figure;
    
    subplot(1,2,1)
    im = imgs(:,:,1,ind);
    im = im/max(im(:));
    imagesc(20*log10(im))
    clim([-60 0])
    colorbar
    axis equal
    axis tight
    title('Input')
    
    subplot(1,2,2)
    imagesc(20*log10(resp_act))
    clim([-60 0])
    colorbar
    axis equal
    axis tight
    title('Output')
    
    fh.Position = fh.Position + [0 0 560 0];
end

end

helperSegmentData

function [trainIdx,valIdx,testIdx] = helperSegmentData(numSets)

% 80% train, 10% validation, 10% test
props = [0.8 0.1 0.1];

% Training samples
trainIdx = randsample(numSets,floor(props(1)*numSets));

% Get remaining samples
valAndTestIdx = setdiff(1:numSets,trainIdx);

% Validation samples
valIdx = randsample(valAndTestIdx,floor(props(2)*numSets));

% Remaining samples for test
testIdx = setdiff(valAndTestIdx,valIdx);

end

helperPlotResult

function helperPlotResult(net,data,scale,idx)

% Get predicted output using scaled test data input
in = data{1}(:,:,1,idx);
out = predict(net,in);

% Denormalize the input and output for plotting
in = in.*scale(1,:,1,idx);
out = out.*scale(1,:,1,idx);

% It's possible for the network to output negative values for some pixels,
% so we need to take an abs
out = abs(out);

% The ideal response containing only the target image
ideal = data{2}(:,:,1,idx);

% Get color axis limits
mxTgtPwr = 20*log10(max(ideal(:)));
cl = [mxTgtPwr-60, mxTgtPwr];

fh = figure;
fh.Position = [fh.Position(1:2) 3*fh.Position(3) fh.Position(4)];

subplot(1,3,1)
imagesc(20*log10(in))
set(gca,'ydir','normal')
colorbar
clim(cl)
title('Original Image')

subplot(1,3,2)
imagesc(20*log10(ideal))
set(gca,'ydir','normal')
colorbar
clim(cl)
title('Ideal Response')

subplot(1,3,3)
imagesc(20*log10(out))
set(gca,'ydir','normal')
colorbar
clim(cl)
title('Decluttered Image')

end