imageLIME
Syntax
scoreMap = imageLIME(net,X,channelIdx)
[scoreMap,featureMap,featureImportance] = imageLIME(net,X,channelIdx)
___ = imageLIME(___,Name=Value)
Description
scoreMap = imageLIME(net,X,channelIdx) uses the locally interpretable model-agnostic explanations (LIME) technique to compute a map of the importance of the features in the input image X when the network net evaluates the activation score for the channel given by channelIdx. For classification tasks, specify channelIdx as the channel in the softmax layer corresponding to the class label of interest.
The LIME technique approximates the behavior of the network net using a simpler, more interpretable model. By generating synthetic data from the input X, computing network predictions for the synthetic data using net, and then using the results to fit a simple regression model, the imageLIME function determines the importance of each feature of X to the network's activation score for the channel given by channelIdx.
This function requires Statistics and Machine Learning Toolbox™.
[scoreMap,featureMap,featureImportance] = imageLIME(net,X,channelIdx) also returns a map of the features used to compute the LIME results and the calculated importance of each feature.
___ = imageLIME(___,Name=Value) specifies options using one or more name-value arguments in addition to the input arguments in previous syntaxes. For example, NumFeatures=100 sets the target number of features to 100.
Examples
Visualize Which Parts of an Image are Important for Classification
Use imageLIME to visualize the parts of an image that are important to a network for a classification decision.
Import the pretrained network SqueezeNet.
[net, classNames] = imagePretrainedNetwork("squeezenet");
Import the image and resize to match the input size for the network.
X = imread("laika_grass.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);
Display the image. The image is of a dog named Laika.
imshow(X)
Compute the channel corresponding to the maximum class score. To make predictions with a single observation, use the predict
function. To make predictions using the GPU, first convert the data to gpuArray
. Making predictions on a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).
score = predict(net,single(X));
[~,channel] = max(score);
Use imageLIME
to determine which parts of the image are important to the classification result.
scoreMap = imageLIME(net,X,channel);
Plot the result over the original image with transparency to see which areas of the image affect the classification score.
figure
imshow(X)
hold on
imagesc(scoreMap,AlphaData=0.5)
colormap jet
The network focuses predominantly on Laika's head and back to make the classification decision. Laika's eye and ear are also important to the classification result.
Visualize Only the Most Important Features
Use imageLIME
to determine the most important features in an image and isolate them from the unimportant features.
Load a pretrained SqueezeNet network and the corresponding class names. For a list of all available networks, see Pretrained Deep Neural Networks.
[net, classNames] = imagePretrainedNetwork("squeezenet");
Import the image and resize to match the input size for the network.
X = imread("sherlock.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);
Classify the image. To make predictions with a single observation, use the predict
function. To convert the prediction scores to labels, use the scores2label
function. To use a GPU, first convert the data to gpuArray
. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).
if canUseGPU
    X = gpuArray(X);
end
scores = predict(net,single(X));
[label,score] = scores2label(scores,classNames);
Compute the map of the feature importance and also obtain the map of the features and the feature importance. Set the image segmentation method to 'grid'
, the number of features to 64
, and the number of synthetic images to 3072
.
channel = find(label == categorical(classNames));
[scoreMap,featureMap,featureImportance] = imageLIME(net,X,channel, ...
    'Segmentation','grid','NumFeatures',64,'NumSamples',3072);
Plot the result over the original image with transparency to see which areas of the image affect the classification score.
figure
imshow(X)
hold on
imagesc(scoreMap,'AlphaData',0.5)
colormap jet
colorbar
Use the feature importance to find the indices of the five most important features.
numTopFeatures = 5;
[~,idx] = maxk(featureImportance,numTopFeatures);
Use the map of the features to mask out the image so that only the five most important features are visible. Display the masked image.
mask = ismember(featureMap,idx);
maskedImg = uint8(mask).*X;
figure
imshow(maskedImg);
View Important Features Using Custom Segmentation Map
Use imageLIME
with a custom segmentation map to view the most important features for a classification decision.
Import the pretrained network GoogLeNet.
[net,classNames] = imagePretrainedNetwork("googlenet");
Import the image and resize to match the input size for the network.
X = imread("sherlock.jpg");
inputSize = net.Layers(1).InputSize(1:2);
X = imresize(X,inputSize);
Classify the image. To make predictions with one or more observations, use the minibatchpredict
function. To convert the prediction scores to labels, use the scores2label
function. The minibatchpredict
function automatically uses a GPU if one is available. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). Otherwise, the function uses the CPU.
scores = minibatchpredict(net,X);
label = scores2label(scores,classNames);
Create a matrix defining a custom segmentation map that divides the image into triangular segments. Each triangular segment represents a feature.
Start by defining a matrix with size equal to the input size of the image.
segmentationMap = zeros(inputSize(1));
Next, create a smaller segmentation map which divides a 56-by-56 pixel region into two triangular features. Assign values 1 and 2 to the upper and lower segments, representing the first and second features, respectively.
blockSize = 56;
segmentationSubset = ones(blockSize);
segmentationSubset = tril(segmentationSubset) + segmentationSubset;
% Set the diagonal elements to alternate values 1 and 2.
segmentationSubset(1:(blockSize+1):end) = repmat([1 2],1,blockSize/2)';
To create a custom segmentation map for the whole image, repeat the small segmentation map. Each time you repeat the smaller map, increase the feature index values so that the pixels in each triangular segment correspond to a unique feature. In the final matrix, value 1 indicates the first feature, value 2 the second feature, and so on for each segment in the image.
% Tile the small map across the image, offsetting the feature indices
% by 2 for each block so that every triangle is a unique feature.
blocksPerSide = inputSize(1)/blockSize;
subset = 0;
for i = 1:blocksPerSide
    for j = 1:blocksPerSide
        xidx = (blockSize*(i-1))+1:(blockSize*i);
        yidx = (blockSize*(j-1))+1:(blockSize*j);
        segmentationMap(xidx,yidx) = segmentationSubset + 2*subset;
        subset = subset + 1;
    end
end
View the segmentation map. This map divides the image into 32 triangular regions.
figure
imshow(X)
hold on
imagesc(segmentationMap,'AlphaData',0.8);
title('Custom Segmentation Map')
colormap gray
Use imageLIME
with the custom segmentation map to determine which parts of the image are most important to the classification result.
channel = find(label == categorical(classNames));
scoreMap = imageLIME(net,X,channel, ...
    'Segmentation',segmentationMap);
Plot the result of imageLIME
over the original image to see which areas of the image affect the classification score.
figure
imshow(X)
hold on
title('Image LIME (Golden Retriever)')
colormap jet
imagesc(scoreMap,'AlphaData',0.5);
Red areas of the map have a higher importance: when these areas are removed, the score for the golden retriever class goes down. The most important feature for this classification is the ear.
Input Arguments
net
— Trained network
dlnetwork
object
Trained network, specified as a dlnetwork
object.
net
must contain a single input layer. The input layer must be an imageInputLayer
.
X
— Input image
numeric array
Input image, specified as a numeric array.
The image must be the same size as the image input size of the network
net
. The input size is specified by the
InputSize
property of the imageInputLayer
in the network.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
channelIdx
— Channel index
numeric index | vector of numeric indices
Channel index, specified as a scalar or a vector of channel indices. The possible
choices for channelIdx
depend on the selected layer. The function
computes the scores using the layer specified by the OutputNames
property of the dlnetwork
object net
and the channel
specified by channelIdx
.
If channelIdx
is specified as a vector, the feature importance
map for each specified channel is calculated independently. In that case,
scoreMap(:,:,i)
corresponds to the map for the
i
th element in channelIdx
.
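For example, a minimal sketch of requesting maps for several channels in one call, assuming a dlnetwork net, a correctly sized image X, and two hypothetical channel indices (5 and 12) are in the workspace:

% Hypothetical channel indices; the map for each is computed independently.
scoreMap = imageLIME(net,X,[5 12]);
mapChannel5  = scoreMap(:,:,1);   % map for the first listed channel
mapChannel12 = scoreMap(:,:,2);   % map for the second listed channel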
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Name-Value Arguments
Specify optional pairs of arguments as
Name1=Value1,...,NameN=ValueN
, where Name
is
the argument name and Value
is the corresponding value.
Name-value arguments must appear after other arguments, but the order of the
pairs does not matter.
Example: NumFeatures=100,Segmentation="grid",OutputUpsampling="bicubic",ExecutionEnvironment="gpu"
segments the input image into a grid of approximately 100 features, executes the calculation
on the GPU, and upsamples the resulting map to the same size as the input image using
bicubic interpolation.
NumFeatures
— Target number of features
49
(default) | positive integer
Target number of features to divide the input image into, specified as a positive integer.
A larger value divides the input image into more, smaller features. To get the
best results when using a larger number of features, also increase the number of
synthetic images using the NumSamples
option.
The exact number of features depends on the input image and segmentation method
specified using the Segmentation
option and can be less than the
target number of features.
- When you specify Segmentation as "superpixels", the actual number of features can be greater or less than the number specified using NumFeatures.
- When you specify Segmentation as "grid", the actual number of features can be less than the number specified using NumFeatures. If your input image is square, specify NumFeatures as a square number.
- When you specify Segmentation as segmentation, where segmentation is a two-dimensional array, NumFeatures is the same as the number of unique elements in the array.
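For instance, a minimal sketch (assuming net, X, and a channel index channel are in the workspace) that checks how many features the segmentation actually produced:

[scoreMap,featureMap] = imageLIME(net,X,channel,NumFeatures=100);
actualNumFeatures = numel(unique(featureMap))   % can differ from the target of 100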
Example: NumFeatures=100
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
NumSamples
— Number of synthetic images
2048
(default) | positive integer
Number of synthetic images to generate, specified as a positive integer.
A larger number of synthetic images gives better results but takes more time to compute.
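As a rough sketch of this trade-off, assuming net, X, and a channel index channel are in the workspace:

fastMap   = imageLIME(net,X,channel,NumSamples=512);    % quicker, noisier map
stableMap = imageLIME(net,X,channel,NumSamples=4096);   % slower, more stable map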
Example: NumSamples=1024
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Segmentation
— Segmentation method
"superpixels"
(default) | "grid"
| numeric matrix
Segmentation method to use to divide the input image into features, specified as
"superpixels"
, "grid"
, or a two-dimensional
segmentation matrix.
The function segments the input image into features in the following way:
"superpixels"
— Input image is divided into superpixel features, using thesuperpixels
(Image Processing Toolbox) function. Features are irregularly shaped, based on the value of the pixels. This option requires Image Processing Toolbox™."grid"
— Input image is divided into a regular grid of features. Features are approximately square, based on the aspect ratio of the input image and the specified value ofNumFeatures
. The number of grid cells can be smaller than the specified value ofNumFeatures
. If the input image is square, specifyNumFeatures
as a square number.Numeric matrix — Input image is divided into custom features, using the numeric matrix as a map, where the integer value of each pixel specifies the feature of the corresponding pixel.
NumFeatures
is the same as the number of unique elements in the matrix. The size of the matrix must match the size of the input image.
For photographic image data, the "superpixels"
option usually
gives better results. In this case, features are based on the contents of the image,
by segmenting the image into regions of similar pixel value. For other types of
images, such as spectrograms, the more regular "grid"
option or a
custom segmentation map can provide more useful results.
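As an illustration, a minimal sketch of a custom two-feature segmentation matrix that splits a hypothetical 224-by-224 input into left and right halves, assuming net, X, and a channel index channel are in the workspace:

inputSize = 224;                           % assumed network input size
segmentation = ones(inputSize);
segmentation(:,inputSize/2+1:end) = 2;     % feature 1 = left half, 2 = right half
scoreMap = imageLIME(net,X,channel,Segmentation=segmentation);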
Example: Segmentation="grid"
Model
— Type of simple model
"tree"
(default) | "linear"
Type of simple model to fit, specified as "tree"
or
"linear"
.
The imageLIME
function generates a network prediction for the
synthetic images using the network net
and then uses the results
to fit a simple, interpretable model. The methods used to fit the results and
determine the importance of each feature depend on the type of simple model used.
"tree"
— Fit a regression tree usingfitrtree
(Statistics and Machine Learning Toolbox) then compute the importance of each feature usingpredictorImportance
(Statistics and Machine Learning Toolbox)"linear"
— Fit a linear model with lasso regression usingfitrlinear
(Statistics and Machine Learning Toolbox) then compute the importance of each feature using the weights of the linear model.
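For comparison, a minimal sketch (assuming net, X, and a channel index channel are in the workspace) that computes maps with both model types; the maps can differ because the feature importances are computed differently:

treeMap   = imageLIME(net,X,channel);                  % Model="tree" (default)
linearMap = imageLIME(net,X,channel,Model="linear");   % lasso-based importance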
Example: Model="linear"
Data Types: char | string
OutputUpsampling
— Output upsampling method
"nearest"
(default) | "bicubic"
| "none"
Output upsampling method, specified as one of these values:
- "bicubic" — Use bicubic interpolation to produce a smooth map the same size as the input data.
- "nearest" — Use nearest-neighbor interpolation to resize the map to have the same resolution as the input data.
- "none" — Use no upsampling. The map can be smaller than the input data.
If OutputUpsampling
is "bicubic"
or "nearest"
, the computed map is upsampled to the size of the input data using the imresize
function.
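For example, a minimal sketch (assuming net, X, and a channel index channel are in the workspace) showing that, with grid segmentation and no upsampling, the map has one element per grid cell rather than one per pixel:

rawMap = imageLIME(net,X,channel, ...
    Segmentation="grid",NumFeatures=49,OutputUpsampling="none");
size(rawMap)   % approximately 7-by-7, smaller than the input image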
Example: OutputUpsampling="none"
MiniBatchSize
— Size of mini-batch
128 (default) | positive integer
Size of the mini-batch to use to compute the score map, specified as a positive integer.
The mini-batch size specifies the number of images that are passed to the network at once. Larger mini-batch sizes lead to faster computation, at the cost of more memory.
Example: MiniBatchSize=256
ExecutionEnvironment
— Hardware resource
"auto"
(default) | "gpu"
| "cpu"
Hardware resource, specified as one of these values:
- "auto" — Use a GPU if one is available. Otherwise, use the CPU.
- "gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
- "cpu" — Use the CPU.
Output Arguments
scoreMap
— Map of feature importance
numeric array
Map of feature importance, returned as a numeric array. Areas in the map with higher positive values correspond to regions of input data that contribute positively to the total activation.
The value of scoreMap(i,j) denotes the importance of the image pixel (i,j) to the simple model, except when you use the options Segmentation="grid" and OutputUpsampling="none". In that case, scoreMap is smaller than the input image, and the value of scoreMap(i,j) denotes the importance of the feature at position (i,j) in the grid of features.
If channelIdx
is specified as a vector, then the change in total
activation for each specified channel is calculated independently. In that case,
scoreMap(:,:,i)
corresponds to the score map for the
i
th element in channelIdx
.
featureMap
— Map of features
numeric array
Map of features, returned as a numeric array.
For each pixel (i,j)
in the input image, idx =
featureMap(i,j)
is an integer corresponding to the index of the feature
containing that pixel.
featureImportance
— Feature importance
numeric array
Feature importance, returned as a numeric array.
The value of featureImportance(idx)
is the calculated importance
of the feature specified by idx
. If you provide
channelIdx
as a vector of numeric indices, then
featureImportance(idx,k)
corresponds to the importance of feature
idx
for channelIdx(k)
.
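Together, these two outputs let you rebuild a per-pixel importance map by hand. A minimal sketch, assuming featureMap and featureImportance were returned by imageLIME with a scalar channelIdx:

pixelImportance = featureImportance(featureMap);   % per-pixel importance map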
More About
LIME
The locally interpretable model-agnostic explanations (LIME) technique is an explainability method used to explain the decisions made by a deep neural network.
Given the decision of a deep network for a piece of input data, the LIME technique calculates the importance of each feature of the input data with respect to the network output.
The LIME technique approximates the behavior of a deep neural network using a simpler, more interpretable model, such as a regression tree. To map the importance of different parts of the input image, the imageLIME function performs the following steps, sketched in code after this list.
1. Segment the image into features.
2. Generate synthetic image data by randomly including or excluding features. Each pixel in an excluded feature is replaced with the value of the average image pixel.
3. Fit a regression model using the presence or absence of image features for each synthetic image as binary regression predictors for the scores of the target channel.
4. Compute the importance of each feature using the regression model.
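The following minimal sketch illustrates these steps for a single channel. It is an illustration of the technique, not the imageLIME implementation: it assumes a dlnetwork net, a uint8 RGB image X already resized to the network input size, and a channel index channel are in the workspace, and it uses a fixed 4-by-4 grid segmentation and a lasso linear model via fitrlinear (Statistics and Machine Learning Toolbox).

% Illustrative sketch of the LIME steps above, not the imageLIME implementation.
gridSize = 4;                                    % 4-by-4 grid of features
numFeatures = gridSize^2;
numSamples = 256;                                % number of synthetic images
[h,w,~] = size(X);

% Step 1: segment the image into a grid of features.
[col,row] = meshgrid(ceil((1:w)/(w/gridSize)),ceil((1:h)/(h/gridSize)));
featureMap = (col-1)*gridSize + row;             % feature index per pixel

% Step 2: generate synthetic images by randomly excluding features,
% replacing excluded pixels with the average image pixel.
meanPixel = uint8(mean(X,[1 2]));
Z = rand(numSamples,numFeatures) > 0.5;          % random include/exclude mask
scores = zeros(numSamples,1);
for k = 1:numSamples
    off = ismember(featureMap,find(~Z(k,:)));    % pixels in excluded features
    Xk = X.*uint8(~off) + uint8(off).*meanPixel;
    s = predict(net,single(Xk));
    scores(k) = s(channel);                      % score of the target channel
end

% Steps 3 and 4: fit a simple linear model with lasso regression, then read
% the feature importances from the model weights and map them back to pixels.
mdl = fitrlinear(double(Z),scores,Learner="leastsquares",Regularization="lasso");
featureImportance = mdl.Beta;
scoreMap = featureImportance(featureMap);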
The resulting map can be used to determine which features were most important to a particular output. This can be especially useful for making sure your network is focusing on the appropriate features when making predictions.
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
The imageLIME
function
fully supports GPU arrays. To run the function on a GPU, specify the input data as a gpuArray
(Parallel Computing Toolbox). For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2020b
R2024a: DAGNetwork and SeriesNetwork objects are not recommended
DAGNetwork and SeriesNetwork objects are not recommended. Use dlnetwork objects instead.
The syntax scoreMap = imageLIME(net,X,label)
is supported for
DAGNetwork
and SeriesNetwork
objects only, where
label
is the class label used to calculate change in classification
score, specified as a categorical, a character array, or a string array. To use a
dlnetwork
object with the imageLIME
function, you
must specify the channel index instead. To find the channel index, you must know the order of
the classes that the network was trained on.
Use the trainnet
function to create a dlnetwork
object. To convert an existing DAGNetwork
or SeriesNetwork
object to a dlnetwork
object, use the dag2dlnetwork
function.
This table shows how to convert code that uses a DAGNetwork
object to
code that uses a dlnetwork
object. You can use the same syntaxes to convert
a SeriesNetwork
object.
| Not recommended (DAGNetwork object) | Recommended (dlnetwork object) |
|---|---|
| map = imageLIME(DAGnet,X,label); | net = dag2dlnetwork(DAGnet); channelIdx = find(label == classNames); map = imageLIME(net,X,channelIdx); |

Here, classNames contains the classes on which the network was trained. For example, you can extract the class names from a trained classification DAGNetwork using this code: classNames = DAGnet.Layers(end).Classes;
R2024a: Score maps for nonclassification tasks
Starting in R2024a, you can use the imageLIME
function to generate score maps for nonclassification tasks, such as regression.
R2021a: Custom segmentation maps
The Segmentation
name-value argument of
imageLIME
now accepts a two-dimensional segmentation matrix the
same size as the input image. Custom segmentation maps are useful for applying LIME to tasks involving non-natural images, such as spectrogram or floor plan data.
See Also
dlnetwork
| testnet
| minibatchpredict
| scores2label
| occlusionSensitivity
| gradCAM
| predict
| forward
Topics
- Understand Network Predictions Using LIME
- Investigate Spectrogram Classifications Using LIME
- Interpret Deep Network Predictions on Tabular Data Using LIME
- Understand Network Predictions Using Occlusion
- Grad-CAM Reveals the Why Behind Deep Learning Decisions
- Investigate Network Predictions Using Class Activation Mapping