Experiment Manager
Design and run experiments to train and compare deep learning networks
Since R2020a
Description
You can use the Experiment Manager app to create deep learning experiments to train networks under multiple initial conditions and compare the results. For example, you can use Experiment Manager to:
Sweep through a range of hyperparameter values or use Bayesian optimization to find optimal training options. Bayesian optimization requires Statistics and Machine Learning Toolbox™.
Use the built-in function trainNetwork or define your own custom training function.
Compare the results of using different data sets or test different deep network architectures.
To set up your experiment quickly, you can start with a preconfigured template. The experiment templates support workflows that include image classification and regression, sequence classification, audio classification, semantic segmentation, and custom training loops.
Experiment Manager provides visualizations, filters, and annotations to help you manage your experiment results and record your observations. To improve reproducibility, Experiment Manager stores a copy of the experiment definition every time that you run an experiment. You can access past experiment definitions to keep track of the combinations of hyperparameters that produce each of your results.
Experiment Manager organizes your experiments and results in projects.
You can store several experiments in the same project.
Each experiment contains a set of results for each time that you run the experiment.
Each set of results consists of one or more trials that correspond to a different combination of hyperparameters.
The Experiment Browser pane displays the hierarchy of experiments and results in the project. For example, this project has three experiments, each of which has several sets of results.
The orange round-bottom flask indicates a general-purpose experiment that you can run in MATLAB® without a Deep Learning Toolbox™ license. The blue Erlenmeyer flask indicates a built-in training experiment for deep learning that uses the training function trainNetwork. The green beaker indicates a custom training experiment for deep learning or machine learning that relies on a custom training function. For more information about general-purpose experiments, see Manage Experiments.
By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox™, you can run multiple trials at the same time or run a single trial on multiple GPUs, on a cluster, or in the cloud. If you have MATLAB Parallel Server™, you can also offload experiments as batch jobs in a remote cluster so that you can continue working or close your MATLAB session while your experiment runs. For more information, see Use Experiment Manager to Train Networks in Parallel and Offload Deep Learning Experiments as Batch Jobs to a Cluster.
Required Products
Deep Learning Toolbox to run built-in or custom training experiments for deep learning and to view confusion matrices for these experiments
Statistics and Machine Learning Toolbox to run custom training experiments for machine learning and experiments that use Bayesian optimization
Parallel Computing Toolbox to run multiple trials at the same time or a single trial at a time on multiple GPUs, on a cluster, or in the cloud
MATLAB Parallel Server to offload experiments as batch jobs in a remote cluster

Open the Experiment Manager App
MATLAB Toolstrip: On the Apps tab, under MATLAB, click the Experiment Manager icon (since R2023b).
MATLAB command prompt: Enter experimentManager.
Examples
Image Classification by Sweeping Hyperparameters
This example shows how to use the experiment template for image classification by sweeping hyperparameters. With this template, you can quickly set up a built-in training experiment that uses the trainNetwork function. The trainNetwork function requires Deep Learning Toolbox.
You can configure the experiment yourself by following these steps. Alternatively, open the example to skip the configuration steps and load a preconfigured experiment that you can inspect and run.
1. Close any open projects and open the Experiment Manager app.
2. A dialog box provides links to the getting started tutorials and your recent projects, as well as buttons to create a new project or open an example from the documentation. Under New, select Blank Project.
3. A dialog box lists several templates to support your AI workflows, including image classification and regression, sequence classification, audio classification, semantic segmentation, and custom training loops. Under Image Classification Experiments, select Image Classification by Sweeping Hyperparameters.
4. Specify the name and location for the new project. Experiment Manager opens a new experiment in the project. The experiment definition tab displays the description, hyperparameters, setup function, and metrics that define the experiment.
5. In the Description field, enter a description of the experiment:
Classification of digits, using various initial learning rates.
6. Under Hyperparameters, replace the value of myInitialLearnRate with 0.0025:0.0025:0.015. Verify that Strategy is set to Exhaustive Sweep.
7. Under Setup Function, click Edit. The setup function opens in MATLAB Editor. The setup function specifies the training data, network architecture, and training options for the experiment. In this experiment, the setup function has these sections:
Load Training Data defines image datastores containing the training and validation data for the experiment. The experiment uses the Digits data set, which consists of 10,000 28-by-28 pixel grayscale images of digits from 0 to 9, categorized by the digit they represent. For more information on this data set, see Image Data Sets.
Define Network Architecture defines the architecture for a simple convolutional neural network for deep learning classification.
Specify Training Options defines a trainingOptions object for the experiment. In this experiment, the setup function loads the values for the initial learning rate from the myInitialLearnRate entry in the hyperparameter table.
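For reference, here is a minimal sketch of such a setup function. The layer sizes, data split, and solver settings are illustrative assumptions rather than the template's exact code; only the myInitialLearnRate field comes from the hyperparameter table of this experiment.

function [imdsTrain,layers,options] = Experiment_setup(params)
% Load the Digits data set that ships with Deep Learning Toolbox.
digitDatasetPath = fullfile(matlabroot,"toolbox","nnet", ...
    "nndemos","nndatasets","DigitDataset");
imds = imageDatastore(digitDatasetPath, ...
    "IncludeSubfolders",true,"LabelSource","foldernames");
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.9,"randomized");

% Define a simple convolutional network for 10-class classification.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,"Padding","same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Read the swept initial learning rate from the hyperparameter table.
options = trainingOptions("sgdm", ...
    "InitialLearnRate",params.myInitialLearnRate, ...
    "MaxEpochs",5, ...
    "ValidationData",imdsValidation, ...
    "Verbose",false);
end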
When you run the experiment, Experiment Manager trains the network defined by the setup function six times. Each trial uses one of the learning rates specified in the hyperparameter table. By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox, you can run multiple trials at the same time or offload your experiment as a batch job in a cluster:
To run one trial of the experiment at a time, on the Experiment Manager toolstrip, set Mode to Sequential and click Run.
To run multiple trials at the same time, set Mode to Simultaneous and click Run. If there is no current parallel pool, Experiment Manager starts one using the default cluster profile. Experiment Manager then runs as many simultaneous trials as there are workers in your parallel pool. For best results, before you run your experiment, start a parallel pool with as many workers as GPUs. For more information, see Run Experiments in Parallel and GPU Computing Requirements (Parallel Computing Toolbox).
To offload the experiment as a batch job, set Mode to Batch Sequential or Batch Simultaneous, specify your cluster and pool size, and click Run. For more information, see Offload Deep Learning Experiments as Batch Jobs to a Cluster.
A table of results displays the accuracy and loss for each trial.
To display the training plot and track the progress of each trial while the experiment is running, under Review Results, click Training Plot. You can also monitor the training progress in the MATLAB Command Window.
To display the confusion matrix for the validation data in each completed trial, under Review Results, click Validation Data.
When the experiment finishes, you can sort the table by column or filter trials by using the Filters pane. You can also record observations by adding annotations to the results table. For more information, see Sort, Filter, and Annotate Experiment Results.
To test the performance of an individual trial, export the trained network or the training information for the trial. On the Experiment Manager toolstrip, select Export > Trained Network or Export > Training Information, respectively. For more information, see net and info. To save the contents of the results table as a table array in the MATLAB workspace, select Export > Results Table.
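For example, assuming you export the trained network to a workspace variable named trainedNetwork and still have the validation datastore from the setup function, a quick check of its accuracy looks like this sketch:

YPred = classify(trainedNetwork,imdsValidation);  % predicted labels
accuracy = mean(YPred == imdsValidation.Labels)   % fraction of correct predictions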
Image Regression by Sweeping Hyperparameters
This example shows how to use the experiment template for image regression by sweeping hyperparameters. With this template, you can quickly set up a built-in training experiment that uses the trainNetwork function. The trainNetwork function requires Deep Learning Toolbox.
You can configure the experiment yourself by following these steps. Alternatively, open the example to skip the configuration steps and load a preconfigured experiment that you can inspect and run.
1. Close any open projects and open the Experiment Manager app.
2. A dialog box provides links to the getting started tutorials and your recent projects, as well as buttons to create a new project or open an example from the documentation. Under New, select Blank Project.
3. A dialog box lists several templates to support your AI workflows, including image classification and regression, sequence classification, audio classification, semantic segmentation, and custom training loops. Under Image Regression Experiments, select Image Regression by Sweeping Hyperparameters.
4. Specify the name and location for the new project. Experiment Manager opens a new experiment in the project. The experiment definition tab displays the description, hyperparameters, setup function, and metrics that define the experiment.
5. In the Description field, enter a description of the experiment:
Regression to predict angles of rotation of digits, using various initial learning rates.
6. Under Hyperparameters, replace the value of myInitialLearnRate with 0.001:0.001:0.006. Verify that Strategy is set to Exhaustive Sweep.
7. Under Setup Function, click Edit. The setup function opens in MATLAB Editor. The setup function specifies the training data, network architecture, and training options for the experiment. In this experiment, the setup function has these sections:
Load Training Data defines the training and validation data for the experiment as 4-D arrays. The training and validation data each consist of 5000 images from the Digits data set. Each image shows a digit from 0 to 9, rotated by a certain angle. The regression values correspond to the angles of rotation. For more information on this data set, see Image Data Sets.
Define Network Architecture defines the architecture for a simple convolutional neural network for deep learning regression.
Specify Training Options defines a trainingOptions object for the experiment. In this experiment, the setup function loads the values for the initial learning rate from the myInitialLearnRate entry in the hyperparameter table.
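A minimal sketch of the regression variant of the setup function, using the four-output signature; the network and solver settings are illustrative assumptions:

function [XTrain,TTrain,layers,options] = Experiment_setup(params)
% digitTrain4DArrayData returns images, labels, and rotation angles;
% the angles (third output) are the regression responses.
[XTrain,~,TTrain] = digitTrain4DArrayData;
[XValidation,~,TValidation] = digitTest4DArrayData;

% Define a simple convolutional network with one regression output.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,"Padding","same")
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer];

% Read the swept initial learning rate from the hyperparameter table.
options = trainingOptions("sgdm", ...
    "InitialLearnRate",params.myInitialLearnRate, ...
    "ValidationData",{XValidation,TValidation}, ...
    "Verbose",false);
end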
When you run the experiment, Experiment Manager trains the network defined by the setup function six times. Each trial uses one of the learning rates specified in the hyperparameter table. By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox, you can run multiple trials at the same time or offload your experiment as a batch job in a cluster:
To run one trial of the experiment at a time, on the Experiment Manager toolstrip, set Mode to Sequential and click Run.
To run multiple trials at the same time, set Mode to Simultaneous and click Run. If there is no current parallel pool, Experiment Manager starts one using the default cluster profile. Experiment Manager then runs as many simultaneous trials as there are workers in your parallel pool. For best results, before you run your experiment, start a parallel pool with as many workers as GPUs. For more information, see Run Experiments in Parallel and GPU Computing Requirements (Parallel Computing Toolbox).
To offload the experiment as a batch job, set Mode to Batch Sequential or Batch Simultaneous, specify your cluster and pool size, and click Run. For more information, see Offload Deep Learning Experiments as Batch Jobs to a Cluster.
A table of results displays the root mean squared error (RMSE) and loss for each trial.
To display the training plot and track the progress of each trial while the experiment is running, under Review Results, click Training Plot. You can also monitor the training progress in the MATLAB Command Window.
When the experiment finishes, you can sort the table by column or filter trials by using the Filters pane. You can also record observations by adding annotations to the results table. For more information, see Sort, Filter, and Annotate Experiment Results.
To test the performance of an individual trial, export the trained network or the training information for the trial. On the Experiment Manager toolstrip, select Export > Trained Network or Export > Training Information, respectively. For more information, see net and info. To save the contents of the results table as a table array in the MATLAB workspace, select Export > Results Table.
Image Classification Using Custom Training Loop
This example shows how to use the training experiment template for image classification using a custom training loop. With this template, you can quickly set up a custom training experiment.
You can configure the experiment yourself by following these steps. Alternatively, open the example to skip the configuration steps and load a preconfigured experiment that you can inspect and run.
1. Close any open projects and open the Experiment Manager app.
2. A dialog box provides links to the getting started tutorials and your recent projects, as well as buttons to create a new project or open an example from the documentation. Under New, select Blank Project.
3. A dialog box lists several templates to support your AI workflows, including image classification and regression, sequence classification, audio classification, semantic segmentation, and custom training loops. Under Image Classification Experiments, select Image Classification Using Custom Training Loop.
4. Specify the name and location for the new project. Experiment Manager opens a new experiment in the project. The experiment definition tab displays the description, hyperparameters, and training function that define the experiment.
5. In the Description field, enter a description of the experiment:
Classification of digits, using various initial learning rates.
6. Under Hyperparameters, replace the value of myInitialLearnRate with 0.0025:0.0025:0.015. Verify that Strategy is set to Exhaustive Sweep.
7. Under Training Function, click Edit. The training function opens in MATLAB Editor. The training function specifies the training data, network architecture, training options, and training procedure used by the experiment. In this experiment, the training function has these sections:
Load Training Data defines the training data for the experiment as 4-D arrays. The experiment uses the Digits data set, which consists of 5,000 28-by-28 pixel grayscale images of digits from 0 to 9, categorized by the digit they represent. For more information on this data set, see Image Data Sets.
Define Network Architecture defines the architecture for a simple convolutional neural network for deep learning classification. To train the network with a custom training loop, the training function represents the network as a dlnetwork object.
Specify Training Options defines the training options used by the experiment. In this experiment, the training function loads the values for the initial learning rate from the myInitialLearnRate entry in the hyperparameter table.
Train Model defines the custom training loop used by the experiment. For each epoch, the custom training loop shuffles the data and iterates over mini-batches of data. For each mini-batch, the custom training loop evaluates the model gradients, state, and loss, determines the learning rate for the time-based decay learning rate schedule, and updates the network parameters. To track the progress of the training and record the value of the training loss, the training function uses the experiments.Monitor object monitor.
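Reduced to its per-mini-batch step, the Train Model section looks roughly like this fragment; modelLoss, decay, and the loop variables are illustrative assumptions, not the template's exact code:

% Inside the nested epoch and mini-batch loops of the training function:
[loss,gradients,state] = dlfeval(@modelLoss,net,X,T); % evaluate loss and gradients
net.State = state;                                    % update network state
learnRate = initialLearnRate/(1 + decay*iteration);   % time-based decay schedule
[net,velocity] = sgdmupdate(net,gradients,velocity,learnRate); % SGDM update
recordMetrics(monitor,iteration,TrainingLoss=double(loss)); % log the training loss
monitor.Progress = 100*iteration/numIterations;       % advance the progress bar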
When you run the experiment, Experiment Manager trains the network defined by the training function six times. Each trial uses one of the learning rates specified in the hyperparameter table. By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox, you can run multiple trials at the same time or offload your experiment as a batch job in a cluster:
To run one trial of the experiment at a time, on the Experiment Manager toolstrip, set Mode to Sequential and click Run.
To run multiple trials at the same time, set Mode to Simultaneous and click Run. If there is no current parallel pool, Experiment Manager starts one using the default cluster profile. Experiment Manager then runs as many simultaneous trials as there are workers in your parallel pool. For best results, before you run your experiment, start a parallel pool with as many workers as GPUs. For more information, see Run Experiments in Parallel and GPU Computing Requirements (Parallel Computing Toolbox).
To offload the experiment as a batch job, set Mode to Batch Sequential or Batch Simultaneous, specify your cluster and pool size, and click Run. For more information, see Offload Deep Learning Experiments as Batch Jobs to a Cluster.
A table of results displays the training loss for each trial.
To display the training plot and track the progress of each trial while the experiment is running, under Review Results, click Training Plot.
When the experiment finishes, you can sort the table by column or filter trials by using the Filters pane. You can also record observations by adding annotations to the results table. For more information, see Sort, Filter, and Annotate Experiment Results.
To test the performance of an individual trial, export the training output for the trial. On the Experiment Manager toolstrip, select Export > Training Output. In this experiment, the training output is a structure that contains the values of the training loss and the trained network. To save the contents of the results table as a table array in the MATLAB workspace, select Export > Results Table.
Configure Built-In Training Experiment
This example shows how to set up a built-in training experiment using Experiment Manager. Built-in training experiments rely on the trainNetwork function and support workflows such as image classification, image regression, sequence classification, and semantic segmentation. The trainNetwork function requires Deep Learning Toolbox.
Built-in training experiments consist of a description, a table of hyperparameters, a setup function, and a collection of metric functions to evaluate the results of the experiment.
In the Description field, enter a description of the experiment.
Under Hyperparameters, select the strategy and specify the hyperparameters to use for your experiment:
To sweep through a range of hyperparameter values, set Strategy to Exhaustive Sweep. In the hyperparameter table, enter the names and values of the hyperparameters used in the experiment. Hyperparameter names must start with a letter, followed by letters, digits, or underscores. Hyperparameter values must be scalars or vectors with numeric, logical, or string values, or cell arrays of character vectors. For example, these are valid hyperparameter specifications:
0.01
0.01:0.01:0.05
[0.01 0.02 0.04 0.08]
["alpha" "beta" "gamma"]
{'delta' 'epsilon' 'zeta'}
Experiment Manager trains the network using every combination of the hyperparameter values specified in the table.
To find optimal training options by using Bayesian optimization, set Strategy to Bayesian Optimization.
In the hyperparameter table, specify these properties of the hyperparameters used in the experiment:
Name — Enter a valid hyperparameter name. Hyperparameter names must start with a letter, followed by letters, digits, or underscores.
Range — For a real- or integer-valued hyperparameter, enter a two-element vector that gives the lower bound and upper bound of the hyperparameter. For a categorical hyperparameter, enter an array of strings or a cell array of character vectors that lists the possible values of the hyperparameter.
Type — Select real for a real-valued hyperparameter, integer for an integer-valued hyperparameter, or categorical for a categorical hyperparameter.
Transform — Select none to use no transform or log to use a logarithmic transform. When you select log, the hyperparameter values must be positive. With this setting, the Bayesian optimization algorithm models the hyperparameter on a logarithmic scale.
To specify the duration of your experiment, under Bayesian Optimization Options, enter the maximum time in seconds and the maximum number of trials to run. Note that the actual run time and number of trials in your experiment can exceed these settings because Experiment Manager checks these options only when a trial finishes executing.
Optionally, specify deterministic constraints, conditional constraints, and an acquisition function for the Bayesian optimization algorithm (since R2023a). Under Bayesian Optimization Options, click Advanced Options and specify:
Deterministic Constraints – Enter the name of a deterministic constraint function. To run the Bayesian optimization algorithm without deterministic constraints, leave this option blank. For more information, see Deterministic Constraints — XConstraintFcn (Statistics and Machine Learning Toolbox).
Conditional Constraints – Enter the name of a conditional constraint function. To run the Bayesian optimization algorithm without conditional constraints, leave this option blank. For more information, see Conditional Constraints — ConditionalVariableFcn (Statistics and Machine Learning Toolbox).
Acquisition Function Name – Select an acquisition function from the list. The default value for this option is expected-improvement-plus. For more information, see Acquisition Function Types (Statistics and Machine Learning Toolbox).
When you run the experiment, Experiment Manager searches for the best combination of hyperparameters. Each trial in the experiment uses a new combination of hyperparameter values based on the results of the previous trials. Bayesian optimization requires Statistics and Machine Learning Toolbox.
The Setup Function configures the training data, network architecture, and training options for the experiment. The input to the setup function is a structure with fields from the hyperparameter table. The output of the setup function must match the input of the trainNetwork function. This table lists the supported signatures for the setup function.
Goal of Experiment | Setup Function Signature
---|---
Train a network for image classification and regression tasks using the images and responses specified by images and the training options defined by options. | function [images,layers,options] = Experiment_setup(params) ... end
Train a network using the images specified by images and responses specified by responses. | function [images,responses,layers,options] = Experiment_setup(params) ... end
Train a network for sequence or time-series classification and regression tasks (for example, an LSTM or GRU network) using the sequences and responses specified by sequences. | function [sequences,layers,options] = Experiment_setup(params) ... end
Train a network using the sequences specified by sequences and responses specified by responses. | function [sequences,responses,layers,options] = Experiment_setup(params) ... end
Train a network for feature classification or regression tasks (for example, a multilayer perceptron, or MLP, network) using the feature data and responses specified by features. | function [features,layers,options] = Experiment_setup(params) ... end
Train a network using the feature data specified by features and responses specified by responses. | function [features,responses,layers,options] = Experiment_setup(params) ... end
Tip
When writing your setup function, follow these guidelines:
Access the hyperparameter values for the experiment by using dot notation. For more information, see Structure Arrays.
Load data for your experiment from a location that is accessible to all your parallel workers. For example, store your data outside the project and access the data by using an absolute path. Alternatively, create a datastore object that can access the data on another machine by setting up the AlternateFileSystemRoots property of the datastore (see the datastore sketch after this tip). For more information, see Set Up Datastore for Processing on Different Machines or Clusters.
For networks containing batch normalization layers, if the BatchNormalizationStatistics training option is population, Experiment Manager displays final validation metric values that are often different from the validation metrics evaluated during training. The difference in values is the result of additional operations performed after the network finishes training. For more information, see Batch Normalization Layer.
The execution modes that you can use for your experiment depend on the settings you specify for the training options ExecutionEnvironment and DispatchInBackground.

Execution Mode | Valid Settings for ExecutionEnvironment | Valid Settings for DispatchInBackground
---|---|---
Sequential | "auto", "cpu", "gpu", "multi-gpu", "parallel" | true, false
Simultaneous | "auto", "cpu", "gpu" | false
Batch Sequential | "auto", "cpu", "gpu", "parallel" | true, false
Batch Simultaneous | "auto", "cpu", "gpu" | false
For more information, see Use Experiment Manager to Train Networks in Parallel and Offload Deep Learning Experiments as Batch Jobs to a Cluster.
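As referenced in the tip above, a minimal sketch of a datastore configured with alternate roots; both paths are hypothetical:

% The same data is mounted at Z:\DigitsData on the client machine and at
% /mnt/data/DigitsData on the cluster workers.
imds = imageDatastore("Z:\DigitsData", ...
    "IncludeSubfolders",true,"LabelSource","foldernames", ...
    "AlternateFileSystemRoots",["Z:\DigitsData","/mnt/data/DigitsData"]);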
The Metrics section specifies functions to evaluate the results of the experiment. The input to a metric function is a structure with three fields:
trainedNetwork is the SeriesNetwork object or DAGNetwork object returned by the trainNetwork function. For more information, see Trained Network.
trainingInfo is a structure containing the training information returned by the trainNetwork function. For more information, see Training Information.
parameters is a structure with fields from the hyperparameter table.
The output of a metric function must be a scalar number, a logical value, or a string.
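For example, a metric function that returns the final validation accuracy recorded by trainNetwork could look like this sketch; the function name is illustrative, while the input fields are the documented ones:

function metricOutput = FinalValidationAccuracy(trialInfo)
% trialInfo.trainedNetwork - SeriesNetwork or DAGNetwork from trainNetwork
% trialInfo.trainingInfo   - training information structure from trainNetwork
% trialInfo.parameters     - hyperparameter values for this trial
accuracy = trialInfo.trainingInfo.ValidationAccuracy;
accuracy = accuracy(~isnan(accuracy)); % keep only recorded validation points
metricOutput = accuracy(end);          % scalar output, as required
end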
If your experiment uses Bayesian optimization, select a metric to optimize from the Optimize list. In the Direction list, specify that you want to Maximize or Minimize this metric. Experiment Manager uses this metric to determine the best combination of hyperparameters for your experiment. You can choose a standard training or validation metric (such as accuracy, RMSE, or loss) or a custom metric from the table.
Configure Custom Training Experiment
This example shows how to set up a custom training experiment using Experiment Manager. Custom training experiments support workflows that require a training function other than trainNetwork. These workflows include:
Training a network that is not defined by a layer graph.
Training a network using a custom learning rate schedule.
Updating the learnable parameters of a network by using a custom function.
Training a generative adversarial network (GAN).
Training a twin neural network.
Custom training experiments consist of a description, a table of hyperparameters, and a training function.
In the Description field, enter a description of the experiment.
Under Hyperparameters, select the strategy and specify the hyperparameters to use for your experiment:
To sweep through a range of hyperparameter values, set Strategy to Exhaustive Sweep. In the hyperparameter table, enter the names and values of the hyperparameters used in the experiment. Hyperparameter names must start with a letter, followed by letters, digits, or underscores. Hyperparameter values must be scalars or vectors with numeric, logical, or string values, or cell arrays of character vectors. For example, these are valid hyperparameter specifications:
0.01
0.01:0.01:0.05
[0.01 0.02 0.04 0.08]
["alpha" "beta" "gamma"]
{'delta' 'epsilon' 'zeta'}
Experiment Manager trains the network using every combination of the hyperparameter values specified in the table.
To find optimal training options by using Bayesian optimization, set Strategy to Bayesian Optimization.
In the hyperparameter table, specify these properties of the hyperparameters used in the experiment:
Name — Enter a valid hyperparameter name. Hyperparameter names must start with a letter, followed by letters, digits, or underscores.
Range — For a real- or integer-valued hyperparameter, enter a two-element vector that gives the lower bound and upper bound of the hyperparameter. For a categorical hyperparameter, enter an array of strings or a cell array of character vectors that lists the possible values of the hyperparameter.
Type — Select real for a real-valued hyperparameter, integer for an integer-valued hyperparameter, or categorical for a categorical hyperparameter.
Transform — Select none to use no transform or log to use a logarithmic transform. When you select log, the hyperparameter values must be positive. With this setting, the Bayesian optimization algorithm models the hyperparameter on a logarithmic scale.
To specify the duration of your experiment, under Bayesian Optimization Options, enter the maximum time in seconds and the maximum number of trials to run. Note that the actual run time and number of trials in your experiment can exceed these settings because Experiment Manager checks these options only when a trial finishes executing.
Optionally, specify deterministic constraints, conditional constraints, and an acquisition function for the Bayesian optimization algorithm (since R2023a). Under Bayesian Optimization Options, click Advanced Options and specify:
Deterministic Constraints – Enter the name of a deterministic constraint function. To run the Bayesian optimization algorithm without deterministic constraints, leave this option blank. For more information, see Deterministic Constraints — XConstraintFcn (Statistics and Machine Learning Toolbox).
Conditional Constraints – Enter the name of a conditional constraint function. To run the Bayesian optimization algorithm without conditional constraints, leave this option blank. For more information, see Conditional Constraints — ConditionalVariableFcn (Statistics and Machine Learning Toolbox).
Acquisition Function Name – Select an acquisition function from the list. The default value for this option is expected-improvement-plus. For more information, see Acquisition Function Types (Statistics and Machine Learning Toolbox).
When you run the experiment, Experiment Manager searches for the best combination of hyperparameters. Each trial in the experiment uses a new combination of hyperparameter values based on the results of the previous trials. Bayesian optimization requires Statistics and Machine Learning Toolbox.
The Training Function specifies the training data, network architecture, training options, and training procedure used by the experiment. The inputs to the training function are:
A structure with fields from the hyperparameter table
An experiments.Monitor object that you can use to track the progress of the training, update information fields in the results table, record values of the metrics used by the training, and produce training plots
Experiment Manager saves the output of the training function, so you can export it to the MATLAB workspace when the training is complete.
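Putting the pieces together, a minimal training function might look like the following sketch. The network, mini-batch handling, and decay factor are illustrative assumptions; only the params and monitor interfaces are fixed by Experiment Manager.

function output = Experiment_training(params,monitor)
% Load training images and categorical labels from the Digits data set.
[XTrain,TTrain] = digitTrain4DArrayData;

% Represent the network as a dlnetwork object for custom training.
layers = [
    imageInputLayer([28 28 1],"Normalization","none")
    convolution2dLayer(3,8,"Padding","same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];
net = dlnetwork(layers);

% Declare the recorded metric and an information column for the results table.
monitor.Metrics = "TrainingLoss";
monitor.Info = "InitialLearnRate";
updateInfo(monitor,InitialLearnRate=params.myInitialLearnRate);

numEpochs = 5;
miniBatchSize = 128;
numObservations = size(XTrain,4);
iterationsPerEpoch = floor(numObservations/miniBatchSize);
numIterations = numEpochs*iterationsPerEpoch;
velocity = [];
iteration = 0;

for epoch = 1:numEpochs
    idx = randperm(numObservations); % shuffle the data every epoch
    for i = 1:iterationsPerEpoch
        iteration = iteration + 1;
        batch = idx((i-1)*miniBatchSize+1:i*miniBatchSize);
        X = dlarray(single(XTrain(:,:,:,batch)),"SSCB");
        T = onehotencode(TTrain(batch),2)'; % classes-by-batch targets

        [loss,gradients] = dlfeval(@modelLoss,net,X,T);
        learnRate = params.myInitialLearnRate/(1 + 0.01*iteration);
        [net,velocity] = sgdmupdate(net,gradients,velocity,learnRate);

        recordMetrics(monitor,iteration,TrainingLoss=double(loss));
        monitor.Progress = 100*iteration/numIterations;
    end
end

% Experiment Manager saves this output so you can export it after training.
output.trainedNetwork = net;
end

function [loss,gradients] = modelLoss(net,X,T)
Y = forward(net,X);
loss = crossentropy(Y,T);
gradients = dlgradient(loss,net.Learnables);
end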
Tip
When writing your training function, follow these guidelines:
Access the hyperparameter values for the experiment by using dot notation. For more information, see Structure Arrays.
Load data for your experiment from a location that is accessible to all your parallel workers. For example, store your data outside the project and access the data by using an absolute path. Alternatively, create a datastore object that can access the data on another machine by setting up the AlternateFileSystemRoots property of the datastore. For more information, see Set Up Datastore for Processing on Different Machines or Clusters.
Both information and metric columns display numerical values in the results table for your experiment. Additionally, metric values are recorded in the training plot. Use information columns for values that you want to display in the results table but not in the training plot.
When the training is complete, the Review Results gallery in the toolstrip displays a button for each figure that you create in your training function (since R2023a). To display a figure in the Visualizations pane, click the corresponding button in the Custom Plot section of the gallery. Specify the name of the button by using the Name property of the figure. If you do not name your figure, Experiment Manager derives the name of the button from the axes or figure title.
If your experiment uses Bayesian optimization, in the Metrics section, under Optimize, enter the name of a metric to optimize. In the Direction list, specify that you want to Maximize or Minimize this metric. Experiment Manager uses this metric to determine the best combination of hyperparameters for your experiment. You can choose any metric that you define using the experiments.Monitor object for the training function.
Stop and Restart Training
Experiment Manager provides two options for interrupting experiments:
Clicking Stop marks any running trials as Stopped and saves the results. When the experiment stops, you can display the training plot and export the training results for these trials.
Clicking Cancel marks any running trials as Canceled and discards the results. When the experiment stops, you cannot display the training plot or export the training results for these trials.
Both options save the results of any completed trials and cancel any queued trials. Typically, Cancel is faster than Stop.
Instead of interrupting the entire experiment, you can stop an individual trial that is running or cancel an individual queued trial. In the Actions column of the results table, click the Stop button or the Cancel button for the trial.
To reduce the size of your experiments, discard the results and visualizations of any trial that is no longer relevant. In the Actions column of the results table, click the Discard button for the trial.
When the training is complete, you can restart a trial that you stopped, canceled, or discarded. In the Actions column of the results table, click the Restart button for the trial. Alternatively, to restart multiple trials, in the Experiment Manager toolstrip, open the Restart list, select one or more restarting criteria, and click Restart. Restarting criteria include All Canceled, All Stopped, All Error, and All Discarded.
Note
Stop, cancel, and restart options are not available for all experiment types, strategies, or execution modes.
Sort, Filter, and Annotate Experiment Results
Experiment Manager runs multiple trials of your experiment using a different combination of hyperparameters for each trial. A table of results displays the hyperparameter and metric values for each trial. To compare your results, you can use these values to sort the results table and filter trials.
To sort the trials in the results table, use the drop-down list on a column header.
Point to the header of a column by which you want to sort.
Click the triangle icon.
Select Sort in Ascending Order or Sort in Descending Order.
To filter trials from the results table, use the Filters pane:
On the Experiment Manager toolstrip, select Filters.
The Filters pane shows a histogram for each column in the results table that has numeric values. To remove a histogram, in the results table, open the drop-down list for the corresponding column and clear the Show Filter check box.
Adjust the sliders under the histogram for the column by which you want to filter.
The results table shows only the trials with a value in the selected range.
To restore all of the trials in the results table, close the experiment result tab and reopen the results from the Experiment Browser pane.
To record observations about the results of your experiment, add an annotation:
Right-click a cell in the results table and select Add Annotation. Alternatively, select a cell in the results table and, on the Experiment Manager toolstrip, select Annotations > Add Annotation.
In the Annotations pane, enter your observations in the text box. You can add multiple annotations for each cell in the results table.
To sort annotations, use the Sort By drop-down list. You can sort by creation time or trial number.
To highlight the cell that corresponds to an annotation, click the link above the annotation.
To delete an annotation, click the delete button to the right of the annotation.
View Source of Past Experiment Definitions
Experiment Manager stores a read-only copy of the hyperparameter values and MATLAB code that produce each set of results for your experiment. You can run an experiment multiple times, each time using a different version of your code but always using the same function name. If you decide that you want to revert to an earlier version of your code, you can access it by opening the experiment source for the earlier result. To see this information:
On the Experiment Browser pane, double-click the name of the set of results you want to inspect.
On the experiment result tab, click View Experiment Source.
In the experiment source tab, inspect the experiment description, hyperparameter values, and functions that produced the set of results.
To open the functions used by the experiment, click the links at the bottom of the tab. These functions are read-only, but you can copy them to the project folder, rerun the experiment, and reproduce your results.
Related Examples
- Generate Experiment Using Deep Network Designer
- Create a Deep Learning Experiment for Classification
- Create a Deep Learning Experiment for Regression
- Evaluate Deep Learning Experiments by Using Metric Functions
- Tune Experiment Hyperparameters by Using Bayesian Optimization
- Use Bayesian Optimization in Custom Training Experiments
- Try Multiple Pretrained Networks for Transfer Learning
- Experiment with Weight Initializers for Transfer Learning
- Audio Transfer Learning Using Experiment Manager
- Choose Training Configurations for LSTM Using Bayesian Optimization
- Run a Custom Training Experiment for Image Comparison
- Use Experiment Manager to Train Generative Adversarial Networks (GANs)
- Custom Training with Multiple GPUs in Experiment Manager
Tips
To visualize, build, and train a network without sweeping hyperparameters, you can use the Deep Network Designer app. After you train your network, generate an experiment to find the optimal training options. For more information, see Generate Experiment Using Deep Network Designer.
To navigate Experiment Manager when using a mouse is not an option, use keyboard shortcuts. For more information, see Keyboard Shortcuts for Experiment Manager.
To reduce the size of your experiments, discard the results and visualizations of any trial that is no longer relevant. In the Actions column of the results table, click the Discard button for the trial.
Version History
Introduced in R2020a
R2023b: App available in MATLAB
You can now use Experiment Manager in MATLAB, with or without Deep Learning Toolbox. When you share your experiments with colleagues who do not have a Deep Learning Toolbox license, they can open your experiments and access your results. Experiment Manager requires:
Deep Learning Toolbox to run built-in or custom training experiments for deep learning and to view confusion matrices for these experiments
Statistics and Machine Learning Toolbox to run custom training experiments for machine learning and experiments that use Bayesian optimization
Parallel Computing Toolbox to run multiple trials at the same time or a single trial at a time on multiple GPUs, on a cluster, or in the cloud
MATLAB Parallel Server to offload experiments as batch jobs in a remote cluster
For more information on general-purpose experiments that you can run in MATLAB, see Manage Experiments.
R2023b: Delete multiple experiments and results
Use the Experiment Browser to delete multiple experiments or multiple results from a project in a single operation. Select the experiments or results you want to delete, then right-click and select Delete. Your selection must contain only experiments or only results. If you delete an experiment, Experiment Manager also deletes the results contained in the experiment.
R2023a: Visualizations for custom training experiments
Display visualizations for your custom training experiments directly in the Experiment Manager app. When the training is complete, the Review Results gallery in the toolstrip displays a button for each figure that you create in your training function. To display a figure in the Visualizations pane, click the corresponding button in the Custom Plot section of the gallery.
R2023a: Debug code before or after running experiment
Diagnose problems in your experiment directly from the Experiment Manager app.
Before running an experiment, you can test your setup and training functions with your choice of hyperparameter values.
After running an experiment, you can debug your setup and training functions using the same random seed and hyperparameters values you used in one of your trials.
For more information, see Debug Deep Learning Experiments.
R2023a: Ease-of-use enhancements
Specify deterministic constraints, conditional constraints, and an acquisition function for experiments that use Bayesian optimization. Under Bayesian Optimization Options, click Advanced Options and specify:
Deterministic Constraints
Conditional Constraints
Acquisition Function Name
Load a project that is already open in MATLAB. When you start the Experiment Manager app, a dialog box prompts you to open the current project in Experiment Manager. Alternatively, in the Experiment Manager app, select New > Project and, in the dialog box, click Project from MATLAB.
If you have Audio Toolbox™, you can set up your built-in or custom training experiments for audio classification by selecting a preconfigured template.
R2022b: Ease-of-use enhancements
In the Experiment Manager toolstrip, the Restart list replaces the Restart All Canceled button. To restart multiple trials of your experiment, open the Restart list, select one or more restarting criteria, and click Restart. The restarting criteria include All Canceled, All Stopped, All Error, and All Discarded.
During training, the results table displays the intermediate values for standard training and validation metrics for built-in training experiments. These metrics include loss, accuracy (for classification experiments), and root mean squared error (for regression experiments).
In built-in training experiments, the Execution Environment column of the results table displays whether each trial of a built-in training experiment runs on a single CPU, a single GPU, multiple CPUs, or multiple GPUs.
To discard the training plot, confusion matrix, and training results for trials that are no longer relevant, in the Actions column of the results table, click the Discard button.
R2022a: Experiments as batch jobs in a cluster
If you have Parallel Computing Toolbox and MATLAB Parallel Server, you can send your experiment as a batch job to a remote cluster. If you have only Parallel Computing Toolbox, you can use a local cluster profile to develop and test your experiments on your client machine instead of running them on a network cluster. For more information, see Offload Deep Learning Experiments as Batch Jobs to a Cluster.
R2022a: Ease-of-use enhancements
In the Experiment Manager toolstrip, the Mode list replaces the Use Parallel button.
To run one trial of the experiment at a time, select Sequential and click Run.
To run multiple trials at the same time, select Simultaneous and click Run.
To offload the experiment as a batch job, select Batch Sequential or Batch Simultaneous, specify your cluster and pool size, and click Run.
Manage experiments using new Experiment Browser context menu options:
To add a new experiment to a project, right-click the name of the project and select New Experiment.
To create a copy of an experiment, right-click the name of the experiment and select Duplicate.
Specify hyperparameter values as cell arrays of character vectors. In previous releases, Experiment Manager supported only hyperparameter specifications using scalars and vectors with numeric, logical, or string values.
To stop, cancel, or restart a trial, in the Actions column of the results table, click the Stop, Cancel, or Restart buttons. In previous releases, these buttons were located in the Progress column. Alternatively, you can right-click the row for the trial and, in the context menu, select Stop, Cancel, or Restart.
When an experiment trial ends, the Status column of the results table displays one of these reasons for stopping:
Max epochs completed
Met validation criterion
Stopped by OutputFcn
Training loss is NaN
To sort annotations by creation time or trial number, in the Annotations pane, use the Sort By list.
After training completes, save the contents of the results table as a table array in the MATLAB workspace by selecting Export > Results Table.
To export the training information or trained network for a stopped or completed trial, right-click the row for the trial and, in the context menu, select Export Training Information or Export Trained Network.
R2021b: Bayesian optimization in custom training experiments
If you have Statistics and Machine Learning Toolbox, you can use Bayesian optimization to determine the best combination of hyperparameters for a custom training experiment. Previously, custom training experiments supported only sweeping hyperparameters. For more information, see Use Bayesian Optimization in Custom Training Experiments.
R2021b: Experiments in MATLAB Online
Run Experiment Manager in your web browser by using MATLAB Online™. For parallel execution of experiments, you must have access to a Cloud Center cluster.
R2021b: Ease-of-use enhancements
In the Experiment Manager toolstrip, click Cancel to stop an experiment, mark any running trials as Canceled, and discard their results. When the training is complete, click Restart All Canceled to restart all the trials that you canceled.
Use keyboard shortcuts to navigate Experiment Manager when using a mouse is not an option. For more information, see Keyboard Shortcuts for Experiment Manager.
R2021a: Custom training experiments
Create custom training experiments to support workflows such as:
Using a custom training loop on a dlnetwork, such as a twin neural network or a generative adversarial network (GAN)
Training a network by using a model function or a custom learning rate schedule
Updating the learnable parameters of a network by using a custom function
R2021a: Ease-of-use enhancements
When you create an experiment, use a preconfigured template as a guide for defining your experiment. Experiment templates support workflows that include image classification, image regression, sequence classification, semantic segmentation, and custom training loops.
Add annotations to record observations about the results of your experiment. Right-click a cell in the results table and select Add Annotation. For more information, see Sort, Filter, and Annotate Experiment Results.
R2020b: Bayesian optimization
If you have Statistics and Machine Learning Toolbox, you can use Bayesian optimization to determine the best combination of hyperparameters for an experiment. For more information, see Tune Experiment Hyperparameters by Using Bayesian Optimization.
R2020b: Parallel execution
If you have Parallel Computing Toolbox, you can run multiple trials of an experiment at the same time by clicking Use Parallel and then Run. Experiment Manager starts the parallel pool and executes multiple simultaneous trials. For more information, see Use Experiment Manager to Train Networks in Parallel.