TrainingOptionsLBFGS

Training options for limited-memory BFGS (L-BFGS) optimizer

Since R2023b

    Description

    Use a TrainingOptionsLBFGS object to set training options for the limited-memory BFGS (L-BFGS) optimizer, including line search method and gradient and step tolerances.

    The L-BFGS algorithm [1] is a quasi-Newton method that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Use the L-BFGS algorithm for small networks and data sets that you can process in a single batch.

    Creation

    Create a TrainingOptionsLBFGS object by using the trainingOptions function and specifying "lbfgs" as the first input argument.

    Properties

    L-BFGS

    Maximum number of iterations to use for training, specified as a positive integer.

    The L-BFGS solver is a full-batch solver, which means that it processes the entire training set in a single iteration.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
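
    For illustration, a minimal call that caps the number of full-batch iterations (the value 200 is an arbitrary choice for this sketch, not a recommendation):

    options = trainingOptions("lbfgs", MaxIterations=200);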

    Method to find suitable learning rate, specified as one of these values:

    • "weak-wolfe" — Search for a learning rate that satisfies the weak Wolfe conditions. This method maintains a positive definite approximation of the inverse Hessian matrix.

    • "strong-wolfe" — Search for a learning rate that satisfies the strong Wolfe conditions. This method maintains a positive definite approximation of the inverse Hessian matrix.

    • "backtracking" — Search for a learning rate that satisfies sufficient decrease conditions. This method does not maintain a positive definite approximation of the inverse Hessian matrix.

    Number of state updates to store, specified as a positive integer. Values between 3 and 20 suit most tasks.

    The L-BFGS algorithm uses a history of gradient calculations to approximate the Hessian matrix recursively. For more information, see Limited-Memory BFGS.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Initial value that characterizes the approximate inverse Hessian matrix, specified as a positive scalar.

    To save memory, the L-BFGS algorithm does not store and invert the dense Hessian matrix B. Instead, the algorithm uses the approximation B_(k−m)^(−1) ≈ λ_k I, where m is the history size, the inverse Hessian factor λ_k is a scalar, and I is the identity matrix. The algorithm stores only the scalar inverse Hessian factor and updates it at each step.

    The initial inverse Hessian factor is the value of λ_0.

    For more information, see Limited-Memory BFGS.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
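
    For illustration, a sketch that sets both the history size and the initial inverse Hessian factor (the values shown are arbitrary examples):

    options = trainingOptions("lbfgs", ...
        HistorySize=15, ...
        InitialInverseHessianFactor=1);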

    Maximum number of line search iterations to determine the learning rate, specified as a positive integer.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Relative gradient tolerance, specified as a positive scalar.

    The software stops training when the relative gradient is less than or equal to GradientTolerance.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Step size tolerance, specified as a positive scalar.

    The software stops training when the step that the algorithm takes is less than or equal to StepTolerance.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
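
    For example, a sketch that tightens both stopping tolerances (the values are arbitrary examples):

    options = trainingOptions("lbfgs", ...
        GradientTolerance=1e-6, ...
        StepTolerance=1e-6);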

    Data Formats

    Since R2023b

    Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

    If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

    You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions after the second dimension.

    For a neural network net with multiple inputs, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).

    For more information, see Deep Learning Data Formats.

    Data Types: char | string | cell
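
    For illustration, a sketch for sequence data stored as channel-by-observation-by-time arrays:

    options = trainingOptions("lbfgs", InputDataFormats="CBT");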

    Since R2023b

    Description of the target data dimensions, specified as one of these values:

    • "auto" — If the target data has the same number of dimensions as the input data, then the trainnet function uses the format specified by InputDataFormats. If the target data has a different number of dimensions to the input data, then the trainnet function uses the format expected by the loss function.

    • String array, character vector, or cell array of character vectors — The trainnet function uses the data formats you specify.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

    You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once. The software ignores singleton trailing "U" dimensions after the second dimension.

    For more information, see Deep Learning Data Formats.

    Data Types: char | string | cell

    Monitoring

    Plots to display during neural network training, specified as one of these values:

    • "none" — Do not display plots during training.

    • "training-progress" — Plot training progress.

    The plot shows the training and validation loss, training and validation metrics specified by the Metrics property, and additional information about the training progress.

    Metrics to track, specified as a character vector or string scalar of a built-in metric name, a string array of names, a built-in or custom metric object, a function handle (@myMetric), or a cell array of names, metric objects, and function handles.

    • Built-in metric name — Specify metrics as a string scalar, character vector, or string array of built-in metric names. Supported values are "accuracy", "fscore", "recall", "precision", "rmse", and "auc".

    • Built-in metric object — If you need more flexibility, you can use a built-in metric object. When you create a built-in metric object, you can specify additional options such as the averaging type and whether the task is single-label or multilabel.

    • Custom metric function handle — If the metric you need is not a built-in metric, then you can specify custom metrics using a function handle. The function must have the syntax metric = metricFunction(Y,T), where Y corresponds to the network predictions and T corresponds to the target responses. For networks with multiple outputs, the syntax must be metric = metricFunction(Y1,…,YN,T1,…,TM), where N is the number of outputs and M is the number of targets. For more information, see Define Custom Metric Function. A minimal sketch of a function handle metric appears after the examples below.

    • deep.DifferentiableFunction object (since R2024a) — Function object with custom backward function. For more information, see Define Custom Deep Learning Operations.

    • Custom metric object — If you need greater customization, then you can define your own custom metric object. For an example that shows how to create a custom metric, see Define Custom F-Beta Score Metric Object . For general information about creating custom metrics, see Define Custom Deep Learning Metric Object. Specify your custom metric as the Metrics option of the trainingOptions function.

    Example: Metrics=["accuracy","fscore"]

    Example: Metrics=["accuracy",@myFunction,precisionObj]

    Since R2024a

    Name of objective metric to use for early stopping and returning the best network, specified as a string scalar or character vector.

    The metric name must be "loss" or match the name of a metric specified by the Metrics name-value argument. Metrics specified using function handles are not supported. To specify the ObjectiveMetricName value as the name of a custom metric, the value of the Maximize property of the custom metric object must be nonempty. For more information, see Define Custom Deep Learning Metric Object.

    For more information about specifying the objective metric for early stopping, see ValidationPatience. For more information about returning the best network using the objective metric, see OutputNetwork.

    Data Types: char | string
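
    For example, a sketch that tracks accuracy and uses it as the objective metric:

    options = trainingOptions("lbfgs", ...
        Metrics="accuracy", ...
        ObjectiveMetricName="accuracy");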

    Flag to display training progress information in the command window, specified as 1 (true) or 0 (false).

    When this property is 1 (true), the software displays this information.

    • Iteration — Iteration number.

    • TimeElapsed — Time elapsed in hours, minutes, and seconds.

    • TrainingLoss — Training loss.

    • ValidationLoss — Validation loss. If you do not specify validation data, then the software does not display this information.

    • GradientNorm — Norm of the gradients.

    • StepNorm — Norm of the steps.

    If you specify additional metrics in the training options, then they also appear in the verbose output. For example, if you set the Metrics training option to "accuracy", then the information includes the TrainingAccuracy and ValidationAccuracy variables.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

    Frequency of verbose printing, which is the number of iterations between printing to the Command Window, specified as a positive integer.

    If you validate the neural network during training, then the software also prints to the command window every time validation occurs.

    To enable this property, set the Verbose training option to 1 (true).

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Output functions to call during training, specified as a function handle or cell array of function handles. The software calls the functions once before the start of training, after each iteration, and once when training is complete.

    The functions must have the syntax stopFlag = f(info), where info is a structure containing information about the training progress, and stopFlag is a scalar that indicates to stop training early. If stopFlag is 1 (true), then the software stops training. Otherwise, the software continues training.

    The trainnet function passes the output function the structure info that contains these fields:

    • Iteration — Iteration number

    • TimeElapsed — Time elapsed in hours, minutes, and seconds

    • TrainingLoss — Training loss

    • ValidationLoss — Validation loss. If you do not specify validation data, then the software does not display this information.

    • GradientNorm — Norm of the gradients

    • StepNorm — Norm of the steps

    • State — Iteration training state, specified as "start", "iteration", or "done".

    If you specify additional metrics in the training options, then they also appear in the training information. For example, if you set the Metrics training option to "accuracy", then the information includes the TrainingAccuracy and ValidationAccuracy fields.

    If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.

    For an example showing how to use output functions, see Custom Stopping Criteria for Deep Learning Training.

    Data Types: function_handle | cell
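
    For illustration, a minimal sketch of an output function that stops training early. The function name stopAtLossThreshold and the threshold 0.05 are hypothetical choices for this sketch:

    options = trainingOptions("lbfgs", OutputFcn=@stopAtLossThreshold);

    function stopFlag = stopAtLossThreshold(info)
        % Stop training when the training loss drops below 0.05.
        % The TrainingLoss field can be empty for some calls, so check it first.
        stopFlag = ~isempty(info.TrainingLoss) && info.TrainingLoss < 0.05;
    end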

    Validation

    Data to use for validation during training, specified as [], a datastore, or a cell array containing the validation predictors and targets.

    During training, the software uses the validation data to calculate the validation loss and metric values. To specify the validation frequency, use the ValidationFrequency training option. You can also use the validation data to stop training automatically when the validation objective metric stops improving. By default, the objective metric is set to the loss. To turn on automatic validation stopping, use the ValidationPatience training option.

    If ValidationData is [], then the software does not validate the neural network during training.

    If your neural network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation loss can be lower than the training loss.

    Specify the validation data as a datastore, minibatchqueue object, or the cell array {predictors,targets}, where predictors contains the validation predictors and targets contains the validation targets. Specify the validation predictors and targets using any of the formats supported by the trainnet function.

    For more information, see the input arguments of the trainnet function.
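
    For illustration, a sketch that passes held-out predictors and targets as a cell array and validates every 10 iterations. The variables XValidation and TValidation are hypothetical and must be in a format that the trainnet function supports:

    options = trainingOptions("lbfgs", ...
        ValidationData={XValidation,TValidation}, ...
        ValidationFrequency=10);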

    Frequency of neural network validation in number of iterations, specified as a positive integer.

    The ValidationFrequency value is the number of iterations between evaluations of validation metrics. To specify validation data, use the ValidationData training option.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Patience of validation stopping of neural network training, specified as a positive integer or Inf.

    ValidationPatience specifies the number of times that the objective metric on the validation set can be worse than or equal to the previous best value before neural network training stops. If ValidationPatience is Inf, then the values of the validation metric do not cause training to stop early. The software aims to maximize or minimize the metric, as specified by the Maximize property of the metric. When the objective metric is "loss", the software aims to minimize the loss value.

    The returned neural network depends on the OutputNetwork training option. To return the neural network with the best validation metric value, set the OutputNetwork training option to "best-validation".

    Before R2024a: The software computes the validation patience using the validation loss value.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Neural network to return when training completes, specified as one of the following:

    • "auto" – Use "best-validation" if ValidationData is specified. Otherwise, use "last-iteration".

    • "best-validation" – Return the neural network corresponding to the training iteration with the best validation metric value, where the metric to optimize is specified by the ObjectiveMetricName option. To use this option, you must specify the ValidationData training option.

    • "last-iteration" – Return the neural network corresponding to the last training iteration.

    Regularization and Normalization

    Factor for L2 regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Option to reset input layer normalization, specified as one of the following:

    • 1 (true) — Reset the input layer normalization statistics and recalculate them at training time.

    • 0 (false) — Calculate normalization statistics at training time when they are empty.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

    Mode to evaluate the statistics in batch normalization layers, specified as one of the following:

    • "population" — Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance.

    • "moving" — Approximate the statistics during training using a running estimate given by update steps

      μ* = λ_μ μ̂ + (1 − λ_μ) μ

      σ²* = λ_σ² σ̂² + (1 − λ_σ²) σ²

      where μ* and σ²* denote the updated mean and variance, respectively, λ_μ and λ_σ² denote the mean and variance decay values, respectively, μ̂ and σ̂² denote the mean and variance of the layer input, respectively, and μ and σ² denote the latest values of the moving mean and variance, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.

    • "auto" — Use the "moving" option.

    Gradient Clipping

    Gradient threshold, specified as Inf or a positive scalar. If the gradient exceeds the value of GradientThreshold, then the gradient is clipped according to the GradientThresholdMethod training option.

    For more information, see Gradient Clipping.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:

    • "l2norm" — If the L2 norm of the gradient of a learnable parameter is larger than GradientThreshold, then scale the gradient so that the L2 norm equals GradientThreshold.

    • "global-l2norm" — If the global L2 norm, L, is larger than GradientThreshold, then scale all gradients by a factor of GradientThreshold/L. The global L2 norm considers all learnable parameters.

    • "absolute-value" — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than GradientThreshold, then scale the partial derivative to have magnitude equal to GradientThreshold and retain the sign of the partial derivative.

    For more information, see Gradient Clipping.
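
    For illustration, a sketch that clips gradients when their global L2 norm exceeds 1 (the threshold is an arbitrary example value):

    options = trainingOptions("lbfgs", ...
        GradientThreshold=1, ...
        GradientThresholdMethod="global-l2norm");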

    Sequence

    Option to pad, truncate, or split input sequences, specified as one of the following:

    • "longest" — Pad sequences to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network.

    • "shortest" — Truncate sequences to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.

    To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding and Truncation.

    Direction of padding or truncation, specified as one of the following:

    • "right" — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.

    • "left" — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.

    Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection option to "left".

    For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".

    To learn more about the effect of padding and truncating sequences, see Sequence Padding and Truncation.

    Value by which to pad input sequences, specified as a scalar.

    Do not pad sequences with NaN, because doing so can propagate errors throughout the neural network.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
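
    For example, a sketch that pads sequences on the left with zeros so that padding does not affect the final time steps:

    options = trainingOptions("lbfgs", ...
        SequenceLength="longest", ...
        SequencePaddingDirection="left", ...
        SequencePaddingValue=0);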

    Hardware and Acceleration

    Hardware resource, specified as one of these values:

    • "auto" — Use a GPU if one is available. Otherwise, use the CPU.

    • "gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

    • "cpu" — Use the CPU.

    Since R2024a

    Performance optimization, specified as one of these values:

    • "auto" – Automatically apply a number of optimizations suitable for the input network and hardware resources.

    • "none" – Disable all optimizations.

    Checkpoints

    Path for saving the checkpoint neural networks, specified as a string scalar or character vector.

    • If you do not specify a path (that is, you use the default ""), then the software does not save any checkpoint neural networks.

    • If you specify a path, then the software saves checkpoint neural networks to this path and assigns a unique name to each neural network. You can then load any checkpoint neural network and resume training from that neural network.

      If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint neural networks. If the path you specify does not exist, then the software throws an error.

    Data Types: char | string

    Frequency of saving checkpoint neural networks in iterations, specified as a positive integer.

    This option only has an effect when CheckpointPath is nonempty.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
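
    For illustration, a sketch that creates a hypothetical folder named "checkpoints" before training and saves a checkpoint network every 10 iterations:

    % Create the checkpoint folder if it does not already exist.
    if ~isfolder("checkpoints")
        mkdir("checkpoints")
    end
    options = trainingOptions("lbfgs", ...
        CheckpointPath="checkpoints", ...
        CheckpointFrequency=10);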

    Examples

    Create a set of options for training a neural network using the L-BFGS optimizer:

    • Determine the learning rate using the "strong-wolfe" line search method.

    • Stop training when the relative gradient is less than or equal to 1e-5.

    • Turn on the training progress plot.

    options = trainingOptions("lbfgs", ...
        LineSearchMethod="strong-wolfe", ...
        GradientTolerance=1e-5, ...
        Plots="training-progress")
    options = 
      TrainingOptionsLBFGS with properties:
    
                       MaxIterations: 1000
                         HistorySize: 10
         InitialInverseHessianFactor: 1
                    LineSearchMethod: 'strong-wolfe'
          MaxNumLineSearchIterations: 20
                   GradientTolerance: 1.0000e-05
                       StepTolerance: 1.0000e-05
                      SequenceLength: 'longest'
                 CheckpointFrequency: 30
                    L2Regularization: 1.0000e-04
             GradientThresholdMethod: 'l2norm'
                   GradientThreshold: Inf
                             Verbose: 1
                    VerboseFrequency: 50
                      ValidationData: []
                 ValidationFrequency: 50
                  ValidationPatience: Inf
                 ObjectiveMetricName: 'loss'
                      CheckpointPath: ''
                ExecutionEnvironment: 'auto'
                           OutputFcn: []
                             Metrics: []
                               Plots: 'training-progress'
                SequencePaddingValue: 0
            SequencePaddingDirection: 'right'
                    InputDataFormats: "auto"
                   TargetDataFormats: "auto"
             ResetInputNormalization: 1
        BatchNormalizationStatistics: 'auto'
                       OutputNetwork: 'auto'
                        Acceleration: "auto"
    
    

    Algorithms

    References

    [1] Liu, Dong C., and Jorge Nocedal. "On the limited memory BFGS method for large scale optimization." Mathematical Programming 45, no. 1 (August 1989): 503–528. https://doi.org/10.1007/BF01589116.

    [2] Pascanu, R., T. Mikolov, and Y. Bengio. "On the difficulty of training recurrent neural networks". Proceedings of the 30th International Conference on Machine Learning. Vol. 28(3), 2013, pp. 1310–1318.

    [3] Bishop, C. M. Pattern Recognition and Machine Learning. Springer, New York, NY, 2006.

    [4] Murphy, K. P. Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge, Massachusetts, 2012.

    Version History

    Introduced in R2023b
