Resilient backpropagation

`net.trainFcn = 'trainrp'`

`[net,tr] = train(net,...)`

`trainrp` is a network training function that updates weight and bias values according to the resilient backpropagation algorithm (Rprop).

`net.trainFcn = 'trainrp'` sets the network `trainFcn` property.

`[net,tr] = train(net,...)` trains the network with `trainrp`.

Training occurs according to `trainrp` training parameters, shown here with their default values:

| Parameter | Default | Description |
| --- | --- | --- |
| `net.trainParam.epochs` | `1000` | Maximum number of epochs to train |
| `net.trainParam.show` | `25` | Epochs between displays (`NaN` for no displays) |
| `net.trainParam.showCommandLine` | `false` | Generate command-line output |
| `net.trainParam.showWindow` | `true` | Show training GUI |
| `net.trainParam.goal` | `0` | Performance goal |
| `net.trainParam.time` | `inf` | Maximum time to train in seconds |
| `net.trainParam.min_grad` | `1e-5` | Minimum performance gradient |
| `net.trainParam.max_fail` | `6` | Maximum validation failures |
| `net.trainParam.lr` | `0.01` | Learning rate |
| `net.trainParam.delt_inc` | `1.2` | Increment to weight change |
| `net.trainParam.delt_dec` | `0.5` | Decrement to weight change |
| `net.trainParam.delta0` | `0.07` | Initial weight change |
| `net.trainParam.deltamax` | `50.0` | Maximum weight change |

You can create a standard network that uses `trainrp` with `feedforwardnet` or `cascadeforwardnet`.

To prepare a custom network to be trained with `trainrp`:

1. Set `net.trainFcn` to `'trainrp'`. This sets `net.trainParam` to `trainrp`'s default parameters.
2. Set `net.trainParam` properties to desired values.

In either case, calling `train` with the resulting network trains the network with `trainrp`.

Here is a problem consisting of inputs `p` and targets `t` to be solved with a network.

```matlab
p = [0 1 2 3 4 5];
t = [0 0 0 1 1 1];
```

Create a two-layer feed-forward network with two hidden neurons and this training function.

```matlab
net = feedforwardnet(2,'trainrp');
```

Here the network is trained and tested.

```matlab
net.trainParam.epochs = 50;
net.trainParam.show = 10;
net.trainParam.goal = 0.1;
net = train(net,p,t);
a = net(p)
```

See `help feedforwardnet` and `help cascadeforwardnet` for other examples.

`trainrp` can train any network as long as its weight, net input, and transfer functions have derivative functions.

Backpropagation is used to calculate derivatives of performance `perf` with respect to the weight and bias variables `X`. Each variable is adjusted according to the following:

```matlab
dX = deltaX.*sign(gX);
```

where the elements of `deltaX` are all initialized to `delta0`, and `gX` is the gradient. At each iteration the elements of `deltaX` are modified. If an element of `gX` changes sign from one iteration to the next, then the corresponding element of `deltaX` is multiplied by the factor `delt_dec`. If an element of `gX` maintains the same sign from one iteration to the next, then the corresponding element of `deltaX` is multiplied by the factor `delt_inc`. See Riedmiller and Braun [1993] for a description of the algorithm.
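The per-element step-size adaptation described above can be sketched outside MATLAB. The following is a minimal NumPy illustration, not the toolbox implementation: it assumes `g` is the true gradient of the performance with respect to the variables (so the step is taken opposite its sign), uses the `trainParam` names `delta0`, `delt_inc`, `delt_dec`, and `deltamax` for its constants, and omits the weight-backtracking refinement some Rprop formulations add.

```python
# Illustrative NumPy sketch of one Rprop iteration (not toolbox code).
# Step sizes adapt per element: grow while the gradient keeps its sign,
# shrink when it flips; the update uses only the sign of the gradient.
import numpy as np

def rprop_step(g, prev_g, deltaX, delt_inc=1.2, delt_dec=0.5, deltamax=50.0):
    """Return (dX, updated deltaX) given gradient g and previous gradient prev_g."""
    same = np.sign(g) * np.sign(prev_g)
    deltaX = np.where(same > 0, np.minimum(deltaX * delt_inc, deltamax), deltaX)
    deltaX = np.where(same < 0, deltaX * delt_dec, deltaX)
    return -deltaX * np.sign(g), deltaX   # step against the gradient

# Minimize the toy performance perf(X) = sum(X**2); its gradient is 2*X.
X = np.array([1.0, -3.0])
deltaX = np.full_like(X, 0.07)            # delta0
prev_g = np.zeros_like(X)
for _ in range(100):
    g = 2.0 * X
    dX, deltaX = rprop_step(g, prev_g, deltaX)
    X, prev_g = X + dX, g
```

On this toy problem both components oscillate ever closer to the minimum at zero, the step sizes shrinking at each sign change, which is the behavior that makes Rprop insensitive to the magnitude of the gradient.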

Training stops when any of these conditions occurs:

- The maximum number of `epochs` (repetitions) is reached.
- The maximum amount of `time` is exceeded.
- Performance is minimized to the `goal`.
- The performance gradient falls below `min_grad`.
- Validation performance has increased more than `max_fail` times since the last time it decreased (when using validation).
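For illustration, the five tests above can be collected into a single predicate. This is a hypothetical Python helper whose argument names and defaults mirror the `trainParam` fields; it is not part of the toolbox.

```python
import math

# Hypothetical stop-check mirroring the five conditions listed above;
# defaults match trainrp's default trainParam values.
def should_stop(epoch, elapsed, perf, grad, val_fail,
                epochs=1000, time=math.inf, goal=0.0,
                min_grad=1e-5, max_fail=6):
    if epoch >= epochs:
        return "maximum epochs reached"
    if elapsed > time:
        return "maximum time exceeded"
    if perf <= goal:
        return "performance goal met"
    if grad < min_grad:
        return "minimum gradient reached"
    if val_fail > max_fail:
        return "validation stop"
    return None  # keep training

print(should_stop(epoch=50, elapsed=1.2, perf=0.09, grad=0.3, val_fail=0, goal=0.1))
# prints "performance goal met"
```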

Riedmiller, M., and H. Braun, “A direct adaptive method for faster backpropagation learning: The RPROP algorithm,” *Proceedings of the IEEE International Conference on Neural Networks*, 1993, pp. 586–591.

`trainbfg` | `traincgb` | `traincgf` | `traincgp` | `traingda` | `traingdm` | `traingdx` | `trainlm` | `trainoss` | `trainscg`