Does a larger number of hidden units in an LSTM layer mean the network requires more training time?

13 views (last 30 days)
I have the following queries regarding the number of hidden units in an LSTM layer:
Does a larger number of hidden units in the LSTM layer mean the network requires more training time?
That is, how does the number of hidden units in an LSTM layer affect the training time and computational complexity of the network?
Does a larger number of hidden units help the LSTM network remember more of the previous data?

Accepted Answer

Himanshu on 3 Mar 2023
Hello Debojit,
I understand that you have some queries regarding the hidden units in the LSTM layer.
The training time of a network depends on several factors, such as the number of layers in the architecture, the complexity of the architecture, and the size of the dataset.
Increasing the number of hidden units in an LSTM layer generally increases the network's training time and computational complexity, because the number of computations required to update and propagate information through the layer grows with the layer size.
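For intuition, here is a minimal sketch (assuming Deep Learning Toolbox; the input size is an arbitrary example value, not from your question) of how the learnable-parameter count of an lstmLayer grows with the number of hidden units. An LSTM layer has four gates, each with input weights, recurrent weights, and a bias, so the recurrent-weight term grows quadratically in the number of hidden units:

% Sketch: parameter count of an LSTM layer as numHiddenUnits grows.
% inputSize is a hypothetical number of input features.
inputSize = 10;
for numHiddenUnits = [32 128 512]
    layer = lstmLayer(numHiddenUnits);  % layer definition, for reference
    % 4 gates x (input weights h-by-d + recurrent weights h-by-h + bias h)
    numParams = 4*numHiddenUnits*(inputSize + numHiddenUnits + 1);
    fprintf('%4d hidden units -> %9d learnable parameters\n', ...
        numHiddenUnits, numParams);
end

Doubling the hidden units roughly quadruples the recurrent-weight term, which is one reason training time can grow faster than linearly for wide LSTM layers.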
Increasing the number of hidden units also increases the network's capacity to store and learn from past data. However, this does not always help, because there is a trade-off between capacity and generalization: a larger network may remember past data better, but it is also more prone to overfitting, which can hurt its performance on unseen data.
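If you want to see this trade-off directly, here is a minimal, self-contained sketch using synthetic toy data (the sizes, the toy target, and the training options are all illustrative assumptions, not recommendations). It trains the same sequence-to-one regression task with a small and a large LSTM and reports both wall-clock training time and held-out error:

% Sketch: compare training time and validation error for two layer widths.
% Data and target are synthetic; real results depend on your task.
rng(0)
numFeatures = 5;  seqLen = 30;  numTrain = 150;  numVal = 50;
makeX = @(n) arrayfun(@(k) randn(numFeatures, seqLen), (1:n)', ...
    'UniformOutput', false);
XTrain = makeX(numTrain);  YTrain = cellfun(@(x) mean(x(1,:)), XTrain);
XVal   = makeX(numVal);    YVal   = cellfun(@(x) mean(x(1,:)), XVal);
opts = trainingOptions('adam', 'MaxEpochs', 15, 'Verbose', false);
for h = [16 512]
    layers = [ sequenceInputLayer(numFeatures)
               lstmLayer(h, 'OutputMode', 'last')
               fullyConnectedLayer(1)
               regressionLayer ];
    tic
    net = trainNetwork(XTrain, YTrain, layers, opts);
    t = toc;
    rmse = sqrt(mean((predict(net, XVal) - YVal).^2));
    fprintf('%4d hidden units: %6.1f s to train, validation RMSE %.3f\n', ...
        h, t, rmse);
end

On a toy task like this, the wider network mostly just costs more time; on small real datasets it is also where overfitting tends to show up, as a growing gap between training and validation error.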
You can refer to the Deep Learning Toolbox documentation on long short-term memory networks to learn more about LSTM networks.

More Answers (0)
