Solving the Burgers' Equation with Echo State Networks

13 views (last 30 days)
Mike on 1 Sep 2022
Commented: Mike on 19 Sep 2022
Hi,
I found this example, which is very interesting for my work.
Can someone give me a hint on how to solve the PDE with an ESN, i.e. change from an ordinary neural network to a reservoir?
Any help is welcome.
Best regards,
Mike

Accepted Answer

David Willingham on 2 Sep 2022
Hi Mike,
Are you able to provide the code for your PDE? How long does it take to train?
  2 comments
Mike on 5 Sep 2022
Hi David,
as I said, I solve different PDEs; some are 2D, some are 1D, like the Burgers equation.
The time depends on the spatial domain I try to solve them on - somewhere between 1 and 10 minutes.
I therefore thought to choose the Burgers example as a starting point. If you could provide an example for this PDE (or at least some hints on how to implement an ESN within the physics-informed use case), I will hopefully be able to transfer it to my other use cases.
Best regards,
Mike
David Willingham on 7 Sep 2022
I followed up with our developers on this. They stated:
This is a use case we hadn't had a user request for before, so we currently don't have an example. Tips for creating one would include:
  • The ESN reservoir is simply x(n+1) = tanh(U*x(n) + V*u(n)) for randomly initialized, untrained matrices U and V, input data u, and reservoir state x.
  • The ESN output is u(n+1) = W*x(n+1) for a trained matrix W. One of the keys is that W is the only dlarray to update in training, i.e. the only value in parameters in the current example.
  • ESN is a discrete time model, whereas the current PINN example uses time as an input.
  • The PINN loss should work more or less the same - though the time derivative is estimated via finite differences.
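A minimal sketch of the two update rules in those tips, written in NumPy rather than MATLAB/dlarray; the sizes, scales, and function names here are placeholders, not part of the original example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1                            # reservoir and input sizes (arbitrary)

U = rng.normal(scale=0.1, size=(n_res, n_res))  # randomly initialized, never trained
V = rng.normal(scale=0.1, size=(n_res, n_in))   # randomly initialized, never trained
W = np.zeros((n_in, n_res))                     # the only matrix updated in training

def reservoir_step(x, u):
    """Reservoir update: x(n+1) = tanh(U x(n) + V u(n))."""
    return np.tanh(U @ x + V @ u)

def esn_output(x_next):
    """Readout: u(n+1) = W x(n+1)."""
    return W @ x_next

x = np.zeros(n_res)          # initial reservoir state
u = np.array([0.5])          # one input sample
x = reservoir_step(x, u)
print(esn_output(x).shape)   # (1,)
```

In a gradient-based PINN setting, only `W` would be exposed to the optimizer, mirroring the "only value in parameters" point above.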
To help development prioritise building an example, can you elaborate on what applications the PDEs are being used for?


More Answers (4)

David Willingham on 1 Sep 2022
Hi Mike,
Are you able to elaborate a little more on the problem you're looking to solve? We currently don't have an example for this, so I'm looking to see what the best material/support I can give you might be.
David

Mike on 2 Sep 2022
Edited: Mike on 2 Sep 2022
Hi David, thanks for your answer.
I'm trying to solve different partial differential equations with neural networks.
In some cases it takes a while, and I focus on the accuracy. I read that reservoir networks (ESNs) have advantages there: they are fast and can improve accuracy in some applications. I want to give them a try in my analyses.
However, I'm not very familiar with how to implement them, so I thought the Burgers equation use case could be a good starting point for integrating one.
Kind regards, Mike

David Willingham on 9 Sep 2022
Hi Mike,
I received an update from our development team on this (attached). Whilst not fully worked through, it should serve as a starting point.
The following is a brief description:
==
Solving PDEs with Echo State Networks
This repo demonstrates how to train an echo-state network to solve a PDE.
It follows this example.
Echo State Networks
An echo-state network is a discrete-time recurrent model. Given a sequence x(t), the model computes a reservoir sequence z(t+1) = tanh(U*z(t) + V*x(t)), and the model output is y(t) = W*z(t). Here U, V, W are randomly initialized matrices, but during training only the output matrix W is updated. This makes training much faster than for typical deep learning models, as there are far fewer parameters - in fact the map z(t) -> y(t) is just a linear model, which in typical cases can be fit with least squares or ridge regression.
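The "classic" (non-physics-informed) fit of W described above is a single linear solve. A hedged NumPy sketch with synthetic teacher data (the sizes, regularization `lam`, and the next-step prediction target are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, T = 50, 200
U = rng.normal(scale=0.05, size=(n_res, n_res))  # fixed reservoir matrices
V = rng.normal(scale=0.5, size=(n_res, 1))

x_seq = rng.standard_normal((T, 1))   # input sequence x(t), synthetic
y_seq = np.roll(x_seq, -1, axis=0)    # teacher signal: predict the next input

# Run the reservoir once, collecting states z(t+1) = tanh(U z(t) + V x(t))
Z = np.zeros((T, n_res))
z = np.zeros(n_res)
for t in range(T):
    z = np.tanh(U @ z + V @ x_seq[t])
    Z[t] = z

# Ridge regression for the readout: minimize ||Z W' - y||^2 + lam ||W||^2
lam = 1e-6
W = np.linalg.solve(Z.T @ Z + lam * np.eye(n_res), Z.T @ y_seq).T
print(W.shape)   # (1, n_res)
```

This one-shot solve is exactly what the physics-informed loss later rules out, since the PDE residual is not linear in W's outputs.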
Solving PDEs
To solve PDEs with deep learning models, we simply add a "soft constraint" to the model loss: some norm of the PDE residual. This method is typically called Physics-Informed Neural Networks (PINNs).
Technical Details of using ESN to solve PDE
  1. Since the loss involves the PDE term it is not possible to optimize W with simple methods like least squares. This means we need to use gradient-based methods, which lose some of the efficiency gains of typical ESN workflows. In particular we need to re-compute the reservoir dynamics on every iteration of the gradient-based optimization as we need the autodiff system to trace through the reservoir so that it can compute the partial derivatives that define the PDE.
  2. Since ESN is a discrete time model we have to modify this example to be discrete time. In particular this means we have to estimate the partial derivative in time via finite differences.
Future or ToDo
  • This example does not currently use any of the common and important techniques necessary to train an ESN well. For example, see A Practical Guide to Applying Echo State Networks.
  • This example does not include loss terms to encourage the model to satisfy the initial or boundary conditions as in the original example. This is simply for brevity; those terms can be added.
  • The example splits the training data x(t) into subsequences x(1:N), x(N+1:2N), .... In principle N could be increased during training, and this may help fine-tune the model to be more accurate over longer times.
  • This example does not use L-BFGS to optimize the weights as in this example. According to the literature, it may be important to use L-BFGS in PINN workflows.
  • It may be possible to use the idea of CTESN to extend this to a continuous time model.
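The subsequence splitting mentioned in the to-do list can be sketched in one line; the length N and the data here are placeholders:

```python
import numpy as np

N = 4
x = np.arange(12)           # training sequence x(t), synthetic
subseqs = x.reshape(-1, N)  # rows are x(1:N), x(N+1:2N), ...
print(subseqs.shape)        # (3, 4)
```

Increasing N during training, as suggested above, would just mean reshaping with a larger row length on later epochs (dropping any remainder that doesn't divide evenly).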
==

Mike on 15 Sep 2022
Hello David, many thanks to you and your team. Excellent work.
I have only one difficulty: I'm not quite clear on how to take time into account. If I stick to the example (Solve Partial Differential Equation Using Deep Learning), the model function takes both x and t; with the ESN it takes only x. I'm not sure how to incorporate time t, or differences in time.
Best regards,
Mike
  2 comments
Ben on 15 Sep 2022
Hi Mike,
I wrote the example script following the methodology of this paper, which uses ESNs to solve PDEs. My understanding is that the authors use a discrete-time approach. To see what that means, suppose you have Burgers' equation:
u_t + u*u_x - nu*u_xx = 0
I have left out the initial and boundary conditions for brevity, but these are important. In the discrete-time case we can only model the solution u(x, t_n) at some set of time samples t_1, ..., t_N. In the script I used t = linspace(0,1,numT).
The approach is to define the reservoir state z(n) by the recurrence relation:
z(n+1) = tanh(U*z(n) + V*u(n))
where U and V are randomly initialized matrices and u(n) is the model input at step n. We also need an initial reservoir state z(0); I used ones(reservoirSize,1).
Finally, the output of the model is
y(n+1) = W*z(n+1)
where W is a matrix you train such that y(n+1) should be close to u(x, t_{n+1}). In a "classic" approach this would mean fitting W via linear regression, given real values of u(x, t_n) for each t_n and some samples of x.
However, in the physics-informed approach you want to add a soft constraint to the loss so that the model output satisfies the PDE. In the script above I do that by computing the derivatives in x using automatic differentiation via dlarray, and use a finite difference as an approximation to the time derivative u_t.
So the model does depend on t, but it can only compute predictions at the fixed discrete time steps t_n. To predict u(x, t_n) you have to start at t_0 and run the reservoir through t_1, ..., t_{n-1} - or, if you already know u(x, t_{n-1}), you could input that directly. If you need an approximation to u(x, t) for t between the samples with this model, I would consider using an interpolation method.
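For times between the model's fixed steps, one simple option is linear interpolation in t; a sketch with np.interp, where the time grid and the solution values at one spatial point are placeholders:

```python
import numpy as np

t_grid = np.linspace(0, 1, 11)              # the fixed t_n the ESN predicts at
u_at_x = np.cos(np.pi * t_grid)             # model outputs at one spatial point (synthetic)

t_query = 0.37                              # a time between two samples
u_est = np.interp(t_query, t_grid, u_at_x)  # linear interpolation between neighbors
print(u_est)
```

Higher-order schemes (e.g. cubic splines over the t_n) are equally applicable if the solution is smooth in time.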
I should note a few details here:
  • The above describes using the ESN to model the mapping u(., t_n) -> u(., t_{n+1}); however, the script I wrote actually attempts to model the solution u(x, t_n) as a function of x. I chose that approach in the script as it's more similar to what we do in the example with general neural networks.
  • Adding additional terms to the loss for the initial and boundary conditions is critical if you want to model the solution as in the example.
  • Typically an advantage of ESNs is that you can fit W very quickly with linear regression. But this isn't possible with the physics-informed loss; we instead have to use iterative gradient-descent methods. Since we have to use these methods anyway, I wonder if it's worth training U and V too, or even replacing the ESN with a typical RNN like an LSTM or GRU.
Hope that wasn't too much information! Let me know if you have any questions, thanks,
Ben
Mike on 19 Sep 2022
Many thanks for the explanation, Ben; it has become much clearer now.


Version: R2021b
