Solving nonlinear equations using lsqnonlin
19 views (last 30 days)
Anuj Kumar Sahoo
on 28 Nov 2020
Commented: Anuj Kumar Sahoo on 5 Dec 2020
Hi
I am trying to solve a set of nonlinear equations using lsqnonlin. Initially I used the 'levenberg-marquardt' algorithm, but the results were far from what I expected. I then tried defining upper and lower bounds for the variables. However, the variables now cling to either the upper or the lower bound, so the results are once again not correct. What might be the reason for this behaviour?
Here is the code that I am using (there are 36 to 56 variables; I am showing just 2 as an example):
lb = [0.5, -0.5];    % lower bounds
ub = [1.5,  0.5];    % upper bounds
x0 = [1, 0];         % initial guess
options = optimoptions(@lsqnonlin, 'Algorithm', 'trust-region-reflective');
[x, res] = lsqnonlin(fun, x0, lb, ub, options);   % fun returns the residual vector
2 comments
Ameer Hamza
on 28 Nov 2020
Edited: Ameer Hamza on 28 Nov 2020
This is likely caused by your objective function 'fun'. Its unconstrained minimum may lie at or beyond the bounds you specified, so the constrained solution clings to the extreme limits.
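A quick way to check this (a minimal sketch; fun, lb, and ub are the ones from the question, and the probe points are illustrative) is to walk from lb to ub and watch the residual norm. If norm(fun(x)) keeps shrinking all the way to a bound, the unconstrained minimum really does lie at or beyond that bound:
t = linspace(0, 1, 21);
for k = 1:numel(t)
    xk = lb + t(k)*(ub - lb);    % point on the line from lb to ub
    fprintf('t = %.2f   ||fun(x)|| = %g\n', t(k), norm(fun(xk)));
end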
Accepted Answer
Walter Roberson
on 30 Nov 2020
Edited: Walter Roberson on 2 Dec 2020
"When I use those values in my objective function I get good results. However, when I try to find out the values of those unknown coefficients/variables of the same ideal system using 'lsqnonlin', they come out far from the expected results."
There is no possible algorithm that can find the global minimum of an arbitrary "black box" function (a "black box" function is one that can only be executed, not effectively examined). Literally not possible -- it has been proven mathematically.
Different minimizer algorithms have different kinds of functions that they do a good job on -- and, conversely, different kinds of situations that they tend to get stuck in.
lsqnonlin often does much better than I expect, but there are some situations it does not do well on.
One example of a situation lsqnonlin does not do well on is the sum of two Gaussians that have slightly different formulas and are not well separated. In such a situation, lsqnonlin tends to drive one of the two Gaussians to take over all of the fitting, while the other is driven indefinitely wide -- effectively reduced to a constant term. If you start too close to one side, the first Gaussian takes over; if you start too close to the other side, the other one does. You might need to start quite close to the true positions for lsqnonlin to find the correct values. It cannot find the correct positions on its own, because every time it moves away from the "one Gaussian plus effectively constant" configuration, the fit seems to get much, much worse.
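A minimal sketch of that failure mode (the model, parameter values, and starting points below are illustrative, not from the question):
% Fit the sum of two overlapping Gaussians with lsqnonlin.
% Parameter vector p = [a1 mu1 s1 a2 mu2 s2].
model = @(p,x) p(1)*exp(-((x-p(2))/p(3)).^2) + p(4)*exp(-((x-p(5))/p(6)).^2);
x = linspace(-3, 3, 200);
ptrue = [1, -0.3, 0.5, 0.8, 0.4, 0.7];      % two poorly separated peaks
resid = @(p) model(p, x) - model(ptrue, x); % residual vector for lsqnonlin
pfar  = lsqnonlin(resid, [1 -2 1 1 2 1]);   % far start: one Gaussian tends to
                                            % absorb the fit, the other is
                                            % driven very wide
pnear = lsqnonlin(resid, 1.05*ptrue);       % start near the truth: converges
disp([ptrue; pfar; pnear])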
Another case that optimizers have trouble with is a central region that is narrow and deep, with asymptotic descent on either side of it: if the optimizer happens to land outside the edge of the central region, the function keeps decreasing as you move further away, so optimizers tend to chase that descent out toward infinity or the bounds. The optimizer would have to deliberately "go uphill" on the chance that a well might happen to be there to be discovered.
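For a concrete picture of that shape, consider this illustrative residual (not from the question), which has a narrow, deep well at x = 0 and decays gently toward zero on either side of it:
f  = @(x) (1 - exp(-1000*x.^2)) ./ (1 + x.^2);
xa = lsqnonlin(f, 2);      % starts outside the well: chases the outer
                           % descent away from the origin
xb = lsqnonlin(f, 0.01);   % starts inside the well: finds x = 0
fprintf('from x0 = 2:     x = %g\n', xa)
fprintf('from x0 = 0.01:  x = %g\n', xb)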
These kinds of problems are often difficult to deal with unless you can find an optimizer designed for the particular shape of function you are dealing with. There are algorithms for sums of Gaussians, for example (though I have never looked into how they work).
0 comments
More Answers (1)
Ameer Hamza
on 30 Nov 2020
As Walter already explained, there is no guaranteed way to obtain a globally optimal solution for an arbitrary problem using a numerical method. Because of the way they are formulated, gradient-based methods can at best reach a locally optimal solution, which depends on the initial guess. Some metaheuristic optimizers, such as the genetic algorithm ga() or the particle swarm optimizer particleswarm(), have a higher chance of reaching a global solution, but even they can never guarantee it. However, you can increase the probability in several ways, and the Global Optimization Toolbox provides the tools for that.
For example, see GlobalSearch(): https://www.mathworks.com/help/gads/globalsearch.html or MultiStart(): https://www.mathworks.com/help/gads/multistart.html. They provide a systematic way to run the common optimizers, such as fmincon(), from several different starting points, in the hope that one of them leads to the global solution. Similarly, check ga(): https://www.mathworks.com/help/gads/ga.html and particleswarm(): https://www.mathworks.com/help/gads/particleswarm.html.
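For instance, here is a minimal MultiStart sketch built around the code from the question (fun, x0, lb, and ub are the asker's; the 50 start points are an arbitrary choice). It runs lsqnonlin from many random start points inside the bounds and keeps the best solution found:
% Requires the Global Optimization Toolbox.
problem = createOptimProblem('lsqnonlin', ...
    'objective', fun, 'x0', x0, 'lb', lb, 'ub', ub);
ms = MultiStart;
[xbest, resbest] = run(ms, problem, 50);   % 50 random start points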
8 comments
Walter Roberson
on 5 Dec 2020
The editor points out that you assign to gblce and gblco but never use those variables. However, on the line after you assign to gblce you use blce, and on the line after you assign to gblco you use blco. Is it possible that you should have been using the g* versions of the variables on those lines?
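In other words, a pattern like this (a purely hypothetical sketch -- every expression here is a placeholder for whatever your real code computes):
gblce = cumsum(blce);   % assigned but never read (what the editor warns about)
e = A * blce;           % the next line uses blce -- was gblce intended here?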