
I used fmincon to find the x that minimizes my function, and my objective function looks like

myratio=(2*P_peak/10*R*nvar)

so in the end fmincon needs to return x, which is a 2-by-1 vector.

When I changed my x0 I got different answers, so I thought my function might not have a unique global minimum. I plotted it and saw that it has a valley, but the minimum is not a single exact point; the bottom of the surface is flat.

John D'Errico
on 6 Jan 2018

You have completely changed your question now, asking about the difference between the fmincon algorithms.

John D'Errico
on 6 Jan 2018

If you have no clue about your objective? Then you should spend some time learning about it: thinking about it, plotting it, etc. If you know absolutely nothing about what you are trying to solve and are unwilling to do the necessary work in advance, then you are probably the wrong person to be doing this optimization.

You should never just throw a function at some optimizer and expect something meaningful to come out, with no thought invested. Well, if you do, expect less than useful results. So plot your function. Does it appear to have simple behavior? Is it continuous? Differentiable? Smoothly so? Are there bumps in the function, multiple dips? Should there be? Verify that your plots make sense in context of what you know or expect. Do there appear to be local minima on the boundary?

Yes, if your function is highly multi-dimensional, then all of this gets harder. Things can get nasty in high dimensions.

In general, an objective can be arbitrarily complex. There is no simple scheme that will always work to come up with a good set of starting values. If there were, then optimization tools would use it!

In the case of your plotted function, it appears to live on a triangular domain. (If not, then why is that what you have shown us?) So I assume the region arises from your constraints. The function appears to be relatively well posed, with a probable minimum somewhere in the interior.

If the function has some global minimum that lies outside of your constraint boundary, that is irrelevant. All you care about are minima that lie inside, and it appears to be quite a simple shape. So what is the problem?

If you are worried and still have no clue, then pick multiple points inside the boundary. Use the best of the lot as your start point.
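A minimal sketch of that "sample many points, keep the best" idea, in Python/SciPy (scipy.optimize.minimize plays the role fmincon plays in MATLAB; the objective, bounds, and sample count below are made-up stand-ins, not the poster's actual problem):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in 2-D objective with a curved valley; NOT the poster's actual
# function, just something with a similar character for illustration.
def objective(x):
    return (x[0] - 1.0)**2 + 10.0 * (x[0]**2 - x[1])**2

bounds = [(-2.0, 2.0), (-2.0, 2.0)]   # assumed feasible box

# Sample candidate start points inside the bounds, evaluate the objective
# at each, and keep the best one as x0 for the real optimization.
rng = np.random.default_rng(0)
lo = np.array([b[0] for b in bounds])
hi = np.array([b[1] for b in bounds])
candidates = rng.uniform(lo, hi, size=(50, 2))
x0 = min(candidates, key=objective)

result = minimize(objective, x0, bounds=bounds)
```

In MATLAB the same pattern is a loop over candidate x0 values before calling fmincon once from the winner.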

John D'Errico
on 6 Jan 2018

How is this not helpful? I told you to plot the function. I told you to look at the behavior. I suggested you can generate multiple points to sample the value, then take the best, a simple approach when you have no clue about where to start and no knowledge about the system.

I'm a bit mystified, since your function appears to have a very simple shape, in only two dimensions. So that surface plot will tell you a vast amount about the function.

My comment was that you need to look at what you have. ALWAYS plot your problem if you can do so. Rotate the plot. Think about the shape if that is the only way to gain knowledge with no other source. Too many people start out with an optimization by just throwing it at an optimizer, without even considering that their objective is some highly discontinuous, virtually random mess, and then are surprised at what they get.

Here is the same figure you showed, but I have rotated it around a bit:

Rotate it around. It appears to have a valley that may be rather flat along its length. There may be multiple local minima along the valley; even if there are not, when a function has a long flat valley like that, it suggests there are essentially multiple solutions, all of which are nearly equally good. Starting from multiple points may help to resolve some of this.

Note that you can see your function DOES have at least two minima. You can see that one corner of the function turns down, along that diagonal edge of the domain. So assuming that edge of the region is a constraint boundary, if you start in that vicinity, the optimization will get stuck in a local min, without finding the global min in the valley.

In higher dimensions, you can pick several variables to plot, looking at slices of the function.
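That slicing idea is easy to sketch. In Python (the 4-D quadratic is a made-up stand-in; in MATLAB you would evaluate on a meshgrid and surf the result the same way): fix all but two variables, evaluate the function on a grid of the remaining pair, and inspect the resulting surface.

```python
import numpy as np

# Hypothetical 4-D objective (a simple quadratic, purely for illustration).
def f(x):
    return np.sum((np.asarray(x) - np.array([1.0, 2.0, 0.0, -1.0]))**2)

# Slice: vary x[0] and x[1] on a grid while holding x[2], x[3] fixed.
fixed = (0.0, -1.0)
g0, g1 = np.meshgrid(np.linspace(-2, 4, 61), np.linspace(-1, 5, 61))
Z = np.array([[f([a, b, *fixed]) for a, b in zip(r0, r1)]
              for r0, r1 in zip(g0, g1)])

# A surface plot of Z would show this slice; here we just locate its
# minimum on the grid.
i, j = np.unravel_index(Z.argmin(), Z.shape)
```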

And when you have lots and lots of variables, none of which you have a clue about, well, there is no magical way to know where to start the search.

But this problem is quite a simple one. I expect to see a local min along one boundary. I expect to see a min in the valley, which appears to be the global min within the constraint region (based on the plot coloring). I might postulate that the valley could be rather flat. All of this could be resolved far more clearly with the actual function.

Thomas Steffen
on 5 Jan 2018

If your objective function has only one local minimum, it does not matter which x0 you specify.

If there are several local minima, fmincon (depending on the solver) may find the closest local minimum, which may not be the global minimum.

Since you know little about your objective function, the best approach is to try different values for x0. You can either take random values or regularly spaced values. See if they lead to the same minimum; if they do, it is likely to be the global optimum. If not, x0 needs to be chosen carefully.
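Checking whether different start points agree takes only a few lines. A Python/SciPy sketch with a deliberately two-minimum toy function (again, a stand-in, not the poster's objective):

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D function with two local minima: near x = -1 (the global one,
# thanks to the tilt term) and near x = +1.
def f(x):
    return (x[0]**2 - 1.0)**2 + 0.3 * x[0]

# Regularly spaced starting values; random ones work the same way.
starts = np.linspace(-2.0, 2.0, 20)
solutions = [minimize(f, [s]).x[0] for s in starts]

# If every run landed on the same point, we would have some confidence in
# a single (likely global) minimum; here the runs split into two clusters.
clusters = sorted(set(round(x, 1) for x in solutions))
```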

Also try the trust-region-reflective algorithm; it can be much better at finding the global minimum than the default interior-point algorithm. https://uk.mathworks.com/help/optim/ug/fmincon.html#busoq0w-1

Walter Roberson
on 6 Jan 2018

fmincon is not a global minimizer. Depending on the algorithm you choose, you might not even get the "closest" minimum even if your function is not especially steep: the algorithms used to decide on the next point to examine can overproject into a different "catch basin", especially early on.

There is a global optimization toolbox. Even the algorithms there are not promised to find the global minimum (and often do not, even when they do end up in the right catch basin). The different algorithms have different strategies for attempting to find the global minimum, but it can be proved that there cannot be any algorithm that is certain to find the global minimum of a black-box function in finite time.

It is often useful to try fminsearch for functions with multiple minima.

To say much more we would need to see the code for your function, preferably with data file to allow us to test.

John D'Errico
on 7 Jan 2018

Edited: John D'Errico
on 7 Jan 2018

To answer your new question, which is now what is the difference between the 5 algorithms in fmincon, just read the

doc fmincon

Follow the link under "Algorithms" for "Choosing the algorithm". It has a fairly detailed explanation of the differences, and there seems to be no purpose in copying that explanation when it is already well written there.

Most of those differences may seem somewhat subtle to someone who is not into optimization methods. One method requires the user to supply a gradient. Others are able to survive NaN or inf results, something that can blow some codes out of the water. Some are better at truly large problems than others. Some might be a bit more efficient on one problem or another.

But all of those methods will be able to solve your problem, finding a local minimum based on the start point, if used properly. Ok, not trust-region-reflective, since it does not allow inequality constraints. But then the optimizer will tell you there is a problem and force you to switch methods.

NONE of the algorithms in fmincon are global methods. They will all converge to a point based on the start point. (Not even a method described as a global method is assured to always converge to the true global solution for a complete black box objective. For that to happen, you need to have some way to limit the derivatives of your function.)

So the real issue here seems to be you need to understand a concept that I've always called a basin of attraction. Any local minimum will be a point of convergence for a given set of start points. So the set of all start points that converge to a given solution are the basin of attraction for that solution. That basin may well be different for different methods, but all methods in fmincon are only expected to converge to a local minimizer at best, IF they converge at all. Sufficiently nasty problems will exist to confuse any method that works on black box objectives.
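The basin-of-attraction idea can be made concrete in a few lines of Python (a double-well toy function, not the poster's): run the same local optimizer from a grid of start points and record which minimum each one reaches.

```python
import numpy as np
from scipy.optimize import minimize

# Double-well function with minima at x = -1 and x = +1 (illustrative).
def f(x):
    return (x[0]**2 - 1.0)**2

# Map each start point to the solution it converges to. The set of starts
# that reach a given minimum is that minimum's basin of attraction.
basins = {}
for s in np.linspace(-2.0, 2.0, 20):     # grid chosen to avoid the saddle at x = 0
    xstar = round(minimize(f, [s]).x[0], 2)
    basins.setdefault(xstar, []).append(round(s, 2))
```

For this symmetric function the basins split cleanly at zero: negative starts converge to -1, positive starts to +1. A different local method could carve up the same start points differently, which is exactly the point above.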

John D'Errico
on 7 Jan 2018

Moving to C won't help; it would be a waste of time. The issue is with the surface, not the algorithm, so a different language will just shoot the messenger without changing the story you are told.

As well, changing the optimization method in fmincon is another waste of time here. Again, that is just shooting the messenger.

What I have not been told is if the different solutions you got are truly different, or if they are all essentially the same, to within the tolerances that can be achieved in floating point arithmetic, and the convergence tolerance. I'm not sure it matters a lot though.

I expect to see at LEAST two distinct answers. I have now said this several times from looking at your surface. Depending on where you start, you will terminate either in the valley, or along the edge. And in either case, start a numerical optimization tool at a different spot, and once it finds a point that satisfies the convergence tolerances, it must stop. So you should expect at best to find a cluster of solutions around two locations.

As to what those clusters should look like, I would expect to see poor definition down in the bottom of the valley, so the cluster should look moderately linear there. Again, the convergence tolerances will impact the shape, but don't crank the convergence tolerances down to 1e-20 or 1e-1000 and expect it to help. I see people try to do that all the time. Beyond a certain point, this just assures the code will never terminate until it hits the maximum number of iterations. Then they rerun the code, but increase the maximum number of iterations to 1e100, and then ask why their optimization does not terminate. (Yes, I've seen it done.)

Were it my problem to solve, I would use clustering ideas to resolve the sets of solutions. As I said, expect roughly two distinct solution clusters.

I would look at the set of solutions in the valley, because the valley should have lower function value than along the edge. Look to see their behavior. Compare the function value returned for each solution generated. Be careful. Use the exact points returned, not a 5 digit rounded approximation as you see it in the command window due to format short being active.
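One simple way to cluster a batch of returned solutions, sketched in Python with made-up data (the points and values below are hypothetical, chosen to mimic two solution clusters):

```python
import numpy as np

# Hypothetical results from several optimizer runs: solution points and
# the objective values returned alongside them.
sols = np.array([[1.00, 2.00], [1.01, 1.99], [0.99, 2.01],
                 [3.50, 0.50], [3.49, 0.51]])
vals = np.array([-5.0000, -5.0001, -4.9999, -2.0000, -2.0002])

# Greedy distance-based clustering: attach each point to the first cluster
# center within tol, otherwise start a new cluster.
tol = 0.1
centers, members = [], []
for p, v in zip(sols, vals):
    for c, m in zip(centers, members):
        if np.linalg.norm(p - c) < tol:
            m.append(v)
            break
    else:
        centers.append(p)
        members.append([v])

# Compare the best value in each cluster using the full-precision numbers,
# not the rounded values printed under format short.
best = [min(m) for m in members]
```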

I might compute the Hessian matrix at the solutions, that is, the matrix of second partial derivatives at the optimum. I'd look to see whether that matrix is effectively numerically singular, again to within reasonable tolerances defined by the arithmetic.
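Checking the Hessian numerically is also easy to sketch. A Python version, using a stand-in function whose valley is perfectly flat in one direction, so the singularity is built in:

```python
import numpy as np

# Central-difference Hessian of a scalar function at x (generic helper).
def num_hessian(f, x, h=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    E = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + h*E[i] + h*E[j]) - f(x + h*E[i] - h*E[j])
                       - f(x - h*E[i] + h*E[j]) + f(x - h*E[i] - h*E[j])) / (4*h*h)
    return H

# Flat-valley stand-in: f ignores x[1], so the Hessian at the minimum is
# singular along that direction.
f = lambda x: (x[0] - 1.0)**2
H = num_hessian(f, np.array([1.0, 0.0]))
eigs = np.linalg.eigvalsh(H)   # smallest eigenvalue ~ 0 -> numerically singular
```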

My guess is this problem is not that different from many classically posed optimization functions designed to test optimization tools (and given to students of optimization to learn from as homework assignments). The Rosenbrock function, for example, with a valley of its own.
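For reference, the 2-D Rosenbrock function and a run from its classic start point, in Python/SciPy:

```python
from scipy.optimize import minimize

# Rosenbrock "banana" function: a long, curved, nearly flat valley, with
# the global minimum at (1, 1) where the value is 0.
def rosen(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# The classic start point (-1.2, 1) forces the optimizer to follow the valley.
res = minimize(rosen, [-1.2, 1.0])
```

Watching an optimizer crawl along that valley is a good way to build intuition for why flat valleys produce slow convergence and clustered, nearly equivalent solutions.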

There are lots of issues here. Subtle ones, concerning convergence tolerances and issues with floating point double precision arithmetic. You can spend a lot of time in a somewhat quixotic task, along the way hopefully learning much about things like double precision arithmetic, about convergence tolerances, about optimization methods in general, about shoes and ships, and sealing wax, cabbages and kings. (Sorry, I started to channel Lewis Carroll there.) But do you really need the truly globally perfect solution, accurate to precisions like 1e-17 or finer yet? At some point you must accept there are other windmills more worth your time to tilt.
