Reformulate a Constrained Linear Least-Squares Problem

I have a problem of the form

minimize ||C*x - d||^2   subject to A*x <= b

Normally you solve it like this:
% sample data to fit
u = (0:0.1:10)';
v = sin(2*pi*1/30*(u-0.5));
plot(u,v);
xlabel('u');
ylabel('v');
% degree-7 monomial design matrix
C = u.^[0 1 2 3 4 5 6 7];
d = v;
% inequality constraints: the polynomial must be nonnegative on [0,11]
u2 = (0:0.01:11)';
A = -u2.^[0 1 2 3 4 5 6 7];
b = zeros(size(A,1),1);
% normally just use lsqlin to solve this
x = lsqlin(C,d,A,b);
hold all
plot(u,C*x)
However, my problem is actually quite large and ill-conditioned because of the high degree of the polynomial I am fitting. The interior-point solver uses C'*C to solve this problem, which is a problem because it squares the condition number of C. The interior-point algorithm struggles to converge, while the older, now-deprecated active-set algorithm works well. The active-set algorithm wasn't large scale; it was medium scale, and it worked great. However, I don't have it available. What are some ways I can stabilize and solve this problem?
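To see the squaring concretely, you can compare the condition numbers directly (a quick check using the C from the code above; the exact values depend on your data):

kC  = cond(C)      % already large for a degree-7 monomial basis on [0,10]
kCC = cond(C'*C)   % approximately kC^2; this is what the interior-point solver works with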
Some ideas:
  • Maybe I can use the null space of A?
  • Maybe I can reformulate the problem for fmincon with the proper settings.
  • I know about orthogonal transformations for single-variable polynomials (see the sketch after this list). However, I work with multidimensional polynomials, and even though orthogonal decompositions and formulations exist, I don't know them or how to use them. If you propose this, please point me to how I can use them and where I can learn about and understand them.
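As a minimal sketch of the 1-D orthogonal-basis idea from the last bullet (assuming the data above: u in [0,10], constraint grid u2 in [0,11], degree 7), you can map the evaluation points onto [-1,1] and build Chebyshev columns T_k(t) = cos(k*acos(t)) instead of monomials; the resulting design matrix is far better conditioned than the monomial one:

t  = 2*u/11 - 1;                % map [0,11] onto [-1,1]
t2 = 2*u2/11 - 1;
Cc = cos((0:7).*acos(t));       % Chebyshev design matrix, columns k = 0..7
Ac = -cos((0:7).*acos(t2));     % same basis for the nonnegativity constraints
xc = lsqlin(Cc, v, Ac, zeros(size(Ac,1),1));
plot(u, Cc*xc)                  % note: xc are coefficients in the Chebyshev basis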

Accepted Answer

Steve Grikschat on 29 May 2018
You can try a trick from Lawson & Hanson: re-formulate it as a minimal-distance problem and solve via lsqnonneg. lsqnonneg is an active-set method, so if those work for you, then this might as well.
The minimum-distance problem looks like:

minimize_x  ||x||^2
s.t.        Abar*x <= bbar
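For reference, the transformation behind the code below: with the economy-size QR factorization C = Q*R, the objective ||C*x - d||^2 equals ||R*x - Q'*d||^2 plus a constant, so the substitution xbar = R*x - dbar (with dbar = Q'*d) turns the fit into minimizing ||xbar||^2, while the constraint A*x <= b becomes Abar*xbar <= bbar with Abar = A/R and bbar = b - Abar*dbar.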
Code:
% Transform into a minimal-distance problem
[Q,R] = qr(C,0);        % economy-size QR factorization
dbar = Q'*d;
Abar = A/R;             % i.e. A*inv(R)
bbar = b - Abar*dbar;
% Get the min-distance problem into lsqnonneg form
n = size(Abar,2);
E = [Abar'; bbar'];
f = [zeros(n,1); -1];
[~,~,residual] = lsqnonneg(E,f);
xbar = -residual(1:n)/residual(end);
% Map back to the original variables
x = R\(xbar+dbar);
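A quick sanity check after mapping back (a sketch using the variables defined above):

max(A*x - b)       % feasibility: should be <= a small tolerance
norm(C*x - d)      % compare with the unconstrained residual norm(C*(C\d) - d)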
6 comments
Steve Grikschat on 4 June 2018
Glad you found my typo and that this worked for you.
fmincon is going to be slower on structured problems like this because it doesn't assume any structure.
Jason Nicholson on 13 June 2018
Just got my copy of Solving Least Squares Problems (Classics in Applied Mathematics). I am working on understanding the details of this problem.


More Answers (1)

Nikhil Negi on 29 May 2018
Hello Jason,
Let me see if I understand your problem correctly: you want to minimize the function J subject to the inequality A*x <= 0. Here I'm assuming you want A*x <= b, where b is an mineq-by-1 zero vector; please correct me if I'm wrong.
I think you can use MATLAB's fmincon function for this problem very efficiently.
% Define the objective J as half the squared residual norm
J = @(x) 0.5*(rssq(C*x-d))^2;   % rssq = root-sum-of-squares; norm(C*x-d) works too
x0 = rand(8,1);                 % random starting point
mineq = size(A,1);              % number of inequality constraints
b = zeros(mineq,1);
x = fmincon(J,x0,A,b);
Use different random values of x0 because fmincon might return a local minimum (fmincon is generally used for convex problems, because otherwise we cannot be sure whether the minimum it finds is local or global). Compare J(x) for all the x's obtained; the minimum of these gives you your answer. This is not 100% certain, because there is a chance it always gets stuck in a local minimum and never reaches the global one, but if you do it for 100 or 1000 random x0's it should give you the correct answer, as in the sketch below.
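A minimal multi-start sketch of this idea (100 starts is an arbitrary choice):

% Run fmincon from several random starting points and keep the best result
nStarts = 100;
bestJ = inf;
for k = 1:nStarts
    [xk, Jk] = fmincon(J, rand(8,1), A, b);
    if Jk < bestJ
        bestJ = Jk;
        xBest = xk;
    end
end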
2 comments
Jason Nicholson on 29 May 2018
My feeling is that this problem is convex. The constraints are linear and the objective is a convex quadratic, so fmincon should give me the global minimum no matter the start point.
Jason Nicholson on 3 June 2018 (edited 3 June 2018)
This is a valid solution, but it seems slow. I will keep it in mind. Below is more information.
First of all, here is a better cost function that includes the gradient:
function [cost, gradient] = costFunction(x,C,d,CC,Cd)
% 0.5*||C*x - d||^2 and its gradient; CC = C'*C and Cd = C'*d are precomputed
residuals = C*x - d;
cost = 1/2*(residuals'*residuals);
if nargout > 1
    gradient = CC*x - Cd;
end
end
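Passing in CC = C'*C and Cd = C'*d once avoids recomputing them at every iteration; the gradient of 1/2*||C*x - d||^2 is C'*(C*x - d) = CC*x - Cd, which is exactly what the function returns.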
Next, we set the options for fmincon with the active-set solver:
options = optimoptions('fmincon', 'Algorithm', 'active-set', ...
'SpecifyObjectiveGradient', true, 'Display', 'iter-detailed');
or you can use the sqp solver:
options = optimoptions('fmincon', 'Algorithm', 'sqp', ...
    'SpecifyObjectiveGradient', true, 'Display', 'iter-detailed');
Next, we find a reasonable initial guess:
x0 = C\d;
Then we call fmincon:
x = fmincon(@(x) costFunction(x,C,d,C'*C,C'*d),x0,A,b, ...
[],[],[],[],[],options);
I get the following graph (for both active-set and sqp), which shows the fit matches the test points.
The only downside is that this is slow compared to the "re-formulate to a minimal distance problem" approach posted by Steve Grikschat.
It should be noted that the sqp and active-set solvers both give the same results, which is good news. sqp was faster on my test problem by a factor of 10: active-set took 127 seconds, sqp took 13 seconds, and the minimal-distance reformulation took 0.063 seconds.

