L1 Optimization in MATLAB
Hi guys,
I am trying to solve a slightly modified L1 optimization problem in MATLAB:
argmin_x ||x - d||_2^2 + ||F*x||_1
where F is a low-rank matrix and d is a given vector; x is the optimization variable. Could you suggest the best way to solve this in MATLAB?
Accepted Answer
More Answers (1)
Sravan Karrena
on 21 Mar 2019
Edited: Walter Roberson
on 21 Mar 2019
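The code below relies on the standard epigraph reformulation of the L1 term: introduce auxiliary variables y and t (my naming, not from the original post), so the nonsmooth problem becomes a quadratic program that quadprog can handle. A sketch of the equivalence, dropping the constant d'd from the expanded quadratic:

```latex
\min_{x}\ \|x-d\|_2^2 + \|Fx\|_1
\;\Longleftrightarrow\;
\min_{x,\,y,\,t}\ x^\top x - 2d^\top x + \mathbf{1}^\top t
\quad \text{s.t.}\quad Fx - y = 0,\qquad y \le t,\qquad -y \le t.
```

At the optimum t_i = |y_i| = |(Fx)_i|, so the linear term 1't recovers ||Fx||_1; the quadratic and linear parts map directly onto the H, f, Aeq, and A matrices in the code.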
% Epigraph reformulation: stack z = [x; y; t] with y = F*x and t >= |y|,
% then solve with quadprog, which minimizes 0.5*z'*H*z + f'*z.
s  = size(F,1);   % rows of F = length of y and t
nx = size(F,2);   % length of x
f = [-2*d; zeros(s,1); ones(s,1)];           % linear term: -2*d'*x + sum(t)
H = blkdiag(2*eye(nx), zeros(s), zeros(s));  % quadratic term: x'*x
Aeq = [F, -eye(s), zeros(s)];                % equality: F*x - y = 0
beq = zeros(s,1);
A = [zeros(s,nx),  eye(s), -eye(s); ...      %  y - t <= 0
     zeros(s,nx), -eye(s), -eye(s)];         % -y - t <= 0
b = zeros(2*s,1);
[zopt, fval] = quadprog(H, f, A, b, Aeq, beq);
xopt = zopt(1:nx)   % recover x from the stacked variable
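As an independent sanity check, here is a sketch in Python/NumPy (not MATLAB) for the special case where F is diagonal, an assumption I am adding for illustration: the objective then separates coordinate-wise into (x_i - d_i)^2 + |f_i||x_i|, whose minimizer is soft-thresholding of d_i at |f_i|/2. A brute-force grid search should reproduce the closed form, and the same test points can be fed to the quadprog code above.

```python
import numpy as np

def soft_threshold_solution(d, f):
    """Closed-form minimizer of ||x - d||^2 + ||F x||_1 when F = diag(f):
    each coordinate is d_i soft-thresholded at |f_i| / 2."""
    return np.sign(d) * np.maximum(np.abs(d) - np.abs(f) / 2.0, 0.0)

# Tiny made-up example (values are illustrative, not from the post).
d = np.array([1.5, -0.2, 0.7])
f = np.array([1.0, 2.0, 0.5])

x_closed = soft_threshold_solution(d, f)

# Brute-force grid search over each separable 1-D objective.
grid = np.linspace(-3.0, 3.0, 120001)
x_grid = np.empty_like(d)
for i in range(len(d)):
    obj = (grid - d[i]) ** 2 + np.abs(f[i] * grid)
    x_grid[i] = grid[np.argmin(obj)]

# Closed form and grid search agree: [1.0, 0.0, 0.45] for this example.
assert np.allclose(x_closed, x_grid, atol=1e-3)
```

Note the second coordinate is driven exactly to zero because |d_2| < |f_2|/2, which is the sparsifying behavior the L1 penalty is usually chosen for.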