Solution to 8 non-linear equations.
I want to solve a system of non-linear equations. I have 8 equations with 8 unknowns. I only know that we can solve non-linear equations in MATLAB with the command fsolve, but I have not been able to solve them. I am attaching a document that gives all of these equations. I already asked for a solution to these equations last week, but I guess due to the complexity of the equations no one tried to solve them, so I have simplified the equations by replacing some terms with constants. I am providing two attachments: one is a PDF with all 8 equations, which are to be solved for the unknown variables x(1) through x(8), marked in red, with all other variables known; in the second attachment I have written all these equations in MATLAB. So I request you to go through these attachments and please give some idea of the syntax to solve these equations.
19 comments
Walter Roberson
on 2 May 2016
Please recheck the last equation in your .m. It appears to me that the first ] should not be there: it is the only ] that occurs in the middle of a line, and it is not balanced by any [.
Amit Darekar
on 2 May 2016
Edited: Amit Darekar
on 2 May 2016
Walter Roberson
on 2 May 2016
Note: the equations do not use k(6), k(7), k(8), k(9), k(10) but do use k(11), k(12), k(13). Also, you used k14, which is inconsistent with the k(14) form that would be expected.
A cleaner version is
syms a1 a2 a3 a4 a5 a6 a7 a8 g1 g2 g3 g4 g5 g6 g7 g8 k1 k11 k12 k13 k14 k2 k3 k4 k5 x1 x2 x3 x4 x5 x6 x7 x8
Q = @(v) sym(v); %converts to rational
eqn1 = a1*(g1-x1) == k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8;
eqn2 = a2*(g2-x2) == Q(.5)*k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+Q(4.5)*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+5*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1));
eqn3 = a3*(g3-x3) == k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8+3*k5*x6*x8;
eqn4 = a4*(g4-x4) == 3*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+4*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k5*x6*x8;
eqn5 = a5*(g5-x5) == k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k5*x6*x8;
eqn6 = a6*(g6-x6) == k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1));
eqn7 = a7*(g7-x7) == Q(.5)*k4*x1*x8+Q(4.5)*k5*x6*x8;
eqn8 = a8*(g8-x8) == k4*x1*x8+k5*x6*x8;
Amit Darekar
on 2 May 2016
Amit Darekar
on 2 May 2016
Walter Roberson
on 2 May 2016
This appears to be impractical to solve symbolically, as it quickly gets into quartics and quintics.
Do you have particular values for the constants?
Amit Darekar
on 2 May 2016
Edited: Walter Roberson
on 2 May 2016
Walter Roberson
on 2 May 2016
This is pretty much intractable. It involves terms of degree at least 12 just to calculate the simplest form of the second variable. Some of the expressions are over 3 million characters long. And that is with the numbers substituted: the general form is much worse.
Amit Darekar
on 2 May 2016
Walter Roberson
on 3 May 2016
Well, the syntax I used was
Q := v->convert(v,rational);
V := Q([a1 = 4.8994, a2 = 5.143, a3 = 4.09248, a4 = 6.264, a5 = 3.7584, a6 = 2.81184, a7 = 5.568, a8 = 4.176, g1 = 0.1e-4, g2 = 0.1e-2, g3 = 0, g4 = 0, g5 = 0.1e-1, g6 = 0.8e-2, g7 = .89, g8 = 0.1899e-1, k1 = 6.4721*10^33, k2 = 6.86457*10^37, k3 = 3.66776*10^44, k4 = 3.76152*10^18, k5 = 2.61193*10^25, k11 = 1612.404956, k12 = 6925.140687, k13 = 2.56699*10^17, k14 = 1.21445*10^11]);
eval([eqn1, eqn2, eqn3, eqn4, eqn5, eqn6, eqn7, eqn8], V);
From this you might deduce that I was not using MATLAB ;-)
The MuPAD equivalent would appear to be something like
Q := v->map(v, w->lhs(w)=rationalize(rhs(w),ApproximateFloats)[1]);
V := Q([a1 = 4.8994, a2 = 5.143, a3 = 4.09248, a4 = 6.264, a5 = 3.7584, a6 = 2.81184, a7 = 5.568, a8 = 4.176, g1 = 0.1e-4, g2 = 0.1e-2, g3 = 0, g4 = 0, g5 = 0.1e-1, g6 = 0.8e-2, g7 = .89, g8 = 0.1899e-1, k1 = 6.4721*10^33, k2 = 6.86457*10^37, k3 = 3.66776*10^44, k4 = 3.76152*10^18, k5 = 2.61193*10^25, k11 = 1612.404956, k12 = 6925.140687, k13 = 2.56699*10^17, k14 = 1.21445*10^11]);
eval(subs([eqn1,eqn2,eqn3,eqn4,eqn5,eqn6,eqn7,eqn8],V));
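For readers following along without Maple or MuPAD, the role of Q here (converting floating-point constants to exact rationals) can be illustrated with Python's fractions.Fraction. The constant 4.8994 is a1 from the thread, and 24497/5000 is the rational form that shows up in the Maple equations later in this thread; this snippet is only an analogue, not part of the original workflow:

```python
from fractions import Fraction

# 0.5 is exactly representable in binary floating point.
print(Fraction(0.5))                              # 1/2

# 4.8994 is not; Fraction(4.8994) is the exact value of the nearest double,
# and limit_denominator recovers the intended simple rational.
print(Fraction(4.8994).limit_denominator(10**6))  # 24497/5000
```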
Amit Darekar
on 3 May 2016
Walter Roberson
on 3 May 2016
With those constants, there are a number of places the equations hold. One of them is near
[-7281.96341633410884, -0.0386234138558410323, 94448.6657176205044, 7789.46067172915264, -17013.5102649819964, -0.000215706547689383183, -19193.1654243540615, -1.30250537199668123e-18]
and others can be quite distant.
I determined these values by minimizing the sum of squares of the differences between the left and right sides of the equations. The MathWorks minimizers I tried did not do very well; I used a tool of my own that I have been developing.
Amit Darekar
on 3 May 2016
Walter Roberson
on 4 May 2016
The area near g1 to g8 is not a good fit at all. I ran tens of millions of random locations in that range, and the residues were pretty poor.
Amit Darekar
on 4 May 2016
Edited: Walter Roberson
on 5 May 2016
Walter Roberson
on 5 May 2016
I find essentially identical minima at
[1363.93463893202, -293.436714960208, -3518.50682986549, -3373.75768742187, -5311.7843252732, -0.000196646724068822, -3913.20292170346, -1.30154906686073e-18]
[1119.56884188313666, -293.142401207111845, -2933.1647785108853, -2800.20300617299426, -4356.03889870663443, -0.000161454896514545384, -3053.11255938343857, -1.30134033707433513e-18]
and there were indications that there were more minima. My tests did show, though, that it was not a simple matter of it being periodic in any one variable.
Amit Darekar
on 5 May 2016
Walter Roberson
on 5 May 2016
Each equation has a left-hand side and a right-hand side. If you take (lhs)-(rhs) then you get the amount of mismatch between the two sides: if the two sides were equal then the difference would be 0. Now take the sum of squares of these partial residues to get an expression for the total residue. Again, if you had a perfect match on all of the equations then the total residue would be 0.
Now take this sum-of-squares residue formula and minimize its output over the input parameter space. A perfect match would give 0, and it is not possible for negative totals to occur if all of the terms are real-valued. Your best match, the location of a root, is the place where the sum-of-squares residue is as small as you can get.
This is a standard formulation for solving such problems, non-linear least squares.
Finding the global minimum is not easy. There are various strategies you can use, which are implemented by the Global Optimization Toolbox. I wrote my own global minimizer instead, that combines brute force with fminsearch. I am continuing to improve the code, and I am not ready to release it.
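The combination described above (a residue function plus brute-force restarts feeding a local simplex search) can be sketched outside MATLAB as well. Below is a minimal Python/SciPy analogue on a toy two-equation system; the toy equations, sampling range, and restart count are my own illustrative choices, not taken from this thread, and SciPy's Nelder-Mead plays the role of fminsearch:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the 8-equation system: each r is (lhs - rhs) of one equation.
def residue(x):
    r1 = x[0]**2 + x[1] - 2.0   # mismatch of equation 1
    r2 = x[0] - x[1]**3         # mismatch of equation 2
    return r1**2 + r2**2        # sum-of-squares total residue

# Brute-force restarts, each refined by a Nelder-Mead simplex search.
rng = np.random.default_rng(0)
best = None
for _ in range(50):
    x0 = rng.uniform(-10.0, 10.0, size=2)
    res = minimize(residue, x0, method='Nelder-Mead',
                   options={'xatol': 1e-10, 'fatol': 1e-12, 'maxiter': 2000})
    if best is None or res.fun < best.fun:
        best = res
print(best.x, best.fun)  # a residue near 0 marks a probable root
```

A residue that refuses to drop toward 0 over many restarts is the signal (as in this thread) that either there is no root in the scanned region or the region is too small.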
Amit Darekar
on 5 May 2016
Answers (2)
Walter Roberson
on 3 May 2016
Getting you further: if you have the symbolic toolbox then:
syms a1 a2 a3 a4 a5 a6 a7 a8 g1 g2 g3 g4 g5 g6 g7 g8 k1 k11 k12 k13 k14 k2 k3 k4 k5 x1 x2 x3 x4 x5 x6 x7 x8
Q = @(v) sym(v); %converts to rational
eqn1 = a1*(g1-x1) == k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8;
eqn2 = a2*(g2-x2) == Q(.5)*k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+Q(4.5)*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+5*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1));
eqn3 = a3*(g3-x3) == k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8+3*k5*x6*x8;
eqn4 = a4*(g4-x4) == 3*k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+4*k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k5*x6*x8;
eqn5 = a5*(g5-x5) == k2*x2*x5/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+3*k5*x6*x8;
eqn6 = a6*(g6-x6) == k3*x2*x6/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1));
eqn7 = a7*(g7-x7) == Q(.5)*k4*x1*x8+Q(4.5)*k5*x6*x8;
eqn8 = a8*(g8-x8) == k4*x1*x8+k5*x6*x8
eqns = [eqn1, eqn2, eqn3, eqn4, eqn5, eqn6, eqn7, eqn8];
va1 = Q(4.8994);
va2 = Q(5.143);
va3 = Q(4.09248);
va4 = Q(6.264);
va5 = Q(3.7584);
va6 = Q(2.81184);
va7 = Q(5.568);
va8 = Q(4.176);
vg1 = Q(0.00001);
vg2 = Q(0.001);
vg3 = Q(0);
vg4 = Q(0);
vg5 = Q(0.01);
vg6 = Q(0.008);
vg7 = Q(0.89);
vg8 = Q(0.01899);
vk1 = Q(6.4721*10^33);
vk2 = Q(6.86457*10^37);
vk3 = Q(3.66776*10^44);
vk4 = Q(3.76152*10^18);
vk5 = Q(2.61193*10^25);
vk11 = Q(1612.404956);
vk12 = Q(6925.140687);
vk13 = Q(2.56699*10^17);
vk14 = Q(1.21445*10^11);
neweqns = subs(eqns, [a1 a2 a3 a4 a5 a6 a7 a8 g1 g2 g3 g4 g5 g6 g7 g8 k1 k11 k12 k13 k14 k2 k3 k4 k5], [va1 va2 va3 va4 va5 va6 va7 va8 vg1 vg2 vg3 vg4 vg5 vg6 vg7 vg8 vk1 vk11 vk12 vk13 vk14 vk2 vk3 vk4 vk5]);
60 comments
Amit Darekar
on 3 May 2016
Walter Roberson
on 6 May 2016
Amit, see the attached file.
The result, Frv, will be a function handle suitable for passing to any of the optimization routines; it calculates a residue given a vector of 8 values.
Walter Roberson
on 6 May 2016
The "solution" will be any point at which the residue is calculated to be 0 to within round-off error. However, figuring out what reasonable round-off error would be is a bit involved.
Let me turn it around another way: your constants are specified with anywhere from 1 to 10 significant digits, and your constants span a range of 10^49. How much accuracy can you possibly be expecting?
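To make that concern concrete, here is a quick Python check (my own illustration, using the k14 magnitude from the constants above): a double carries roughly 16 significant digits, so against a term of size ~1.2e11, any contribution below about 1e-5 is simply lost.

```python
import math

k14 = 1.21445e11              # the k14 constant scale from this thread
print(math.ulp(k14))          # spacing between adjacent doubles near k14 (~1.5e-5)
print(k14 + 1e-6 == k14)      # True: a 1e-6 contribution vanishes entirely
```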
Amit Darekar
on 6 May 2016
Walter Roberson
on 7 May 2016
I adapted my tool to be able to search for multiple minima, with a target residue. I find that there are deep minima (that is, probable roots) at
319.882540966194 -288.853286010057 -1014.84828080771 -921.426619350281 -1227.3536225538 -4.62901338251226e-05 -238.469571605636 -1.29842784708347e-18
1119.5688418853 -293.142401207522 -2933.16477851641 -2800.20300617832 -4356.03889871522 -0.000161454896514856 -3053.11255939107 -1.30134033707435e-18
1363.93463893113 -293.436714959978 -3518.5068298632 -3373.75768741967 -5311.78432526975 -0.000196646724068695 -3913.20292170032 -1.30154906686071e-18
Walter Roberson
on 8 May 2016
When I check for an absolute difference in coordinates of at least 1E-5, then my tool pulls out at least 292 points. Each of the points is close to one of the three major groups. I have to go up in tolerance to about 2.07E-3 to separate the groups. I should clarify that each of those points has a residue (as defined above) of no more than 1E-11. In other words, there are entire surfaces of "almost balancing" of the equations; or possibly even each component of the "true" root has a bit of wiggle room around it where the equations are "pretty nearly" balanced.
The tool I have been designing was, I see now, constructed on the assumption that it was looking for a single global minimum, so it does not notice when it has refined the situation enough that it plausibly has disconnected "basins" of global (or near-global) minima. I need to think more about better ways to handle that situation. It is easy for us to see after the fact, but I am not sure at the moment that there are good ways to detect the basins.
Walter Roberson
on 8 May 2016
Using a residue target of 1E-15, I found over 6000 points, which group into the same three as above, with spread about 6.4e-6. This suggests that if you want your tolerance target in position to be about 1E-5, you would need to use a residue target somewhere around 1E-14. Frankly, that sounds deep into the territory of numeric error to me, but I have not completed my analysis of the effects of numeric error (the calculation tends to fill up my memory :( )
Walter Roberson
on 8 May 2016
Edited: Walter Roberson
on 8 May 2016
GlobalSearch with fmincon did not do a good job on this, at least not with the minimization algorithm I tested with.
Amit Darekar
on 9 May 2016
Walter Roberson
on 9 May 2016
fminsearch(Frv, rand(1, 8)*10000)
For example.
Walter Roberson
on 9 May 2016
fminunc(Frv, rand(1, 8)*10000)
Amit Darekar
on 9 May 2016
Amit Darekar
on 9 May 2016
Walter Roberson
on 9 May 2016
The values of the parameters range from about -5300 to +1400, but we did not know that ahead of time -- we had no idea ahead of time that they could be that negative, and if the equations change a little or the values of the constants change a little, the locations could change significantly. So to be "fair" we need to scan a large space, such as +/- 10000.
I have enclosed my current scripts to create the equations. Inside "if false" blocks there are calls to various minimizers, showing how to set up the structures and make the call. I have tested all of those... they just don't happen to produce useful results.
If your version of MATLAB is complaining about not getting an equation then it could mean that you are using R2011a or earlier (I think it was); those versions treated symbolic A == B as something to resolve to boolean immediately instead of as setting up an equation. If that is what you have then, in each line that has something of the form
variable = expression1 == expression2
convert it to
variable = (expression1) - (expression2)
so for example,
n1 = a1*(g1-x1) == k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8;
becomes
n1 = (a1*(g1-x1)) - (k1*x1*x2/((k11*x1+k12*x2+1)^2*(k13*x1^2*x5^2+1)*(k14*x8+1))+k4*x1*x8);
and replace
ToExpr = @(expr) SE('lhs', expr) - SE('rhs', expr);
with
ToExpr = @(expr) expr;
and then everything from
RR = arrayfun(@(expr) ToExpr(expr).^2, neweqns, 'Uniform', 0);
onward would stay the same. Because that was the purpose of that section of code anyhow: converting things from the form A == B to the form (A) - (B)
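The same A == B to (A) - (B) conversion, followed by turning the symbolic residue into a callable function (the matlabFunction step mentioned later in this thread), has a direct analogue in Python's sympy; the two toy equations here are my own illustration, not the thread's system:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Two toy equations written in the A == B style used in the thread.
eq1 = sp.Eq(x1**2 + x2, 2)
eq2 = sp.Eq(x1, x2**3)

# Convert each A == B to (A) - (B), square, and sum into one total residue.
residue_expr = sum((e.lhs - e.rhs)**2 for e in (eq1, eq2))

# lambdify plays the role of matlabFunction: symbolic expression -> function.
Frv = sp.lambdify((x1, x2), residue_expr)
print(Frv(1.0, 1.0))  # (1, 1) solves both toy equations, so the residue is 0.0
```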
Amit Darekar
on 10 May 2016
Walter Roberson
on 10 May 2016
Under the constraint of positive, the best I have found so far is
[1.00000030104776094, 5.15642964104606959e-18, 6.445536160958264e-14, 1.46351906890768065e-14, 99.9999992205651722, 18.170949120290075, 889.999999259677907, 1.06301874173211104e-40]
This is, however, a residue still in the 1380 range, which is not all that good. With negative values permitted I can get into the residue 1E-22 range. The only way to interpret the 1380-ish range is that for a proper solution some values really need to be negative, and the minimizer is bashing its metaphorical brains out against the barrier at 0.
Walter Roberson
on 10 May 2016
Allowing down to -1 allows the residue to drop to about 705.
[1.07212887692108882, -7.48545078265310845e-11, 8.91486700698497536, -0.99999999999999778, 125.70981614655355, -2.58562396186333098e-05, 903.54995873455789, 2.48419697195922868e-20]
Amit Darekar
on 10 May 2016
Walter Roberson
on 11 May 2016
I showed you some sample syntaxes for using the function:
fminsearch(Frv, rand(1, 8)*10000)
and
fminunc(Frv, rand(1, 8)*10000)
and the .m I uploaded had a whole series of "if false" blocks that show actual working calls for using the functions with the Global Optimization Toolbox, including ga(), and fmincon with MultiStart and GlobalSearch.
Walter Roberson
on 11 May 2016
With the changed version, the places that look like solutions are in 3 groups near
319.862061098747 -288.868957096923 1014.81267692092 -582.23765692593 -1227.27809982177 -4.62871899069394e-05 2018.39787490468 -1.29842789540784e-18
1119.4910550871 -293.127673565497 2932.96639651961 -2456.20110771685 -4355.73048977154 -0.000161443696299899 4832.83866523531 -1.30134020783254e-18
1363.87921765495 -293.422351405688 3518.3623491881 -3029.48564718766 -5311.56389771774 -0.000196638749512495 5693.0077043954 -1.3015489111825e-18
Notice that a number of these are still negative. I did not bother to scan for all-positive solutions as it was obvious to me that no strictly-positive solution would be nearly as good.
If you changed the sign for #2, #4, and #5 as well, then that would take care of all of the "definitely negative" variables, leaving #6 and #8 as "near 0" and plausibly could be constrained to non-negative and still get a useful fit.
Amit Darekar
on 11 May 2016
Walter Roberson
on 11 May 2016
Edited: Walter Roberson
on 11 May 2016
Each of the "if false" is a self-contained example. None of them does at all well.
fminsearch is less likely to get caught in local minima than some of the others are, but still, if you do not happen to try it somewhere "nearby" a root, it might not do all that well.
I ran the calculation with the sign changed on x2, x4, x5. With that change I find that there are minima near
17.2979712260506 3836.95137368288 3262.9432497545 -2069.79485508912 1122.95756776566 -2.72701988859792e-06 953.468712285631 -1.22709693489669e-18
319.863832849098 288.867640006916 1014.81574141473 582.242996949238 1227.28492717406 -4.62874542633426e-05 2018.40426454802 -1.29842788503071e-18
1119.49019693062 293.127514775399 2932.96311620136 2456.19861788612 4355.72653945754 -0.000161443534776783 4832.83481788872 -1.30134024151965e-18
1363.87564262692 293.421419626657 3518.35367543979 3029.47845895869 5311.550197454 -0.000196638256889932 5692.99581919993 -1.30154897473227e-18
The first of those, with negative x4, is possibly not as good a match as the others; I would need to do further testing on that. There would, though, certainly be three solutions for which at most x6 and x8 are slightly negative; I did not yet try forcing non-negative for those.
I would suggest to you that you consider something along the lines of
N = 1000000;
L = [500*rand(N,1), 500*rand(N,1), 1500*rand(N,1), 750 * rand(N,1), 1500 * rand(N,1), rand(N,1), 2500*rand(N,1), rand(N,1)];
R = Frv(L);
[sortR, sortidx] = sort(R);
L = L(sortidx,:);
bestR = sortR(1); bestL = L(1,:);
That will probably not give you a great starting point, but it will be fast, and it will give you a fair sampling of the search space and will keep you from missing anything "obvious".
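That pre-sampling suggestion translates directly to a vectorized NumPy sketch; the two-variable residue and sampling ranges below are illustrative stand-ins of my own (the real Frv takes 8 columns):

```python
import numpy as np

# Vectorized toy residue: each row of L is one candidate point (x1, x2).
def Frv(L):
    x1, x2 = L[:, 0], L[:, 1]
    return (x1**2 + x2 - 2.0)**2 + (x1 - x2**3)**2

N = 100_000
rng = np.random.default_rng(1)
# Per-variable sampling ranges, in the spirit of the MATLAB snippet above.
L = np.column_stack([2.0 * rng.random(N), 2.0 * rng.random(N)])
R = Frv(L)
sortidx = np.argsort(R)            # order candidates by residue, best first
bestR, bestL = R[sortidx[0]], L[sortidx[0]]
print(bestL, bestR)                # a cheap, rough starting point for a minimizer
```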
Amit Darekar
on 11 May 2016
Walter Roberson
on 12 May 2016
I am still analyzing. I find that if you solve the first two equations for x1 and x2, you immediately come out with a 7th order equation. The first 4 together, solved for x1, x2, x3, x4, also appear to be 7th order. The last 3 equations together, for x5, x6, x7, appear to have a pair of solutions. It would appear, then, that there could be up to 14 real-valued solutions. With the 7th order equation, the number of real-valued solutions for the first equations is going to be odd.
I also observe that the 5th equation can induce an 11th order polynomial...
I am continuing to investigate.
Amit Darekar
on 12 May 2016
Walter Roberson
on 12 May 2016
I don't know yet. The calculations use a lot of resources. I do not know yet how many real-valued solutions there are.
Amit Darekar
on 12 May 2016
Walter Roberson
on 12 May 2016
A division by 0 on that operation makes no sense to me. I think you might need to reinstall the toolbox and
rehash toolboxcache
afterwards.
Amit Darekar
on 12 May 2016
Walter Roberson
on 12 May 2016
That line is fine.
Your .jpg shows you did not reach that line, that the code failed in the "syms" command while trying to do a (:) reshape operation. That makes no sense to me, suggesting that you have a corrupt symbolic toolbox installation. (Is it possible you installed a version that did not match your version of MATLAB ?)
Amit Darekar
on 13 May 2016
Walter Roberson
on 13 May 2016
fminsearch_options = struct('MaxFunEvals', 10000, 'MaxIter', 5000);
x = fminsearch(Frv, rand(1, 8)*10000, fminsearch_options);
Amit Darekar
on 13 May 2016
Edited: Walter Roberson
on 13 May 2016
Walter Roberson
on 13 May 2016
Possibly. Give the command
format long g
and run again. I suggest
[x, fval] = fminsearch(Frv, rand(1, 8)*10000, fminsearch_options)
and then you will also see the residue that was reached.
Amit Darekar
on 13 May 2016
Amit Darekar
on 13 May 2016
Amit Darekar
on 13 May 2016
Edited: Walter Roberson
on 13 May 2016
Walter Roberson
on 13 May 2016
Yes, fminsearch uses random processes. So do ga(), particleswarm, and simulated annealing. You can use calls to rng() to set a particular random number seed if you need to recreate a particular location.
Notice how poor your fval is. You are caught in a local minimum that is far away from the set of roots. The location you have reached is not suitable for use in whatever you are doing.
fminsearch is not a global minimizer. The routines in the Global Optimization Toolbox, such as MultiStart and GlobalSearch and ga and particleswarm, are intended to have more chance of finding a global minimum.
Finding global minima is difficult; even in cases such as yours where in theory you could calculate all of the roots, it is often not feasible to do so because of time or memory limitations.
Amit Darekar
on 13 May 2016
Edited: Amit Darekar
on 13 May 2016
Walter Roberson
on 13 May 2016
Those "if false" blocks I had showed a number of different attempts at global optimization. Basically, you try them all, with different parameters.
I have not tried simulated annealing yet. That has the potential to be the best, but it is also usually the slowest.
Amit Darekar
on 13 May 2016
Edited: Amit Darekar
on 13 May 2016
Walter Roberson
on 16 May 2016
Different parameters would be things like different Algorithm specifications for fmincon. Or if you do not get a nice solution for MultiStart, then increasing the number of starts requested.
Amit Darekar
on 16 May 2016
Walter Roberson
on 16 May 2016
There is an algebraic approach.
Take the residue formula, Fra . Differentiate with respect to x7 and solve for x7. Substitute the result into Fra . Differentiate the result with respect to x4, solve for x4, substitute back in. Differentiate the result with respect to x3, solve for x3, substitute back in. Differentiate with respect to x6, solve for x6, substitute back in.
At this point you have minimized (one hopes) the residue with respect to x7, x4, x3, x6. Things get tougher now.
Go back to the list of equations and substitute in the x7, x4, x3, x6 you have found, and simplify. You will find that equations 1, 5, 6, and 8 have become identities (trivially true), so you can remove those from consideration for the next part, leaving equations 2, 3, 4, and 7. Solve the first of those, the original equation #2, for x5: you will get two solutions that are root-free. (Note: solving that equation for x2, x5, or x8 will get you two root-free solutions; solving for x1, or solving any of the other equations for anything, will get you something that requires roots.)
You are now down to three equations, the original #3, 4, and 7, and three variables, x1, x2, and x7.
You can substitute the new x5 into those and form a new residue, the sum of squares of the left side minus the right side for each equation. Your cleanest approach from there is to solve for x2. That will get you a polynomial of degree 10 that is only about a megabyte long. (Solving for either of the other two is much worse!)
You can now take one of the roots of that degree-10 polynomial and substitute it in, but you are not going to be able to solve for either of the remaining two variables; the equation has gotten too complicated.
At this point you should probably use matlabFunction on the new residue (the one in x1, x2, x7) and start looking for roots.
Unfortunately, as soon as you do that, you will find that the intermediate numbers involved are so large that they become infinity as far as double precision is concerned, and you cannot proceed numerically from that.
This is as far as I got algebraically. What I would do next is step back down one variable at a time, such as to the residue implied by having solved for x7, x4, x3, x6, leaving equations 2, 3, 4, and 7 to be solved for x1, x2, x5, x7. Possibly that would not overflow as badly.
The idea is to algebraically solve for as many equations as feasible, then search for roots over the reduced number of variables, and then back-substitute for the other variables.
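The eliminate, reduce, and back-substitute strategy can be sketched in miniature with sympy; this toy two-equation pair is my own stand-in, chosen only so each step stays small:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Toy pair standing in for a subset of the thread's equations.
eq1 = sp.Eq(x**2 + y, 2)
eq2 = sp.Eq(x, y**3)

# Step 1: solve one equation for a variable that is easy to eliminate.
y_of_x = sp.solve(eq1, y)[0]        # y = 2 - x**2

# Step 2: substitute into the remaining equation, reducing the system.
reduced = eq2.subs(y, y_of_x)       # x == (2 - x**2)**3, a degree-6 equation

# Step 3: find the roots of the reduced equation, keep the real ones,
# then back-substitute to recover the eliminated variable.
x_roots = sp.solve(reduced, x)
solutions = [(r, y_of_x.subs(x, r)) for r in x_roots if r.is_real]
print(solutions)  # includes the exact pair (1, 1)
```

Even in this toy, one elimination already produces a degree-6 polynomial, which hints at how quickly the full 8-variable system blows up.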
Amit Darekar
on 16 May 2016
Walter Roberson
on 16 May 2016
I have been using Maple; MATLAB struggles with multiple roots sometimes.
Walter Roberson
on 16 May 2016
Edited: Walter Roberson
on 18 May 2016
In Maple notation:
EQ1 := Matrix(1, 8, [[24497/5000-24497*x1*(1/5000) = 3761520000000000000*x1*x8+6472099999999999310700850599428096*x1*x2/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), 5143/100-5143*x2*(1/1000) = 3236049999999999655350425299714048*x1*x2/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2)+308905649999999974575817119877992284160*x2*x5/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2)+1833880000000000310943966921490318344427929600*x2*x6/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), -12789*x3*(1/3125) = -3761520000000000000*x1*x8-78357900000000012906921984*x6*x8-6472099999999999310700850599428096*x1*x2/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2)-205937099999999983050544746585328189440*x2*x5/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2)-1100328000000000186566380152894191006656757760*x2*x6/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), -783*x4*(1/125) = 78357900000000012906921984*x6*x8-205937099999999983050544746585328189440*x2*x5/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2)-1467104000000000248755173537192254675542343680*x2*x6/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), 9396/25-2349*x5*(1/625) = 
78357900000000012906921984*x6*x8+68645699999999994350181582195109396480*x2*x5/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), 35148/625-8787*x6*(1/3125) = 366776000000000062188793384298063668885585920*x2*x6/((121445000000*x8+1)*(256699000000000000*x1^2*x5^2+1)*(7091431991222599*x1*(1/4398046511104)+7614272709341177*x2*(1/1099511627776)+1)^2), 123888/25-696*x7*(1/125) = -1880760000000000000*x1*x8-117536850000000019360382976*x6*x8, 47241/6250-522*x8*(1/125) = 3761520000000000000*x1*x8+26119300000000004302307328*x6*x8]]):
EQ2 := convert(simplify(EQ1, size), list):
EQ2L := simplify(map(lhs-rhs, convert(EQ2, list)), size):
F := simplify(add(map(v->v^2, EQ2L)), size):
Fx7 := simplify(solve(diff(F,x7),x7),size):
F_x7 := simplify(eval(F, x7 = Fx7), size):
Fx4 := simplify(solve(diff(F_x7, x4), x4), size):
F_x47 := simplify(eval(F_x7, x4 = Fx4), size):
Fx3 := simplify(solve(diff(F_x47, x3), x3), size):
F_x347 := simplify(eval(F_x47, x3 = Fx3), size):
Fx6 := simplify(solve(diff(F_x347, x6), x6), size):
F_x3467 := simplify(eval(F_x347, x6 = Fx6), size):
NewL := eval(eval(eval(eval(EQ2, x7 = Fx7), x4 = Fx4), x3 = Fx3), x6 = Fx6):
After that things start to get rougher.
Amit Darekar
on 17 May 2016
Walter Roberson
on 17 May 2016
I am still learning the MATLAB equivalents to some of those parts of Maple.
Amit Darekar
on 17 May 2016
Walter Roberson
on 17 May 2016
You need to rethink your approach, or recheck your equations. I doubt you could get away with less than half an hour of computation each time, and probably more like a couple of hours.
To start with you need to address that it is looking likely that there are 8 to 20 real-valued solution sets, and you have not given any way to pick between them other than hoping for positive solutions (which might not be possible.)
Then, the next time around, which of the constants are likely to have changed and which are likely to stay the same? That might influence the order of solution, as some orders might make incremental solutions easier than others.
Amit Darekar
on 18 May 2016
Amit Darekar
on 19 May 2016
Walter Roberson
on 19 May 2016
jacobian()
Walter Roberson
on 19 May 2016
I have been working with simulannealbnd() tonight for this problem, figuring out how it works, determining the parameters to use. I did have some success, but I also encountered some bugs. The usual options turn out to be hopeless for this; you do not really start to get anywhere until you include calls to a hybrid resolution routine such as fminsearch.
I used the code I posted earlier to construct the four-variable reduced list of equations, and I converted that to a residue function. With that in hand, I find at least 7 roots.
Amit Darekar
on 19 May 2016
Walter Roberson
on 19 May 2016
You cannot get a fast output for that operation.
However:
G = jacobian([f1, f2, f3, f4, f5, f6, f7, f8], [x1, x2, x3, x4, x5, x6, x7, x8]);
ABC = sym('G%d%d', [8 8]);
ABC(find(G==0)) = 0;
ABCinv = inv(ABC);
SV = symvar(ABCinv);
Now ABCinv contains the pattern of the inverse. You can now
subs(ABCinv, SV(1:4), [G(1,1), G(1,2), G(1,5), G(1,8)])
and so on.
Unsimplified, the terms are about 1.5 megabytes long each.
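For comparison, sympy exposes the same jacobian() operation; the two toy residual functions below are my own illustration rather than the thread's f1..f8:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Toy residual functions standing in for f1..f8 in the thread.
f1 = x1**2 + x2 - 2
f2 = x1 - x2**3

# Mirrors G = jacobian([f1, ...], [x1, ...]) from the Symbolic Math Toolbox.
G = sp.Matrix([f1, f2]).jacobian([x1, x2])
print(G)  # Matrix([[2*x1, 1], [1, -3*x2**2]])
```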
Amit Darekar
on 20 May 2016
Walter Roberson
on 24 May 2016
Edited: Walter Roberson
on 24 May 2016
Working further with the 4-variable reduced version (obtained by solving for the 4 variables that are easy to eliminate), I did more work with simulannealbnd() with a hybrid function specified. It turns out that, in current versions, the hybrid function result is not used to inform the code about the direction to move. I created a support case and asked for an enhancement; perhaps it will show up some day.
I continued on to make a private copy of the simulannealbnd routines and added the enhancement of having the hybrid call inform the code about the direction to move. With that in place, I had fair success in starting from random points and getting decently fast convergence to a solution when the variables are in the right quadrant. It is not an all-in-one solution, but it might be possible to connect it with a MultiStart process. I have a better understanding of how simulannealbnd works now, but I have no idea yet what parameters to use to allow it to converge reasonably without the hack I put in.
So far I have identified at least 9 different sets of roots, including two that are very close together but not identical. I might have identified a 10th as well, close to one of the other ones, but it is possible that it is the same "basin" and just not as optimized as much. One of the solutions that I discovered has x8 in a very different place than all of the other solutions.
So far only one of the solutions for x2, x3, x5, x8 has all of the quantities positive; I have not back-substituted to figure out whether the other values would also come out positive.
Is it still the case that you want all of the quantities to be constrained to be non-negative? That might prove difficult to find -- the combination I perhaps found is the only one I have seen so far, and it has shown up only once; x8 in particular really wants to be negative.
My next step would be to investigate patternsearch() to see if it can do something for this situation.
Amit Darekar
on 24 May 2016
Alex Sha
on 29 Nov 2019
for the case:
a1= 4.8994;
a2=5.143;
a3= 4.09248;
a4= 6.264;
a5= 3.7584;
a6= 2.81184;
a7= 5.568;
a8=4.176;
g1=0.00001;
g2=0.001;
g3=0;
g4=0;
g5=0.01;
g6=0.008;
g7=0.89;
g8= 0.01899;
k1=6.4721*10^33;
k2=6.86457*10^37;
k3=3.66776*10^44;
k4=3.76152*10^18;
k5=2.61193*10^25;
k11=1612.404956;
k12=6925.140687;
k13=2.56699*10^17;
k14=1.21445*10^11;
results:
x1: 1.00000104645732E-5
x2: 2.7660633694392E-41
x3: -0.058886670079626
x4: -0.0386368328164341
x5: -0.0532999996306495
x6: 0.00763418928185147
x7: 0.825908750010556
x8: 3.97704950889303E-25
for the case:
a1 = 4.8994;
a2 = 5.143;
a3 = 4.09248;
a4 = 6.264;
a5 = 3.7584;
a6 = 2.81184;
a7 = 5.568;
a8 = 4.176;
k1 = 6.4721*10^33;
k2 = 6.86457*10^37;
k3 = 3.66776*10^44;
k4 = 3.76152*10^18;
k5 = 2.61193*10^25;
k11 =1612.404956;
k12 =6925.140687;
k13 =2.56699*10^17;
k14 =1.21445*10^11;
g1 = 1;
g2 = 10;
g3 = 0;
g4 = 0;
g5 = 100;
g6 = 20;
g7 = 890;
g8 = 1.81;
The results:
x1: 1.00000025803197
x2: 1.01251539939794E-17
x3: 13.0809883376179
x4: 10.1883258496752
x5: 93.9666637847973
x6: 16.341900663404
x7: 896.108749942814
x8: 1.77082221063051E-26