Optimization problem giving inconsistent answers upon changing initial guesses
I want to solve the following optimization problem from Optimization of Chemical Processes:
Here is what I have done so far:
- Assume wcp is constant for all streams
- Constraints can be derived from the heat equations:
Where dT_cold & dT_hot are the changes in temperature of the cold and hot streams in a heat exchanger. This is better visualized in the following image:
- The objective function of the problem seems pretty straightforward:
- Some linear equality constraints can be made as follows:
From the heat equation dT_cold = dT_hot, I got:
- The inequality linear constraints can be set as:
- Lower and upper bounds for the variables are somewhat redundant given the constraints; either way, I specified them as:
- Lastly, some nonlinear constraints were specified from the heat equations as:
I could have used dT_cold or dT_hot, but those should be the same if the other constraints hold.
I coded that into Matlab as follows:
% Initial guesses (a = A1, A2, A3, T1, T2, T3, T4 & T5):
a0 = [500, 1000, 2000, 190, 300, 450, 300, 200];
% Linear inequality constraints:
A = [0, 0, 0, 1, -1, 0, 0, 0;... % T1<T2
0, 0, 0, 0, 1, -1, 0, 0;... % T2<T3
0, 0, 0, 1, 0, 0, -1, 0]; % T1<T4
b = [0; 0; 0];
% Linear equality constraints (dTcold=dThot):
Aeq = [0, 0, 0, 1, 0, 0, 0, 1;...
0, 0, 0, -1, 1, 0, 1, 0;...
0, 0, 0, 0, -1, 1, 0, 0];
beq = [400; 400; 100];
LB = [0, 0, 0, 100, 100, 100, 100, 100];
UB = [Inf, Inf, Inf, 300, 400, 600, 400, 300];
[a, AT] = fmincon(@(a) a(1)+a(2)+a(3), a0, A, b, Aeq, beq, LB, UB, @nlconf);
Where the non linear constraints function was:
function [c, ceq] = nlconf(a)
c = []; % no nonlinear inequality constraints
wcp = 100000;
U1 = 120;
U2 = 80;
U3 = 40;
dTlm = @(dT1,dT2) (dT1-dT2)/log(dT1/dT2); % log-mean temperature difference
dT1_1 = 300-a(4);
dT2_1 = a(8)-100;
HX1 = U1*a(1)*dTlm(dT1_1,dT2_1)-wcp*(a(4)-100);
dT1_2 = 400-a(5);
dT2_2 = a(7)-a(4);
HX2 = U2*a(2)*dTlm(dT1_2,dT2_2)-wcp*(a(5)-a(4));
dT1_3 = 600-500;
dT2_3 = a(6)-a(5);
HX3 = U3*a(3)*dTlm(dT1_3,dT2_3)-wcp*(500-a(5));
ceq = [HX1; HX2; HX3];
I am not sure whether there is something wrong with my reasoning or with the way I coded it, because when I change the initial guesses just a bit, I end up with a different minimized value of the cost function every time.
If you have read this far, may I ask you to leave any comments or answers on what you think could cause the estimated parameters to change with every initial guess and not settle?
Any help is appreciated, thanks!
Edit: I am not sure if I can run the function here, because fmincon needs a separate function file to specify the nonlinear constraints.
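(For what it's worth, since R2016b a MATLAB script can contain local functions at the bottom, so the constraint function does not strictly need its own file. A sketch of a one-file layout, reusing the data already defined above and eliding the exchanger residuals, would be:

```
% One-file version: script code first, local functions at the bottom
% (local functions in scripts require MATLAB R2016b or later).
a0 = [500, 1000, 2000, 190, 300, 450, 300, 200];
% ... A, b, Aeq, beq, LB, UB defined as above ...
[a, AT] = fmincon(@(a) a(1)+a(2)+a(3), a0, A, b, Aeq, beq, LB, UB, @nlconf);

function [c, ceq] = nlconf(a)
    c = [];                         % no nonlinear inequalities
    % ... HX1, HX2, HX3 heat-exchanger residuals as in the question ...
    ceq = [HX1; HX2; HX3];
end
```

This way the whole problem can be pasted and run as a single script.)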
With initial guesses of:
a0 = [500, 1000, 2000, 190, 300, 450, 300, 200];
1.7927 2.4757 3.8711 0.2365 0.3452 0.4452 0.2914 0.1635
Different initial guesses (closer to the estimate from the previous run):
a0 = [1500, 2500, 4000, 190, 300, 450, 300, 200];
1.3740 2.3942 4.0050 0.2245 0.3398 0.4398 0.2847 0.1755
John D'Errico on 21 Feb 2022
There are several issues you need to understand.
First, not all optimization problems have a single, unique solution arrived at regardless of your start point.
Think about an optimization as the plight of a blind person, set down on the face of the earth and tasked with finding the point of lowest altitude over the entire earth. They are given only a cane to determine which direction points downhill, and an altimeter that reads out the current altitude. Let's hope it works under water. And make sure you give the poor fellow some scuba gear.
Now start them out in different spots. Do you expect they will always find the point of lowest elevation? What happens if you set them down near the Dead Sea, or any local depression? Where will they end up?
(DISCLAIMER: No blind people were injured in this thought experiment.)
An optimizer can be in virtually the same situation. What are the odds, that if you start it out in some random spot, that it will ALWAYS find the same solution?
Luckily, most optimization problems are not quite that difficult, but many nonlinear problems will have multiple solutions. We can characterize the set of start points that will end up in any given resting point as the basin of attraction of that solution. Such a basin may be shaped oddly. It may even be composed of multiple disjoint regions that all arrive at the same solution.
So if you start a nonlinear optimizer at multiple points, you should not be surprised to see different solutions. Is one better than the others? Perhaps. But if all are at locally stable points, then the optimizer will be happy. An optimizer does not ensure a globally optimal solution, just a point where it could not find a way downhill. If you want a better solution, then it makes sense to provide better starting values.
Next, even if a solution is found, you may have a problem where there are many solutions, all of which lie in a long thin valley, but are all essentially equally good, or nearly so to within tolerances. So start at a different point, and the optimizer may arrive at a different point along that valley. The optimizer does not care. It thinks it is done. (Often this class of solution may come with a warning of sorts, perhaps in terms of a nearly singular matrix, but this is not true of all optimizers.)
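One rough way to act on this advice is a crude multi-start: run fmincon from many random starting points and keep the best converged result. A sketch along these lines, reusing the problem data (A, b, Aeq, beq, LB, UB, and nlconf) from the question (the Global Optimization Toolbox's MultiStart/GlobalSearch do this more carefully):

```
% Crude multi-start: run fmincon from random starts within the bounds
% and keep the best result that actually converged (exitflag > 0).
obj  = @(a) a(1)+a(2)+a(3);
UBr  = min(UB, 1e4);                 % cap infinite bounds for sampling only
best = Inf;  abest = [];
opts = optimoptions('fmincon', 'Display', 'off');
for k = 1:50
    a0k = LB + rand(1, numel(LB)).*(UBr - LB);   % random start in bounds
    [ak, fk, flag] = fmincon(obj, a0k, A, b, Aeq, beq, LB, UB, @nlconf, opts);
    if flag > 0 && fk < best
        best = fk;  abest = ak;      % keep the best converged run so far
    end
end
```

If the best objective value is essentially the same across many starts while the solution vector wanders, that is the long-valley situation described above; if the objective itself differs, you are landing in different basins.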