Hello,
I want to share a difficulty I ran into. Can someone help?
Problem statement:
There is an 'x' column containing the values of the independent variable and a 'y' column containing the experimental values of the dependent variable.
The following approximation model is considered:
y_calculate = A*x^B + C,
and, based on this model, an objective function is formed that equals the mean relative deviation between y and y_calculate:
error_function = mean(abs(y - y_calculate) ./ y) = mean(abs(y - (A*x.^B + C)) ./ y);
Our goal is to select the parameters A, B, C so that 'error_function' reaches its global minimum.
I calculated the optimal values of A, B, C and got:
A = 85.5880, B = -0.0460, C = 4.8824,
at which the error function value is 0.0285.
However, I know in advance specific values of A, B, C:
A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400,
at which the error function value is 0.002680472178434,
which is much better than what the optimization produced.
Below is the code with visualization, which confirms the above.
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
initial_guess = [1, 1, 1];  % starting point for [A, B, C]
error_function = @(params) mean(abs(y - (params(1) * x.^params(2) + params(3))) ./ y);  % mean relative deviation
optimized_params = fminsearch(error_function, initial_guess);
A_optimized = optimized_params(1);
B_optimized = optimized_params(2);
C_optimized = optimized_params(3);
y_calculate_optimized = A_optimized * x.^B_optimized + C_optimized;
value_error_optimized = error_function(optimized_params);
fprintf('Optimized parameters:\nA = %.4f\nB = %.4f\nC = %.4f\n', A_optimized, B_optimized, C_optimized);
fprintf(' error function value for optimized parameters: %.4f\n', value_error_optimized);
A_specific = 1005.63358529310;
B_specific = -1.59745963925582;
C_specific = 73.541497447544;
y_calculate_specific = A_specific * x.^B_specific + C_specific;
value_error_specific = error_function([A_specific, B_specific, C_specific]);
fprintf('Specific parameters:\nA = %.10f\nB = %.14f\nC = %.14f\n', A_specific, B_specific, C_specific);
fprintf(' error function value for specific parameters: %.4f\n', value_error_specific);
figure; hold on;  % without hold on, each plot call would replace the previous curve
plot(x, y, 'bo-', 'DisplayName', 'Experimental data');
plot(x, y_calculate_optimized, 'r--', 'DisplayName', 'Fitted model (Optimized)');
plot(x, y_calculate_specific, 'g-.', 'DisplayName', 'Fitted model (Specific)');
hold off;
legend('Location', 'best');
xlabel('x'); ylabel('y');
title('Approximation of experimental data');
Obviously, my optimization code does not reach the global minimum of the objective function, since a better approximation exists for the specific values of A, B, C. Maybe this is caused by the arbitrary choice of the initial values A = 1, B = 1, C = 1, so that my code gets stuck in a local minimum?
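One simple check of this hypothesis (only a sketch on my side, reusing the variables already defined in the code above) is to restart fminsearch from the specific parameters and see whether it stays near them:

% Quick check of the local-minimum hypothesis: restart fminsearch from the
% specific parameters and see whether the optimizer stays near them.
p_restart = fminsearch(error_function, [A_specific, B_specific, C_specific]);
fprintf('Restart from specific parameters: error = %.15g\n', error_function(p_restart));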
Can someone write code that selects the parameters A, B, C so as to reach the global minimum of the objective function 'error_function', for any initial values of A, B, C? A thought for testing: the value of 'error_function' should be no worse (that is, no greater) than 0.002680472178434, which is obtained with the specific values A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400.
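One direction I have been considering (only a rough sketch, not a guaranteed global optimizer; the number of restarts and the parameter ranges below are my own assumptions) is a multi-start approach, i.e. running fminsearch from many random initial guesses and keeping the best result:

% Multi-start sketch: run fminsearch from many random starting points and
% keep the best result. The restart count and ranges below are assumptions.
rng(0);                                  % make the random restarts reproducible
opts = optimset('MaxFunEvals', 1e4, 'MaxIter', 1e4);
best_value = inf;
best_params = [NaN, NaN, NaN];
for k = 1:200                            % 200 random restarts (assumed)
    p0 = [2000*rand, -3*rand, 200*rand]; % assumed ranges: A in [0,2000], B in [-3,0], C in [0,200]
    [p, fval] = fminsearch(error_function, p0, opts);
    if fval < best_value
        best_value = fval;
        best_params = p;
    end
end
fprintf('Best error found: %.15g\nA = %.10g, B = %.10g, C = %.10g\n', ...
    best_value, best_params(1), best_params(2), best_params(3));

If multi-start (or something like MultiStart/GlobalSearch from the Global Optimization Toolbox) is the right way to go, I would appreciate a complete example.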