How does error propagate when using approximated values?

Hello! I have a problem and I wonder if anyone here possesses the knowledge I lack... I am not quite sure how the error propagates. This is the scenario: I have a value with an error of approximately 0.7*10^(-3). I then use this value in another calculation. This calculation will itself give me an approximate answer that has a tolerance of 0.5*10^(-3). As I understand it, the final value I get should then have two correct decimals, because 0.7*10^(-3) + 0.5*10^(-3) < 0.5*10^(-2). Am I thinking correctly, or is it wrong to add the maximal errors like I just did?

1 comment

Torsten
Torsten on 27 Apr 2017
You are wrong.
You can easily see this if you assume, e.g., that the second calculation returns 1e20*inputvalue. Calculate this product (exactly, without error) for inputvalue = 1 - 0.7e-3 and inputvalue = 1 + 0.7e-3.
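Torsten's counterexample is easy to check numerically. A minimal Python sketch (the function 1e20*x is his example; the ±0.7e-3 half-width is the question's input error):

```python
# Torsten's counterexample: f(x) = 1e20 * x amplifies the input error.
def f(x):
    return 1e20 * x

x = 1.0
err = 0.7e-3          # maximal error on the input, from the question

lo = f(x - err)       # exact evaluation at the lower bound
hi = f(x + err)       # exact evaluation at the upper bound

half_width = (hi - lo) / 2
print(half_width)     # the input error scaled by |f'(x)| = 1e20, i.e. 7e16
```

The output error is the input error multiplied by |f'(x)|, which is why simply adding the two maximal errors ignores how the second calculation scales them.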
So if f is the function of the second calculation, df/dx comes into play.
https://en.wikipedia.org/wiki/Propagation_of_uncertainty
Best wishes
Torsten.


Answers (1)

John D'Errico
John D'Errico on 27 Apr 2017
Edited: John D'Errico on 27 Apr 2017
This is a classic question in the field of error propagation, often called statistical tolerancing.
The main questions that you need to consider are:
1. What is the distribution of the "error" you describe on the input? You say 0.7*10^(-3). But is that a standard deviation? Or is it a MAXIMUM error, so that you really have a variable whose error is uniformly distributed over the range x +/- 0.7e-3, i.e. in the interval
[x - 0.7e-3, x + 0.7e-3]
The traditional assumption is that the error is normally distributed. This happens because most of the time the central limit theorem operates well, and we see noise (error) whose distribution is essentially bell-shaped.
The above is important, in terms of what you can and should do.
Suppose that your function is linear and your uncertainty in the estimate of x is normally distributed. Then you have a known standard deviation. We know that for a LINEAR function f(x) and additive normal (Gaussian) noise E,
f( x + E ) = f(x) + f'(x)*E
So you just multiply the standard deviation of the noise by the derivative of f at that point.
If f is really a nonlinear function but is well approximated by a first-order truncated Taylor series, then the above still applies.
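As a sketch of this first-order rule, the following Python snippet compares the predicted standard deviation |f'(x)|*sigma against a Monte Carlo estimate. The choice f(x) = exp(x) and the point x = 1 are illustrative assumptions; only the sigma value comes from the question.

```python
import math
import random

random.seed(0)

# First-order error propagation: for normal noise with standard deviation
# sigma on x, std(f(x + E)) is approximately |f'(x)| * sigma, provided f is
# close to linear over the noise scale.
def f(x):
    return math.exp(x)

x = 1.0
sigma = 0.7e-3                    # input standard deviation (question's value)

predicted = math.exp(x) * sigma   # |f'(x)| * sigma, since f'(x) = exp(x) here

# Monte Carlo estimate of the output standard deviation.
samples = [f(x + random.gauss(0.0, sigma)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
observed = math.sqrt(sum((s - mean) ** 2 for s in samples) / (len(samples) - 1))

print(predicted, observed)        # the two agree to within Monte Carlo error
```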
If the computation of f(x) itself has some noise in it, then you need to be careful: when you add two independent normal random variables, their variances add.
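For example, treating the question's two tolerances as independent standard deviations (an assumption, since the question does not say they are standard deviations), the combined standard deviation is the root-sum-of-squares, not the plain sum:

```python
import math

# When two independent error sources are both (approximately) normal,
# their VARIANCES add, so the combined standard deviation is the
# root-sum-of-squares -- smaller than the plain sum of the two.
sigma_f = 0.7e-3   # uncertainty carried in from the input (question's value)
sigma_g = 0.5e-3   # extra noise from the second calculation (question's value)

combined = math.sqrt(sigma_f**2 + sigma_g**2)
print(combined)    # about 0.86e-3, versus 1.2e-3 for the plain sum
```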
What if the error you describe is actually uniform, as I mentioned above?
Now if the function f(x) is again linear, then the intervals that you pass through f will behave nicely. Thus the new error bounds will be
[f(x - 0.7e-3), f(x + 0.7e-3)]
But if f itself has noise in the computation, the noises will now add. The sum of two UNIFORM random variables has, in general, a trapezoidal distribution. The maximal error is computed as the sum, but the distribution will no longer be uniform over that interval.
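A quick Monte Carlo sketch of that trapezoidal behavior, using the question's two tolerances as the uniform half-widths (an illustrative assumption):

```python
import random

random.seed(0)

# Sum of two independent uniform errors: the maximal error is the sum of
# the two half-widths, but the density of the sum is trapezoidal, with
# most of the mass away from the extremes.
a, b = 0.7e-3, 0.5e-3          # half-widths echoing the question's values
n = 200_000
sums = [random.uniform(-a, a) + random.uniform(-b, b) for _ in range(n)]

# The support is exactly [-(a + b), a + b] ...
print(max(abs(s) for s in sums) <= a + b)   # True

# ... but values near the extremes are rare: the fraction of samples in
# the outer 10% of the range is far below the 0.10 a uniform law would give.
edge = 0.9 * (a + b)
tail_frac = sum(abs(s) > edge for s in sums) / n
print(tail_frac)                             # roughly 0.01, not 0.10
```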
The point of all this is that you need to do some reading. Odds are, you need to learn something about basic statistics, especially about random variables. That will be useful in order to understand the concepts of error propagation.


