49 views (last 30 days)
BN on 31 Jul 2020
Answered: Stan Driggs on 2 Nov 2021 at 15:16
I want to check whether two data sets have a similar distribution. I would like to use the Anderson-Darling test for that, but adtest() in MATLAB returns a test decision for the null hypothesis that the data in vector x is from a population with a normal distribution. My question is how to check whether two data sets have a similar distribution or not (without specifying the nature of those distributions).
So is it possible to do that in MATLAB?
Thanks

John D'Errico on 31 Jul 2020
Edited: John D'Errico on 31 Jul 2020
Not using adtest. Like the Ford Model T, which Henry Ford sold in any color as long as the color you wanted was black, adtest will test your distribution against any distribution, as long as normal is the distribution you want to test against.
Instead, you probably want to use a Kolmogorov-Smirnov test.
From the help for kstest2:
kstest2 Two-sample Kolmogorov-Smirnov goodness-of-fit hypothesis test.
H = kstest2(X1,X2) performs a Kolmogorov-Smirnov (K-S) test
to determine if independent random samples, X1 and X2, are drawn from
the same underlying continuous population.
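A minimal call might look like the sketch below; the two samples here are made-up illustrative data, not from the original question:

```matlab
% Illustrative data only: two samples with different distributions
x1 = randn(1000,1);            % sample from N(0,1)
x2 = 2*randn(1000,1) + 1;      % sample from N(1,4)

% h = 1 rejects "same continuous distribution" at the 5% level;
% p is the asymptotic p-value of the two-sample K-S statistic
[h, p] = kstest2(x1, x2);
```

Note that kstest2 makes no assumption about the form of the underlying distributions, which is exactly what the question asked for.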
BN on 31 Jul 2020
Thank you. Beyond the Kolmogorov-Smirnov test you mentioned, I found a function on the File Exchange that does something like what you described, but using the Anderson-Darling test. However, that function only prints its results to the command window and does not return the p-value; I have asked how to overcome that issue in another question.
In any case, the Kolmogorov-Smirnov test you mentioned works perfectly too.
Thank you
Best Regards

Stan Driggs on 2 Nov 2021 at 15:16
I know this is a stale question, but John's answer vis-à-vis Henry Ford is a bit misleading. You can use Anderson-Darling to test for ANY continuous distribution. The adtest function allows you to specify different distributions by name or with a distribution object. Note that if you create a distribution object, you must specify the parameters of the distribution, which might be unknown. The problem is that the critical values of the test statistic can be slightly different if you use estimates of the parameters for that particular distribution (e.g. sample mean and variance) instead of true values. The critical values also differ slightly depending on the number of samples in your data.
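As a sketch of the two ways to specify the distribution (the exponential data and parameter value here are illustrative assumptions, not from the thread):

```matlab
% Illustrative data: 500 draws from an exponential distribution
x = exprnd(2, 500, 1);

% Option 1: built-in distribution name; adtest estimates the
% parameters from the data and uses adjusted critical values
[h1, p1] = adtest(x, 'Distribution', 'exp');

% Option 2: distribution object with fully specified (known)
% parameters; adtest then uses Monte Carlo critical values
pd = makedist('Exponential', 'mu', 2);
[h2, p2] = adtest(x, 'Distribution', pd);
```

The distinction matters: passing a name treats the parameters as unknown, while passing an object treats them as known, and the critical values differ between the two cases.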
In general, if you know the CDF and can generate random data that follows a specific distribution, then you can generate thousands of cases, calculate the AD test statistic for each case, histogram the A² values, and determine the critical values for various levels of significance yourself. If you really want to understand Anderson-Darling, you should go through this exercise. The adtest function does this Monte Carlo process for you when you pass in a distribution object. For the built-in supported distributions (norm, exp, ev, logn, weibull) it probably uses precomputed critical value tables and adjusts for the number of samples. These published tables were originally generated by Monte Carlo analysis back in the mainframe days, and some of the published tables have been found to have errors. I believe the table values assume the distribution parameters are unknown. YMMV.
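The Monte Carlo exercise described above can be sketched as follows, assuming a standard normal null with known parameters (the sample size and trial count are arbitrary choices for illustration):

```matlab
% Monte Carlo estimate of Anderson-Darling critical values
% for a fully specified N(0,1) null hypothesis
n = 50;                    % samples per trial (assumed)
trials = 1e4;              % number of Monte Carlo trials (assumed)
A2 = zeros(trials, 1);

for k = 1:trials
    x = sort(randn(n, 1));             % data drawn under the null
    u = normcdf(x);                    % CDF of the null distribution
    i = (1:n)';
    % A^2 = -n - (1/n) * sum_{i} (2i-1) * [ln u_i + ln(1 - u_{n+1-i})]
    A2(k) = -n - mean((2*i - 1) .* (log(u) + log(1 - flipud(u))));
end

crit5 = quantile(A2, 0.95);   % critical value at 5% significance
```

Histogramming A2 (e.g. with histogram(A2)) shows the null distribution of the statistic, and repeating the experiment with estimated rather than known parameters shows how the critical values shift.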
Note that the AD test is sensitive because it pays more attention to the tails of the distribution, since the tails are where distributions differ the most. This makes the AD test very sensitive to outliers. You will be tempted to start removing outliers from your data, but be careful. If you remove enough outliers, all distributions end up looking uniform!

R2020a
