Executes the "two-stage" Benjamini, Krieger, & Yekutieli (2006) procedure for controlling the false discovery rate (FDR) of a family of hypothesis tests. FDR is the expected proportion of rejected hypotheses that are mistakenly rejected (i.e., the null hypothesis is actually true for those tests). FDR control is a somewhat less conservative, more powerful approach to correcting for multiple comparisons than procedures like Bonferroni correction, which provide strong control of the family-wise error rate (i.e., the probability that one or more null hypotheses are mistakenly rejected).
The procedure implemented by this function is more powerful than the original Benjamini & Hochberg (1995) procedure when a considerable percentage of the hypotheses in the family are false. It is only slightly less powerful than the original procedure when there are very few false hypotheses. To the best of my knowledge, this procedure is only guaranteed to control FDR if the tests are independent. However, simulations suggest that it can control FDR even when the tests are positively correlated (Benjamini et al., 2006).
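To make the two-stage logic concrete, here is a minimal sketch in Python rather than MATLAB; the function and variable names are my own, not those of this submission. Stage one runs the Benjamini-Hochberg step-up at the shrunken level q/(1+q); the number of non-rejections is then used to estimate the number of true null hypotheses, and stage two reruns Benjamini-Hochberg at a correspondingly inflated level (Benjamini et al., 2006, Definition 6):

```python
import numpy as np

def bh(pvals, q):
    """Benjamini-Hochberg step-up at level q.
    Returns a boolean mask of rejected hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Step-up criterion: p_(k) <= k*q/m for sorted p-values
    below = p[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest k meeting the criterion
        reject[order[:k + 1]] = True      # reject all smaller p-values too
    return reject

def bky_two_stage(pvals, q=0.05):
    """Two-stage Benjamini, Krieger, & Yekutieli (2006) procedure."""
    m = len(pvals)
    q1 = q / (1.0 + q)          # stage 1: BH at the shrunken level q/(1+q)
    stage1 = bh(pvals, q1)
    r1 = stage1.sum()
    if r1 == 0 or r1 == m:      # nothing (or everything) rejected: stop
        return stage1
    m0_hat = m - r1             # estimated number of true null hypotheses
    q2 = q1 * m / m0_hat        # stage 2: BH at the adaptively inflated level
    return bh(pvals, q2)
```

When many hypotheses are false, `m0_hat` is well below `m`, so the stage-two level `q2` exceeds `q1` and more hypotheses can be rejected than under plain Benjamini-Hochberg; this is the source of the extra power described above.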
Benjamini, Y., Krieger, A. M., & Yekutieli, D. (2006). Adaptive linear step-up procedures that control the false discovery rate. Biometrika, 93(3), 491-507.
Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1), 289-300.
David Groppe (2020). Two-stage Benjamini, Krieger, & Yekutieli FDR procedure (https://www.mathworks.com/matlabcentral/fileexchange/27423-two-stage-benjamini-krieger-yekutieli-fdr-procedure), MATLAB Central File Exchange. Retrieved .
One aspect I have not seen discussed is the minimum number of simultaneous inferences that justifies use of the Benjamini, Krieger, & Yekutieli procedure. Original versions of FDR correction seem to yield erroneous corrections for fewer than hundreds or thousands of tests, particularly for nested designs. Benjamini, Krieger, & Yekutieli appears to simplify toward unadjusted p-values for small numbers of tests, but does it perform appropriately under such circumstances (e.g., 5 or 10 post hoc tests)? If so, could this approach largely replace Bonferroni or Holm corrections for exploratory (but not confirmatory) analyses?
Perhaps these files should be grouped together into the same zip file, or even integrated into the same m-file? At least you could provide links so that people would know about the existence of both.