In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) should be statistically independent.(ref:https://en.wikipedia.org/wiki/Extensions_of_Fisher%27s_method)
import efm
f = efm.model(p, w, c)  # Initialize Fisher's method (p: p-values, w: weights, c: correlation matrix)
f.org()                 # Original method for independent test statistics
f.dep()                 # Extension to dependent test statistics
f.wei()                 # Extension to weighted test statistics
f.depwei()              # Extension to dependent and weighted test statistics
Under the null hypothesis, each statistic -2 * ln(p_i) follows a chi-squared distribution with 2 degrees of freedom, so the combined statistic T = -2 * sum_{i=1..n} ln(p_i) follows a chi-squared distribution with 2n degrees of freedom when the n tests are independent.
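The independent case can be sketched in plain Python (a minimal sketch, not the efm package above; because the degrees of freedom 2n are even, the chi-squared survival function has a closed form and no external library is needed):

```python
import math

def chi2_sf_even_df(t, df):
    """Survival function P(X > t) for X ~ chi-squared with EVEN df.

    For even df = 2m the closed form is exp(-t/2) * sum_{k<m} (t/2)^k / k!.
    """
    m = df // 2
    half = t / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k) for k in range(m))

def fisher_combine(pvals):
    """Fisher's method for independent p-values: T = -2 * sum(ln p_i) ~ chi2_{2n}."""
    t = -2.0 * sum(math.log(p) for p in pvals)
    return t, chi2_sf_even_df(t, 2 * len(pvals))

t, p_comb = fisher_combine([0.01, 0.20, 0.40])
```

As a sanity check, a single p-value is returned unchanged: with n = 1, the survival function at T = -2 ln(p) is exactly p.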
When weights w_i are introduced, the statistic T = -2 * sum_i w_i * ln(p_i) no longer follows a chi-squared distribution with 2n degrees of freedom exactly; its null distribution can be approximated by a scaled chi-squared distribution, c * chi^2_f.
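The moment matching for the weighted (still independent) case can be sketched as follows; the formulas assume that under the null each -2 ln(p_i) has mean 2 and variance 4, so T has mean 2 * sum(w) and variance 4 * sum(w^2):

```python
def scaled_chi2_params(weights):
    """Moment-match c * chi^2_f to the weighted Fisher statistic.

    Under independence, T = sum_i w_i * (-2 * ln p_i) has
    mean 2 * sum(w) and variance 4 * sum(w^2); solving
    c*f = E[T] and 2*c^2*f = Var[T] gives c and f below.
    """
    mean_t = 2.0 * sum(weights)
    var_t = 4.0 * sum(w * w for w in weights)
    c = var_t / (2.0 * mean_t)      # scale
    f = 2.0 * mean_t ** 2 / var_t   # degrees of freedom
    return c, f

c, f = scaled_chi2_params([2.0, 1.0])
```

With equal weights the approximation collapses back to the exact result: `scaled_chi2_params([1.0, 1.0])` gives c = 1 and f = 4, i.e. a plain chi-squared with 2n degrees of freedom.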
When the test statistics are dependent, the mean of T is unchanged, E[T] = 2 * sum_i w_i, but the variance picks up covariance terms [1]:

Var[T] = 4 * sum_i w_i^2 + 2 * sum_{i<j} w_i * w_j * cov(-2 ln p_i, -2 ln p_j),

where each covariance can be approximated by a polynomial in the correlation,

cov(-2 ln p_i, -2 ln p_j) ≈ rho_hat_ij * (3.263 + 0.710 * rho_hat_ij + 0.027 * rho_hat_ij^2),

and rho_hat_ij is the Pearson correlation coefficient between test statistics i and j.
Using the method of moments (matching E[T] and Var[T] to the mean c*f and variance 2*c^2*f of c * chi^2_f), the parameters can be estimated as c_hat = Var[T] / (2 * E[T]) and f_hat = 2 * E[T]^2 / Var[T].
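The dependent, weighted case can be put together in one sketch. This assumes the cubic polynomial approximation rho * (3.263 + 0.710*rho + 0.027*rho^2) for the covariance of -2 ln(p_i) and -2 ln(p_j) (the approximation used in the dependent-tests literature around [1]), and that the pairwise correlations rho[i][j] are known or estimated from the data:

```python
def dependent_fisher_params(weights, rho):
    """Moment-matched scale c and df f for the dependent, weighted statistic.

    rho[i][j] is the Pearson correlation between test statistics i and j;
    cov(-2 ln p_i, -2 ln p_j) is approximated by a cubic polynomial in rho
    (an assumption of this sketch, not an exact result).
    """
    n = len(weights)
    mean_t = 2.0 * sum(weights)                      # E[T] = 2 * sum(w)
    var_t = 4.0 * sum(w * w for w in weights)        # independence part of Var[T]
    for i in range(n):
        for j in range(i + 1, n):
            r = rho[i][j]
            cov_ij = r * (3.263 + 0.710 * r + 0.027 * r * r)
            var_t += 2.0 * weights[i] * weights[j] * cov_ij
    c_hat = var_t / (2.0 * mean_t)
    f_hat = 2.0 * mean_t ** 2 / var_t
    return c_hat, f_hat

# Two equally weighted tests with correlation 0.5
c_hat, f_hat = dependent_fisher_params([1.0, 1.0], [[1.0, 0.5], [0.5, 1.0]])
```

With zero correlations the function reduces to the independent case (c = 1, f = 2n), so the dependence adjustment only inflates the scale and shrinks the effective degrees of freedom when the tests are positively correlated.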
[1] Yang, J.J. Distribution of Fisher's combination statistic when the tests are dependent. J. Stat. Comput. Simul. 2010, 80, 1-12.