Fisher Transformation

In statistics, the Fisher transformation (or Fisher z-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh).

When the sample correlation coefficient r is near 1 or -1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ. The Fisher transformation solves this problem by yielding a variable that is approximately normally distributed, with a variance that is stable over different values of r.

Figure: A graph of the transformation (in orange). The untransformed sample correlation coefficient is plotted on the horizontal axis, and the transformed coefficient is plotted on the vertical axis. The identity function (gray) is also shown for comparison.

Definition

Given a set of N bivariate sample pairs (X_i, Y_i), i = 1, ..., N, the sample correlation coefficient r is given by

    r = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}

Here cov(X, Y) stands for the covariance between the variables X and Y, and σ_X and σ_Y stand for the standard deviations of the respective variables. Fisher's z-transformation of r is defined as

    z = \tfrac{1}{2}\ln\!\left(\frac{1+r}{1-r}\right) = \operatorname{artanh}(r)

where "ln" is the natural logarithm function and "artanh" is the inverse hyperbolic tangent function.

If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (X_i, Y_i) are independent and identically distributed, then z is approximately normally distributed with mean

    \tfrac{1}{2}\ln\!\left(\frac{1+\rho}{1-\rho}\right) = \operatorname{artanh}(\rho)

and standard deviation

    \frac{1}{\sqrt{N-3}}

where N is the sample size, and ρ is the true correlation coefficient.
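
As a brief illustration (not part of the original article), this normal approximation can be used directly to test a hypothesized value of ρ. The Python sketch below uses only the standard library; the helper name fisher_z_test is my own.

    # Sketch of a significance test for H0: rho = rho0 based on the normal
    # approximation above (mean artanh(rho), standard deviation 1/sqrt(N - 3)).
    from math import atanh, sqrt
    from statistics import NormalDist

    def fisher_z_test(r, rho0, n):
        """Two-sided p-value for H0: rho = rho0, given sample correlation r of n pairs."""
        z = atanh(r)                # Fisher z of the sample correlation
        mu = atanh(rho0)            # approximate mean of z under H0
        se = 1.0 / sqrt(n - 3)      # approximate standard deviation of z
        stat = (z - mu) / se
        p = 2.0 * (1.0 - NormalDist().cdf(abs(stat)))
        return stat, p

    # Example: r = 0.75 observed from n = 50 pairs, testing rho0 = 0.5.
    print(fisher_z_test(0.75, 0.5, 50))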

This transformation, and its inverse

    r = \frac{\exp(2z) - 1}{\exp(2z) + 1} = \tanh(z)

can be used to construct a large-sample confidence interval for ρ using standard normal theory and derivations. See also the application to partial correlation.
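
For example, an interval can be formed on the z scale and mapped back through the inverse transformation. The following minimal sketch (function name mine) shows one way to do this in Python.

    # Sketch of a large-sample confidence interval for rho: transform r with
    # artanh, add a normal-quantile margin using the 1/sqrt(N - 3) standard
    # error, then map the endpoints back with tanh.
    from math import atanh, tanh, sqrt
    from statistics import NormalDist

    def fisher_ci(r, n, level=0.95):
        """Approximate confidence interval for the population correlation rho."""
        z = atanh(r)                               # move to the z scale
        se = 1.0 / sqrt(n - 3)                     # approximate standard error of z
        q = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. about 1.96 for 95%
        return tanh(z - q * se), tanh(z + q * se)  # back-transform to the r scale

    # Example: r = 0.6 observed from n = 40 pairs.
    print(fisher_ci(0.6, 40))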

Derivation

Figure: The Fisher transformation for fixed values of ρ and N. Illustrated is the exact probability density function (in black), together with the probability density functions obtained from the usual Fisher transformation (blue) and from the version including extra terms that depend on N (red). The latter approximation is visually indistinguishable from the exact answer (its maximum error is 0.3%, compared to 3.4% for the basic Fisher transformation).

Hotelling gives a concise derivation of the Fisher transformation.

To derive the Fisher transformation, one starts by considering an arbitrary increasing, twice-differentiable function of r, say G(r). Finding the first term in the large-N expansion of the corresponding skewness κ_3 results in

    \kappa_3 = \frac{3\left(1-\rho^2\right)\dfrac{G''(\rho)}{G'(\rho)} - 6\rho}{\sqrt{N}} + O\!\left(N^{-3/2}\right)

Setting κ_3 = 0 and solving the corresponding differential equation for G yields the inverse hyperbolic tangent function, G(ρ) = artanh(ρ).
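
The step from κ_3 = 0 to artanh is a routine calculation; spelled out (added here for completeness, not quoted from the source), it reads

    \kappa_3 = 0
    \;\Longrightarrow\;
    \frac{G''(\rho)}{G'(\rho)} = \frac{2\rho}{1-\rho^2}
    \;\Longrightarrow\;
    \ln G'(\rho) = -\ln\!\left(1-\rho^2\right) + c
    \;\Longrightarrow\;
    G'(\rho) = \frac{C}{1-\rho^2}
    \;\Longrightarrow\;
    G(\rho) = C\operatorname{artanh}(\rho) + c'

so, up to an affine rescaling, G is the inverse hyperbolic tangent.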

Similarly expanding the mean m and variance v of artanh(r), one gets

    m = \operatorname{artanh}(\rho) + \frac{\rho}{2N} + O\!\left(N^{-2}\right)

and

    v = \frac{1}{N} + \frac{6 - \rho^2}{2N^2} + O\!\left(N^{-3}\right)

respectively.

The extra terms are not part of the usual Fisher transformation. For large values of ρ and small values of N they represent a large improvement in accuracy at minimal cost, although they greatly complicate the computation of the inverse – a closed-form expression is not available. The near-constant variance of the transformation is the result of removing its skewness – the actual improvement is achieved by the removal of skewness, not by the extra terms. Including the extra terms, i.e., computing (z − m)/v^{1/2}, yields:

    \frac{\operatorname{artanh}(r) - \operatorname{artanh}(\rho) - \dfrac{\rho}{2N}}{\sqrt{\dfrac{1}{N} + \dfrac{6-\rho^2}{2N^2}}}

which has, to an excellent approximation, a standard normal distribution.
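
A quick simulation can make the comparison concrete. The sketch below is my own illustration (variable names and parameter values are not from the source): it draws bivariate normal samples and compares the basic pivot sqrt(N − 3)(z − artanh ρ) with the corrected pivot (z − m)/v^{1/2}; both come out close to standard normal, the corrected one slightly closer.

    # Monte Carlo comparison of the basic Fisher pivot with the corrected pivot
    # (z - m)/sqrt(v) that includes the extra terms above.
    import numpy as np

    rng = np.random.default_rng(0)
    rho, N, reps = 0.8, 20, 100_000
    cov = [[1.0, rho], [rho, 1.0]]

    samples = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, N))
    x, y = samples[..., 0], samples[..., 1]
    xc = x - x.mean(axis=1, keepdims=True)       # centre each replicate
    yc = y - y.mean(axis=1, keepdims=True)
    r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

    z = np.arctanh(r)
    basic = (z - np.arctanh(rho)) * np.sqrt(N - 3)
    m = np.arctanh(rho) + rho / (2 * N)          # mean including the extra term
    v = 1 / N + (6 - rho**2) / (2 * N**2)        # variance including the extra term
    corrected = (z - m) / np.sqrt(v)

    for name, t in (("basic", basic), ("corrected", corrected)):
        skew = ((t - t.mean()) ** 3).mean() / t.std() ** 3
        print(f"{name:9s} mean {t.mean():+.3f}  var {t.var():.3f}  skew {skew:+.3f}")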

Figure: Calculator for the confidence belt of r-squared values (the coefficient of determination/explanation, or goodness of fit).

Application

The application of Fisher's transformation can be enhanced using a software calculator as shown in the figure. Assuming that the r-squared value found is 0.80, that the sample contains 30 data points, and accepting a 90% confidence interval, the r-squared value in another random sample from the same population may range from 0.588 to 0.921. When r-squared falls outside this range, the population is considered to be different.

Discussion

The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution. This means that the variance of z is approximately constant for all values of the population correlation coefficient ρ. Without the Fisher transformation, the variance of r grows smaller as |ρ| gets closer to 1. Since the Fisher transformation is approximately the identity function when |r| < 1/2, it is sometimes useful to remember that the variance of r is well approximated by 1/N as long as |ρ| is not too large and N is not too small. This is related to the fact that the asymptotic variance of √N·r is 1 for bivariate normal data when ρ = 0.
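
To see the stabilization numerically, here is a small illustration of mine (not from the source) using the standard large-sample approximation var(r) ≈ (1 − ρ²)²/N, of which the 1/N rule above is the small-ρ case:

    # Compare the approximate variance of r, (1 - rho^2)^2 / N, with the
    # approximate variance of z = artanh(r), 1 / (N - 3), for several rho.
    N = 50
    for rho in (0.0, 0.3, 0.6, 0.9, 0.99):
        var_r = (1 - rho**2) ** 2 / N
        var_z = 1 / (N - 3)
        print(f"rho={rho:4.2f}  var(r) ~ {var_r:.5f}  var(artanh r) ~ {var_z:.5f}")

The variance of r collapses toward 0 as |ρ| approaches 1, while the variance on the z scale stays fixed, which is the stabilization described above.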

The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of z for data from a bivariate normal distribution in 1921; Gayen in 1951 determined the exact distribution of z for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics, and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments.

An alternative to the Fisher transformation is to use the exact confidence distribution density for ρ given by

    \pi(\rho \mid r) = \frac{\nu(\nu-1)\,\Gamma(\nu-1)}{\sqrt{2\pi}\,\Gamma\!\left(\nu+\tfrac{1}{2}\right)} \left(1-r^2\right)^{\frac{\nu-1}{2}} \left(1-\rho^2\right)^{\frac{\nu-2}{2}} \left(1-r\rho\right)^{\frac{1-2\nu}{2}} F\!\left(\tfrac{3}{2}, -\tfrac{1}{2}; \nu+\tfrac{1}{2}; \frac{1+r\rho}{2}\right)

where F is the Gaussian hypergeometric function and ν = N − 1 > 1.

Other uses

While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases. A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the cited article for details.
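
As a hedged illustration of what such an adjustment can look like in practice (the article itself does not give the factor), one commonly used choice, due to Fieller, Hartley and Pearson, takes var(artanh r_s) ≈ 1.06/(n − 3); the sketch below assumes that value and is not the only convention in use.

    # Confidence interval for Spearman's rank correlation via the Fisher
    # transformation, assuming the commonly used variance 1.06 / (n - 3).
    from math import atanh, tanh, sqrt
    from statistics import NormalDist
    from scipy.stats import spearmanr   # used only to compute the rank correlation

    def spearman_fisher_ci(x, y, level=0.95):
        rs, _ = spearmanr(x, y)                    # sample Spearman correlation
        n = len(x)
        z = atanh(rs)
        se = sqrt(1.06 / (n - 3))                  # adjusted standard error
        q = NormalDist().inv_cdf(0.5 + level / 2)
        return tanh(z - q * se), tanh(z + q * se)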

