
Saturday, March 31, 2018

Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.

Naïve algorithm





A formula for calculating the variance of an entire population of size N is:

\sigma^2 = \overline{(x^2)} - \bar{x}^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}.

Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:

s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}.

Therefore, a naive algorithm to calculate the estimated variance is given by the following:
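A minimal Python sketch of this naive method; the running quantities Sum and SumSq are the ones discussed below, and data is assumed to be any iterable of numbers:

def naive_variance(data):
    # Accumulate the count, the sum, and the sum of squares in a single pass.
    n = 0
    Sum = 0.0
    SumSq = 0.0
    for x in data:
        n += 1
        Sum += x
        SumSq += x * x
    # Unbiased sample variance (Bessel's correction); requires n >= 2.
    variance = (SumSq - (Sum * Sum) / n) / (n - 1)
    return variance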

This algorithm can easily be adapted to compute the variance of a finite population: simply divide by N instead of n − 1 on the last line.

Because SumSq and (Sum×Sum)/n can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice. This is particularly bad if the standard deviation is small relative to the mean. However, the algorithm can be improved by adopting the method of the assumed mean.

Computing shifted data

We can use a property of the variance to avoid the catastrophic cancellation in this formula, namely that the variance is invariant with respect to changes in a location parameter:

\operatorname{Var}(X - K) = \operatorname{Var}(X)

with K any constant, which leads to the new formula

s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}.

The closer K is to the mean value, the more accurate the result will be, but just choosing a value inside the samples' range will guarantee the desired stability. If the values (x_i − K) are small, then there is no problem with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case, the second term in the formula is always smaller than the first, so no cancellation can occur.

If we take just the first sample as K, the algorithm can be written in the Python programming language as:
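One way this might look, as a sketch that uses the first element of a non-empty sequence data as K:

def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]          # any value inside the range of the data would do
    n = 0
    Ex = 0.0             # running sum of (x - K)
    Ex2 = 0.0            # running sum of (x - K)**2
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    # Same formula as above, applied to the shifted values.
    return (Ex2 - Ex * Ex / n) / (n - 1)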

This formula also facilitates incremental computation, which can be expressed as:
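A sketch of that incremental form, here wrapped in a small helper class (the class name is illustrative) so that observations can be added or removed one at a time:

class ShiftedVariance:
    def __init__(self):
        self.K = None    # shift constant, taken from the first observation
        self.n = 0
        self.Ex = 0.0    # running sum of (x - K)
        self.Ex2 = 0.0   # running sum of (x - K)**2

    def add(self, x):
        if self.K is None:
            self.K = x
        self.n += 1
        self.Ex += x - self.K
        self.Ex2 += (x - self.K) ** 2

    def remove(self, x):
        self.n -= 1
        self.Ex -= x - self.K
        self.Ex2 -= (x - self.K) ** 2

    def variance(self):
        # Unbiased sample variance of the observations seen so far (n >= 2).
        return (self.Ex2 - self.Ex * self.Ex / self.n) / (self.n - 1)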

Two-pass algorithm



An alternative approach, using a different formula for the variance, first computes the sample mean,

\bar{x} = \frac{\sum_{j=1}^{n} x_j}{n},

and then computes the sum of the squares of the differences from the mean,

\mathrm{variance} = s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1},

where s is the standard deviation. This is given by the following pseudocode:
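A Python rendering of that pseudocode might look like the following sketch, assuming data is a sequence with at least two elements:

def two_pass_variance(data):
    n = len(data)
    # First pass: the sample mean.
    mean = sum(data) / n
    # Second pass: sum of squared deviations from the mean.
    ss = sum((x - mean) ** 2 for x in data)
    return ss / (n - 1)   # unbiased sample variance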

This algorithm is numerically stable if n is small. However, the results of both of these simple algorithms ("Naïve" and "Two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.

Online algorithm



It is often useful to be able to compute the variance in a single pass, inspecting each value x_i only once; for example, when the data are being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.

The following formulas can be used to update the mean and (estimated) variance of the sequence for an additional element x_n. Here, x̄_n denotes the sample mean of the first n samples (x_1, ..., x_n), s²_n their sample variance, and σ²_n their population variance.

\bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n} = \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n}
s_n^2 = \frac{n-2}{n-1}\, s_{n-1}^2 + \frac{(x_n - \bar{x}_{n-1})^2}{n}, \quad n > 1
\sigma_n^2 = \frac{(n-1)\,\sigma_{n-1}^2 + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)}{n}.

These formulas suffer from numerical instability, as we are repeatedly subtracting a small number from a big number which scales with n. A better quantity for updating is the sum of squares of differences from the current mean, ∑_{i=1}^{n} (x_i − x̄_n)², here denoted M_{2,n}:

M_{2,n} = M_{2,n-1} + (x_n - \bar{x}_{n-1})(x_n - \bar{x}_n)
s_n^2 = \frac{M_{2,n}}{n - 1}
\sigma_n^2 = \frac{M_{2,n}}{n}

A numerically stable algorithm for the sample variance is given below. It also computes the mean. This algorithm was found by Welford, and it has been thoroughly analyzed. It is also common to denote M_k = x̄_k and S_k = M_{2,k}.
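A sketch of Welford's update in Python; the running variables mean and M2 correspond to x̄_n and M_{2,n} above:

def welford_variance(data):
    n = 0
    mean = 0.0
    M2 = 0.0                      # sum of squares of differences from the current mean
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        M2 += delta * (x - mean)  # note: uses the *updated* mean
    if n < 2:
        return float("nan")
    return M2 / (n - 1)           # M2 / n would give the population variance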

This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.

The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.

Weighted incremental algorithm



The algorithm can be extended to handle unequal sample weights, replacing the simple counter n with the sum of weights seen so far. West (1979) suggests this incremental algorithm:
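A sketch of that weighted update, assuming the input yields (value, weight) pairs; dividing the final S by the weight sum gives the population-style weighted variance (other normalizations are possible):

def weighted_incremental_variance(pairs):
    w_sum = 0.0
    mean = 0.0
    S = 0.0                          # weighted sum of squared deviations from the running mean
    for x, w in pairs:
        w_sum += w
        delta = x - mean
        mean += (w / w_sum) * delta  # the weight sum plays the role of the counter n
        S += w * delta * (x - mean)
    return S / w_sum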

Parallel algorithm



Chan et al. note that the above "online" algorithm is a special case of an algorithm that works for any partition of the sample X into sets X_A, X_B:

\delta = \bar{x}_B - \bar{x}_A
\bar{x}_X = \bar{x}_A + \delta \cdot \frac{n_B}{n_X}
M_{2,X} = M_{2,A} + M_{2,B} + \delta^2 \cdot \frac{n_A n_B}{n_X}.

This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.

Chan's method for estimating the mean is numerically unstable when n_A ≈ n_B and both are large, because the numerical error in δ = x̄_B − x̄_A is not scaled down in the way that it is in the n_B = 1 case. In such cases, prefer x̄_X = (n_A x̄_A + n_B x̄_B) / (n_A + n_B).
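A sketch of the merge step, combining two partial (count, mean, M2) results and using the weighted form of the mean recommended above:

def merge_stats(n_a, mean_a, M2_a, n_b, mean_b, M2_b):
    n = n_a + n_b
    delta = mean_b - mean_a
    # Weighted mean: stays accurate even when n_a ≈ n_b and both are large.
    mean = (n_a * mean_a + n_b * mean_b) / n
    M2 = M2_a + M2_b + delta * delta * n_a * n_b / n
    return n, mean, M2      # sample variance of the merged set is M2 / (n - 1)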

Example



Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.

Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.

While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
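A short self-contained sketch that reproduces this comparison (the exact digits may vary slightly with summation order):

def naive_variance(data):
    n = len(data)
    s = sum(data)
    ss = sum(x * x for x in data)
    return (ss - s * s / n) / (n - 1)

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

for offset in (0.0, 1e8, 1e9):
    data = [offset + 4, offset + 7, offset + 13, offset + 16]
    # The two-pass result stays at 30.0; the naive result degrades as the offset grows.
    print(offset, naive_variance(data), two_pass_variance(data))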

Higher-order statistics



Terriberry extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:

M_{3,X} = M_{3,A} + M_{3,B} + \delta^3 \frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\delta\,\frac{n_A M_{2,B} - n_B M_{2,A}}{n_X}
M_{4,X} = M_{4,A} + M_{4,B} + \delta^4 \frac{n_A n_B \left(n_A^2 - n_A n_B + n_B^2\right)}{n_X^3} + 6\delta^2 \frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\delta\,\frac{n_A M_{3,B} - n_B M_{3,A}}{n_X}

Here the M_k are again the sums of powers of differences from the mean ∑(x − x̄)^k, giving

skewness: g_1 = \frac{\sqrt{n}\, M_3}{M_2^{3/2}},
kurtosis: g_2 = \frac{n M_4}{M_2^2} - 3.

For the incremental case (i.e., B = {x}), this simplifies to:

\delta = x - m
m' = m + \frac{\delta}{n}
M_2' = M_2 + \delta^2 \frac{n-1}{n}
M_3' = M_3 + \delta^3 \frac{(n-1)(n-2)}{n^2} - \frac{3\delta M_2}{n}
M_4' = M_4 + \frac{\delta^4 (n-1)(n^2 - 3n + 3)}{n^3} + \frac{6\delta^2 M_2}{n^2} - \frac{4\delta M_3}{n}

By preserving the value δ/n, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.

An example of the online algorithm for kurtosis implemented as described is:
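A sketch of such a single-pass kurtosis routine; the higher moments are updated before the lower ones so that each update sees the previous values of M2 and M3:

def online_kurtosis(data):
    n = 0
    mean = 0.0
    M2 = M3 = M4 = 0.0
    for x in data:
        n1 = n
        n += 1
        delta = x - mean
        delta_n = delta / n
        delta_n2 = delta_n * delta_n
        term1 = delta * delta_n * n1          # the increment of M2
        mean += delta_n
        M4 += term1 * delta_n2 * (n * n - 3 * n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
        M3 += term1 * delta_n * (n - 2) - 3 * delta_n * M2
        M2 += term1
    # Excess kurtosis g2 = n * M4 / M2**2 - 3 (requires M2 > 0).
    return n * M4 / (M2 * M2) - 3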

Pébay further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and Pébay et al. subsequently extend them to weighted and compound moments. One can also find there similar formulas for covariance.

Choi and Sweetman offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin are counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:

H(x_k) = \frac{h(x_k)}{A}

where h(x_k) and H(x_k) represent the frequency and the relative frequency at bin x_k, and A = ∑_{k=1}^{K} h(x_k) Δx_k is the total area of the histogram. After this normalization, the n-th raw and central moments of x(t) can be calculated from the relative histogram:

m_n^{(h)} = \sum_{k=1}^{K} x_k^n\, H(x_k)\,\Delta x_k = \frac{1}{A} \sum_{k=1}^{K} x_k^n\, h(x_k)\,\Delta x_k
\theta_n^{(h)} = \sum_{k=1}^{K} \bigl(x_k - m_1^{(h)}\bigr)^n\, H(x_k)\,\Delta x_k = \frac{1}{A} \sum_{k=1}^{K} \bigl(x_k - m_1^{(h)}\bigr)^n\, h(x_k)\,\Delta x_k

where the superscript (h) indicates that the moments are calculated from the histogram. For constant bin width Δx_k = Δx these two expressions can be simplified using I = A/Δx:

m_n^{(h)} = \frac{1}{I} \sum_{k=1}^{K} x_k^n\, h(x_k)
\theta_n^{(h)} = \frac{1}{I} \sum_{k=1}^{K} \bigl(x_k - m_1^{(h)}\bigr)^n\, h(x_k)
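For the constant-bin-width case, a sketch of these histogram-based moment formulas, where bin_centers and counts play the roles of x_k and h(x_k):

def histogram_moments(bin_centers, counts, order):
    I = sum(counts)                                               # = A / Δx for constant Δx
    m1 = sum(c * h for c, h in zip(bin_centers, counts)) / I      # first raw moment (mean)
    raw = sum(c ** order * h for c, h in zip(bin_centers, counts)) / I
    central = sum((c - m1) ** order * h for c, h in zip(bin_centers, counts)) / I
    return raw, central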

The second approach from Choi and Sweetman is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.

If Q sets of statistical moments are known: (γ_{0,q}, μ_q, σ²_q, α_{3,q}, α_{4,q}) for q = 1, 2, …, Q, then each γ_n can be expressed in terms of the equivalent n-th raw moments:

\gamma_{n,q} = m_{n,q}\,\gamma_{0,q} \qquad \text{for } n = 1, 2, 3, 4 \text{ and } q = 1, 2, \dots, Q

where γ_{0,q} is generally taken to be the duration of the q-th time-history, or the number of points if Δt is constant.

The benefit of expressing the statistical moments in terms of γ is that the Q sets can be combined by addition, and there is no upper limit on the value of Q.

\gamma_{n,c} = \sum_{q=1}^{Q} \gamma_{n,q} \qquad \text{for } n = 0, 1, 2, 3, 4

where the subscript c represents the concatenated time-history or combined γ. These combined values of γ can then be inversely transformed into raw moments representing the complete concatenated time-history:

m_{n,c} = \frac{\gamma_{n,c}}{\gamma_{0,c}} \qquad \text{for } n = 1, 2, 3, 4

Known relationships between the raw moments (m_n) and the central moments (θ_n = E[(x − μ)^n]) are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:

\mu_c = m_{1,c} \qquad \sigma_c^2 = \theta_{2,c} \qquad \alpha_{3,c} = \frac{\theta_{3,c}}{\sigma_c^3} \qquad \alpha_{4,c} = \frac{\theta_{4,c}}{\sigma_c^4} - 3
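A sketch of the whole round trip for a list of segment statistics (γ_0, μ, σ², α_3, α_4), assuming α_4 denotes excess kurtosis as in the formula above; the helper name is illustrative:

def combine_segments(segments):
    # Each segment is (gamma0, mean, variance, skewness, excess_kurtosis).
    gammas = [0.0] * 5                   # gamma_0 .. gamma_4 of the concatenated history
    for gamma0, mu, var, a3, a4 in segments:
        # Central moments of the segment ...
        theta2 = var
        theta3 = a3 * var ** 1.5
        theta4 = (a4 + 3.0) * var ** 2
        # ... converted to raw moments m1..m4 ...
        m1 = mu
        m2 = theta2 + mu ** 2
        m3 = theta3 + 3.0 * mu * theta2 + mu ** 3
        m4 = theta4 + 4.0 * mu * theta3 + 6.0 * mu ** 2 * theta2 + mu ** 4
        # ... and accumulated as gammas, which simply add across segments.
        gammas[0] += gamma0
        for n, m in enumerate((m1, m2, m3, m4), start=1):
            gammas[n] += m * gamma0
    # Back to raw moments of the concatenated history, then to central moments.
    m1, m2, m3, m4 = (gammas[n] / gammas[0] for n in range(1, 5))
    theta2 = m2 - m1 ** 2
    theta3 = m3 - 3.0 * m1 * m2 + 2.0 * m1 ** 3
    theta4 = m4 - 4.0 * m1 * m3 + 6.0 * m1 ** 2 * m2 - 3.0 * m1 ** 4
    return gammas[0], m1, theta2, theta3 / theta2 ** 1.5, theta4 / theta2 ** 2 - 3.0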

Covariance



Very similar algorithms can be used to compute the covariance.

Naïve algorithm

The naïve algorithm is:

\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)/n}{n}.

For the algorithm above, one could use the following Python code:
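One possible rendering, as a sketch that assumes the two sequences have equal length n:

def naive_covariance(data1, data2):
    n = len(data1)
    sum1 = sum(data1)
    sum2 = sum(data2)
    sum12 = sum(x * y for x, y in zip(data1, data2))
    return (sum12 - sum1 * sum2 / n) / n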

With estimate of the mean

As for the variance, the covariance of two random variables is also shift-invariant, so for any two constant values k_x and k_y it can be written:

\operatorname{Cov}(X, Y) = \operatorname{Cov}(X - k_x, Y - k_y) = \frac{\sum_{i=1}^{n} (x_i - k_x)(y_i - k_y) - \left(\sum_{i=1}^{n} (x_i - k_x)\right)\left(\sum_{i=1}^{n} (y_i - k_y)\right)/n}{n}.

and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:
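A sketch of that shifted-data variant, using the first element of each sequence as the shift constant:

def shifted_data_covariance(data_x, data_y):
    n = len(data_x)
    if n < 2:
        return 0.0
    kx = data_x[0]
    ky = data_y[0]
    Ex = Ey = Exy = 0.0
    for x, y in zip(data_x, data_y):
        Ex += x - kx
        Ey += y - ky
        Exy += (x - kx) * (y - ky)
    return (Exy - Ex * Ey / n) / n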

Two-pass

The two-pass algorithm first computes the sample means, and then the covariance:

\bar{x} = \sum_{i=1}^{n} x_i / n
\bar{y} = \sum_{i=1}^{n} y_i / n
\operatorname{Cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n}.

The two-pass algorithm may be written as:
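A sketch of the two-pass covariance, again assuming equal-length inputs:

def two_pass_covariance(data1, data2):
    n = len(data1)
    mean1 = sum(data1) / n
    mean2 = sum(data2) / n
    return sum((x - mean1) * (y - mean2) for x, y in zip(data1, data2)) / n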

A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums ∑x_i and ∑y_i should be zero, but the second pass compensates for any small error.

Online

A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes the co-moment C_n = ∑_{i=1}^{n} (x_i − x̄_n)(y_i − ȳ_n):

\begin{aligned}
\bar{x}_n &= \bar{x}_{n-1} + \frac{x_n - \bar{x}_{n-1}}{n} \\
\bar{y}_n &= \bar{y}_{n-1} + \frac{y_n - \bar{y}_{n-1}}{n} \\
C_n &= C_{n-1} + (x_n - \bar{x}_n)(y_n - \bar{y}_{n-1}) \\
    &= C_{n-1} + (x_n - \bar{x}_{n-1})(y_n - \bar{y}_n)
\end{aligned}

The apparent asymmetry in that last equation is due to the fact that (x_n − x̄_n) = ((n−1)/n)(x_n − x̄_{n−1}), so both update terms are equal to ((n−1)/n)(x_n − x̄_{n−1})(y_n − ȳ_{n−1}). Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.

Thus we can compute the covariance as

\begin{aligned}
\operatorname{Cov}_N(X, Y) = \frac{C_N}{N}
&= \frac{\operatorname{Cov}_{N-1}(X, Y)\cdot(N-1) + (x_n - \bar{x}_n)(y_n - \bar{y}_{n-1})}{N} \\
&= \frac{\operatorname{Cov}_{N-1}(X, Y)\cdot(N-1) + (y_n - \bar{y}_n)(x_n - \bar{x}_{n-1})}{N} \\
&= \frac{\operatorname{Cov}_{N-1}(X, Y)\cdot(N-1) + \frac{N-1}{N}(x_n - \bar{x}_{n-1})(y_n - \bar{y}_{n-1})}{N}.
\end{aligned}
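A sketch of this online co-moment update; dx is taken before the x-mean is updated, while the y-mean is updated before it is used, matching the recurrence above:

def online_covariance(data1, data2):
    n = 0
    mean_x = mean_y = C = 0.0
    for x, y in zip(data1, data2):
        n += 1
        dx = x - mean_x               # x_n - x̄_{n-1}
        mean_x += dx / n
        mean_y += (y - mean_y) / n
        C += dx * (y - mean_y)        # (x_n - x̄_{n-1}) * (y_n - ȳ_n)
    return C / n                      # population covariance; use n - 1 for the sample version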

We can also make a small modification to compute the weighted covariance:
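A sketch of that modification, assuming the input yields (x, y, weight) triples and the weight sum takes the place of n:

def weighted_online_covariance(triples):
    w_sum = 0.0
    mean_x = mean_y = C = 0.0
    for x, y, w in triples:
        w_sum += w
        dx = x - mean_x
        mean_x += (w / w_sum) * dx
        mean_y += (w / w_sum) * (y - mean_y)
        C += w * dx * (y - mean_y)
    return C / w_sum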

Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:

C_X = C_A + C_B + (\bar{x}_A - \bar{x}_B)(\bar{y}_A - \bar{y}_B)\cdot\frac{n_A n_B}{n_X}.
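A sketch of the corresponding merge step for two partitions, returning the combined count, means, and co-moment:

def merge_comoment(n_a, xbar_a, ybar_a, C_a, n_b, xbar_b, ybar_b, C_b):
    n = n_a + n_b
    xbar = (n_a * xbar_a + n_b * xbar_b) / n
    ybar = (n_a * ybar_a + n_b * ybar_b) / n
    C = C_a + C_b + (xbar_a - xbar_b) * (ybar_a - ybar_b) * n_a * n_b / n
    return n, xbar, ybar, C           # covariance of the merged set is C / n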

See also



  • Algebraic formula for the variance
  • Kahan summation algorithm
  • Squared deviations from the mean

References



External links



  • Weisstein, Eric W. "Sample Variance Computation". MathWorld. 


 