Sample Statistics Part 2

Since writing my earlier sample statistics blog post I’ve learned a few things and thought I’d provide an update. Firstly, a quick recap.

For a population X_1, X_2, \ldots , X_N the population variance is defined by

\sigma^2 = \frac{1}{N} \sum\limits_{i=1}^{N} (X_i - \mu)^2 where \mu = \frac{1}{N} \sum\limits_{i=1}^{N} X_i

and for a sample x_1, x_2, \ldots , x_n the sample variance is defined by

s^2 = \frac{1}{n-1} \sum\limits_{i=1}^{n} (x_i - \overline{x})^2 where \overline{x} = \frac{1}{n} \sum\limits_{i=1}^{n} x_i

The reason n-1 is used instead of n is that this is the choice that makes the average sample variance equal to the population variance, when the samples are taken with replacement allowed (as discussed in detail in my first sample statistics blog post).

What I didn’t consider in the previous blog post was the case where the samples are taken without allowing repetition (which is often the way that real life sampling is done). I didn’t consider it because at the time I didn’t know how to perform the relevant analysis. Since then I’ve figured it out (and, as far as I know, it’s not explained elsewhere). It turns out that the divisor isn’t n-1, and it’s not n either: it’s \frac{N}{N-1}(n-1).
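
Here’s a quick numerical sanity check (a sketch of my own, with a made-up population and an arbitrary sample size): average the sum of squared deviations over many without-repetition samples and confirm that dividing by \frac{N}{N-1}(n-1), rather than n-1, recovers the population variance.

```python
import random

# A made-up population (any fixed numbers work) -- purely illustrative.
population = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0, 17.0, 19.0]
N = len(population)
mu = sum(X for X in population) / N
sigma2 = sum((X - mu) ** 2 for X in population) / N  # population variance, divisor N

n = 3            # sample size
trials = 200_000
total_ss = 0.0
rng = random.Random(0)

for _ in range(trials):
    sample = rng.sample(population, n)   # sampling WITHOUT repetition
    xbar = sum(sample) / n
    total_ss += sum((x - xbar) ** 2 for x in sample)

avg_ss = total_ss / trials               # average sum of squared deviations
divisor = (N / (N - 1)) * (n - 1)
print(f"population variance:        {sigma2:.4f}")
print(f"avg SS / ((N/(N-1))(n-1)):  {avg_ss / divisor:.4f}")   # matches sigma^2
print(f"avg SS / (n-1):             {avg_ss / (n - 1):.4f}")   # too big, by N/(N-1)
```

With numbers like these, the n-1 divisor comes out too large by exactly the factor N/(N-1), while the \frac{N}{N-1}(n-1) divisor lands on the population variance.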

Here’s how to see that, together with some ideas that I think are conceptually helpful when dealing with these matters. Continue reading “Sample Statistics Part 2”

An Amusing Pattern

Here’s a fun little fact involving the integers from 1 to 9 and multiples of 7.

If you consider the 3 by 3 square

 \begin{matrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\  7 & 8 & 9 \end{matrix}

where the integers increase by one as you scan from left to right, row by row, starting from the top left corner

 \begin{matrix} 1 & \rightarrow & 2 & \rightarrow & 3 \\ 4 & \rightarrow & 5 & \rightarrow & 6 \\  7 & \rightarrow & 8 & \rightarrow & 9 \end{matrix}

and then traverse the columns from bottom to top, starting at the bottom left corner, you get

\begin{matrix} 7 & \rightarrow & 4 & \rightarrow & 1 \end{matrix} \quad\quad \begin{matrix} 8 & \rightarrow & 5 & \rightarrow & 2 \end{matrix} \quad\quad \begin{matrix}  9 & \rightarrow & 6 & \rightarrow & 3 \end{matrix}

Those numbers are the last digits of

 \begin{matrix} 7 & 14 & 21 & 28 & 35 & 42 & 49 & 56 & 63 \end{matrix}

i.e. starting from 7, they increment by 7 mod 10.
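
The pattern is easy to check mechanically; here’s a short Python sketch that spells out the grid and the column-by-column scan.

```python
# The 3 by 3 square of digits, as laid out above.
square = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]

# Scan each column from bottom to top, starting at the bottom left corner.
column_scan = [square[row][col] for col in range(3) for row in (2, 1, 0)]

# Last digits of the first nine multiples of 7.
last_digits = [7 * k % 10 for k in range(1, 10)]

print(column_scan)                  # [7, 4, 1, 8, 5, 2, 9, 6, 3]
print(last_digits)                  # [7, 4, 1, 8, 5, 2, 9, 6, 3]
print(column_scan == last_digits)   # True
```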

I remember that square of numbers from the old touch-tone phones, but for many years I never noticed the above pattern. It’s quite disconcerting how often I’m oblivious to these things, especially since I like to think I’m a mathematician.

For what it’s worth, the pattern generalizes in a straightforward way: Continue reading “An Amusing Pattern”

Relativistic Length

Here’s something that needed to be explicitly pointed out to me: there’s a direct connection between the fact that simultaneity is observer dependent and the fact that length is observer dependent. The reason is that measuring the length of a rod boils down to recording two positions (the two ends of the rod) at the same time. If two observers don’t agree on simultaneity, i.e. on whether two events occur at the same time, then they won’t agree on the two positions to use for measuring the length.
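
To make the connection quantitative, here’s a minimal sketch using the standard Lorentz transformation; the notation (rest length L_0, relative speed v, and \gamma = 1/\sqrt{1 - v^2/c^2}) is mine rather than anything from the post. In the rod’s rest frame the ends sit at x = 0 and x = L_0, and a moving observer assigns coordinates x' = \gamma (x - v t) and t' = \gamma (t - v x / c^2). Measuring the length in the moving frame means picking one event at each end with equal t', which forces the rest-frame times of those events to differ by t_B - t_A = v L_0 / c^2, and then

x'_B - x'_A = \gamma \left( L_0 - v (t_B - t_A) \right) = \gamma L_0 \left( 1 - \frac{v^2}{c^2} \right) = \frac{L_0}{\gamma}

So the usual length contraction factor drops straight out of the simultaneity requirement.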

While I’m on the subject of length contraction, I’d like to point out that the length an observer assigns to a moving object is not necessarily what the observer sees when looking at the object (or equivalently, the length of the object as measured from a photograph taken by the observer). That’s because Continue reading “Relativistic Length”

Sample Statistics

If you ever take an introductory statistics course, you’ll very quickly find yourself taking a sample of size n, say x_1, x_2, \ldots , x_n, from a population of size N described by X_1, X_2, \ldots , X_N. And then you’ll want to say something about the population based on the sample.

The usual numbers derived from the population are the population mean and the population variance

\mu = E(X) = \frac{1}{N} \sum\limits_{i=1}^{N} X_i

\sigma^2 = Var(X) = \frac{1}{N} \sum\limits_{i=1}^{N} (X_i - \mu)^2

The usual numbers derived from the sample are the sample mean and the sample variance

\overline{x} = \frac{1}{n} \sum\limits_{i=1}^{n} x_i

s^2 = \frac{1}{n-1} \sum\limits_{i=1}^{n} (x_i - \overline{x})^2

And the sample mean and sample variance are then used to provide an estimate for the corresponding population numbers.

The usual question that comes up is why the divisor is n-1 for the sample variance, especially since N (not N-1) is used for the population variance. Why the gratuitous inconsistency? What’s going on?
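
As a quick empirical preview (a sketch of my own, with a made-up population), the snippet below draws many samples with replacement and compares the n and n-1 divisors against the population variance.

```python
import random

# A made-up population -- purely illustrative.
population = [1.0, 4.0, 4.0, 6.0, 9.0, 12.0]
N = len(population)
mu = sum(X for X in population) / N
sigma2 = sum((X - mu) ** 2 for X in population) / N  # population variance, divisor N

n = 4
trials = 200_000
avg_div_n = 0.0
avg_div_n1 = 0.0
rng = random.Random(1)

for _ in range(trials):
    sample = [rng.choice(population) for _ in range(n)]  # sampling WITH replacement
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    avg_div_n += ss / n / trials
    avg_div_n1 += ss / (n - 1) / trials

print(f"population variance:  {sigma2:.4f}")
print(f"average of SS/n:      {avg_div_n:.4f}")    # biased low, by (n-1)/n
print(f"average of SS/(n-1):  {avg_div_n1:.4f}")   # matches sigma^2
```

The n divisor comes out low by the factor (n-1)/n, while the n-1 divisor lands on the population variance.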

This certainly puzzled me when I took my first statistics course (many things puzzled me, even though I aced the exams). This blog post is about what I would tell my earlier self if I had access to a time machine. Continue reading “Sample Statistics”

Pythagoras’ Theorem

I often find myself thinking about two of the triangle results from antiquity. They’re Pythagoras’ theorem, relating the sides of right angled triangles to each other, and Heron’s formula, relating the sides of an arbitrary triangle to the triangle’s area.
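
For concreteness, here are the two results. For a right angled triangle with legs a, b and hypotenuse c, Pythagoras’ theorem says

a^2 + b^2 = c^2

and Heron’s formula gives the area of a triangle with sides a, b, c as

A = \sqrt{s(s-a)(s-b)(s-c)} where s = \frac{1}{2}(a+b+c)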

Of the two, Pythagoras’ theorem has been far more fruitful for the advancement of mathematics (in terms of implications and generalizations). In this blog post I’ll focus on Pythagoras’ theorem and describe some easily visualized (i.e. two and three dimensional) generalizations. Continue reading “Pythagoras’ Theorem”