2016-08-28

where's the information?

It came up frequently in discussions this summer (and last): Where is the information (in, say, a spectrum of a star) about some parameter of interest (say, the potassium abundance of the star, or the radial velocity), and how much information is there? The answer is very simple! But the issues can be subtle, because information is only calculable within the context of some kind of model. And by “model” here, I mean a probability density function for the data, parameterized by the parameters of interest. That is, a likelihood function.

The fast answer is this: The information about parameter θ is related to the (inverse squared) amount you can move parameter θ and still get reasonable probability for the data. The nice thing is that you can often compute this without doing a full inference. It is easiest in linear (or linearized) models with Gaussian noise! That is the case we will work through here.

When you have a linear or linearized model with Gaussian noise, there are derivatives of the expectation Y for the data with respect to the parameter of interest, dY/dθ. Here (for now) Y is an N-vector, where N is the size of your data, and θ is a scalar parameter (let's call it the velocity!). So the derivative dY/dθ is also an N-vector. The information about θ in the data is related to the dot product of this vector with itself: The inverse variance with which you can measure θ, given data with Gaussian noise with N×N covariance matrix C (diagonal if the N data points are independent), is:

σ_θ^{-2} = [dY/dθ]^T C^{-1} [dY/dθ]

where σ_θ is the uncertainty on θ. That is, the inverse variance on the θ parameter is the inner product of the derivative vector with itself, where that inner product uses the inverse variance tensor of the noise in the data as its metric! Here we have implicitly assumed that the vectors are column vectors. When the N data points are independent, the C matrix is diagonal, as is its inverse. Note the units too: The inverse variance tensor has inverse Y-squared units, and the inner product uses the derivatives to convert this into inverse θ-squared units.
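
Here is a minimal sketch of that computation in Python, assuming a toy spectrum with a single Gaussian absorption line, independent pixels (so C is diagonal), and a finite-difference estimate of dY/dθ; the line, the noise level, and all the numbers are invented for illustration, not taken from any real data.

```python
import numpy as np

c = 299792.458  # speed of light (km/s)

def model_spectrum(wavelengths, velocity):
    """Toy expectation Y: unit continuum with one Gaussian absorption line,
    Doppler-shifted by velocity (km/s); line position and width are made up."""
    center = 7699.0 * (1.0 + velocity / c)  # invented line center, Angstroms
    return 1.0 - 0.5 * np.exp(-0.5 * ((wavelengths - center) / 0.2) ** 2)

wavelengths = np.linspace(7694.0, 7704.0, 1000)  # N pixels
velocity = 10.0   # km/s; the fiducial value of theta
sigma_y = 0.01    # per-pixel Gaussian noise rms, so C = sigma_y^2 * identity

# dY/dtheta as an N-vector, via a symmetric finite difference in velocity.
dv = 0.01  # km/s
dY_dtheta = (model_spectrum(wavelengths, velocity + dv)
             - model_spectrum(wavelengths, velocity - dv)) / (2.0 * dv)

# sigma_theta^{-2} = [dY/dtheta]^T C^{-1} [dY/dtheta]; with diagonal C this
# is just an inverse-variance-weighted sum of squared derivatives.
inv_var_theta = np.sum(dY_dtheta ** 2 / sigma_y ** 2)
sigma_theta = 1.0 / np.sqrt(inv_var_theta)
print("expected velocity uncertainty: {:.4f} km/s".format(sigma_theta))
```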

(When there are multiple parameters in θ—say K parameters—the inner product generalizes to making a K×K inverse covariance matrix for the parameter vector, and the expected variance on each parameter is obtained by inverting that inverse covariance matrix and reading off its diagonal entries.)
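
A sketch of that multi-parameter case, under the same assumptions: the K derivative vectors get stacked into an N×K Jacobian, and the K×K inverse covariance is the same kind of inner product. The Jacobian and noise levels below are placeholders, not a real model.

```python
import numpy as np

N, K = 1000, 3                      # N data points, K parameters
rng = np.random.default_rng(17)
J = rng.normal(size=(N, K))         # columns are dY/dtheta_k, one per parameter
sigma_y = 0.01 * np.ones(N)         # per-pixel noise rms (diagonal C)

Cinv = np.diag(1.0 / sigma_y ** 2)          # N x N inverse noise covariance
inv_cov = J.T @ Cinv @ J                    # K x K inverse covariance of the parameters
param_cov = np.linalg.inv(inv_cov)          # K x K covariance of the parameters
param_sigmas = np.sqrt(np.diag(param_cov))  # expected uncertainty on each parameter
```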

But we started with the question: Where is the information in the data? In this case, it means: Where in the spectrum is the information about the velocity? The answer is simple: It is where the data—or really the inverse variance tensor for the noise in the data—makes large contributions to the inverse variance computed above for θ. You can think of splitting the data into fine chunks, and asking this question about every chunk; the chunks or pixels or data subsets that contribute most to the scalar inverse variance are the subsets that contain the most information about θ.
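
With independent pixels this is especially transparent, because the scalar inverse variance is literally a sum of per-pixel terms, so you can look at each pixel's contribution directly. A sketch, again with invented derivative and noise vectors standing in for the real thing:

```python
import numpy as np

N = 1000
rng = np.random.default_rng(3)
dY_dtheta = rng.normal(size=N)   # stand-in for the derivative vector
sigma_y = 0.01 * np.ones(N)      # per-pixel noise rms (diagonal C)

per_pixel_info = dY_dtheta ** 2 / sigma_y ** 2      # each pixel's contribution
total_inv_var = per_pixel_info.sum()                # equals [dY/dtheta]^T C^{-1} [dY/dtheta]
top_pixels = np.argsort(per_pixel_info)[::-1][:10]  # the ten most informative pixels
```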
