GP Regression




GP Regression

Data

$$\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$$

where $\mathbf{x}_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$.

Model

$$y = f(\mathbf{x}) + \epsilon, \qquad f \sim \mathcal{GP}\big(m(\mathbf{x}),\, k(\mathbf{x}, \mathbf{x}')\big), \qquad \epsilon \sim \mathcal{N}(0, \sigma_n^2)$$

where $y$ and $\epsilon$ are univariate, $\mathbf{x}$ is multivariate and $\epsilon$ is independent of $f$.

INFO The GP represents your prior belief about the model. Your choice of $m$ and $k$ here is very influential on the result of the inference. Free parameters in your choice of covariance function are called hyperparameters.

Example

For modelling what we believe to be a continuously varying process on $\mathbb{R}^d$, centred on the origin, it is enough to set $m(\mathbf{x}) = 0$ and use the squared-exponential kernel

$$k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}' \rVert^2}{2\ell^2}\right)$$

with hyperparameters $\sigma_f$ and $\ell$.

INFO Training data can be used to influence your selection and parameterisation of $m$ and $k$.

Not worrying about this topic for now. Just hand tuning to keep things simple.
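As a rough sketch of this hand-tuned setup (in numpy; the function names and default hyperparameter values are my own illustrative choices):

```python
import numpy as np

def mean_fn(X):
    """Zero mean function: the process is believed to be centred on the origin."""
    return np.zeros(X.shape[0])

def kernel_fn(X1, X2, signal_var=1.0, lengthscale=1.0):
    """Squared-exponential covariance k(x, x') = sigma_f^2 exp(-||x - x'||^2 / (2 l^2)).

    signal_var (sigma_f^2) and lengthscale (l) are the hand-tuned hyperparameters.
    """
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2 * X1 @ X2.T
    )
    return signal_var * np.exp(-0.5 * sq_dists / lengthscale**2)
```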

Inference

DESIRED Here we are trying to infer the distribution of the function values at unknown (test) points $X_*$ from the data points, i.e. the conditional $p(\mathbf{f}_* \mid X_*, X, \mathbf{y})$.

First let's consider the multivariate Gaussian over $(\mathbf{f}, \mathbf{f}_*)$ that we know we can extract from the GP using the data points $X$ and the test points $X_*$ on $f$. From the definition of Gaussian processes,

$$\begin{bmatrix} \mathbf{f} \\ \mathbf{f}_* \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} m(X) \\ m(X_*) \end{bmatrix},\; \begin{bmatrix} K(X, X) & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix}\right)$$

or more simply, writing $\boldsymbol{\mu} = m(X)$, $\boldsymbol{\mu}_* = m(X_*)$, $K = K(X, X)$, $K_* = K(X, X_*)$ and $K_{**} = K(X_*, X_*)$:

$$\begin{bmatrix} \mathbf{f} \\ \mathbf{f}_* \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} \boldsymbol{\mu} \\ \boldsymbol{\mu}_* \end{bmatrix},\; \begin{bmatrix} K & K_* \\ K_*^\top & K_{**} \end{bmatrix}\right)$$
Since $f$ and $\epsilon$ are independent, the multivariate Gaussian over $(\mathbf{y}, \mathbf{f}_*)$ has means and covariances which are simply summed (sum of covariances from (?)):

$$\begin{bmatrix} \mathbf{y} \\ \mathbf{f}_* \end{bmatrix} \sim \mathcal{N}\!\left(\begin{bmatrix} \boldsymbol{\mu} \\ \boldsymbol{\mu}_* \end{bmatrix},\; \begin{bmatrix} K + \sigma_n^2 I & K_* \\ K_*^\top & K_{**} \end{bmatrix}\right)$$
From this, we can get the conditional distribution:

$$\mathbf{f}_* \mid \mathbf{y} \sim \mathcal{N}\big(\bar{\mathbf{f}}_*,\, \operatorname{cov}(\mathbf{f}_*)\big)$$

where we can express $\bar{\mathbf{f}}_*$ and $\operatorname{cov}(\mathbf{f}_*)$ using the complicated-looking, but simple to apply, formulas for conditional Gaussians in (?):

$$\bar{\mathbf{f}}_* = \boldsymbol{\mu}_* + K_*^\top \left(K + \sigma_n^2 I\right)^{-1} (\mathbf{y} - \boldsymbol{\mu}), \qquad \operatorname{cov}(\mathbf{f}_*) = K_{**} - K_*^\top \left(K + \sigma_n^2 I\right)^{-1} K_* \tag{1}$$
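A minimal numpy sketch of equations (1), reusing the `mean_fn` and `kernel_fn` sketches above (the Cholesky-based solve and the toy data are my own choices, not something prescribed here):

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, noise_var=1e-2):
    """Posterior mean and covariance of f_* given y, per equations (1)."""
    K = kernel_fn(X_train, X_train)    # K(X, X)
    K_s = kernel_fn(X_train, X_test)   # K(X, X_*)
    K_ss = kernel_fn(X_test, X_test)   # K(X_*, X_*)

    # (K + sigma_n^2 I)^{-1} applied via a Cholesky factorisation for stability.
    L = np.linalg.cholesky(K + noise_var * np.eye(len(X_train)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train - mean_fn(X_train)))
    v = np.linalg.solve(L, K_s)

    post_mean = mean_fn(X_test) + K_s.T @ alpha   # first equation in (1)
    post_cov = K_ss - v.T @ v                     # second equation in (1)
    return post_mean, post_cov

# Toy 1-D usage (illustrative values only).
X_train = np.array([[-1.0], [0.0], [1.5]])
y_train = np.array([0.2, 1.0, -0.5])
X_test = np.linspace(-3.0, 3.0, 50)[:, None]
mu_star, cov_star = gp_posterior(X_train, y_train, X_test)
```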

Conclusions

Prior and Posterior

The GP itself is your prior knowledge about the model. The resulting conditional distribution is the posterior.

Model is Where the Tuning Happens

Tuning your model for the GP, i.e. $m$ and $k$, is where you gain control over how your inference results behave. For example, stationary vs non-stationary kernel functions typically induce very different behaviour across different parts of the domain.
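To make that distinction concrete, here is an illustrative pair of kernels (the linear kernel is my own example of a non-stationary choice; it isn't singled out above):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Stationary: depends only on x - x', so it behaves the same everywhere."""
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2 * X1 @ X2.T
    )
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def linear_kernel(X1, X2, bias=1.0):
    """Non-stationary: depends on the absolute locations x and x',
    so the prior variance grows with distance from the origin."""
    return bias + X1 @ X2.T
```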

Variance Collapses Around Training Points

For simplicity, if you set $\sigma_n = 0$ (noise free), assume $m(\mathbf{x}) = 0$ in the model and predict at $\mathbf{x}_* = \mathbf{x}_1$ for a single training data point $(\mathbf{x}_1, y_1)$, then working through the equations in (1) shows everything cancelling out and leaving you with just $\bar{f}_* = y_1$ and $\operatorname{var}(f_*) = 0$. Throwing the noise back in changes things a little, but you still get the dominant collapse of variance around the training points.
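Spelling out that cancellation under those assumptions (noise free, zero mean, a single training point):

$$\bar{f}_* = k(\mathbf{x}_1, \mathbf{x}_1)\, k(\mathbf{x}_1, \mathbf{x}_1)^{-1}\, y_1 = y_1, \qquad \operatorname{var}(f_*) = k(\mathbf{x}_1, \mathbf{x}_1) - k(\mathbf{x}_1, \mathbf{x}_1)\, k(\mathbf{x}_1, \mathbf{x}_1)^{-1}\, k(\mathbf{x}_1, \mathbf{x}_1) = 0.$$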

Characteristics of the Posterior

The mean in (1) can be viewed either as 1) a linear combination of the observations $\mathbf{y}$, or 2) a linear combination of the kernel functions centred on the training data points (the elements of $K_*$).
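Written out for a single test point $\mathbf{x}_*$ (taking $m = 0$ for brevity), the two views are:

$$\bar{f}_* = \mathbf{k}_*^\top \left(K + \sigma_n^2 I\right)^{-1} \mathbf{y} = \sum_{i=1}^{n} w_i\, y_i = \sum_{i=1}^{n} \alpha_i\, k(\mathbf{x}_i, \mathbf{x}_*), \qquad \mathbf{w}^\top = \mathbf{k}_*^\top \left(K + \sigma_n^2 I\right)^{-1}, \quad \boldsymbol{\alpha} = \left(K + \sigma_n^2 I\right)^{-1} \mathbf{y}$$

where $\mathbf{k}_*$ is the vector with elements $k(\mathbf{x}_i, \mathbf{x}_*)$.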

The variance can also be interpreted intuitively. It is simply the prior, $K_{**}$, with a positive term, $K_*^\top (K + \sigma_n^2 I)^{-1} K_*$, subtracted due to information from the observations.

Gaussian Process vs Bayesian Regression

Gaussian process regression utilises kernels, not basis functions. However, the two can be shown to be equivalent for a given choice of basis functions/kernels. I rather like Gaussian processes for the ease of implementation.