Assumptions

  • Independence : the data points are independent.
  • No knowledge about the weights : we assume no prior belief about the weights (the training data will dictate them).
  • Uniform uncertainty : each data point has the same uncertainty (i.e. a constant variance \sigma^2), again with no prior belief about this.

Problem Definition

We wish to establish the parameters of a set of Gaussian distributions that straddle a linear function. These distributions should be centred (have means) on the estimated function, with a constant variance independent of x. The linear parameters (weights) and the constant variance should maximise the likelihood of a given set of data points.

Gaussians

Mathematically, this can be stated as saying that for any input x the corresponding y is a Gaussian-distributed variable centred on a linear function with weights w:

(1)  p(y \mid x, w, \sigma^2) = \mathcal{N}(y;\, w^T x,\, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(y - w^T x)^2}{2\sigma^2} \right)

This is basically saying that for a discrete set of points x_i we expect the corresponding y_i to vary around a mean given by w^T x_i.
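As a minimal sketch of this model, the following generates data points whose y values are Gaussian-distributed around a line. The weights and noise level are hypothetical example values, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Example parameters (illustrative choices, not from the text).
w = np.array([1.5, -2.0])   # slope and intercept
sigma = 0.5                 # constant standard deviation

# Inputs with a bias column appended so w[1] acts as the intercept.
x = np.linspace(0.0, 5.0, 50)
X = np.column_stack([x, np.ones_like(x)])

# Each y_i is drawn from a Gaussian centred on the line w^T x_i,
# with the same variance sigma^2 everywhere (the uniform-uncertainty assumption).
y = X @ w + rng.normal(0.0, sigma, size=x.shape)
```

Every point shares the same spread around the line; only the mean moves with x.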

Estimation

The likelihood is the probability of the observed data given the parameters. Since the data points are independent, it factorises into a product over the points:

L(w, \sigma^2) = \prod_{i=1}^{N} p(y_i \mid x_i, w, \sigma^2)

Given a set of observed pairs (x_i, y_i), the goal here is to find the linear weights w and the variance \sigma^2 such that the likelihood (the probability above) is maximised. This is essentially the same as finding the set of Gaussian distributions whose means lie as close as possible to the observed y_i. To find the maximum, it is easier to find the equivalent maximum of the log likelihood

\ln L(w, \sigma^2) = -\frac{N}{2} \ln(2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{N} (y_i - w^T x_i)^2

which, by taking the derivative with respect to w and equating it to zero, gives the maximum at

\hat{w} = (X^T X)^{-1} X^T y

where X is the matrix whose rows are the inputs x_i and y is the vector of observed outputs.

Similarly, we can find the best choice of \sigma^2 by taking the derivative with respect to \sigma^2 and equating it to zero, resulting in:

\hat{\sigma}^2 = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{w}^T x_i)^2
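The two closed-form estimates, together with the log likelihood they maximise, can be sketched as follows (the helper names are illustrative, and X is assumed to already include a bias column):

```python
import numpy as np

def log_likelihood(w, sigma2, X, y):
    """Log likelihood of independent Gaussian points centred on the line X @ w."""
    n = len(y)
    resid = y - X @ w
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum(resid**2) / (2 * sigma2)

def fit_ml(X, y):
    """Maximum-likelihood weights and constant variance.

    w_hat solves the normal equations (X^T X)^{-1} X^T y, computed here
    via a least-squares solve; sigma^2_hat is the mean squared residual.
    """
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ w_hat
    return w_hat, np.mean(resid**2)
```

Any other choice of weights can only lower `log_likelihood`, since `w_hat` minimises the sum of squared residuals that the log likelihood penalises.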

Prediction

The Gaussian distribution of y at some arbitrary point x follows from the estimated weights and variance above and can be expressed as:

(2)  p(y \mid x, \hat{w}, \hat{\sigma}^2) = \mathcal{N}(y;\, \hat{w}^T x,\, \hat{\sigma}^2)

Implicit here is that the probability distribution for y above is actually a conditional probability assuming that all of the other quantities are fixed, i.e. p(y \mid x, \hat{w}, \hat{\sigma}^2).
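Equation (2) can be evaluated directly; a small sketch (the function name is illustrative, and x is assumed to include the same bias column as the training inputs):

```python
import numpy as np

def predictive_density(y, x, w_hat, sigma2_hat):
    """Gaussian density of y at input x, centred on the fitted line (Eq. 2)."""
    mean = x @ w_hat
    return np.exp(-((y - mean) ** 2) / (2.0 * sigma2_hat)) / np.sqrt(2.0 * np.pi * sigma2_hat)
```

The density peaks at the fitted line's value \hat{w}^T x and falls off with the fitted variance, regardless of where x lies.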


Our Naivete

The assumptions here represent a naivete about the problem that can be expounded upon in several ways.

  • What if we have expert knowledge (prior belief) about what the weights should be? We should then be looking at MAP, not ML.
  • The same can be said of having a prior belief about the uncertainties.
  • The uncertainty itself may have a non-constant form, i.e. training points may be more certain on some sub-domain of x and less certain elsewhere.
  • In more complicated scenarios, data points are not necessarily independent. For example, continuity can influence covariance so that points close to each other are highly correlated.
  • A traditional way of influencing the weights to keep them simple is to use optimisation techniques which add a penalty term to the optimisation.
    • Refer to these lecture notes for notes on how to do this with ridge regression or lasso techniques.
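A ridge penalty, for instance, adds a term \alpha \lVert w \rVert^2 to the squared-error objective, which again has a closed form. A minimal sketch (assuming, as above, that X includes a bias column):

```python
import numpy as np

def fit_ridge(X, y, alpha):
    """Penalised least squares: minimises ||y - X w||^2 + alpha * ||w||^2.

    Closed form: w = (X^T X + alpha I)^{-1} X^T y. Larger alpha shrinks
    the weights towards zero, keeping the fitted function "simple".
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)
```

With alpha = 0 this reduces to the maximum-likelihood solution; increasing alpha trades data fit for smaller weights.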

