Linear Regression - ML
Assumptions
- Independence : the data points are independent.
- No knowledge about the weights : we assume no prior belief about the weights (the training data will dictate them).
- Uniform uncertainty : each data point has the same uncertainty (i.e. a single constant variance $\sigma^2$), again with no prior belief about this.
Problem Definition
We wish to establish the parameters for a set of Gaussian distributions that straddle a linear function. These distributions should be centred on (have means equal to) the estimated function, with a constant variance independent of $\mathbf{x}$. The linear parameters (weights) and the constant variance should maximise the likelihood of a given set of data points. Mathematically, this can be stated as saying that for any $\mathbf{x}_i$ the corresponding $y_i$ is a Gaussian-distributed variable around a linear function with weights $\mathbf{w}$:

$$p(y_i \mid \mathbf{x}_i, \mathbf{w}, \sigma^2) = \mathcal{N}\left(y_i \mid \mathbf{w}^\top \mathbf{x}_i, \sigma^2\right)$$
This is basically saying we are taking a discrete set of points and expecting the corresponding $y_i$ to vary around a mean given by $\mathbf{w}^\top \mathbf{x}_i$.
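As a concrete sketch of this generative picture (the weights, noise level, and one-dimensional input here are arbitrary illustrations, not values from the text), each $y_i$ is drawn from a Gaussian centred on the line:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" parameters, chosen only for illustration.
w = np.array([1.5, -0.7])   # [intercept, slope]
sigma = 0.3                 # constant standard deviation, same for every point

# Design matrix with a bias column: each row is (1, x_i).
x = np.linspace(0.0, 1.0, 20)
X = np.column_stack([np.ones_like(x), x])

# Each y_i is Gaussian-distributed around the linear mean w^T x_i.
y = X @ w + rng.normal(0.0, sigma, size=x.shape)
```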
Estimation
The likelihood is the probability of the observed data given the parameters. Since the data points are independent, it factorises as

$$p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}, \sigma^2) = \prod_{i=1}^{N} \mathcal{N}\left(y_i \mid \mathbf{w}^\top \mathbf{x}_i, \sigma^2\right)$$
Given a set of observed pairs $(\mathbf{x}_i, y_i)$, the goal here is to find the linear weights $\mathbf{w}$ and the variance $\sigma^2$ such that the likelihood (the probability above) is maximised. This is essentially the same as finding the set of Gaussian distributions whose means are located as close as possible to the observed $y_i$. To find the maximum, it is easier to find the equivalent maximum of the log likelihood,

$$\ln p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}, \sigma^2) = -\frac{N}{2}\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{N}\left(y_i - \mathbf{w}^\top \mathbf{x}_i\right)^2$$
which, by taking the derivative with respect to $\mathbf{w}$ and equating it to zero, gives the maximum at

$$\mathbf{w}_{\mathrm{ML}} = \left(\mathbf{X}^\top \mathbf{X}\right)^{-1}\mathbf{X}^\top \mathbf{y}$$
Similarly, we can find the best choice of $\sigma^2$ by taking the derivative and equating it to zero, resulting in:

$$\sigma^2_{\mathrm{ML}} = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \mathbf{w}_{\mathrm{ML}}^\top \mathbf{x}_i\right)^2$$
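A minimal sketch of both closed-form estimates, using NumPy's least-squares solver to evaluate $(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}$ (the synthetic data and "true" parameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a known linear model (values chosen for illustration).
true_w, true_sigma = np.array([2.0, 0.5]), 0.2
x = rng.uniform(0.0, 5.0, size=200)
X = np.column_stack([np.ones_like(x), x])          # bias column + feature
y = X @ true_w + rng.normal(0.0, true_sigma, 200)

# Maximum-likelihood weights: w_ML = (X^T X)^{-1} X^T y.
w_ml, *_ = np.linalg.lstsq(X, y, rcond=None)

# Maximum-likelihood variance: mean squared residual (note 1/N, not 1/(N-1)).
residuals = y - X @ w_ml
var_ml = np.mean(residuals ** 2)
```

With enough data the recovered weights and noise level should land close to the values used to generate the sample.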
Prediction
The Gaussian distribution of $y$ at some arbitrary point $\mathbf{x}$ follows from the estimated weights and variance above and can be expressed as:

$$p\left(y \mid \mathbf{x}, \mathbf{w}_{\mathrm{ML}}, \sigma^2_{\mathrm{ML}}\right) = \mathcal{N}\left(y \mid \mathbf{w}_{\mathrm{ML}}^\top \mathbf{x}, \sigma^2_{\mathrm{ML}}\right)$$
Implicit here is that the probability distribution for $y$ above is actually a conditional probability assuming that all of the other quantities are fixed, i.e. $p\left(y \mid \mathbf{x}, \mathbf{w}_{\mathrm{ML}}, \sigma^2_{\mathrm{ML}}\right)$.
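A sketch of evaluating this predictive density at a new point (the fitted parameters below are hypothetical stand-ins for the output of an actual fit):

```python
import numpy as np

def predictive_density(y, x_new, w_ml, var_ml):
    """Gaussian density p(y | x, w_ML, sigma^2_ML) at input x_new."""
    mean = w_ml @ x_new
    return np.exp(-(y - mean) ** 2 / (2.0 * var_ml)) / np.sqrt(2.0 * np.pi * var_ml)

# Hypothetical fitted parameters (stand-ins for a real estimation run).
w_ml = np.array([2.0, 0.5])   # [intercept, slope]
var_ml = 0.04

x_new = np.array([1.0, 3.0])  # bias term + feature value
mean = w_ml @ x_new           # predictive mean = 2.0 + 0.5 * 3.0 = 3.5
```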
Our Naivete
The assumptions here represent a naivete about the problem that can be improved upon in several ways.
- What if we have expert knowledge (prior belief) about what the weights should be? We should then be looking at MAP, not ML.
- The same can be said of having a prior belief about the uncertainties.
- The uncertainty itself may have a non-constant form, i.e. training points may be more certain on some sub-domain of $\mathbf{x}$ and less certain elsewhere.
- In more complicated scenarios, data points are not necessarily independent. For example, continuity can influence covariance so that points close to each other are highly correlated.
- A traditional way of influencing the weights to keep them simple is to add a penalty term to the optimisation objective.
- Refer to these lecture notes for how to do this with ridge regression or lasso techniques.
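As a sketch of the penalty-term idea, ridge regression adds an L2 penalty $\lambda \lVert \mathbf{w} \rVert^2$ to the squared-error objective, which changes the closed-form solution to $\mathbf{w} = (\mathbf{X}^\top\mathbf{X} + \lambda \mathbf{I})^{-1}\mathbf{X}^\top\mathbf{y}$ (the data and choice of $\lambda$ below are arbitrary illustrations, not from the lecture notes referenced above):

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Ridge solution: minimises ||y - Xw||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 5.0, size=100)
X = np.column_stack([np.ones_like(x), x])
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, 0.1, 100)

w_ols   = ridge_weights(X, y, lam=0.0)    # lam=0 recovers ordinary least squares
w_ridge = ridge_weights(X, y, lam=10.0)   # larger lam shrinks the weights
```

Increasing $\lambda$ trades a little bias for smaller, more stable weights, which is the "keep them simple" influence described above.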