Degrees of Freedom

Maverick848 · New member · Joined Jan 20, 2006 · Messages: 22
On the Wikipedia page for "degrees of freedom," it says that the residuals are "restrained to lie within the space of two equations" and then it shows you the equations. How do they get the second equation?
 
In a simple linear regression there are two equations in the two unknown parameters, the intercept and the slope. They are found by minimizing the sum of squared residuals with respect to those two parameters. The second equation is the condition from setting the derivative with respect to the slope to zero: the vector of residuals must be orthogonal to the vector of values of the independent variable.
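A quick numerical check makes the two equations concrete. This is just a sketch in plain Python with made-up data: at the least-squares solution, the residuals sum to zero (orthogonality to the all-ones vector, from the intercept equation) and are orthogonal to the x vector (from the slope equation).

```python
# Made-up data for illustration
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

# Closed-form least-squares estimates for the line y = a + b*x
x_bar = sum(x) / n
y_bar = sum(y) / n
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
    / sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar

residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# First normal equation: residuals are orthogonal to the all-ones vector
print(sum(residuals))                                  # ~0
# Second normal equation: residuals are orthogonal to the x vector
print(sum(xi * ei for xi, ei in zip(x, residuals)))    # ~0
```

Both sums come out to zero up to floating-point rounding, whatever data you plug in.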
 
Why do those two vectors need to be orthogonal and how does that relate to finding the slope?
 
You have a set of n points, but only 2 points are needed to determine a straight line. The orthogonality arises from the process of minimization itself, and imposes the other n-2 constraints. In other words, it is the definition of least squares estimates and the requirement for a unique solution that lead to the orthogonality, because you are minimizing a sum of squares. If the criterion for the estimates is not least squares, then you don't necessarily get orthogonal vectors.
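The geometry can be checked directly. In this plain-Python sketch (invented data), the residual vector is orthogonal to the all-ones vector and to x, so it is orthogonal to every linear combination c0 + c1*x; the residuals are confined to the (n-2)-dimensional orthogonal complement of the model's column space.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

# Least-squares fit (closed form)
x_bar, y_bar = sum(x) / n, sum(y) / n
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
    / sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar
e = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# The residual vector is orthogonal to ANY vector of the form c0 + c1*x,
# not just to the 1-vector and x themselves: two constraints on e,
# hence n - 2 free directions left for it.
dots = []
for c0, c1 in [(1.0, 0.0), (0.0, 1.0), (3.5, -2.0), (-0.7, 4.1)]:
    v = [c0 + c1 * xi for xi in x]
    dot = sum(vi * ei for vi, ei in zip(v, e))
    dots.append(dot)
    print(dot)  # ~0 each time
```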
 
If there are n-2 constraints, then wouldn't there only be 2 degrees of freedom?
 
There are only 2 constraints on the residuals (orthogonality to the constant vector and to x), which is what leaves the residuals with n-2 degrees of freedom. The two degrees of freedom go with the two parameters that are estimated; the remaining n-2 go with the errors. Orthogonality is why the two counts add up to n.
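The additivity can be verified numerically. In this plain-Python sketch (invented data), orthogonality of the fitted values and the residuals makes the sums of squares decompose exactly, and the degrees of freedom split the same way: n - 1 total = 1 for the slope + (n - 2) for the residuals.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

# Least-squares fit (closed form)
x_bar, y_bar = sum(x) / n, sum(y) / n
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
    / sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar
fitted = [a + b * xi for xi in x]

ss_total = sum((yi - y_bar) ** 2 for yi in y)                 # df = n - 1
ss_model = sum((fi - y_bar) ** 2 for fi in fitted)            # df = 1
ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))   # df = n - 2

# Orthogonality makes this a Pythagorean identity: the pieces add exactly
print(ss_model + ss_resid, ss_total)   # equal up to rounding
print(1 + (n - 2), n - 1)              # the degrees of freedom add the same way
```

Without orthogonality there would be a cross term and neither the sums of squares nor the degrees of freedom would add cleanly.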
 
What specifically about the process of minimization of the sum of squares creates the condition of orthogonality between the two vectors?
 
Orthogonality is the result of minimizing a sum of squares of residuals that are linear in the unknown parameters. This is easiest to see by performing the minimization itself: setting the partial derivatives to zero is equivalent to the orthogonality conditions, and taking the (partial) derivative of a quadratic necessarily leads to a linear relationship.
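Carrying out that minimization explicitly, here as a plain-Python sketch with made-up data: S(a, b) is quadratic in a and b, so its partial derivatives are linear in them, and they turn out to be exactly (-2 times) the two orthogonality sums. At the least-squares solution both vanish.

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 5.0, 4.0, 5.0]
n = len(x)

def S(a, b):
    """Sum of squared residuals for the line y = a + b*x (quadratic in a, b)."""
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

# Differentiating the quadratic S gives expressions LINEAR in (a, b):
#   dS/da = -2 * sum(e_i)        (orthogonality to the 1-vector)
#   dS/db = -2 * sum(x_i * e_i)  (orthogonality to the x vector)
def dS_da(a, b):
    return -2 * sum(yi - a - b * xi for xi, yi in zip(x, y))

def dS_db(a, b):
    return -2 * sum(xi * (yi - a - b * xi) for xi, yi in zip(x, y))

# Closed-form least-squares solution
x_bar, y_bar = sum(x) / n, sum(y) / n
b_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)
a_hat = y_bar - b_hat * x_bar

# Both partial derivatives vanish at the minimizer, which is the same
# statement as the residuals being orthogonal to the 1-vector and to x.
print(dS_da(a_hat, b_hat), dS_db(a_hat, b_hat))  # both ~0
```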
 