Regression analysis

This page presents the statistics used to analyse the results of a linear regression.

The linear model is:

Y = \sum_{k = 1}^{p} a_k \Psi_k(\vect{x}) + \epsilon

where:

  • n_x \in \Nset is the dimension of the input vector,

  • \vect{x} \in \Rset^{n_x} is the input vector,

  • p \in \Nset is the number of parameters,

  • (\Psi_k)_{k = 1, ..., p} are the basis functions where \Psi_k: \Rset^{n_x} \rightarrow \Rset for k \in \{1, ..., p\},

  • (a_k)_{k = 1, ..., p} are the coefficients where a_k \in \Rset for k \in \{1, ..., p\},

  • \epsilon \sim \cN(0, \sigma^2) where \cN is a normal distribution and \sigma > 0 is its standard deviation.

The main reason for assuming a normal noise is to make tests of significance possible (see [rawlings2001] page 3). In particular, this makes it possible to use the F-test and the T-test reviewed later in this document. Furthermore, if the errors are normal, then the method of least squares and the maximum likelihood method are equivalent (see [bingham2010] theorem 1.8 page 22).

Caution

There is an ambiguity when the number of parameters is left unspecified in the text, which may explain the differences between the formulas found in the books. In some texts, the intercept has the index 0, which increases the number of parameters by one. In the present document, the total number of parameters is equal to p, but not all books use the same convention.

Experimental design

Let n \in \Nset be the sample size. A set of observations of the input variables is required:

\cX = \left\{ \vect{x}^{(1)}, \dots, \vect{x}^{(n)} \right\}.

We assume that the random errors \left\{ \epsilon^{(1)}, \dots, \epsilon^{(n)} \right\} are independent. We consider the corresponding output observations:

\cY = \left\{ y^{(1)}, \dots, y^{(n)} \right\}

where:

y^{(i)} = \sum_{k = 1}^{p} a_k \Psi_k \left(\vect{x}^{(i)}\right) + \epsilon^{(i)}

for i \in \{1, ..., n\}. Since the errors (\epsilon^{(i)})_{i = 1, ..., n} are independent, the output observations \cY are independent too. Let \vect{y} = (y^{(1)},\dots,y^{(n)})^T \in \Rset^{n} be the vector of output observations.
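As an illustration, the following Python sketch simulates such a set of observations with NumPy. The one-dimensional input, the quadratic basis \Psi_1(x) = 1, \Psi_2(x) = x, \Psi_3(x) = x^2, the coefficient values and the noise level are all illustrative assumptions, not values taken from this page:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 20                              # sample size
    a = np.array([1.0, 2.0, -0.5])      # arbitrary coefficients a_1, ..., a_p (p = 3)
    sigma = 0.1                         # noise standard deviation

    # design: n observations of a scalar input (n_x = 1)
    x = rng.uniform(-1.0, 1.0, size=n)
    # basis functions Psi_1, Psi_2, Psi_3
    basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t ** 2]
    # independent Gaussian errors
    eps = rng.normal(0.0, sigma, size=n)

    # y^(i) = sum_k a_k Psi_k(x^(i)) + eps^(i)
    y = sum(a_k * psi_k(x) for a_k, psi_k in zip(a, basis)) + eps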

Solution of the least squares problem

The design matrix \mat{\Psi} \in \Rset^{n \times p} contains the values of the basis functions at the input points of the sample:

\mat{\Psi}_{ij} = \Psi_j \left(\vect{x}^{(i)}\right)

for i = 1, ..., n and j = 1, ..., p. Assume that the design matrix has full column rank. The solution of the linear least squares problem is:

\widehat{\vect{a}}
= \left(\Tr{\mat{\Psi}} \mat{\Psi}\right)^{-1} \Tr{\mat{\Psi}} \vect{y}.
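The two steps above translate directly into NumPy, as in the following minimal sketch; the quadratic basis, the input sample and the coefficient values are the same illustrative assumptions as in the previous example:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20
    x = rng.uniform(-1.0, 1.0, size=n)
    basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: t ** 2]

    # design matrix: Psi[i, j] = Psi_j(x^(i)), here of size n x 3
    Psi = np.column_stack([psi(x) for psi in basis])
    y = Psi @ np.array([1.0, 2.0, -0.5]) + rng.normal(0.0, 0.1, size=n)

    # least squares solution; numerically, lstsq is preferred to
    # forming (Psi^T Psi)^{-1} explicitly
    a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # equivalent closed-form normal equations, valid when Psi has full column rank
    a_hat_normal = np.linalg.solve(Psi.T @ Psi, Psi.T @ y)

Solving the normal equations reproduces the closed-form expression above, while lstsq relies on an orthogonal decomposition and is better conditioned; both give the same coefficients when the design matrix has full column rank.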

Statistics

Let \bar{y} be the sample mean:

\bar{y} = \frac{1}{n} \sum_{j = 1}^n y_j.

The total sum of squares (see [baron2014] page 398) is:

SS_{TOT}
= \sum_{j = 1}^n \left(y_j - \bar{y}\right)^2
= \left(\vect{y} - \bar{\vect{y}}\right)^T \left(\vect{y} - \bar{\vect{y}}\right).

where \bar{\vect{y}} = (\bar{y}, ..., \bar{y})^T \in \Rset^n. Let \hat{\vect{y}} = \mat{\Psi} \widehat{\vect{a}} = (\hat{y}_1, \dots, \hat{y}_n)^T \in \Rset^n be the vector of predictions of the fitted model. The regression sum of squares is:

SS_{REG}
= \sum_{j = 1}^n \left(\hat{y}_j - \bar{y}\right)^2
= \left(\hat{\vect{y}} - \bar{\vect{y}}\right)^T \left(\hat{\vect{y}} - \bar{\vect{y}}\right).

The error sum of squares is:

SS_{ERR}
= \sum_{j = 1}^n \left(y_j - \hat{y}_j\right)^2
= \left(\vect{y} - \hat{\vect{y}}\right)^T \left(\vect{y} - \hat{\vect{y}}\right).

Coefficient of determination

The coefficient of determination is (see [baron2014] page 399):

R^2 = \frac{SS_{REG}}{SS_{TOT}}.

The coefficient of determination measures the part of the variance explained by the linear regression model.
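The following sketch computes the three sums of squares and the coefficient of determination with NumPy, again on the illustrative quadratic example used above (all numerical values are assumptions made for the example):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 20, 3
    x = rng.uniform(-1.0, 1.0, size=n)
    Psi = np.column_stack([np.ones(n), x, x ** 2])
    y = Psi @ np.array([1.0, 2.0, -0.5]) + rng.normal(0.0, 0.1, size=n)

    a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    y_hat = Psi @ a_hat                        # predictions

    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    ss_reg = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
    ss_err = np.sum((y - y_hat) ** 2)          # error sum of squares

    r2 = ss_reg / ss_tot                       # coefficient of determination

Since this basis contains a constant function, SS_{TOT} = SS_{REG} + SS_{ERR} up to rounding, so R^2 also equals 1 - SS_{ERR} / SS_{TOT}.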

Variance

The unbiased estimator of the noise variance \sigma^2 is (see [baron2014] page 400, [bingham2010] page 67):

\hat{\sigma}^2 = \frac{SS_{ERR}}{n - p}.
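The division by n - p (rather than n) is what makes this estimator unbiased. A small Monte Carlo check of this fact, using the same illustrative quadratic basis and an assumed noise standard deviation of 0.1:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p, sigma = 20, 3, 0.1
    x = rng.uniform(-1.0, 1.0, size=n)
    Psi = np.column_stack([np.ones(n), x, x ** 2])
    a_true = np.array([1.0, 2.0, -0.5])

    estimates = []
    for _ in range(10_000):
        y = Psi @ a_true + rng.normal(0.0, sigma, size=n)
        a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
        ss_err = np.sum((y - Psi @ a_hat) ** 2)
        estimates.append(ss_err / (n - p))     # unbiased estimator of sigma^2

    # the average of the estimates should be close to sigma^2 = 0.01
    print(np.mean(estimates), sigma ** 2)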

ANOVA F-test

The F-statistic is based on the hypothesis that all coefficients are simultaneously zero (see [baron2014] page 400). More precisely, the ANOVA F-test considers the hypothesis:

H_0 : a_1 = \dots = a_p = 0
\qquad \textrm{vs} \qquad
H_A : \textrm{at least one } a_k \neq 0.

Recall that \vect{y} \in \Rset^n is the vector of observations and \hat{\vect{y}} = \mat{\Psi} \widehat{\vect{a}} \in \Rset^n is the vector of predictions.

The F-statistic is (see [bingham2010] Kolodziejczyk’s theorem 6.5 page 154, [baron2014] page 400):

f = \frac{SS_{REG} / p}{SS_{ERR} / (n - p)}.

The p-value is computed from the Fisher-Snedecor distribution F_{p, n - p} (see [baron2014] page 400, [faraway2014] page 35).
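With SciPy, the F-statistic and its p-value can be computed as in the following sketch; the data-generating values are the same illustrative assumptions as before:

    import numpy as np
    from scipy.stats import f as f_dist

    rng = np.random.default_rng(0)
    n, p = 20, 3
    x = rng.uniform(-1.0, 1.0, size=n)
    Psi = np.column_stack([np.ones(n), x, x ** 2])
    y = Psi @ np.array([1.0, 2.0, -0.5]) + rng.normal(0.0, 0.1, size=n)

    a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    y_hat = Psi @ a_hat
    ss_reg = np.sum((y_hat - y.mean()) ** 2)
    ss_err = np.sum((y - y_hat) ** 2)

    f_stat = (ss_reg / p) / (ss_err / (n - p))   # F-statistic
    p_value = f_dist.sf(f_stat, p, n - p)        # upper tail of the F_{p, n-p} distribution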

T-test for individual coefficients

The T-test is based on the hypothesis that a single coefficient is zero. More precisely, recall the unbiased estimator of the variance (see [baron2014] page 400):

\hat{\sigma}^2 = \frac{SS_{ERR}}{n - p}.

The variance of the estimator of a_k is the k-th diagonal entry of the matrix \hat{\sigma}^2 \left(\Tr{\mat{\Psi}} \mat{\Psi}\right)^{-1}:

\Var{\hat{a}_k} = \hat{\sigma}^2 \left[\left(\Tr{\mat{\Psi}} \mat{\Psi}\right)^{-1}\right]_{kk}

for any k \in \{1, ..., p\}. Let \operatorname{SD}(\hat{a}_k) be the standard deviation of the estimator of a_k:

\operatorname{SD}(\hat{a}_k) = \sqrt{\Var{\hat{a}_k}}.

For k = 1, ..., p, the T-test is (see [baron2014] page 401):

H_0 : a_k = 0 \qquad \textrm{vs} \qquad H_A : a_k \neq 0.

The T-statistic is (see [baron2014] page 401, [rawlings2001] eq. 4.47 page 122):

t = \frac{\hat{a}_k}{\operatorname{SD}(\hat{a}_k)}

for k = 1, ..., p. The p-value is computed from the Student’s T distribution with n - p degrees of freedom (see [baron2014] page 401).
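The following sketch computes the T-statistics and their two-sided p-values for every coefficient with NumPy and SciPy, still on the illustrative quadratic example (all numerical values are assumptions):

    import numpy as np
    from scipy.stats import t as t_dist

    rng = np.random.default_rng(0)
    n, p = 20, 3
    x = rng.uniform(-1.0, 1.0, size=n)
    Psi = np.column_stack([np.ones(n), x, x ** 2])
    y = Psi @ np.array([1.0, 2.0, -0.5]) + rng.normal(0.0, 0.1, size=n)

    a_hat, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    y_hat = Psi @ a_hat
    sigma2_hat = np.sum((y - y_hat) ** 2) / (n - p)     # unbiased variance estimate

    # standard deviation of each estimated coefficient:
    # square root of the diagonal of sigma^2 (Psi^T Psi)^{-1}
    cov_a = sigma2_hat * np.linalg.inv(Psi.T @ Psi)
    sd_a = np.sqrt(np.diag(cov_a))

    t_stats = a_hat / sd_a                              # one T-statistic per coefficient
    p_values = 2.0 * t_dist.sf(np.abs(t_stats), n - p)  # two-sided p-values from t_{n-p}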