10. Simple Linear Regression
Contributed by Jason Naramore.
The simple linear regression model is:

$$y_i = \beta_0 + \beta_1 x_{i1} + \epsilon_i, \quad \epsilon_i \sim N(0, \sigma^2),$$

where the data are paired \(x_{i1}\) and \(y_i\) values for \(i = 1, 2, \ldots, n\). \(\beta_1\) and \(\beta_0\) are the slope and intercept, respectively. The goal is to find the best-fit line for predicting \(y\) based on \(x\). The best-fit line is almost never perfect, so a typical assumption is that the errors \(\epsilon_i\) about the fitted values \(\hat{y}_i\) are normally distributed.
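For concreteness, here is a minimal sketch (assuming NumPy; the coefficient values \(\beta_0 = 2\), \(\beta_1 = 0.5\), the noise scale, and the sample size are arbitrary illustrative choices) of data generated by this model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

x = rng.uniform(0, 10, size=n)  # predictor values x_i1
eps = rng.normal(0, 1, size=n)  # errors eps_i ~ N(0, sigma^2), here sigma = 1
y = 2.0 + 0.5 * x + eps         # y_i = beta0 + beta1 * x_i1 + eps_i
```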
Classical statistics has an elegant closed-form approach to this problem. We can find the slope estimate \(\hat{\beta}_1 = \frac{S_{XY}}{S_{XX}}\), where

$$S_{XY} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) \quad \text{and} \quad S_{XX} = \sum_{i=1}^{n} (x_i - \bar{x})^2,$$

and then find \(\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}\). Furthermore, we can assess the model fit using \(R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}\), the proportion of the variance in \(y\) explained by the model, where \(SSE\), \(SSR\), and \(SST\) are the error, regression, and total sums of squares.
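As a quick sketch of these formulas (assuming NumPy, with simulated data standing in for a real dataset):

```python
import numpy as np

# Simulated data, as in the illustrative snippet above
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

x_bar, y_bar = x.mean(), y.mean()
s_xy = np.sum((x - x_bar) * (y - y_bar))
s_xx = np.sum((x - x_bar) ** 2)

beta1_hat = s_xy / s_xx                # slope estimate
beta0_hat = y_bar - beta1_hat * x_bar  # intercept estimate

y_hat = beta0_hat + beta1_hat * x
sse = np.sum((y - y_hat) ** 2)      # error sum of squares
ssr = np.sum((y_hat - y_bar) ** 2)  # regression sum of squares
sst = np.sum((y - y_bar) ** 2)      # total sum of squares (= sse + ssr)
r_squared = 1 - sse / sst           # equivalently ssr / sst

print(f"beta1_hat={beta1_hat:.3f}, beta0_hat={beta0_hat:.3f}, R^2={r_squared:.3f}")
```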
In the Bayesian model for simple linear regression, we set priors for \(\beta_0\), \(\beta_1\), and \(\sigma^2\):

$$\begin{align*}
y_i &\sim N(\beta_0 + \beta_1 x_{i1},\; \sigma^2), \quad \sigma^2 = 1/\tau \\
\beta_0 &\sim N(\mu_0, \sigma_0^2) \\
\beta_1 &\sim N(\mu_1, \sigma_1^2) \\
\tau &\sim Ga(a, b)
\end{align*}$$
Typical non-informative priors for \(\beta_0\) and \(\beta_1\) are centered at zero, with a wide standard deviation. A non-informative prior for \(\tau\) could be \(Ga(a = 0.001, b = 0.001)\). In PyMC we can parameterize a Normal distribution with either \(\tau\) or \(\sigma\), so we could place a prior on the scale directly instead of recovering the variance from \(\tau\) through a deterministic relationship. Other possible non-negative priors for \(\sigma^2\) might be Inverse-Gamma, Half-Normal, or Half-Flat.
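Here is a minimal PyMC sketch of this model, assuming PyMC v5 and reusing simulated data like that above (the prior widths and variable names are illustrative choices, not prescribed values):

```python
import numpy as np
import pymc as pm

# Simulated data, as in the earlier snippets
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

with pm.Model() as slr_model:
    # Wide, zero-centered (non-informative) priors on intercept and slope
    beta0 = pm.Normal("beta0", mu=0, sigma=100)
    beta1 = pm.Normal("beta1", mu=0, sigma=100)

    # Non-informative Gamma prior on the precision tau
    tau = pm.Gamma("tau", alpha=0.001, beta=0.001)
    # Track the variance as a deterministic transformation of tau
    sigma2 = pm.Deterministic("sigma2", 1 / tau)

    # Likelihood: the Normal can be parameterized by tau instead of sigma
    mu = beta0 + beta1 * x
    pm.Normal("y_obs", mu=mu, tau=tau, observed=y)

    trace = pm.sample()
```

Passing `tau=` to the likelihood mirrors the BUGS-style precision parameterization; swapping in something like `sigma = pm.HalfNormal("sigma", sigma=10)` with `sigma=sigma` would be the direct-scale alternative mentioned above.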