# 1. Numerical Approaches
## Lecture errata
There is an error in the exponential terms when sampling from the Cauchy distribution on slide 7, which appears at 4:46. The exponents in the numerator and denominator were accidentally dropped from the last version of the expression. This is how it should be:
## Moving beyond conjugate cases
When we move beyond conjugate cases, Bayes' theorem still gives us the posterior in analytical form, but it typically won't correspond to any recognizable distribution, no matter how we manipulate it, and the normalizing constant often lacks a closed-form solution. As illustrated in lessons 2 and 4 of this unit, we can attempt to approximate these constants numerically, but this approach can be imprecise or computationally intractable: the time complexity grows exponentially with the number of parameters, as shown in section 2.1 of [Blei et al., 2017].
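To make this concrete, here is a minimal sketch of grid-based numerical normalization, assuming a toy model (normal likelihood with a standard Cauchy prior) and made-up data that are not from the lecture. It works fine in one dimension but, as the final comment notes, the grid size blows up exponentially with the number of parameters:

```python
import numpy as np

def unnormalized_posterior(theta):
    """Assumed toy unnormalized posterior: normal likelihood x Cauchy prior."""
    data = np.array([1.2, 0.7, 1.9])  # made-up observations, for illustration only
    log_lik = -0.5 * np.sum((data[:, None] - theta[None, :]) ** 2, axis=0)
    log_prior = -np.log1p(theta ** 2)  # standard Cauchy prior, up to a constant
    return np.exp(log_lik + log_prior)

# Riemann-sum approximation of the normalizing constant in 1D.
grid = np.linspace(-10, 10, 1_000)
dx = grid[1] - grid[0]
Z = np.sum(unnormalized_posterior(grid)) * dx
print(f"Approximate normalizing constant: {Z:.6f}")

# With d parameters and 1,000 points per axis, the grid has 1000**d points,
# which is why the cost of this approach grows exponentially with dimension.
```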
> **See also:** This article by Michael Betancourt for an overview of probabilistic computation.
The second half of this course concentrates more on modeling than on the specific method used to obtain the posterior, but to do that we first need to choose a method for sampling from or approximating the posterior. There are many methods out there, but we're going to focus on Markov chain Monte Carlo (MCMC). These algorithms have many advantages, including being relatively easy to implement and understand.
Specifically, we will learn the basics of the Metropolis-Hastings and Gibbs sampling algorithms.
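As a preview, here is a minimal sketch of a random-walk Metropolis sampler, a special case of Metropolis-Hastings with a symmetric Gaussian proposal (so the proposal terms cancel in the acceptance ratio). The target density, step size, and seed are assumptions for illustration, not the course's example:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(theta):
    """Log of an assumed unnormalized target: a standard Cauchy density."""
    return -np.log1p(theta ** 2)

def random_walk_metropolis(log_target, n_samples=5_000, step=1.0, init=0.0):
    """Draw samples via Metropolis-Hastings with a symmetric proposal."""
    samples = np.empty(n_samples)
    current = init
    current_logp = log_target(current)
    for i in range(n_samples):
        proposal = current + rng.normal(scale=step)
        proposal_logp = log_target(proposal)
        # Accept with probability min(1, target(proposal) / target(current)),
        # computed on the log scale for numerical stability.
        if np.log(rng.uniform()) < proposal_logp - current_logp:
            current, current_logp = proposal, proposal_logp
        samples[i] = current
    return samples

draws = random_walk_metropolis(log_target)
print(f"Posterior mean estimate: {draws.mean():.3f}")
```

Note that only the unnormalized target appears in the acceptance ratio, so the intractable normalizing constant cancels out; this is precisely what makes MCMC attractive beyond conjugate cases. Gibbs sampling, which we will also cover, replaces the accept/reject step with exact draws from each parameter's full conditional distribution.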