import arviz as az
import numpy as np
import pymc as pm
from pymc.math import switch, ge
7. Hypothesis Testing*
Psoriasis: Two-sample problem with paired data
This is our first example of hypothesis testing.
Adapted from Unit 6: psoriasis.odc.
Woo and McKenna (2003) investigated the effect of broadband ultraviolet B (UVB) therapy and topical calcipotriol cream used together on areas of psoriasis. One of the outcome variables is the Psoriasis Area and Severity Index (PASI), where a lower score is better.
The PASI scores for 20 subjects are measured at baseline and after 8 treatments. Do these data provide sufficient evidence to indicate that the combination therapy reduces PASI scores?
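Explicitly, with $\mu$ denoting the mean of the paired differences $d = \text{baseline} - \text{after}$, one natural way to formalize the question is the one-sided test

$$H_0\colon \mu \le 0 \quad \text{vs.} \quad H_1\colon \mu > 0,$$

which is what the classical summary below addresses.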
Classical Analysis:
d = baseline - after;
n = length(d);
dbar = mean(d); dbar = 6.3550
sdd = sqrt(var(d)); sdd = 4.9309
tstat = dbar/(sdd/sqrt(n)); tstat = 5.7637
Reject H_0 at the level alpha=0.05 since the p_value = 0.00000744 < 0.05
95% CI is [4.0472, 8.6628]
# fmt: off
baseline = np.array((5.9, 7.6, 12.8, 16.5, 6.1, 14.4, 6.6, 5.4, 9.6, 11.6,
                     11.1, 15.6, 9.6, 15.2, 21.0, 5.9, 10.0, 12.2, 20.2,
                     6.2))
after = np.array((5.2, 12.2, 4.6, 4.0, 0.4, 3.8, 1.2, 3.1, 3.5, 4.9, 11.1,
                  8.4, 5.8, 5.0, 6.4, 0.0, 2.7, 5.1, 4.8, 4.2))
# fmt: on
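As a cross-check of the classical numbers quoted above (this cell is not part of the original notebook and assumes SciPy is available), the paired t-test and the two-sided 95% CI can be reproduced directly:

from scipy import stats

d = baseline - after
n = len(d)
dbar = d.mean()        # 6.3550
sdd = d.std(ddof=1)    # 4.9309
res = stats.ttest_rel(baseline, after, alternative="greater")
print(res.statistic, res.pvalue)  # tstat = 5.7637, one-sided p-value = 7.44e-06
print(stats.t.interval(0.95, df=n - 1, loc=dbar, scale=sdd / np.sqrt(n)))  # [4.0472, 8.6628]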
with pm.Model() as m:
    # priors
    mu = pm.Normal("mu", mu=0, sigma=316)
    prec = pm.Gamma("prec", alpha=0.001, beta=0.001)
    sigma = pm.Deterministic("sigma", 1 / pm.math.sqrt(prec))
    ph1 = pm.Deterministic("ph1", switch(mu >= 0, 1, 0))

    diff = pm.Normal("diff", mu=mu, sigma=sigma, observed=baseline - after)

    # start sampling
    trace = pm.sample(5000)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu, prec]
Sampling 4 chains for 1_000 tune and 5_000 draw iterations (4_000 + 20_000 draws total) took 1 seconds.
az.summary(trace, hdi_prob=0.95, var_names=["~prec"])
/Users/aaron/mambaforge/envs/pymc/lib/python3.11/site-packages/arviz/stats/diagnostics.py:592: RuntimeWarning: invalid value encountered in scalar divide
(between_chain_variance / within_chain_variance + num_samples - 1) / (num_samples)
|       | mean  | sd    | hdi_2.5% | hdi_97.5% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
|-------|-------|-------|----------|-----------|-----------|---------|----------|----------|-------|
| mu    | 6.355 | 1.167 | 4.149    | 8.766     | 0.009     | 0.007   | 16065.0  | 12098.0  | 1.0   |
| sigma | 5.125 | 0.883 | 3.568    | 6.887     | 0.007     | 0.005   | 15790.0  | 13476.0  | 1.0   |
| ph1   | 1.000 | 0.000 | 1.000    | 1.000     | 0.000     | 0.000   | 20000.0  | 20000.0  | NaN   |
The r_hat for ph1 is reported as NaN because every posterior draw of ph1 equals 1 (zero variance across chains), which is what the RuntimeWarning above refers to. Our model agrees with the BUGS results:
|       | mean     | sd      | MC_error | val2.5pc | median | val97.5pc | start | sample |
|-------|----------|---------|----------|----------|--------|-----------|-------|--------|
| pH1   | 1        | 0       | 3.16E-13 | 1        | 1      | 1         | 1001  | 100000 |
| mu    | 6.352    | 1.169   | 0.003657 | 4.043    | 6.351  | 8.666     | 1001  | 100000 |
| sigma | 5.142    | 0.8912  | 0.003126 | 3.74     | 5.026  | 7.203     | 1001  | 100000 |
| pval  | 4.20E-04 | 0.02049 | 6.04E-05 | 0        | 0      | 0         | 1001  | 100000 |
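The BUGS file also reports pval, which is essentially the posterior probability that mu is less than 0 (a Bayesian counterpart of the classical p-value). The PyMC model above doesn't track it as a deterministic, but the same quantity can be computed from the trace; a minimal sketch, assuming the trace sampled above:

# posterior probability that mu < 0 (compare with the BUGS pval row, about 4.2e-04)
p_mu_neg = (trace.posterior["mu"] < 0).mean().item()
print(f"P(mu < 0 | data) ~= {p_mu_neg:.5f}")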
Equivalence of generic and brand-name drugs
Adapted from Unit 6: equivalence.odc.
The manufacturer wishes to demonstrate that their generic drug for a particular metabolic disorder is equivalent to a brand-name drug. One indication of the disorder is an abnormally low concentration of levocarnitine, an amino acid derivative, in the plasma. Treatment with the brand-name drug substantially increases this concentration.
A small clinical trial is conducted with 43 patients, 18 in the Brand Name Drug arm and 25 in the Generic Drug arm. The increases in the log-concentration of levocarnitine are in the data below.
The FDA declares that bioequivalence between the two drugs can be established if the difference in response to the two drugs is within 2 units of log-concentration. Assuming that the log-concentration measurements follow normal distributions with equal population variance, can these two drugs be declared bioequivalent within a tolerance of ±2 units?
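With $\mu_1$ and $\mu_2$ denoting the mean log-concentration increases in the brand-name and generic arms, one way to formalize the question is

$$H_0\colon |\mu_1 - \mu_2| \ge 2 \quad \text{vs.} \quad H_1\colon |\mu_1 - \mu_2| < 2,$$

and the model below reports the posterior probability that $|\mu_1 - \mu_2| < 2$ through the probint variable.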
The way the data is set up in the .odc file is strange. It seems simpler to just have a separate list for each increase type.
# fmt: off
increase_type1 = [7, 8, 4, 6, 10, 10, 5, 7, 9, 8, 6, 7, 8, 4, 6, 10, 8, 9]
increase_type2 = [6, 7, 5, 9, 5, 5, 3, 7, 5, 10, 8, 5, 8, 4, 4, 8, 6, 11,
                  7, 5, 5, 5, 7, 4, 6]
# fmt: on
We’re using pm.math.switch and pm.math.ge to recreate the BUGS step() function for the probint variable.
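As a quick sanity check of that equivalence (this snippet is not part of the original notebook): BUGS step(x) returns 1 when x >= 0 and 0 otherwise, and switch(ge(x, 0), 1, 0) behaves the same way, even on plain numbers.

print(switch(ge(2.5, 0), 1, 0).eval())   # 1, like step(2.5)
print(switch(ge(-2.5, 0), 1, 0).eval())  # 0, like step(-2.5)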
with pm.Model() as m:
    # priors
    mu1 = pm.Normal("mu1", mu=10, sigma=316)
    mu2 = pm.Normal("mu2", mu=10, sigma=316)
    mudiff = pm.Deterministic("mudiff", mu1 - mu2)
    prec = pm.Gamma("prec", alpha=0.001, beta=0.001)
    sigma = 1 / pm.math.sqrt(prec)

    probint = pm.Deterministic(
        "probint",
        switch(ge(mudiff + 2, 0), 1, 0) * switch(ge(2 - mudiff, 0), 1, 0),
    )

    y_type1 = pm.Normal("y_type1", mu=mu1, sigma=sigma, observed=increase_type1)
    y_type2 = pm.Normal("y_type2", mu=mu2, sigma=sigma, observed=increase_type2)

    # start sampling
    trace = pm.sample(5000)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu1, mu2, prec]
Sampling 4 chains for 1_000 tune and 5_000 draw iterations (4_000 + 20_000 draws total) took 1 seconds.
az.summary(trace, hdi_prob=0.95)
|         | mean  | sd    | hdi_2.5% | hdi_97.5% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
|---------|-------|-------|----------|-----------|-----------|---------|----------|----------|-------|
| mu1     | 7.334 | 0.470 | 6.414    | 8.269     | 0.003     | 0.002   | 27558.0  | 15811.0  | 1.0   |
| mu2     | 6.198 | 0.397 | 5.438    | 6.984     | 0.002     | 0.002   | 26341.0  | 14856.0  | 1.0   |
| prec    | 0.264 | 0.058 | 0.159    | 0.381     | 0.000     | 0.000   | 24987.0  | 13816.0  | 1.0   |
| mudiff  | 1.136 | 0.613 | -0.032   | 2.382     | 0.004     | 0.003   | 28981.0  | 15171.0  | 1.0   |
| probint | 0.921 | 0.270 | 0.000    | 1.000     | 0.002     | 0.002   | 16160.0  | 16160.0  | 1.0   |
BUGS results:
|         | mean   | sd      | MC_error | val2.5pc | median | val97.5pc | start | sample |
|---------|--------|---------|----------|----------|--------|-----------|-------|--------|
| mu[1]   | 7.332  | 0.473   | 0.001469 | 6.399    | 7.332  | 8.264     | 1001  | 100000 |
| mu[2]   | 6.198  | 0.4006  | 0.001213 | 5.406    | 6.199  | 6.985     | 1001  | 100000 |
| mudiff  | 1.133  | 0.618   | 0.00196  | -0.07884 | 1.134  | 2.354     | 1001  | 100000 |
| prec    | 0.2626 | 0.05792 | 1.90E-04 | 0.1617   | 0.2584 | 0.3877    | 1001  | 100000 |
| probint | 0.9209 | 0.2699  | 9.07E-04 | 0        | 1      | 1         | 1001  | 100000 |
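The posterior mean of probint, about 0.92 in both the PyMC and BUGS runs, is the estimated probability that the difference in means lies within the ±2 tolerance. As a sanity check (not part of the original notebook), the same number can be read directly from the posterior samples of mudiff:

post = az.extract(trace)  # stacks (chain, draw) into one sample dimension
mudiff_samples = post["mudiff"].values
print(((mudiff_samples >= -2) & (mudiff_samples <= 2)).mean())  # should be close to 0.92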
%load_ext watermark
%watermark -n -u -v -iv -p pytensor
Last updated: Mon Oct 30 2023
Python implementation: CPython
Python version : 3.11.5
IPython version : 8.15.0
pytensor: 2.17.1
pymc : 5.9.0
numpy: 1.25.2
arviz: 0.16.1