import arviz as az
import numpy as np
import pymc as pm
from pymc.math import exp

6. Time-to-event Models: Gastric Cancer*

Adapted from code for Unit 8: gastric.odc.

Data can be found here.

Problem statement

Stablein, Carter, and Novak [1981] provide data on 90 patients affected by locally advanced, nonresectable gastric carcinoma. The patients are randomized to two treatments: chemotherapy alone (coded as 0) and chemotherapy plus radiation (coded as 1). Survival time is reported in days. Recorded times are censored if the patient stopped participating in the study before it finished.

Data

Columns are, from left to right:

  • type: Treatment type, chemotherapy (0) or chemotherapy + radiation (1)

  • censored: If the observation is censored, meaning the patient left the study (or the study ended) before a recurrence was observed, the time in days appears here rather than in the times column; 0 if not censored.

  • times: Recorded days without cancer recurrence. NaN if censored.

Model changes

PyMC really did not like the noninformative exponential prior on v (α in this model). For some reason, the equivalent Gamma(1, 0.001) distribution is more stable. I found that passing an initial value also helps avoid divergences here.

Method 1: pm.Censored

The way PyMC censoring works is described in some detail in this notebook (Vincent [2023]). For right-censoring, try this: pm.Censored("name", dist, lower=None, upper=censored, observed=y). The censored values can be an array of the same shape as the y values.

If the y value equals the right-censored value, pm.Censored's likelihood contribution is the complement of the CDF (the survival function) evaluated at that value. If the y value is greater than the censored value, the log-probability is -np.inf. Otherwise, the distribution you passed to the dist parameter works as normal. What I’ve been doing is setting the values in the censored array to np.inf if the corresponding y value is not censored, and equal to the y value if it should be censored.
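To make those three cases concrete, here’s a minimal sketch (the distribution and values are made up for illustration) evaluating the log-probability of a right-censored Exponential(1) below, at, and above the bound:

dist = pm.Exponential.dist(1.0)
censored_dist = pm.Censored.dist(dist, lower=None, upper=2.0)

# below the bound: the ordinary log-density, log f(1.0) = -1.0
print(pm.logp(censored_dist, 1.0).eval())
# at the bound: the log survival function, log(1 - CDF(2.0)) = -2.0
print(pm.logp(censored_dist, 2.0).eval())
# above the bound: impossible under right censoring, so -inf
print(pm.logp(censored_dist, 3.0).eval())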

Note

I’ve noticed that this method is unstable with some distributions. Try using the imputed censoring model (below) if this one isn’t working.

data = np.loadtxt("../data/gastric.txt")
data.shape
(90, 3)
x = data[:, 0].copy()
censored = data[:, 1].copy()
y = data[:, 2].copy()
# for pymc, right-censored values must be greater than or equal to the "upper" value
y[np.isnan(y)] = censored[np.isnan(y)]
censored[censored == 0] = np.inf
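A quick optional sanity check on the transformed arrays: every y should now be finite, and no y should exceed its bound (uncensored rows have an infinite bound; censored rows have y equal to theirs).

# optional sanity check on the arrays constructed above
assert np.all(np.isfinite(y))
assert np.all(y <= censored)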

Warning

PyMC and BUGS do not specify the Weibull distribution in the same way! The BUGS parameters (v, λ) translate to PyMC’s (α, β) as:

α = v

β = λ ** (-1 / α)
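As a sanity check on this conversion, the two densities should agree numerically. Here’s a sketch using scipy (the parameter values are arbitrary):

from scipy import stats

# BUGS parameterization: f(t) = v * λ * t**(v - 1) * exp(-λ * t**v)
v, lam, t = 1.2, 0.05, 3.0
alpha, beta = v, lam ** (-1 / v)  # the conversion above

bugs_pdf = v * lam * t ** (v - 1) * np.exp(-lam * t**v)
pymc_style_pdf = stats.weibull_min.pdf(t, c=alpha, scale=beta)  # PyMC's (α, β) form
assert np.isclose(bugs_pdf, pymc_style_pdf)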

log2 = np.log(2)

with pm.Model() as m:
    beta0 = pm.Normal("beta0", 0, tau=0.01)
    beta1 = pm.Normal("beta1", 0, tau=0.1)
    α = pm.Gamma("α", 1, 0.001, initval=0.25)

    λ = exp(beta0 + beta1 * x)
    β = λ ** (-1 / α)

    obs_latent = pm.Weibull.dist(alpha=α, beta=β)
    likelihood = pm.Censored(
        "likelihood",
        obs_latent,
        lower=None,
        upper=censored,
        observed=y,
    )

    median0 = pm.Deterministic("median0", (log2 * exp(-beta0)) ** (1 / α))
    median1 = pm.Deterministic(
        "median1", (log2 * exp(-beta0 - beta1)) ** (1 / α)
    )

    S = pm.Deterministic("S", exp(-λ * (likelihood**α)))
    f = pm.Deterministic("f", λ * α * (likelihood ** (α - 1)) * S)
    h = pm.Deterministic("h", f / S)

    trace = pm.sample(
        10000,
        tune=2000,
        init="jitter+adapt_diag_grad",
        target_accept=0.9,
    )
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag_grad...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta0, beta1, α]

Sampling 4 chains for 2_000 tune and 10_000 draw iterations (8_000 + 40_000 draws total) took 14 seconds.
az.summary(trace, var_names=["~S", "~f", "~h"], hdi_prob=0.9)
             mean      sd   hdi_5%  hdi_95%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
beta0      -6.794   0.669   -7.918   -5.721      0.008    0.006    7103.0    9414.0    1.0
beta1       0.262   0.232   -0.114    0.648      0.001    0.001   24321.0   19909.0    1.0
α           1.028   0.098    0.867    1.188      0.001    0.001    7153.0    9646.0    1.0
median0   524.207  88.400  383.214  664.780      0.587    0.415   22412.0   25454.0    1.0
median1   405.738  70.421  295.325  522.211      0.446    0.315   24736.0   26524.0    1.0
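For reference, the median0 and median1 deterministics come from solving the Weibull survival function for its median. Under the BUGS parameterization, S(t) = exp(-λ * t**α), so setting S(t) = 1/2 and solving for t gives:

t_med = (log(2) / λ) ** (1 / α)

With λ = exp(beta0 + beta1 * x), that becomes median0 = (log(2) * exp(-beta0)) ** (1 / α) for the chemotherapy-only group (x = 0) and median1 = (log(2) * exp(-beta0 - beta1)) ** (1 / α) for the chemotherapy-plus-radiation group (x = 1), matching the code above.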

Method 2: pm.Potential

This method uses pm.Potential to achieve the same thing as above by evaluating the censored data points differently. It came from this notebook (Junpeng Lao [2023]).
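Under the hood, pm.Potential just adds an arbitrary expression to the model’s joint log-probability, so summing the censored points’ log survival values reproduces the same likelihood contribution that pm.Censored computes at the bound. A toy sketch of the mechanism (the model and numbers are made up):

with pm.Model() as toy:
    mu = pm.Normal("mu", 0, 10)
    # this expression is added verbatim to the joint log-probability
    pm.Potential("toy_penalty", -0.5 * (mu - 5) ** 2)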

x = data[:, 0].copy()
censored_vals = data[:, 1].copy()
y = data[:, 2].copy()

# we need to separate the observed values and the censored values
observed_mask = censored_vals == 0

y_censored = censored_vals[~observed_mask]
y_uncensored = y[observed_mask]
x_censored = x[~observed_mask]
x_uncensored = x[observed_mask]

n_right_censored = int(x_censored.shape[0])
n_observed = int(x_uncensored.shape[0])
# see https://www.pymc.io/projects/examples/en/latest/survival_analysis/weibull_aft.html
def weibull_lccdf(x, alpha, beta):
    """Log complementary cdf of Weibull distribution."""
    return -((x / beta) ** alpha)
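This function is each right-censored point’s log-likelihood contribution: we only know the event happened after the recorded time, so the contribution is the Weibull survival function, log S(t) = -(t / β)**α. A quick check against scipy, with arbitrary values:

from scipy import stats

print(weibull_lccdf(3.0, 1.2, 20.0))                    # -(3 / 20) ** 1.2
print(stats.weibull_min.logsf(3.0, c=1.2, scale=20.0))  # should match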
log2 = np.log(2)

with pm.Model() as m:
    beta0 = pm.Normal("beta0", 0, tau=0.01)
    beta1 = pm.Normal("beta1", 0, tau=0.1)
    α = pm.Gamma("α", 1, 0.001, initval=0.25)

    λ_censored = exp(beta0 + beta1 * x_censored)
    β_censored = λ_censored ** (-1 / α)

    λ_uncensored = exp(beta0 + beta1 * x_uncensored)
    β_uncensored = λ_uncensored ** (-1 / α)

    pm.Weibull(
        "observed",
        alpha=α,
        beta=β_uncensored,
        observed=y_uncensored,
        shape=n_observed,
    )
    pm.Potential("censored", weibull_lccdf(y_censored, α, β_censored))

    median0 = pm.Deterministic("median0", (log2 * exp(-beta0)) ** (1 / α))
    median1 = pm.Deterministic(
        "median1", (log2 * exp(-beta0 - beta1)) ** (1 / α)
    )

    trace = pm.sample(10000, tune=2000, target_accept=0.9)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [beta0, beta1, α]

Sampling 4 chains for 2_000 tune and 10_000 draw iterations (8_000 + 40_000 draws total) took 15 seconds.
az.summary(trace, hdi_prob=0.9)
             mean      sd   hdi_5%  hdi_95%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
beta0      -6.768   0.666   -7.842   -5.656      0.006    0.004   13330.0   14984.0    1.0
beta1       0.262   0.232   -0.121    0.641      0.002    0.001   17578.0   16409.0    1.0
α           1.025   0.097    0.869    1.187      0.001    0.001   13312.0   15338.0    1.0
median0   522.726  88.403  376.215  658.102      0.582    0.413   23193.0   26359.0    1.0
median1   404.411  69.819  292.821  517.047      0.460    0.326   22932.0   25841.0    1.0

Old imputed censoring method

This method is from an older version of this notebook by Luis Mario Domenzain, George Ho, and Dr. Ben Vincent. The newer version doesn’t work for our purposes at this time, so I’ll be on the lookout for another way to do imputed censoring with varying censoring cutoff values.

I’m just going to preserve it here for posterity.

Warning

pm.Bound has been removed from recent versions of PyMC, so this method no longer works.
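If you want to experiment with reviving this model on current PyMC, pm.Truncated is the closest modern analogue of pm.Bound that I’m aware of. Here’s a hedged sketch of a drop-in replacement for the pm.Bound call below; I haven’t verified it against this model, and it assumes pm.Truncated accepts a per-element lower array with the Weibull:

# hypothetical, untested replacement for the pm.Bound call in the model below
impute_censored = pm.Truncated(
    "impute_censored",
    pm.Weibull.dist(alpha=α, beta=β_censored),
    lower=censored,
    shape=censored.shape[0],
)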

data = np.loadtxt("../data/gastric.txt")
x = data[:, 0].copy()
censored_vals = data[:, 1].copy()
y = data[:, 2].copy()

# we need to separate the observed values and the censored values
observed_mask = censored_vals == 0

censored = censored_vals[~observed_mask]
y_uncensored = y[observed_mask]
x_censored = x[~observed_mask]
x_uncensored = x[observed_mask]
log2 = np.log(2)

with pm.Model() as m:
    beta0 = pm.Normal("beta0", 0, tau=0.0001)
    beta1 = pm.Normal("beta1", 0, tau=0.0001)
    α = pm.Exponential("α", 3)

    λ_censored = exp(beta0 + beta1 * x_censored)
    β_censored = λ_censored ** (-1 / α)

    λ_uncensored = exp(beta0 + beta1 * x_uncensored)
    β_uncensored = λ_uncensored ** (-1 / α)

    impute_censored = pm.Bound(
        "impute_censored",
        pm.Weibull.dist(alpha=α, beta=β_censored),
        lower=censored,
        shape=censored.shape[0],
    )

    likelihood = pm.Weibull(
        "likelihood",
        alpha=α,
        beta=β_uncensored,
        observed=y_uncensored,
        shape=y_uncensored.shape[0],
    )

    median0 = pm.Deterministic("median0", (log2 * exp(-beta0)) ** (1 / α))
    median1 = pm.Deterministic(
        "median1", (log2 * exp(-beta0 - beta1)) ** (1 / α)
    )

    trace = pm.sample(10000, tune=2000, target_accept=0.9)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[13], line 14
     11 λ_uncensored = exp(beta0 + beta1 * x_uncensored)
     12 β_uncensored = λ_uncensored ** (-1 / α)
---> 14 impute_censored = pm.Bound(
     15     "impute_censored",
     16     pm.Weibull.dist(alpha=α, beta=β_censored),
     17     lower=censored,
     18     shape=censored.shape[0],
     19 )
     21 likelihood = pm.Weibull(
     22     "likelihood",
     23     alpha=α,
   (...)
     26     shape=y_uncensored.shape[0],
     27 )
     29 median0 = pm.Deterministic("median0", (log2 * exp(-beta0)) ** (1 / α))

AttributeError: module 'pymc' has no attribute 'Bound'
az.summary(trace, hdi_prob=0.9, kind="stats")
                         mean       sd    hdi_5%   hdi_95%
beta0                  -6.619    0.654    -7.658    -5.505
beta1                   0.261    0.236    -0.135     0.642
α                       1.002    0.096     0.844     1.158
impute_censored[0]   1470.516  624.422   882.003  2238.512
impute_censored[1]   1485.333  638.066   892.025  2267.560
impute_censored[2]   1623.087  636.271  1031.002  2399.749
impute_censored[3]   1629.358  644.748  1033.058  2417.048
impute_censored[4]   1896.556  636.864  1306.001  2680.457
impute_censored[5]   1927.291  645.113  1335.035  2706.771
impute_censored[6]   2044.492  647.964  1452.027  2827.988
impute_censored[7]   2064.511  637.872  1472.006  2851.689
impute_censored[8]   1144.755  823.971   381.008  2153.162
impute_censored[9]   1295.370  823.647   529.021  2299.400
impute_censored[10]  1717.105  832.210   945.080  2742.788
impute_censored[11]  1948.776  828.073  1180.049  2974.388
impute_censored[12]  2045.309  839.943  1277.020  3052.989
impute_censored[13]  2169.509  847.547  1397.058  3197.841
impute_censored[14]  2286.423  854.267  1512.011  3298.995
impute_censored[15]  2287.177  829.874  1519.089  3316.162
median0               520.012   90.909   369.907   658.412
median1               400.322   70.953   290.464   517.338
%load_ext watermark
%watermark -n -u -v -iv -p pytensor
Last updated: Wed Mar 22 2023

Python implementation: CPython
Python version       : 3.11.0
IPython version      : 8.9.0

pytensor: 2.10.1

numpy: 1.24.2
arviz: 0.14.0
pymc : 5.1.2