In pharmaceutical Quality Control, measurement uncertainty (MU) is usually documented in method validation reports but rarely quantified in batch disposition decisions.
Yet whenever a batch result is close to a specification limit, MU determines the risk of releasing a non-conforming batch or of rejecting a conforming one.
The following box explains how Monte Carlo integrates method uncertainty and process behaviour when only one QC measurement is available.
Monte Carlo simulation provides a simple and transparent way to quantify these risks using the fundamental relationship:
\[X_m = X_t + \varepsilon\]
where:
– $X_m$ is the measured (reported) value;
– $X_t$ is the (unknown) true value of the batch;
– $\varepsilon$ is the analytical measurement error.
This case study focuses on a one-sided upper specification limit (USL) and shows how to model the process distribution of true batch values, the analytical measurement error, and the resulting probabilities of wrong release and wrong rejection decisions.
Every batch has a true quality attribute value (assay, pH, purity, etc.).
This value is not observed directly: what we observe is the result of the analytical measurement.
Two sources of variation act on the reported QC value:
– process (batch-to-batch) variability, which determines the true value of the batch;
– analytical measurement uncertainty, which determines how far the reported result can deviate from that true value.
These two components together determine the distribution of possible measured results — the values on which batch release or rejection decisions are based.
Batch release decisions are made on the measured QC value, not on the (unknown) true value of the batch.
Whenever measurement uncertainty is present, these two quantities may lie on opposite sides of the specification limit — leading to wrong decisions.
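A minimal numeric sketch of the relationship $X_m = X_t + \varepsilon$ makes this concrete; the true value, error draw, and USL below are hypothetical illustration values, not the case-study parameters:

```r
# Hypothetical example: a batch whose true value sits just inside the limit
USL    <- 100.0
X_true <- 99.9   # true value: in specification
eps    <- 0.3    # one plausible draw of analytical error (hypothetical)
X_meas <- X_true + eps

X_true <= USL    # TRUE -> the batch truly conforms
X_meas > USL     # TRUE -> the reported result appears out of specification
```

Here the single reported result would trigger an out-of-specification outcome even though the batch conforms; the reverse situation, a truly out-of-specification batch measured as passing, is equally possible.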
For example, in the case of a one-sided upper specification limit (USL), two types of errors are possible: a truly out-of-specification batch can be measured as conforming, and a truly conforming batch can be measured as out of specification.
A simple decision table helps visualize the two possible errors:
| Decision Outcome | $X_t \le \mathrm{USL}$ (In spec) | $X_t > \mathrm{USL}$ (OOS) |
|---|---|---|
| $X_m \le \mathrm{USL}$ | Correct release | False Acceptance (FA) |
| $X_m > \mathrm{USL}$ | False Non-Compliance (FNC) | Correct rejection |
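The four cells of this table can be counted directly by simulation. The sketch below anticipates the case-study parameters introduced later (process mean 99.5%, SD 0.40%, standard uncertainty u = 0.25%, USL = 100%):

```r
set.seed(1)
N      <- 100000
USL    <- 100.0
X_true <- rnorm(N, mean = 99.5, sd = 0.40)         # true batch values
X_meas <- X_true + rnorm(N, mean = 0, sd = 0.25)   # add analytical error

# Classify each simulated batch into one cell of the decision table
correct_release <- mean(X_meas <= USL & X_true <= USL)
FA              <- mean(X_meas <= USL & X_true >  USL)  # False Acceptance
FNC             <- mean(X_meas >  USL & X_true <= USL)  # False Non-Compliance
correct_reject  <- mean(X_meas >  USL & X_true >  USL)

# The four proportions partition all simulated batches
correct_release + FA + FNC + correct_reject  # = 1
```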
In practical GMP terms:
– False Acceptance (FA) means releasing a batch whose true value is out of specification;
– False Non-Compliance (FNC) means rejecting, or investigating as OOS, a batch whose true value is within specification.
These two probabilities quantify the decision error directly, linking measurement uncertainty and process behaviour to the real GMP consequences of acceptance or rejection.
Monte Carlo simulation provides a structured way to understand how process variability and measurement uncertainty jointly influence the QC decision.
Instead of relying only on the single observed result, the simulation reconstructs all the plausible scenarios that could have produced that measurement.
A process model (capability analysis, historical data, prior knowledge) represents the range of possible true values for the batch.
This reflects what could be true before considering measurement error.
For each simulated true value, analytical noise consistent with the method’s validated uncertainty is added.
This step mimics the behaviour of the measurement system.
The simulation produces many “pseudo-measurements” that represent how the test would vary if repeated multiple times under identical conditions.
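For a single batch, this step can be sketched by fixing one hypothetical true value and replaying the measurement many times (u = 0.25% as in the validation example below; the true value 99.8% is an assumption for illustration):

```r
set.seed(42)
X_true_batch <- 99.8     # one hypothetical true batch value (%)
u            <- 0.25     # standard uncertainty of the method (%)

# Pseudo-measurements: the test repeated many times on the same batch
pseudo <- X_true_batch + rnorm(10000, mean = 0, sd = u)

mean(pseudo)        # centres on the true value
sd(pseudo)          # reproduces the method uncertainty
mean(pseudo > 100)  # share of repeats that would appear above a USL of 100%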
By comparing true and measured values in each scenario, Monte Carlo directly calculates:
– the probability of False Acceptance, P(FA);
– the probability of False Non-Compliance, P(FNC).
These are decision risks, not analytical precision metrics, and they cannot be derived from the single QC result alone.
Monte Carlo–based interpretation is consistent with modern quality risk management and metrology guidelines.
To illustrate how Monte Carlo integrates process behavior and measurement uncertainty, we construct a simple and realistic model of a QC assay.
The manufacturing process is assumed to be centred slightly below the upper specification limit (USL) and to show moderate batch-to-batch variability.
Thus, the true value for the batch is modelled as:
\[X_t \sim \mathcal{N}(\mu = 99.5\%,\ \sigma = 0.40\%)\]
This distribution represents all plausible true assay values before measurement error is applied.
Analytical methods introduce additional noise.
Assume the method validation report provides a standard uncertainty of:
\[u = 0.25\%\]
This is added to each simulated true value to generate a simulated measured value, mimicking the real laboratory behaviour.
We consider a specification window commonly encountered in GMP practice: LSL = 98.0% and USL = 100.0%.
Because the process is centred near the upper limit, these limits allow us to quantify both types of decision risk, FA and FNC, at the USL.
R Code Block 1 — Simulating Process + Measurement Uncertainty

```r
set.seed(123)
N <- 100000

# Process model (true values)
mean_true <- 99.5
sd_true   <- 0.40
X_true <- rnorm(N, mean = mean_true, sd = sd_true)

# Measurement uncertainty (standard uncertainty u)
u <- 0.25
error <- rnorm(N, mean = 0, sd = u)

# Observed / measured values
X_meas <- X_true + error

LSL <- 98.0
USL <- 100.0

# Outcomes of decision based on measured values
FNC <- mean(X_meas > USL & X_true <= USL)
FA  <- mean(X_meas <= USL & X_true > USL)
FNC; FA
```
R Code Block 2 — Risk as a Function of MU

We evaluate FA and FNC risk for different uncertainty levels (u = 0.10–0.50%).

```r
set.seed(123)
N <- 100000
mean_true <- 99.5
sd_true   <- 0.40
LSL <- 98.0
USL <- 100.0

# Fix X_true once, then vary only MU
X_true <- rnorm(N, mean = mean_true, sd = sd_true)

res <- data.frame(
  u   = seq(0.10, 0.50, by = 0.05),
  FA  = NA_real_,
  FNC = NA_real_
)

for (i in seq_along(res$u)) {
  eps <- rnorm(N, mean = 0, sd = res$u[i])
  Xm  <- X_true + eps
  res$FA[i]  <- mean(Xm <= USL & X_true > USL)
  res$FNC[i] <- mean(Xm > USL & X_true <= USL)
}
res

# Optional sanity checks (didactic)
prop_true_ooe <- mean(X_true > USL)  # should be > 0
range_true    <- range(X_true)       # should include values > USL
prop_true_ooe; range_true
```
R Code Block 3 — Plotting Risk vs Measurement Uncertainty

```r
yrange <- range(res$FA, res$FNC)
plot(res$u, res$FA, type = "b", pch = 19,
     xlab = "Standard Uncertainty u (%)",
     ylab = "Probability",
     ylim = yrange,
     main = "False Acceptance / False Non-Compliance vs Measurement Uncertainty")
lines(res$u, res$FNC, type = "b", col = "red", pch = 19)
legend("topleft",
       legend = c("False Acceptance", "False Non-Compliance"),
       col = c("black", "red"), pch = 19)
```
Figure 15.1 below illustrates the resulting risk curves for FA and FNC across different values of measurement uncertainty (u).
Figure 15.1 – Decision risks of False Acceptance (FA) and False Non-Compliance (FNC) estimated via Monte Carlo uncertainty propagation. Risks increase monotonically with the method-validated standard uncertainty u, showing an asymmetric trade-off driven by the process distribution centered near the upper specification limit (USL = 100%).
Monte Carlo simulation makes this trade-off explicit and quantifiable.
Minor numerical differences across independent runs are expected due to Monte Carlo sampling variability, even when the underlying process and method uncertainty model remain unchanged.
Across the evaluated uncertainty levels, the simulation returned the following values:
| u (%) | False Acceptance (FA) | False Non-Compliance (FNC) |
|---|---|---|
| 0.10 | 0.01470 | 0.02216 |
| 0.15 | 0.02017 | 0.03634 |
| 0.20 | 0.02476 | 0.05082 |
| 0.25 | 0.02755 | 0.06672 |
| 0.30 | 0.03062 | 0.08411 |
| 0.35 | 0.03275 | 0.10086 |
| 0.40 | 0.03546 | 0.11740 |
| 0.45 | 0.03627 | 0.13389 |
| 0.50 | 0.03823 | 0.14982 |
These numerical values clearly show that:
– both FA and FNC increase monotonically with the standard uncertainty u;
– FNC is consistently higher than FA, because the process mean is close to the USL.
Monte Carlo simulation quantifies this asymmetry with complete transparency.
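As a cross-check on the simulated values, the same probabilities can be computed without sampling: because $X_t$ is normal and $\varepsilon$ is independent normal noise, FA and FNC reduce to one-dimensional integrals that base R's `integrate()` evaluates directly. The sketch below is didactic and uses the case-study parameters at u = 0.25%:

```r
mean_true <- 99.5; sd_true <- 0.40
USL <- 100.0; u <- 0.25

# FA: true value above USL, but measured value at or below USL
FA_exact <- integrate(function(x)
  dnorm(x, mean_true, sd_true) * pnorm(USL, mean = x, sd = u),
  lower = USL, upper = Inf)$value

# FNC: true value at or below USL, but measured value above USL
FNC_exact <- integrate(function(x)
  dnorm(x, mean_true, sd_true) * pnorm(USL, mean = x, sd = u, lower.tail = FALSE),
  lower = -Inf, upper = USL)$value

FA_exact; FNC_exact  # close to the Monte Carlo estimates for u = 0.25
```

The agreement between these integrals and the tabulated Monte Carlo values confirms that the observed FA/FNC asymmetry is a property of the model, not a sampling artefact.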
Expected pattern:
– FA increases with measurement uncertainty (higher risk of releasing bad batches);
– FNC also increases (more chance of rejecting good batches).
This mirrors the classical metrology trade-off.
Monte Carlo simulation provides a transparent and auditable justification for decisions under uncertainty.
This case study shows how to combine process variability and measurement uncertainty to quantify regulatory decision risks. Monte Carlo simulation translates MU into concrete probabilities of False Acceptance and False Non-Compliance, supporting modern QRM practices and inspection-ready justification.