Acceptance Sampling
Acceptance sampling is a statistical quality control method in which a random sample drawn from a lot is inspected to decide whether to accept or reject the entire lot. It balances inspection cost against the risk of accepting defective lots or rejecting good ones.
Why It Matters
When 100% inspection is impractical or destructive (tensile testing, burst pressure, shelf life), acceptance sampling provides a statistically grounded accept/reject decision. Standards like ANSI/ASQ Z1.4 (attributes) and Z1.9 (variables) define sampling plans based on lot size, acceptable quality level (AQL), and the desired protection against bad lots.
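In code, an attributes plan reduces to a simple counting rule. The sketch below is a minimal illustration; the plan parameters n and c are placeholders, since real values come from the Z1.4 tables for a given lot size, inspection level, and AQL.

```python
# Minimal accept/reject logic for a single attributes sampling plan.
# n and c are illustrative placeholders; real values come from the
# ANSI/ASQ Z1.4 tables for the lot size, inspection level, and AQL.

def accept_lot(defects_found: int, acceptance_number: int) -> bool:
    """Accept the lot if the defect count does not exceed c."""
    return defects_found <= acceptance_number

n = 125  # sample size (illustrative)
c = 3    # acceptance number (illustrative)

defects_found = 2  # defects observed while inspecting the n sampled units
print("accept" if accept_lot(defects_found, c) else "reject")
```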
Variables sampling plans (Z1.9) are more efficient than attributes plans because they use measurement data rather than pass/fail counts, requiring smaller sample sizes for the same protection. However, Z1.9 plans assume normally distributed measurements, a critical limitation when dealing with non-normal quality characteristics.
The consequences of a poor sampling plan are asymmetric: an accepted bad lot reaches the customer and creates quality escapes, recalls, or warranty claims, while a rejected good lot wastes material and delays shipments. Getting the sampling plan right, with accurate distribution assumptions, directly impacts both quality risk and operational efficiency.
The EntropyStat Perspective
EntropyStat improves variable acceptance sampling by removing the normality assumption from lot quality estimation. Traditional Z1.9 plans estimate the fraction nonconforming from the sample mean and standard deviation, assuming a normal distribution to extrapolate tail probabilities. When measurements are non-normal, this extrapolation can significantly overstate or understate the true defect rate.
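To see how the normality assumption enters the math, here is a simplified sketch of the tail extrapolation (not the exact Z1.9 k-method or M-method; the measurements and specification limit are illustrative):

```python
from statistics import mean, stdev
from scipy.stats import norm

# Simplified normal-theory estimate of the fraction nonconforming.
# Z1.9 itself works through tabulated acceptability constants; this sketch
# isolates the core step that assumes a normal distribution for the tail.

measurements = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1]  # illustrative
usl = 10.5  # upper specification limit (illustrative)

xbar, s = mean(measurements), stdev(measurements)
z = (usl - xbar) / s
p_nonconforming = norm.sf(z)  # P(X > USL) under the normal assumption
print(f"estimated fraction nonconforming: {p_nonconforming:.4%}")
```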
Using the EGDF to estimate the lot's distribution directly from sample measurements produces more accurate fraction-nonconforming estimates. The EGDF captures the actual tail behavior — whether skewed, heavy-tailed, or truncated — so the accept/reject decision is based on realistic defect probability rather than a Gaussian approximation.
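Structurally, the EGDF-based decision replaces only the tail-probability step. This article does not show EntropyStat's actual API, so the names below (egdf_fit, sf) are hypothetical placeholders meant to indicate where the fitted distribution takes over from the Gaussian formula:

```python
# HYPOTHETICAL interface: egdf_fit and .sf are placeholders, not
# EntropyStat's real API. The structural point: the tail probability
# comes from a distribution fitted to the sample itself, not from a
# Gaussian extrapolation of the mean and standard deviation.

measurements = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1]
usl = 10.5

dist = egdf_fit(measurements)     # placeholder: fit the EGDF to the sample
p_nonconforming = dist.sf(usl)    # placeholder: P(X > USL) from the EGDF
accept = p_nonconforming <= 0.01  # compare against the plan's limit
```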
The EGDF's ability to produce stable estimates from small samples also enables tighter sampling plans. If 5–8 measurements can characterize the lot distribution reliably, the required sample size for a given discrimination ratio (consumer risk vs. producer risk) may be smaller than what Z1.9 tables require — reducing inspection costs without sacrificing quality protection.
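To make the discrimination-ratio idea concrete, the sketch below searches for the smallest attributes plan whose OC curve passes a classic two-point test (all four parameters are illustrative, not taken from any standard). Variables plans, and by extension EGDF-based plans, aim to hit the same two points with fewer measurements:

```python
from scipy.stats import binom

# Find the smallest attributes plan (n, c) meeting a two-point OC
# requirement: accept good lots with probability >= 1 - alpha at the AQL,
# and accept bad lots with probability <= beta at the LTPD.
aql, ltpd = 0.01, 0.05    # acceptable vs. rejectable fraction defective
alpha, beta = 0.05, 0.10  # producer's and consumer's risks

def smallest_plan(max_n: int = 2000):
    for n in range(1, max_n + 1):
        for c in range(0, 20):
            if binom.cdf(c, n, aql) >= 1 - alpha and binom.cdf(c, n, ltpd) <= beta:
                return n, c  # first hit is the cheapest plan
    return None

print(smallest_plan())
```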
Related Terms
AQL (Acceptable Quality Level)
AQL is the worst average level of process quality, expressed as percent defective, that is still considered acceptable over a continuing series of lots. It serves as the primary index for acceptance sampling plans, defining the quality level at which lots will be accepted most of the time (typically 95%).
OC Curves (Operating Characteristic)
An OC curve plots the probability of accepting a lot as a function of the lot's true quality level (fraction defective). It characterizes a sampling plan's ability to discriminate between good and bad lots, showing both producer's risk (rejecting good lots) and consumer's risk (accepting bad lots).
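For a single attributes plan, the OC curve follows directly from the binomial distribution. A minimal sketch with an illustrative plan (n = 80, c = 2):

```python
import numpy as np
from scipy.stats import binom

# OC curve for an illustrative single sampling plan (n = 80, c = 2):
# Pa(p) = P(accept lot) = P(at most c defectives in a sample of n).
n, c = 80, 2
p = np.linspace(0.0, 0.10, 11)  # true lot fraction defective
pa = binom.cdf(c, n, p)         # probability of acceptance at each p

for p_i, pa_i in zip(p, pa):
    print(f"p = {p_i:.2f}  ->  Pa = {pa_i:.3f}")
```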
Sample Size Determination
Sample size determination is the process of calculating the minimum number of measurements needed to achieve a desired level of statistical confidence and precision. It depends on the expected variability, the required precision (margin of error), and the acceptable error rates (Type I and Type II).
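One standard closed-form version, assuming a known standard deviation and a normal approximation, sizes a two-sided test to detect a mean shift delta with Type I risk alpha and Type II risk beta:

```python
from math import ceil
from scipy.stats import norm

# n = ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
# Normal approximation with sigma treated as known; inputs are illustrative.
def sample_size(sigma: float, delta: float, alpha: float = 0.05, beta: float = 0.10) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)  # Type I (false alarm) quantile
    z_beta = norm.ppf(1 - beta)        # Type II (missed shift) quantile
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

print(sample_size(sigma=2.0, delta=1.0))
```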
Process Capability (Cpk/Ppk)
Process capability indices (Cpk and Ppk) quantify how well a manufacturing process can produce parts within specification limits. Cpk measures short-term capability using within-subgroup variation, while Ppk measures long-term performance using overall variation.
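The indices share one formula and differ only in which sigma they use. A minimal Ppk sketch on illustrative data (Cpk would substitute a within-subgroup sigma estimate such as R-bar/d2):

```python
import numpy as np

# Ppk uses overall (long-term) variation; Cpk would plug in a
# within-subgroup sigma estimate instead of the overall one.
def ppk(data: np.ndarray, lsl: float, usl: float) -> float:
    mu, sigma = data.mean(), data.std(ddof=1)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

data = np.array([9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1])  # illustrative
print(f"Ppk = {ppk(data, lsl=9.5, usl=10.5):.2f}")
```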
Non-Normal Data
Non-normal data is process data whose distribution does not follow the Gaussian (bell curve) pattern. Common non-normal patterns in manufacturing include skewed distributions, bimodal distributions, truncated distributions, and heavy-tailed distributions.
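Before trusting a normal-theory plan, a quick screen for non-normality is cheap. A minimal sketch on illustrative data with one heavy-tailed value:

```python
from scipy.stats import shapiro, skew

# Quick screen for non-normality before applying normal-theory methods.
measurements = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2, 14.8, 10.1]  # illustrative
stat, p_value = shapiro(measurements)
print(f"skewness = {skew(measurements):.2f}, Shapiro-Wilk p = {p_value:.3f}")
# A small p-value flags a departure from normality; with tiny samples the
# test has little power, so a non-significant result is weak evidence.
```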
Related Articles
First Pass Yield vs. Cpk: Which Metric Tells the Real Story?
First pass yield says 98.2%. Cpk says 0.94. One measures what happened. The other predicts what will happen next. When they disagree, something important is hiding — and knowing which to trust prevents costly mistakes.
Mar 17, 2026
PPAP Submissions: Capability Evidence That Survives Customer Audits
Your PPAP got rejected — not for bad parts, but for bad statistics. OEM auditors now scrutinize whether your Cpk method matches your data. Build a PPAP capability evidence chain that withstands the toughest audits.
Mar 14, 2026
EntropyStat vs. Minitab: What Distribution-Free Analysis Actually Means
Minitab offers non-normal options. EntropyStat is distribution-free. Those aren’t the same thing. Offering a menu of distributions to choose from is distribution-flexible — not distribution-free. Here’s why that distinction determines whether your Cpk is correct.
Mar 10, 2026
See Entropy-Powered Analysis in Action
Upload your data and compare traditional SPC with entropy-based methods. Free demo — no credit card required.