OC Curves (Operating Characteristic)
An OC curve plots the probability of accepting a lot as a function of the lot's true quality level (fraction defective). It characterizes a sampling plan's ability to discriminate between good and bad lots, showing both producer's risk (rejecting good lots) and consumer's risk (accepting bad lots).
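As a concrete illustration, here is a minimal sketch in Python (assuming NumPy and SciPy are available) for a hypothetical single-sampling attributes plan with sample size n = 80 and acceptance number c = 2: the probability of accepting a lot is simply the binomial probability of finding at most c defectives in the sample.

```python
import numpy as np
from scipy.stats import binom

def oc_curve(n, c, fractions_defective):
    """P(accept) = P(at most c defectives in a random sample of n items)."""
    return [binom.cdf(c, n, p) for p in fractions_defective]

# Hypothetical plan: inspect n = 80 items, accept the lot if at most c = 2 are defective.
p_grid = np.linspace(0.0, 0.10, 11)
for p, pa in zip(p_grid, oc_curve(80, 2, p_grid)):
    print(f"true fraction defective {p:.2f} -> P(accept) = {pa:.3f}")
```

Plotting P(accept) against the true fraction defective traces out the OC curve for that plan.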
Why It Matters
OC curves are the diagnostic tool for evaluating sampling plans. Every sampling plan has a tradeoff: sample more to catch bad lots reliably, or sample less to save cost and time. The OC curve makes this tradeoff explicit and quantitative.
A steep OC curve means the plan discriminates sharply between good and bad lots — it rarely accepts bad lots or rejects good ones. A flat OC curve means the plan is indecisive, accepting or rejecting lots almost randomly in the gray zone between clearly good and clearly bad quality levels.
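To put steepness in numbers, the sketch below compares two hypothetical plans, a small one (n = 50, c = 1) and a larger one (n = 200, c = 4). The specific plan parameters are illustrative, not drawn from any standard table.

```python
from scipy.stats import binom

# Two hypothetical plans: a small plan and a larger plan with a similar acceptance ratio.
plans = {"n=50, c=1": (50, 1), "n=200, c=4": (200, 4)}

for p in (0.01, 0.03, 0.05, 0.08):
    row = ", ".join(f"{label}: Pa={binom.cdf(c, n, p):.2f}" for label, (n, c) in plans.items())
    print(f"fraction defective {p:.2f} -> {row}")

# The larger plan's acceptance probability drops off much faster as quality worsens
# (a steeper OC curve), while the smaller plan keeps accepting marginal lots.
```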
Quality engineers use OC curves to negotiate sampling plans with customers. When a customer proposes a sampling plan, the OC curve shows whether the plan actually provides the protection both parties need. Without understanding the OC curve, you might agree to a sampling plan that routinely accepts lots with defect rates 5x higher than the stated AQL.
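One practical way to vet a proposed plan is to read its OC curve at the stated AQL and at multiples of it. The hedged example below uses a hypothetical plan (n = 20, c = 1) with an assumed AQL of 1% defective; the numbers show how easily a small plan can accept lots at five times the AQL.

```python
from scipy.stats import binom

# Hypothetical proposed plan: n = 20, c = 1, with a stated AQL of 1% defective.
n, c, aql = 20, 1, 0.01

pa_at_aql = binom.cdf(c, n, aql)       # how often AQL-quality lots are accepted
pa_at_5x = binom.cdf(c, n, 5 * aql)    # how often lots at 5x the AQL are accepted
print(f"P(accept) at AQL    = {pa_at_aql:.2f}")
print(f"P(accept) at 5x AQL = {pa_at_5x:.2f}")
```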
The EntropyStat Perspective
EntropyStat enhances OC curve analysis for variables sampling plans by providing more accurate lot quality estimation. OC curves for traditional variables sampling plans are derived under the assumption that measurements follow a normal distribution. When this assumption is violated, the actual operating characteristic of the plan deviates from the theoretical curve — the plan does not perform as designed.
By using the EGDF to estimate the actual fraction nonconforming from sample measurements, EntropyStat enables "empirical OC curves" that reflect real distributional behavior. This shows quality engineers the true discriminating power of their sampling plan when data is non-normal, which may be significantly better or worse than what the textbook OC curve suggests.
The practical implication: a variables sampling plan designed under normality assumptions might, according to its theoretical OC curve, accept 10% of lots at a given quality level, while the actual acceptance rate with skewed data could be 25%. EntropyStat's distribution-aware analysis reveals these discrepancies, allowing teams to adjust sample sizes or switch to plans that account for the actual data characteristics.
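EntropyStat's EGDF-based estimation is not reproduced here, but a generic Monte Carlo sketch can illustrate the kind of gap described above. The assumptions are mine: a variables plan that accepts a lot when (USL - xbar) / s >= k, hypothetical parameters n = 10 and k = 1.58, and a lognormal (skewed) population calibrated to the same true fraction nonconforming as the normal case. The two simulated acceptance rates typically differ, which is the discrepancy between the textbook OC curve and the empirical one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def acceptance_rate(samples, usl, k):
    """samples: (n_lots, n) array of lot measurements.
    Accept a lot when (USL - xbar) / s >= k."""
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)
    return float(np.mean((usl - xbar) / s >= k))

n_lots, n, k = 20_000, 10, 1.58   # hypothetical variables-plan parameters
p_true = 0.05                     # same true fraction nonconforming in both cases

# Normal population: place the USL so exactly 5% of items exceed it.
usl_norm = stats.norm.ppf(1 - p_true)
pa_normal = acceptance_rate(rng.standard_normal((n_lots, n)), usl_norm, k)

# Skewed (lognormal) population calibrated to the same 5% nonconforming rate.
usl_skew = stats.lognorm.ppf(1 - p_true, s=0.8)
pa_skewed = acceptance_rate(rng.lognormal(0.0, 0.8, (n_lots, n)), usl_skew, k)

print(f"P(accept) at {p_true:.0%} nonconforming: "
      f"normal = {pa_normal:.2f}, skewed = {pa_skewed:.2f}")
```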
Related Terms
Acceptance Sampling
Acceptance sampling is a statistical quality control method where a random sample is inspected from a lot to decide whether to accept or reject the entire lot. It balances inspection cost against the risk of accepting defective lots or rejecting good ones.
AQL (Acceptable Quality Level)
AQL is the maximum percentage of defective items in a lot that is considered acceptable for ongoing production. It serves as the primary index for acceptance sampling plans, defining the quality level at which lots will be accepted most of the time (typically 95%).
Sample Size Determination
Sample size determination is the process of calculating the minimum number of measurements needed to achieve a desired level of statistical confidence and precision. It depends on the expected variability, the required precision (margin of error), and the acceptable error rates (Type I and Type II).
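For the common case of detecting a shift in a process mean, a standard normal-approximation formula is n = ((z_alpha + z_beta) * sigma / delta)^2, where delta is the shift to detect. A minimal sketch, assuming Python with SciPy:

```python
import math
from scipy.stats import norm

def sample_size_for_mean_shift(sigma, delta, alpha=0.05, beta=0.10, two_sided=True):
    """Normal-approximation sample size to detect a mean shift of size `delta`
    with Type I risk alpha and Type II risk beta, given process sigma."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(1 - beta)
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# Example: sigma = 2.0, detect a shift of 1.0 with 5% alpha and 10% beta -> n = 43.
print(sample_size_for_mean_shift(sigma=2.0, delta=1.0))
```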
Type I and Type II Errors
A Type I error (false positive, alpha risk) occurs when a statistical test incorrectly rejects a true null hypothesis. A Type II error (false negative, beta risk) occurs when a test fails to reject a false null hypothesis. In quality engineering, these map to false alarms and missed signals.
Non-Normal Data
Non-normal data is process data whose distribution does not follow the Gaussian (bell curve) pattern. Common non-normal patterns in manufacturing include skewed distributions, bimodal distributions, truncated distributions, and heavy-tailed distributions.
Related Articles
Small Sample Capability: How to Trust Cpk With Only 10 Parts
With a small sample of 10 parts, traditional Cpk has a confidence interval 0.6 units wide — your 1.38 could be anywhere from 1.05 to 1.71. Entropy-based methods extract more from limited data without the normality assumption.
Mar 7, 2026
Why Your SPC Software Lies About Non-Normal Data
Your SPC software computes Cpk assuming data follows a bell curve — but 60–80% of manufacturing data doesn’t. That silent assumption produces capability numbers that are confidently wrong, costing real money in both directions.
Mar 6, 2026
See Entropy-Powered Analysis in Action
Upload your data and compare traditional SPC with entropy-based methods. Free demo — no credit card required.