Tolerance Intervals
Tolerance intervals define a range expected to contain a specified proportion of the population with a given confidence level. Unlike confidence intervals (which estimate a parameter) or prediction intervals (which bound the next observation), tolerance intervals bound a percentage of all future production.
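To make the distinction concrete, here is a minimal sketch (assuming NumPy and SciPy, with made-up measurement data) that computes all three interval types from the same ten observations:

```python
# A minimal sketch contrasting the three interval types on one sample.
# Data and numbers are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=0.5, size=10)   # hypothetical measurements
n, mean, sd = len(x), x.mean(), x.std(ddof=1)

# 95% confidence interval for the MEAN (a parameter)
t = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))

# 95% prediction interval for the NEXT observation
pi = (mean - t * sd * np.sqrt(1 + 1 / n), mean + t * sd * np.sqrt(1 + 1 / n))

# 99%/95% tolerance interval: bounds 99% of ALL future production
# (k = 4.43 is the tabulated two-sided factor for n=10, P=0.99,
# confidence=0.95; see the k-factor sketch in the next section)
k = 4.43
ti = (mean - k * sd, mean + k * sd)
print(ci, pi, ti)  # the tolerance interval is by far the widest
```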
Why It Matters
Tolerance intervals answer a question that engineers constantly ask but rarely formulate precisely: "What range of values will 99% of our parts fall within?" This is different from the specification limits (which define what the customer wants) — tolerance intervals describe what the process actually produces.
The gap between the tolerance interval and the specification limits translates directly into an expected defect rate. If the 99% tolerance interval fits comfortably within spec limits, at most about 1% of output falls outside spec (at the stated confidence) and the process is highly capable. If it extends beyond spec, rework or scrap is inevitable.
Traditional tolerance intervals (like the k-factor method) assume normality and require sample size corrections that produce very wide intervals for small samples. A 99%/95% tolerance interval from 10 normally-distributed observations spans roughly ±4.4σ — so wide that it is often useless for engineering decisions.
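For readers who want to reproduce that figure, here is a minimal sketch of Howe's approximation to the two-sided k-factor (assuming SciPy; the helper name k_factor is ours):

```python
# Howe's approximation for the two-sided normal-theory k-factor;
# reproduces the ~4.4 sigma figure quoted above for n = 10.
import numpy as np
from scipy import stats

def k_factor(n, coverage=0.99, confidence=0.95):
    """Two-sided tolerance-interval k via Howe's approximation."""
    z = stats.norm.ppf((1 + coverage) / 2)           # e.g. 2.576 for 99%
    chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)  # lower chi-square quantile
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

print(k_factor(10))   # ~4.44: interval spans roughly +/-4.4 sample SDs
print(k_factor(100))  # ~2.93: much tighter with more data
```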
The EntropyStat Perspective
EntropyStat computes tolerance intervals directly from the EGDF, which produces tighter intervals than parametric methods — particularly for non-normal data and small samples. Because the EGDF captures the actual distribution shape, the tolerance interval reflects the real process spread rather than a normal approximation that may overestimate tail probabilities.
Traditional tolerance intervals based on the normal distribution are inherently conservative: they must account for both sampling uncertainty and the assumption that the distribution is normal. EntropyStat's entropy-based intervals only need to account for sampling uncertainty, because the distribution shape is learned directly from the data. This typically produces intervals 10–30% tighter than the normal-based equivalents, especially for skewed distributions.
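EntropyStat's EGDF computation is proprietary, so the sketch below does not reproduce it. Instead, it illustrates the underlying point with standard tools (NumPy and SciPy, with hypothetical lognormal parameters): for skewed data, a symmetric normal-theory tolerance interval is far wider than the true central 99% range, and can even cross into physically impossible values.

```python
# Illustration only (NOT EntropyStat's EGDF): normal-theory tolerance
# intervals on skewed data. Parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=10)   # right-skewed sample

k = 4.44                                  # Howe k for n=10, 99%/95%
lo, hi = x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

true = stats.lognorm.ppf([0.005, 0.995], s=0.5)   # true central 99% range
print(f"normal-theory TI: [{lo:.2f}, {hi:.2f}] (width {hi - lo:.2f})")
print(f"true 99% range:   [{true[0]:.2f}, {true[1]:.2f}] "
      f"(width {true[1] - true[0]:.2f})")
# The normal-theory lower bound can even go negative, although a
# lognormal process can never produce negative values.
```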
For quality engineering, tighter tolerance intervals mean more confident decisions about process capability. If the entropy-based tolerance interval fits within specification limits while the normal-based one does not, that finding can prevent unnecessary process improvement projects triggered by artificially inflated statistical intervals.
Try our free Cpk calculator → to compute traditional capability indices from your specification limits — then upload your data to see how entropy-based tolerance intervals compare.
Related Terms
Process Capability (Cpk/Ppk)
Process capability indices (Cpk and Ppk) quantify how well a manufacturing process can produce parts within specification limits. Cpk measures short-term capability using within-subgroup variation, while Ppk measures long-term performance using overall variation.
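As a quick reference, here is a minimal sketch of the Ppk calculation (hypothetical data and spec limits; Cpk is identical in form but uses within-subgroup sigma instead of the overall standard deviation):

```python
# Ppk: minimum distance from the mean to a spec limit, in units of
# three overall standard deviations. Data and limits are hypothetical.
import numpy as np

def ppk(x, lsl, usl):
    mu, sigma = np.mean(x), np.std(x, ddof=1)   # overall (long-term) variation
    return min(usl - mu, mu - lsl) / (3 * sigma)

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9])
print(ppk(x, lsl=9.0, usl=11.0))   # > 1.33 is commonly taken as "capable"
```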
EGDF (Entropic Global Distribution Function)
The EGDF is Machine Gnostics' primary distribution estimation method. It constructs a smooth, continuous cumulative distribution function directly from data using entropy-based algebraic optimization, without assuming any parametric form such as normal or Weibull.
Normal Distribution
The normal (Gaussian) distribution is a symmetric, bell-shaped probability distribution fully described by its mean and standard deviation. It is the foundational assumption behind most classical statistical quality methods, including Cpk, Shewhart charts, and Six Sigma calculations.
Small Sample Statistics
Small sample statistics deals with drawing reliable conclusions from limited data — typically fewer than 30 observations. Traditional methods lose reliability with small samples because parametric distribution estimates become unstable, and the Central Limit Theorem provides weaker guarantees.
Non-Normal Data
Non-normal data is process data whose distribution does not follow the Gaussian (bell curve) pattern. Common non-normal patterns in manufacturing include skewed distributions, bimodal distributions, truncated distributions, and heavy-tailed distributions.
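A common first step is simply to test whether normality is plausible before trusting normal-theory capability numbers. Here is a minimal sketch using SciPy's Shapiro-Wilk test (the data and the 0.05 threshold are illustrative):

```python
# Quick normality screen before relying on normal-based Cpk or
# tolerance intervals. Data and threshold are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
skewed = rng.lognormal(sigma=0.8, size=50)   # hypothetical skewed process data

stat, p = stats.shapiro(skewed)
if p < 0.05:
    print(f"Shapiro-Wilk p = {p:.4f}: normality is doubtful; "
          "normal-based Cpk and tolerance intervals may mislead.")
else:
    print(f"Shapiro-Wilk p = {p:.4f}: no evidence against normality.")
```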
Related Articles
First Pass Yield vs. Cpk: Which Metric Tells the Real Story?
First pass yield says 98.2%. Cpk says 0.94. One measures what happened. The other predicts what will happen next. When they disagree, something important is hiding — and knowing which to trust prevents costly mistakes.
Mar 17, 2026
PPAP Submissions: Capability Evidence That Survives Customer Audits
Your PPAP got rejected — not for bad parts, but for bad statistics. OEM auditors now scrutinize whether your Cpk method matches your data. Build a PPAP capability evidence chain that withstands the toughest audits.
Mar 14, 2026
EntropyStat vs. Minitab: What Distribution-Free Analysis Actually Means
Minitab offers non-normal options. EntropyStat is distribution-free. Those aren’t the same thing. Offering a menu of distributions to choose from is distribution-flexible — not distribution-free. Here’s why that distinction determines whether your Cpk is correct.
Mar 10, 2026
See Entropy-Powered Analysis in Action
Upload your data and compare traditional SPC with entropy-based methods. Free demo — no credit card required.