Shapiro-Wilk Test
The Shapiro-Wilk test is a statistical test for normality that compares ordered sample values against their expected values under a normal distribution. It is widely regarded as the most powerful normality test for small to moderate samples (the original 1965 formulation covered n ≤ 50; modern implementations extend it to several thousand observations).
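In practice the test is usually run through a statistics library. A minimal sketch using SciPy's `scipy.stats.shapiro` (the sample here is simulated pilot-run data, not from any real process):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=0.5, size=15)  # e.g. 15 pilot-run measurements

stat, p_value = stats.shapiro(sample)
print(f"W = {stat:.4f}, p = {p_value:.4f}")

# Conventional reading: p < 0.05 rejects normality; p >= 0.05 means
# "no evidence against normality" -- NOT proof that the data is normal.
if p_value < 0.05:
    print("Reject normality at the 5% level")
else:
    print("Fail to reject normality")
```

The W statistic lies in (0, 1], with values near 1 indicating close agreement with the normal order statistics.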
Why It Matters
The Shapiro-Wilk test is the gold standard for normality testing with small samples — precisely the situation quality engineers face most often. When you have 15 measurements from a pilot run or 25 parts from a capability study, Shapiro-Wilk gives you the most reliable answer about whether your data is normally distributed.
But "most reliable" is relative. With 15 measurements, even Shapiro-Wilk struggles to detect moderate skewness or kurtosis. The test tells you "I don't have enough evidence to say this isn't normal" — which is very different from "this data is normal." Engineers routinely misinterpret failure to reject as confirmation of normality, then proceed with Gaussian-based capability calculations on data that may well be non-normal.
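The limited power at n = 15 is easy to demonstrate by simulation. The sketch below is illustrative (assumptions: a moderately right-skewed gamma process, α = 0.05, 1,000 replicates — none of these figures come from the article): it estimates how often Shapiro-Wilk actually flags the non-normal data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 15, 1000, 0.05

# Draw repeated samples from a moderately right-skewed process
# (gamma with shape 4: skewness = 2/sqrt(4) = 1.0).
rejections = 0
for _ in range(reps):
    sample = rng.gamma(shape=4.0, scale=1.0, size=n)
    _, p = stats.shapiro(sample)
    if p < alpha:
        rejections += 1

power = rejections / reps
print(f"Estimated power at n={n}: {power:.2f}")
# Typically well under 50%: most of these skewed samples pass the test.
```

In other words, at this sample size the majority of clearly skewed datasets sail through with p ≥ 0.05 — exactly the "failure to reject ≠ normal" trap described above.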
The deeper problem is that the test asks the wrong question for quality engineering purposes. "Is this data exactly normally distributed?" matters less than "Will my capability calculations be accurate?" A slightly non-normal distribution might produce negligible Cpk error, while a moderately non-normal one could produce significant error — and the Shapiro-Wilk test does not quantify this practical impact.
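One way to make that practical impact concrete is to compare the normal-theory upper capability index with a percentile-based one (the ISO 21747-style definition using the 50% and 99.865% points) on a known skewed distribution. The distribution and the spec limit below are hypothetical, chosen only to illustrate the gap:

```python
from scipy import stats

# Illustrative right-skewed process: lognormal with sigma = 0.5, median = 1.0
dist = stats.lognorm(s=0.5, scale=1.0)
USL = 3.0  # hypothetical upper specification limit

# Naive (normal-theory) upper capability index: (USL - mean) / (3 * sd)
mean, sd = dist.mean(), dist.std()
cpu_naive = (USL - mean) / (3.0 * sd)

# Percentile-based index: uses the distribution's own 50% and 99.865% points
median = dist.ppf(0.5)
p99865 = dist.ppf(0.99865)
cpu_pct = (USL - median) / (p99865 - median)

print(f"Normal-theory Cpu:    {cpu_naive:.2f}")  # about 1.0
print(f"Percentile-based Cpu: {cpu_pct:.2f}")    # about 0.57
# The normal-theory figure overstates capability because it
# ignores the distribution's long right tail.
```

For this distribution the normal-theory index is roughly 1.0 while the percentile-based index is roughly 0.57 — a "capable" verdict versus a clearly incapable one, from the same process.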
The EntropyStat Perspective
EntropyStat eliminates the need for normality testing altogether. The EGDF does not require normality — or any distributional assumption — so the Shapiro-Wilk question becomes irrelevant. Whether your 15-measurement pilot run is normally distributed or not, the EGDF produces a reliable distribution estimate and accurate capability indices.
This is especially valuable for the small-sample scenarios where Shapiro-Wilk is supposed to shine. With 10–20 measurements, the test's statistical power is limited, meaning it often fails to detect meaningful non-normality. Meanwhile, the EGDF's entropy-based optimization is specifically designed for small samples — 5–8 measurements are sufficient for stable distribution estimates, compared to the 30+ that parametric methods require.
The practical benefit: teams can skip the "test for normality → choose method" decision tree entirely. One analytical approach works across all data types and sample sizes, removing a subjective decision point that often varies between engineers and between software packages.
Related Terms
Anderson-Darling Test
The Anderson-Darling test is a statistical goodness-of-fit test that measures how well data follows a specified distribution. It gives extra weight to the tails of the distribution, making it more sensitive than the Kolmogorov-Smirnov test to departures from normality in the tails.
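A short sketch using SciPy's `scipy.stats.anderson` (simulated data; note that SciPy reports critical values at fixed significance levels rather than a p-value):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(50.0, 2.0, size=25)  # e.g. 25 parts from a capability study

result = stats.anderson(data, dist='norm')
print(f"A^2 statistic: {result.statistic:.3f}")

# Compare the statistic against the critical value at each significance level
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > cv else "fail to reject"
    print(f"  {sl}% level: critical value {cv:.3f} -> {verdict} normality")
```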
Normal Distribution
The normal (Gaussian) distribution is a symmetric, bell-shaped probability distribution fully described by its mean and standard deviation. It is the foundational assumption behind most classical statistical quality methods, including Cpk, Shewhart charts, and Six Sigma calculations.
Non-Normal Data
Non-normal data is process data whose distribution does not follow the Gaussian (bell curve) pattern. Common non-normal patterns in manufacturing include skewed distributions, bimodal distributions, truncated distributions, and heavy-tailed distributions.
Small Sample Statistics
Small sample statistics deals with drawing reliable conclusions from limited data — typically fewer than 30 observations. Traditional methods lose reliability with small samples because parametric distribution estimates become unstable, and the Central Limit Theorem provides weaker guarantees.
Kolmogorov-Smirnov Test
The Kolmogorov-Smirnov (K-S) test is a nonparametric goodness-of-fit test that measures the maximum distance between an empirical cumulative distribution function and a reference distribution. It determines whether a sample plausibly comes from a fully specified distribution; when the distribution's parameters are estimated from the same data, the standard critical values no longer apply and a correction such as the Lilliefors test is needed.
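A sketch using SciPy's `scipy.stats.kstest` (simulated data), including the common pitfall of plugging estimated parameters into the standard test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=30)

# Correct use: reference distribution fully specified in advance
stat, p = stats.kstest(data, 'norm', args=(0.0, 1.0))
print(f"K-S vs N(0, 1): D = {stat:.3f}, p = {p:.3f}")

# Pitfall: using the sample mean and sd as the reference parameters.
# The standard K-S p-value is then inflated (the test almost never
# rejects); Lilliefors-corrected critical values should be used instead.
stat_fit, p_fit = stats.kstest(data, 'norm',
                               args=(data.mean(), data.std(ddof=1)))
print(f"K-S with estimated parameters: D = {stat_fit:.3f}, p = {p_fit:.3f}")
```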
Related Articles
The Distribution Fitting Trap: Weibull, Lognormal, or None of the Above?
Distribution fitting replaces the normality assumption with a different guess. With typical sample sizes, Weibull, lognormal, and gamma all pass goodness-of-fit tests — giving different Cpk values. The distribution fitting step that should fix your analysis becomes its own error source.
Mar 13, 2026
Small Sample Capability: How to Trust Cpk With Only 10 Parts
With a small sample of 10 parts, traditional Cpk has a confidence interval 0.6 units wide — your 1.38 could be anywhere from 1.05 to 1.71. Entropy-based methods extract more from limited data without the normality assumption.
Mar 7, 2026
Why Your SPC Software Lies About Non-Normal Data
Your SPC software computes Cpk assuming data follows a bell curve — but 60–80% of manufacturing data doesn’t. That silent assumption produces capability numbers that are confidently wrong, costing real money in both directions.
Mar 6, 2026
See Entropy-Powered Analysis in Action
Upload your data and compare traditional SPC with entropy-based methods. Free demo — no credit card required.