
Anderson-Darling Test

The Anderson-Darling test is a statistical goodness-of-fit test that measures how well data follows a specified distribution. It gives extra weight to the tails of the distribution, making it more sensitive than the Kolmogorov-Smirnov test to departures from normality in the tails, where the K-S statistic is weakest.
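A minimal sketch of running the test with SciPy. `scipy.stats.anderson` returns the A² statistic alongside critical values for several significance levels rather than a single p-value; the sample data here is illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=50)  # illustrative data

result = stats.anderson(sample, dist='norm')
print(f"A^2 statistic: {result.statistic:.3f}")

# Compare the statistic against each tabulated critical value:
# exceeding a critical value rejects normality at that level.
for cv, sl in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > cv else "fail to reject"
    print(f"  {sl}% level: critical value {cv:.3f} -> {verdict}")
```

Note that unlike `scipy.stats.kstest`, this API reports critical values, not a p-value, so "borderline" results are read off the table of levels.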

Why It Matters

The Anderson-Darling test is the most commonly used normality test in quality engineering software. Minitab, JMP, and most SPC packages default to Anderson-Darling when testing whether data follows a normal distribution. Its tail sensitivity makes it well-suited for quality applications where tail behavior directly determines defect rates.

In practice, the test is a prerequisite step before computing capability indices. The workflow goes: collect data → run Anderson-Darling → if normal, compute Cpk with standard formula; if not normal, transform the data or use non-parametric methods. This binary decision point is a weak link in traditional quality analysis because the test itself has known limitations.

With small samples (n < 20), the test lacks statistical power — it cannot reliably detect non-normality even when it exists. With large samples (n > 200), it becomes oversensitive — rejecting normality for trivial departures that have no practical impact on capability calculations. The "right" sample size for reliable Anderson-Darling results falls in a narrow range, and real manufacturing data does not always cooperate.
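The large-sample oversensitivity is easy to reproduce. The sketch below draws from a mildly skewed lognormal distribution (a departure with little practical effect on capability math) at two sample sizes; with thousands of points the test reliably rejects normality, while a handful of points from the same distribution typically sails through:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Mildly skewed population: lognormal with small sigma is close to normal
def draw(n):
    return rng.lognormal(mean=0.0, sigma=0.15, size=n)

small = stats.anderson(draw(15), dist='norm')    # n < 20: low power
large = stats.anderson(draw(5000), dist='norm')  # n > 200: oversensitive

# significance_level[2] corresponds to the common 5% level
print(f"n=15:   A^2 = {small.statistic:.3f}, 5% cutoff = {small.critical_values[2]:.3f}")
print(f"n=5000: A^2 = {large.statistic:.3f}, 5% cutoff = {large.critical_values[2]:.3f}")
```

At n = 5000 the statistic dwarfs the critical value, so the same trivial skew that is invisible at n = 15 triggers a "not normal" verdict.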

The EntropyStat Perspective

EntropyStat renders the Anderson-Darling test unnecessary for capability analysis. Since the EGDF learns the actual distribution shape directly from data, there is no normality gate to pass through. The traditional workflow of "test → decide → compute" becomes simply "compute."

This eliminates a significant source of error in quality analysis. Engineers no longer need to interpret p-values, choose significance levels, or decide whether a borderline Anderson-Darling result (say, p = 0.06) means the data is "normal enough." The EGDF produces accurate results regardless of the underlying distribution — normal, skewed, or multimodal.

That said, EntropyStat does use the Kolmogorov-Smirnov statistic as a fit validation tool — but for a fundamentally different purpose. Instead of testing whether data fits a pre-assumed distribution, the K-S test validates how well the EGDF itself fits the data. This is a quality check on the model, not a gatekeeping step that determines which formula to use.
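The validation pattern can be illustrated with SciPy's `kstest` against a fitted model's CDF. EntropyStat's EGDF is not publicly available, so a parametric gamma fit stands in for the learned distribution here; the point is the direction of the check (grading the model against the data, not gating the data against an assumption):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Skewed data that would fail a traditional normality gate
data = rng.gamma(shape=2.0, scale=1.5, size=300)

# Stand-in for a learned distribution: fit a flexible model to the data.
shape, loc, scale = stats.gamma.fit(data)

def fitted_cdf(x):
    return stats.gamma.cdf(x, shape, loc=loc, scale=scale)

# K-S statistic = max gap between the model CDF and the empirical CDF.
# A small value means the model tracks the data closely -- a quality
# check on the fit, not a pass/fail gate on which formula to use.
ks_result = stats.kstest(data, fitted_cdf)
print(f"K-S statistic: {ks_result.statistic:.4f}")
```

A large K-S statistic here would flag a poor model fit worth investigating, rather than forcing a switch to a different capability formula.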

See Entropy-Powered Analysis in Action

Upload your data and compare traditional SPC with entropy-based methods. Free demo — no credit card required.