Your PPAP got rejected. The auditor didn't question your measurements, your MSA, or your control plan. They questioned your Cpk — specifically, whether the statistical method behind it was appropriate for your data.
This is happening more often. OEM quality teams are getting sharper about statistical methods. A submission that reports Cpk = 1.45 without addressing normality, sample size, or measurement uncertainty is no longer automatically accepted. The PPAP package that worked five years ago may not survive scrutiny today.
IATF 16949 requires “appropriate statistical methods” — not normal-based Cpk. For the 60–80% of datasets that fail normality tests, the standard actually supports distribution-free methods like entropy-based analysis.
Here's how to build capability evidence that withstands the toughest customer audits.
What Auditors Actually Look For
PPAP Element 10 — Initial Process Studies — requires statistical evidence that your process can consistently meet specifications. Most quality teams interpret this as "compute Cpk and make sure it's above 1.33."
That's the minimum. Here's what an experienced auditor examines beyond the number:
Data integrity. Where did the measurements come from? What gage? When were they collected? Are they consecutive production parts or cherry-picked "good" runs? Auditors look for measurement traceability and data provenance.
Statistical validity. Was the method appropriate for the data? If the data isn't normally distributed, was a non-normal method used? If the sample size is small, were confidence intervals reported? The AIAG SPC Manual explicitly states that standard Cpk formulas require normally distributed data.
Measurement system analysis. Is the measurement system capable of detecting the variation you're claiming to control? A Gage R&R study showing 30% measurement contribution means your Cpk is measuring gage noise, not process variation.
Sample representativeness. Do the 30 parts represent the full range of production conditions? Single cavity from a four-cavity mold? One material lot from a multi-lot production run? Auditors ask because they know aggregate data hides variation sources.
The days of "Cpk = 1.45, stamp approved" are ending. Auditors want evidence that the number means what you claim.
Three Common PPAP Rejection Reasons
1. Normality Not Addressed
You report Cpk = 1.38 using the standard formula. The auditor runs a normality test on your data attachment and gets p = 0.003. Your data is demonstrably non-normal, and you used a method that assumes normality.
The rejection isn't about the number. It's about the method. APQP requires appropriate statistical techniques. Normal-based Cpk on non-normal data isn't appropriate — the AIAG SPC Manual says so explicitly.
The fix: Test normality before computing capability. Document the result. If normality is rejected, use a method designed for non-normal data and document which method and why.
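That "test before compute" step takes only a few lines. A minimal sketch in Python with SciPy — the function name, the α = 0.05 cutoff, and the choice of Shapiro–Wilk are illustrative assumptions, not a mandated procedure:

```python
import numpy as np
from scipy import stats

def choose_capability_method(data, alpha=0.05):
    """Test normality FIRST, then pick the Cpk method -- and document both."""
    _, p = stats.shapiro(data)          # Shapiro-Wilk normality test
    return {
        "shapiro_p": round(float(p), 4),
        "normality_rejected": p < alpha,
        "method": "normal-based Cpk" if p >= alpha else "non-normal method required",
    }

# Illustrative bimodal sample (two tight clusters) -- clearly non-normal
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(9.9, 0.01, 15), rng.normal(10.1, 0.01, 15)])
result = choose_capability_method(data)
print(result)   # normality_rejected: True -> a non-normal method is required
```

Attaching this one dictionary to the PPAP package answers the auditor's method question before it's asked.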
2. Sample Size Insufficient or Unreported
Thirty parts is the conventional minimum for initial process studies. But "30 parts" means 30 consecutive production parts under normal conditions — not 30 parts from a pilot run, not 30 hand-picked samples, not 30 measurements of 10 parts.
Auditors also look at whether the sample size supports the precision claimed. Cpk = 1.45 from 30 parts has an approximate 95% confidence interval of roughly [1.06, 1.84]. Reporting 1.45 without the interval implies precision that doesn't exist at that sample size.
The fix: Report sample size explicitly. When possible, report confidence intervals. When sample size is smaller than ideal (prototype or short-run situations), state the method used and why it's defensible at that sample size.
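Computing that interval is straightforward. A sketch using one widely used normal approximation for the standard error of Cpk (often attributed to Bissell); the function name is hypothetical and the result is an approximation, not an exact interval:

```python
import math

def cpk_confidence_interval(cpk, n, z=1.96):
    """Approximate 95% CI for an estimated Cpk (Bissell-style approximation)."""
    se = math.sqrt(1.0 / (9.0 * n) + cpk**2 / (2.0 * (n - 1)))
    return cpk - z * se, cpk + z * se

lo, hi = cpk_confidence_interval(1.45, 30)
print(f"Cpk = 1.45, n = 30  ->  95% CI [{lo:.2f}, {hi:.2f}]")   # [1.06, 1.84]
```

At n = 30 the interval spans nearly 0.8 units of Cpk — which is exactly why reporting the point estimate alone overstates what the study proved.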
3. MSA Gap
Your Gage R&R shows 25% total measurement variation. That means a quarter of the observed spread is measurement noise, not process variation. Your Cpk is computed from inflated total variation — it's measuring gage uncertainty as if it were process capability.
Some OEMs now require process capability recomputed with measurement uncertainty separated. Others accept the conservative estimate (capability computed from total variation including measurement noise). Either way, submitting a PPAP without a current MSA is increasingly a rejection trigger.
The fix: Complete a Gage R&R before the capability study. If measurement contribution exceeds 10%, discuss the impact on reported capability. If the customer requires measurement-adjusted capability, provide it.
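The arithmetic behind that discussion can be sketched directly. Assuming the usual variance decomposition σ²_total = σ²_process + σ²_gage, a %GRR expressed as a fraction of total study variation lets you back out the process-only index (function name and example values are illustrative):

```python
import math

def process_only_cpk(observed_cpk, grr_fraction):
    """Back out measurement noise from an observed Cpk.

    grr_fraction: gage R&R as a fraction of TOTAL study variation
    (sigma_gage / sigma_total), e.g. 0.25 for a 25% study result.
    Assumes sigma_total^2 = sigma_process^2 + sigma_gage^2.
    """
    sigma_ratio = math.sqrt(1.0 - grr_fraction**2)   # sigma_process / sigma_total
    return observed_cpk / sigma_ratio

# A 25% gage inflates observed sigma by ~3%, so the process-only Cpk
# is slightly higher than the conservative (total-variation) estimate.
print(process_only_cpk(1.20, 0.25))   # about 1.24
```

Whether the customer wants the conservative number or the adjusted one, showing both — with the decomposition documented — closes the MSA gap.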
The Capability Evidence Chain
Audit-proof PPAP capability follows a chain — each link supports the next:
Link 1: Measurement system validation. Gage R&R completed with acceptable results (< 10% preferred, < 30% acceptable with justification). Documented in PPAP Element 6 (MSA Results).
Link 2: Data collection under production conditions. Minimum 30 consecutive parts under normal production — representative of all variation sources (cavities, shifts, lots). Data traceable to specific production dates and conditions.
Link 3: Distribution assessment. Normality test with documented result. If normal: standard Cpk. If non-normal: documented alternative method with justification. This is the step most PPAP packages skip — and where rejections increasingly originate.
Link 4: Capability computation with appropriate method. Cpk/Ppk computed using the method justified in Link 3. If sample size warrants, confidence intervals included. If multiple variation sources exist (multi-cavity, multi-lot), per-source capability provided alongside aggregate.
Link 5: Evidence documentation. One-page summary: data source, sample size, normality test result, method used, capability indices, confidence intervals. An auditor should be able to verify your methodology in 60 seconds.
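Links 3 and 4 — the fork between normal and non-normal methods — can be made concrete. In the sketch below, the ISO 22514 percentile convention (spread measured by the empirical 0.135% and 99.865% percentiles) stands in as a generic distribution-free index; the entropy-based method the article references is a specific alternative that would fill the same slot. Data, spec limits, and function names are illustrative:

```python
import numpy as np

def normal_cpk(data, lsl, usl):
    """Standard normal-based index -- valid only if normality holds."""
    mean, sd = data.mean(), data.std(ddof=1)
    return min(usl - mean, mean - lsl) / (3 * sd)

def percentile_ppk(data, lsl, usl):
    """Distribution-free index via the ISO 22514 percentile convention.
    Empirical percentiles need large n; parametric fits are common in practice."""
    p_lo, med, p_hi = np.percentile(data, [0.135, 50, 99.865])
    return min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))

rng = np.random.default_rng(3)
skewed = rng.lognormal(mean=2.3, sigma=0.15, size=100)   # right-skewed, illustrative
print(f"normal Cpk     = {normal_cpk(skewed, 5, 16):.2f}")
print(f"percentile Ppk = {percentile_ppk(skewed, 5, 16):.2f}")
```

When the two indices disagree on skewed data, the percentile method is the defensible one — and documenting both is exactly the Link 3 evidence an auditor wants to see.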
Building Audit-Proof Capability Reports
Four practices that transform a PPAP capability study from "stamp and ship" to "audit-proof":
1. Test before you compute. Run normality, homogeneity, and MSA checks before computing Cpk. Document each test result. A PPAP that shows "we checked, and the method matches the data" is stronger than one that shows only the final number.
2. Report both methods when they differ. When non-normality is detected, report traditional Cpk alongside distribution-free Cpk. When they agree, confidence increases. When they disagree, you've demonstrated that the method matters — and you chose the correct one.
3. Include the distribution plot. A histogram of your data with the fitted distribution overlay (or EGDF curve) shows the auditor what the data looks like. Bimodal data with a Gaussian overlay makes the normality problem visible. A clean EGDF fit on the same data shows the alternative is more faithful.
4. Address variation sources. If your process has known variation sources (cavities, fixtures, lots), show per-source capability alongside aggregate. An auditor who sees "Aggregate Cpk = 1.42, Cavity 1 = 1.85, Cavity 2 = 0.87" understands you've investigated the process. One who sees only "Cpk = 1.42" will wonder what you're hiding.
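Practice 4 is a one-loop computation once the data is tagged by source. A sketch with hypothetical four-cavity data (cavity 2 deliberately off-center, spec limits illustrative):

```python
import numpy as np

def cpk(x, lsl, usl):
    m, s = x.mean(), x.std(ddof=1)
    return min(usl - m, m - lsl) / (3 * s)

rng = np.random.default_rng(11)
cavities = {
    1: rng.normal(10.00, 0.03, 30),
    2: rng.normal(10.08, 0.03, 30),   # shifted mean -- the weak cavity
    3: rng.normal(9.99, 0.03, 30),
    4: rng.normal(10.01, 0.03, 30),
}
lsl, usl = 9.85, 10.15
pooled = np.concatenate(list(cavities.values()))
print(f"aggregate Cpk = {cpk(pooled, lsl, usl):.2f}")
for cav, data in cavities.items():
    print(f"cavity {cav} Cpk  = {cpk(data, lsl, usl):.2f}")
```

The per-cavity breakdown takes minutes to produce and pre-empts the "what are you hiding?" question before it's asked.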
When the Auditor Pushes Back
The conversation is changing. OEM quality teams increasingly ask:
"Your data failed the normality test. Why did you use normal-based Cpk?"
"What's the confidence interval on this Cpk at n = 25?"
"You have a four-cavity mold. Where's the per-cavity capability?"
"Your MSA shows 22% contribution. How does that affect the reported Cpk?"
These aren't trick questions. They're basic statistical rigor applied to PPAP submissions. The suppliers who can answer them — with documented evidence, not improvised explanations — are the ones whose submissions get approved without rework.
The IATF 16949 standard requires "appropriate statistical methods." The AIAG SPC Manual defines what appropriate means. Auditors are reading both more carefully than they used to.
Your PPAP Should Survive the Statistics
Capability evidence isn't a box to check. It's a claim about your process — backed by mathematics. When the mathematics is appropriate for the data, the claim holds up under scrutiny. When it isn't, the rejection is deserved.
Build the evidence chain. Test before computing. Document the method. Report with honesty.
Generate PPAP-ready capability reports with entropy-based methods — normality test, distribution-free Cpk, and distribution plots in one upload. Analyze your data free →