Your PPAP got rejected — not for bad parts, but for bad statistics. OEM auditors now scrutinize whether your Cpk method matches your data. Build a PPAP capability evidence chain that withstands the toughest audits.
First pass yield measures what already happened. Cpk predicts what will happen next. When they agree, your process is well-understood. When they disagree, something important is hiding in the gap.
What First Pass Yield Actually Tells You
First pass yield (FPY) is the percentage of units that pass inspection without rework, repair, or rejection on the first attempt. It's straightforward: 1,000 parts in, 982 pass, FPY = 98.2%.
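The arithmetic is simple enough to pin down in a few lines. A minimal sketch (the function name and numbers are just the example from the text, not any standard API):

```python
def first_pass_yield(units_in: int, passed_first_try: int) -> float:
    """FPY = units passing on the first attempt / units started, as a percent.
    Rework, repair, and rejection all count against the numerator."""
    return 100.0 * passed_first_try / units_in

# The example above: 1,000 parts in, 982 pass on the first attempt.
print(first_pass_yield(1000, 982))  # → 98.2
```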
FPY is a lagging indicator. It tells you what the process did — past tense. It's computed from completed production, after the variation has already happened and the defects have already been made.
Strengths of FPY:
Simple and intuitive. Everyone understands percentages. No statistical assumptions required.
Directly tied to cost. Every point of yield loss maps to scrap, rework, and material waste. Finance trusts it.
Multi-characteristic. FPY captures all failure modes in one number — dimensional, cosmetic, functional. A part either passes or it doesn't.
Limitations:
No predictive power. 98.2% yesterday doesn't guarantee 98.2% tomorrow. FPY doesn't tell you how much margin you have before defects increase.
Masks variation. A process running at the edge of specification can have high FPY today and crash tomorrow with a small shift. FPY doesn't see the shift coming.
Inspection-dependent. FPY depends on what you inspect and how. Tighten the gage or add a check, and FPY drops — even if the process didn't change.
What Cpk Actually Tells You
Cpk measures how centered your process is within specification limits, scaled by the process spread. It's a leading indicator: it tells you how much room you have before defects start happening.
Cpk = 1.33 means the nearest specification limit is 4σ from your process center. Cpk = 1.0 means 3σ. Cpk = 0.67 means 2σ. The higher the Cpk, the larger the buffer between your process and the specification boundary.
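That relationship — nearest-limit distance divided by 3σ — is the whole formula. A short sketch with hypothetical spec and process numbers (10.0 ± 0.3, mean 10.05, σ = 0.05; none of these come from the article):

```python
def cpk(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Cpk = distance from the process center to the NEAREST spec limit,
    expressed in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3.0 * sigma)

# Hypothetical: spec 10.0 +/- 0.3, process at mean 10.05 with sigma 0.05.
# The nearest limit (USL) is 0.25 away = 5 sigma, so Cpk = 5/3 ≈ 1.67.
print(round(cpk(10.05, 0.05, 9.7, 10.3), 2))  # → 1.67
```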
Strengths of Cpk:
Predictive. Cpk tells you the defect probability going forward, not just the defect count from the past.
Sensitive to shifts. A small process mean shift drops Cpk before it shows up in yield. You see the problem before it costs money.
Standard metric. OEMs, IATF 16949, PPAP — everyone speaks Cpk. It's the common currency of process capability.
Limitations:
Single characteristic. Cpk applies to one dimension at a time. A part with 15 critical dimensions needs 15 Cpk values.
Assumption-dependent. Standard Cpk assumes normal distribution. When that doesn't hold, the number can be wrong in either direction.
Doesn't capture all failures. Cpk measures dimensional variation. Cosmetic defects, functional failures, and assembly issues don't have Cpk values.
When FPY and Cpk Disagree
The interesting cases — and the decisions that go wrong — happen when FPY and Cpk tell different stories.
High FPY, Low Cpk
FPY = 99.1%. Cpk = 0.85. How?
The process is currently centered well within specifications — most parts pass. But the spread (σ) is large relative to the tolerance. Right now, the process mean happens to be in a good spot. A small shift — tool wear, material change, temperature drift — and defects will spike.
FPY says "everything's fine." Cpk says "you're one shift away from a problem."
The right response: Trust Cpk. Investigate the variation source. The yield number is temporarily masking an unstable situation. When the process drifts, yield will drop fast — and you won't have warning because you trusted the lagging indicator.
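Under a normality assumption you can watch this play out numerically: hold Cpk at 0.85 and drift the mean. The tolerance of ±1 and the shift sizes below are illustrative choices, not figures from the text.

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def yield_pct(mean: float, sigma: float, lsl: float, usl: float) -> float:
    """Predicted in-spec percentage for a normal process at this mean/sigma."""
    return 100.0 * (norm_cdf((usl - mean) / sigma) - norm_cdf((lsl - mean) / sigma))

# Hypothetical tolerance -1..+1, process centered with Cpk = 0.85,
# i.e. sigma = 1 / (3 * 0.85). Watch predicted yield fall as the mean drifts.
sigma = 1.0 / (3.0 * 0.85)
for shift in (0.0, 0.1, 0.2, 0.3):
    print(f"mean shift {shift:.1f}: predicted yield {yield_pct(shift, sigma, -1.0, 1.0):.2f}%")
```

At zero shift the predicted yield is near 99%, which is why the lagging indicator looks fine; each step of drift eats the thin margin the low Cpk was warning about.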
Low FPY, High Cpk
FPY = 94%. Cpk = 1.52. How?
Two explanations. First: the Cpk is wrong. If the data isn't normally distributed and you used standard Cpk, the number is unreliable. A skewed distribution can produce high Cpk while the tail generates defects that show up in yield.
Second: the defects aren't dimensional. FPY captures all failure modes — scratches, contamination, assembly fit issues. Cpk only covers the measured characteristic. The process might be dimensionally capable while failing on attributes Cpk doesn't measure.
The right response: Investigate which failure modes are driving the yield loss. If dimensional, recompute Cpk with distribution-free methods. If non-dimensional, Cpk is doing its job — the problem is elsewhere.
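The first explanation — a skewed tail defeating the standard formula — is easy to reproduce. A sketch with simulated lognormal data and an arbitrary one-sided spec (USL = 3.0; every number here is invented for illustration):

```python
import math
import random
import statistics

def norm_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical right-skewed process and a one-sided upper spec limit.
random.seed(42)
data = [random.lognormvariate(0.0, 0.4) for _ in range(5000)]
usl = 3.0

mean, sd = statistics.fmean(data), statistics.stdev(data)
cpk_std = (usl - mean) / (3.0 * sd)               # standard, normal-assumption Cpk
predicted_rate = 1.0 - norm_cdf(3.0 * cpk_std)    # defect rate that Cpk implies
observed_rate = sum(x > usl for x in data) / len(data)

print(f"standard Cpk: {cpk_std:.2f}")
print(f"defect rate Cpk predicts: {predicted_rate:.6f}")
print(f"defect rate observed:     {observed_rate:.6f}")
```

The standard Cpk comes out well above 1.33, predicting defects in the parts-per-million range, while the long right tail actually pushes a visible fraction of parts over the limit — exactly the high-Cpk/low-FPY signature.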
The DPMO Connection
DPMO (Defects Per Million Opportunities) bridges yield and capability by converting both to the same scale.
Under normal-distribution assumptions:
| Cpk  | Sigma Level | DPMO   | Yield     |
|------|-------------|--------|-----------|
| 0.67 | 2σ          | 45,500 | 95.45%    |
| 1.00 | 3σ          | 2,700  | 99.73%    |
| 1.33 | 4σ          | 63     | 99.9937%  |
| 1.67 | 5σ          | 0.57   | 99.99994% |
These conversions assume normality. For non-normal data, the DPMO calculated from standard Cpk doesn't match the actual defect rate — which is exactly the FPY/Cpk disagreement showing up in a different form.
When your observed DPMO (from actual defect counts) doesn't match the predicted DPMO (from Cpk), the distributional assumption is probably wrong. That gap is diagnostic: it tells you the normal model isn't capturing your process shape.
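The predicted side of that comparison is a one-liner under the centered-normal assumption: both spec limits sit 3·Cpk sigma from the mean, so the two-sided tail area gives the DPMO. A sketch (note the table rows use exact 2σ–5σ multiples, so the rounded Cpk values 0.67 and 1.67 reproduce them only approximately):

```python
import math

def norm_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cpk_to_dpmo(cpk: float) -> float:
    """Two-sided DPMO predicted for a CENTERED normal process at this Cpk:
    each spec limit sits 3*Cpk sigma from the mean."""
    return 2.0 * (1.0 - norm_cdf(3.0 * cpk)) * 1_000_000

for c in (0.67, 1.00, 1.33, 1.67):
    print(f"Cpk {c:.2f} -> {cpk_to_dpmo(c):,.2f} DPMO predicted")
```

Comparing these predictions against observed defects per million opportunities is the diagnostic: a large gap means the normal model is not capturing your process shape.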
Many organizations convert Cpk to "sigma level" and report it as a performance metric. Cpk 1.33 = "4 sigma." Cpk 2.0 = "6 sigma." Clean, impressive, and frequently misleading.
The conversion assumes normality. If your Cpk was computed from non-normal data using the standard formula, the sigma level is doubly wrong — the Cpk is wrong, and the conversion from Cpk to sigma assumes a distribution that doesn't match.
If you must report sigma levels, compute them from actual DPMO (observed defects / opportunities), not from Cpk. The observed DPMO is distribution-independent — it counts what actually happened, regardless of what bell curve the process may or may not follow.
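Going from observed DPMO to a sigma level is just an inverse normal lookup. A sketch using the standard library (the 18-defects-in-20,000 example is hypothetical; note some reporting conventions add a 1.5σ shift — this sketch does not):

```python
from statistics import NormalDist

def sigma_level_from_dpmo(dpmo: float) -> float:
    """Sigma level implied by an OBSERVED defect rate: the z-score whose
    one-sided normal tail equals dpmo / 1e6. No 1.5-sigma shift is applied;
    add 1.5 only if your reporting convention calls for it."""
    return NormalDist().inv_cdf(1.0 - dpmo / 1_000_000)

# Hypothetical: 18 defects across 20,000 opportunities -> 900 DPMO.
observed_dpmo = 1_000_000 * 18 / 20_000
print(f"{sigma_level_from_dpmo(observed_dpmo):.2f} sigma")
```

Because the input is a counted rate rather than a fitted model, this number stays honest even when the process distribution is far from normal.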
Which Metric for Which Decision
Neither FPY nor Cpk is universally "better." They answer different questions for different audiences:
Use FPY for:
Product acceptance decisions (did this batch pass?)
Cost accounting (what's the scrap/rework cost?)
Customer quality reporting (what percentage shipped is conforming?)
Comparing across different products and failure types
Use Cpk for:
Process improvement targeting (which characteristics need work?)
PPAP and customer capability submissions
Predicting future defect rates under process shifts
Diagnosing FPY/Cpk disagreements (distributional issues or multi-modal failures)
Use both for:
Complete quality dashboards (lagging + leading indicators)
Supplier scorecards (yield for accountability, capability for prediction)
Measure Both. Trust the Math Behind Each.
FPY counts what happened. Cpk predicts what will happen. Neither is wrong — but both can mislead when their assumptions are violated.
FPY is honest by construction — it counts actual results. Cpk is honest only when the statistical method matches the data. For the 60–80% of industrial data that isn't normal, standard Cpk predicts a defect rate that doesn't match observed yield. Distribution-free Cpk closes that gap.
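One well-known distribution-free approach — not necessarily the method any particular tool uses — is the percentile method (in the spirit of ISO 21747/22514): replace mean ± 3σ with the median and the empirical 0.135%/99.865% quantiles, so each tail is judged by where it actually sits. A sketch on simulated skewed data (all numbers invented; tail quantiles need large samples to be stable):

```python
import random
import statistics

def quantile(sorted_xs, p):
    """Empirical quantile with linear interpolation between order statistics."""
    k = (len(sorted_xs) - 1) * p
    lo = int(k)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (k - lo) * (sorted_xs[hi] - sorted_xs[lo])

def percentile_cpk(data, lsl, usl):
    """Percentile-method capability: the median stands in for the mean, and
    the 0.135% / 99.865% empirical quantiles stand in for mean -/+ 3 sigma."""
    xs = sorted(data)
    p_lo, med, p_hi = (quantile(xs, p) for p in (0.00135, 0.5, 0.99865))
    return min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))

# Hypothetical right-skewed (lognormal) process with specs 0.4 .. 2.0.
random.seed(1)
data = [random.lognormvariate(0.0, 0.25) for _ in range(20000)]
lsl, usl = 0.4, 2.0

mean, sd = statistics.fmean(data), statistics.stdev(data)
cpk_standard = min(usl - mean, mean - lsl) / (3.0 * sd)
print(f"standard Cpk:   {cpk_standard:.2f}")
print(f"percentile Cpk: {percentile_cpk(data, lsl, usl):.2f}")
```

On this sample the standard formula flags the lower side as the risk, while the percentile version correctly identifies the long upper tail — the side actually generating defects — as the binding constraint.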
When FPY and Cpk finally agree, you know two things: your process is performing, and your statistical model is correct. That's the best position to be in.
Upload your data and see if your Cpk agrees with your yield — distribution-free capability that matches what you observe on the floor. Analyze your data free →