

OEE Changeover: Why Availability Is the Easiest Win to Close

OEE changeover losses hide in planned downtime and never trigger an alert. They are also the only OEE loss you can close without touching a machine, without capex, and without a six-month project.

Published: 2026-03-23 | 9–11 min read | Category: OEE / availability

Last reviewed: 2026-03-23

Key points

  • Changeover overruns hide in planned downtime — most OEE configurations never flag them, so the availability loss is invisible until you calculate it directly.
  • It hits two OEE components at once — availability (duration) and quality (first-run scrap). Both are fixable through consistent process ownership alone.
  • No capex required — the machine is already capable. The loss is in how consistently the changeover process is executed, shift to shift, operator to operator.
  • Three numbers tell you your OEE floor — actual vs. planned duration, run-to-run variance, and first-run scrap rate. Run them for one week and you have a baseline.

The OEE changeover loss hiding in your planned downtime

In many OEE configurations, changeover time is classified as planned downtime and excluded from the availability denominator entirely. So it does not flag. It becomes part of the schedule — a cost of doing business, absorbed into the shift plan, and never interrogated.

That classification hides a recoverable loss. The machine is already capable of running at full speed the moment the changeover ends. The question is how many minutes of availability you are losing because the changeover ran longer than it should — and why.

How OEE changeover losses drain availability — with a real number

Take a concrete shift: 480 minutes available. Two changeovers are scheduled — a 35-minute transition between products A and B, and a 50-minute transition between B and C. That is 85 minutes of planned downtime, leaving 395 minutes of planned run time.

In practice: the first changeover runs 43 minutes (+8 min overrun). The second runs 67 minutes (+17 min overrun). Neither overrun appears as an alert. They are absorbed into the changeover record as "changeover time." The 25-minute total overrun is invisible unless someone runs the planned vs. actual comparison manually.

Scenario                     | Production time | Availability
Changeovers run to plan      | 395 min         | 82%
25 min total overrun         | 370 min         | 77%
Availability lost to overrun |                 | 5 points

Five percentage points of availability from a 25-minute overrun across two changeovers. No machine fault. No unplanned breakdown. Just execution that ran longer than the plan — and was never flagged because it was classified as planned downtime.

Scale that to a 20-line plant running 240 days a year: 240 days × 20 lines × 25 min = 120,000 minutes per year — 2,000 hours — of avoidable availability loss that does not appear in any root cause report because it is buried inside "planned stops."
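The shift and plant arithmetic above can be checked in a few lines. This is a minimal sketch; the shift length, changeover times, and plant scale are the illustrative numbers from this section, not constants from any system.

```python
# Availability impact of changeover overrun, using the worked example above.
SHIFT_MIN = 480
planned_changeover = 35 + 50            # 85 min of planned downtime
overrun = (43 - 35) + (67 - 50)         # 8 + 17 = 25 min of invisible overrun

planned_run = SHIFT_MIN - planned_changeover   # 395 min planned run time
actual_run = planned_run - overrun             # 370 min actually produced

avail_planned = planned_run / SHIFT_MIN        # ~0.82 if changeovers run to plan
avail_actual = actual_run / SHIFT_MIN          # ~0.77 with the 25 min overrun

# Scale to a plant: 20 lines x 240 days x 25 min/day of overrun
annual_minutes = 240 * 20 * 25                 # 120,000 min per year
annual_hours = annual_minutes / 60             # 2,000 hours

print(f"availability to plan: {avail_planned:.0%}")
print(f"availability actual:  {avail_actual:.0%}")
print(f"annual loss: {annual_hours:,.0f} hours")
```

The same five-point gap falls out of the division: nothing in it requires OEE software, only the planned and actual durations per event.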

Why it is genuinely easier than the other two OEE losses

Performance and quality losses have engineering roots — they need machine work, tooling trials, or process revalidation before a gain is confirmed. Changeover losses have execution roots. The machine is already capable of running at specification the moment the changeover is complete. The loss accumulates because the process is not followed consistently from run to run, shift to shift, operator to operator. That is a documentation and visibility problem — and it has three practical consequences:

  • No capex. You do not need new equipment or tooling. The machine that exists today is sufficient. The investment is in process documentation and execution tracking.
  • Measurable in one week. Log actual vs. planned changeover time for every changeover on one line for five working days. You have a baseline. No trial runs. No measurement system validation.
  • Reversible if it does not work. If a process change makes things worse, you revert to the previous step sequence. No machine retooling. No quality revalidation. No change control cycle.

Neither of the other two OEE loss categories offers all three at once. That is the argument — not that changeover is easy to fix, but that the conditions for fixing it are uniquely favourable compared to the alternatives.

The OEE delta you are probably not calculating

Most plants have an OEE number. Almost none have isolated what portion of it is attributable to changeover execution specifically. Without that number, changeover improvement has no target — and no business case.

Here is how to calculate your changeover availability floor:

  1. Pull changeover records for the last 30 days. For each event you need two numbers: planned duration (what the schedule shows) and actual duration (what happened).
  2. Sum the overrun across all events where actual exceeded planned: total overrun = Σ (actual − planned)
  3. Divide by total planned production time across the same period: availability loss % = total overrun ÷ planned production time × 100

The result is your changeover availability floor — the maximum OEE your line can reach until changeover execution is fixed, regardless of what else you improve.

If your current OEE is 71% and your changeover overrun represents 8% of planned production time, your OEE cannot exceed 79% without addressing changeover — no matter how much you optimise performance or quality. That ceiling is invisible until you calculate it. Once you have it, the conversation with management changes: it is not "we should improve changeovers." It is "here is the hard limit on our OEE until we do."
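The three-step calculation can be sketched as follows. The record values, the planned production time, and the starting OEE are illustrative assumptions, not data from any real line.

```python
# Changeover availability floor from a period of changeover records.
# Each record: (planned_min, actual_min). Illustrative values.
records = [(35, 43), (50, 67), (35, 35), (50, 52), (35, 41)]

# Step 2: sum the overrun where actual exceeded planned
total_overrun = sum(max(actual - planned, 0) for planned, actual in records)

# Step 3: divide by total planned production time for the same period
planned_production_min = 2400                       # illustrative
loss_pct = total_overrun / planned_production_min * 100

# The ceiling: current OEE cannot rise past this until changeover is fixed
current_oee = 71.0                                  # illustrative
oee_ceiling = current_oee + loss_pct

print(f"total overrun: {total_overrun} min")
print(f"availability loss: {loss_pct:.1f}%")
print(f"OEE ceiling until changeover is addressed: {oee_ceiling:.1f}%")
```

Only overruns count (`max(..., 0)`): a changeover that finishes early does not offset one that runs late, because the late one still displaced scheduled production.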

What consistent changeover execution actually recovers

Here is a before/after from a single changeover type on a sauce fill line switching between 250g and 500g formats — same line, same operators, same machine. The only change was structured process ownership: role assignments per step, product parameters surfaced at the right moment, and step-level timing tracked every run.

Metric                      | Before  | After  | OEE component
Average changeover duration | 55 min  | 38 min | Availability ↑
Run-to-run variance         | ±12 min | ±4 min | Availability ↑ (reduces schedule buffer)
First-run scrap rate        | 3.1%    | 0.9%   | Quality ↑

Map each row back to OEE:

  • Duration (55 → 38 min): 17 minutes recovered per changeover. At three changeovers per shift on a 480-minute schedule, that is 51 minutes of availability recovered — moving availability from roughly 77% to 88% in isolation.
  • Variance (±12 → ±4 min): Tighter variance means you can schedule to plan without building in a safety buffer. That buffer — the extra time padded into the schedule "just in case" — is dead availability. Reducing variance recovers it without changing average duration at all.
  • First-run scrap (3.1 → 0.9%): A 2.2 percentage point reduction in scrap rate moves the OEE quality factor directly. On a line producing 300 units per hour, 2.2% scrap is 6–7 units per hour going into the reject bin — avoidable, on every run, immediately after every changeover.

None of this required machine work. The process existed before — it was just not followed consistently. Consistent step ownership is what closed the gap.
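The arithmetic behind the duration and scrap rows can be sketched in a few lines. All inputs are the illustrative numbers from the table and bullets above, not measured data.

```python
# Mapping the before/after table back to OEE deltas.
before_duration, after_duration = 55, 38       # min per changeover
changeovers_per_shift, shift_min = 3, 480

# Availability: minutes recovered per shift, expressed in OEE points
minutes_recovered = (before_duration - after_duration) * changeovers_per_shift
availability_gain_pts = minutes_recovered / shift_min * 100   # ~10.6 points

# Quality: scrap units avoided per hour of post-changeover production
scrap_before, scrap_after = 3.1, 0.9           # % first-run scrap
units_per_hour = 300
scrap_units_avoided = (scrap_before - scrap_after) / 100 * units_per_hour

print(f"availability recovered: {minutes_recovered} min/shift "
      f"({availability_gain_pts:.1f} points)")
print(f"scrap avoided: ~{scrap_units_avoided:.1f} units/hour post-restart")
```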

[Figure: step-level view of the same changeover type, before and after execution standardisation]

Before: untracked execution (availability 68%)
  • Flush & clean heads: owner unknown, +8 min over plan
  • Set product params: untracked, +14 min over plan
  • First-article check: owner unknown, on time
  • First-run scrap: 3.1%

After: consistent step ownership (availability 83%)
  • Flush & clean heads: Mechanic, on target
  • Set product params: Operator, on target
  • First-article check: Quality, on target
  • First-run scrap: 0.9%

Same changeover type before and after execution standardisation. Before, steps with unknown or untracked owners overrun their planned targets and availability sits at 68%. After, every step has a role owner and meets its target: availability reaches 83% and first-run scrap falls from 3.1% to 0.9%. Availability and first-run scrap both move — two OEE components from one process change.

A one-week baseline you can run now

Before you can improve, you need three numbers. These are the minimum inputs for a changeover OEE baseline — run for five working days on one line.

1. Actual vs. planned duration per changeover event

Log per event, not as a monthly average. Averaging hides whether the gap is consistent (same step always slow, one fix needed) or random (multiple causes, more runs needed to isolate). A monthly average makes both look identical; per-event data tells them apart.

Changeover availability loss = total overrun ÷ planned production time × 100

2. Variance across operators and shifts

The same changeover type should take approximately the same time regardless of who runs it. If it does not, the instinct is to call it a training problem. It is not — it is a documentation problem. The steps, ownership, and sequence are not written down clearly enough for every operator to follow them the same way. More training on an undocumented process does not reduce variance.
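One way to see whether variance clusters by operator is to group durations and compare the spread per person, sketched here with Python's statistics module. The operator names and durations are made up for illustration.

```python
# Run-to-run variance by operator for one changeover type (illustrative data).
from collections import defaultdict
from statistics import mean, pstdev

runs = [("Ana", 38), ("Ana", 41), ("Ben", 52),
        ("Ben", 58), ("Cho", 39), ("Cho", 40)]

by_operator = defaultdict(list)
for operator, duration_min in runs:
    by_operator[operator].append(duration_min)

for operator, durations in sorted(by_operator.items()):
    print(f"{operator}: mean {mean(durations):.1f} min, "
          f"spread ±{pstdev(durations):.1f} min")

# A wide gap BETWEEN operators on the same changeover type points at
# documentation, not training: the steps are not written down tightly
# enough for everyone to run them the same way.
```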

3. First-run scrap rate in the hour after restart

Count units produced before steady-state quality is reached after each changeover. Most plants never collect this per changeover — it lands in the quality report, disconnected from the changeover that caused it. Until you link it back to the event, it appears as a random quality problem rather than a systematic one with a known trigger.

Changeover quality loss = scrap units ÷ total units (first hour post-restart) × 100

With these three numbers you can calculate two OEE deltas: your changeover-attributable availability loss and your changeover-attributable quality loss. Together they tell you what your OEE could be if changeover execution were consistent and on-target. That is the improvement case — in numbers, not in instinct.
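The two deltas can be computed from a week's log in a few lines. This is a sketch: the event tuples and the five-shift planned production time are illustrative assumptions, and the formulas mirror the two defined above.

```python
# One-week changeover baseline: the two OEE deltas from the logged numbers.
events = [
    # (planned_min, actual_min, scrap_units_first_hour, total_units_first_hour)
    (35, 43, 9, 300),
    (50, 67, 12, 300),
    (35, 36, 3, 300),
]
planned_production_min = 5 * 395   # five shifts of planned run time (illustrative)

# Changeover availability loss = total overrun / planned production time x 100
overrun = sum(max(a - p, 0) for p, a, _, _ in events)
availability_loss_pct = overrun / planned_production_min * 100

# Changeover quality loss = scrap units / total units (first hour) x 100
scrap = sum(s for _, _, s, _ in events)
units = sum(u for _, _, _, u in events)
quality_loss_pct = scrap / units * 100

print(f"changeover availability loss: {availability_loss_pct:.1f}%")
print(f"changeover quality loss:      {quality_loss_pct:.1f}%")
```

Together these two percentages are the improvement case the section describes: what OEE could be if every changeover ran to plan and restarted clean.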

Related: How Much Does a Slow Changeover Really Cost? — how to translate the same time loss into a financial cost layer by layer.

How ProChangeover moves each OEE component

ProChangeover addresses changeover losses at the execution layer — the point where OEE is actually won or lost during the changeover itself.

  • Availability (duration). Every changeover step runs with a time target. When a step overruns that target, it is visible in real time — not discovered after the fact when the total is already 20 minutes over. That step-level signal is what lets you find the specific task that is consistently slow and fix it at the source, rather than averaging the problem away in a monthly report.
  • Availability (variance). When every operator follows the same digital step sequence with the same role assignments, the run-to-run variance shrinks. Not because individual operators improved, but because the process no longer depends on shift-to-shift knowledge transfer or individual memory. Consistent process in, consistent duration out.
  • Quality (first-run scrap). Product-specific parameters are shown to the operator at the step where they are needed — not pulled from last run's binder or estimated from memory. Operators verify and confirm before proceeding. The structural cause of first-run scrap — wrong parameters set at restart — is addressed at the point of execution, on every run. See also: machine parameters per product pair.

Each run produces a timestamped record of every step: actual duration versus target, deviations logged, role sign-offs confirmed. After ten runs on the same changeover type you know exactly which step to fix first — and you have the data to show that the fix worked.

Related: From Paper Checklists to Live Changeover Tracking — how to migrate from paper to live execution without disrupting production.

Next step

Run your first tracked changeover and get a timestamped baseline for every step: start your free trial.


Related articles

How Much Does a Slow Changeover Really Cost?

The four cost layers most teams never calculate — from lost output to first-run scrap.

From Paper Checklists to Live Changeover Tracking

How to migrate from paper to live execution tracking without disrupting production.

Why Your Changeover Checklist Keeps Failing

The structural gaps that let overruns propagate run after run even when every box is ticked.

How to Track Changeover Performance with a Gantt Timeline

What becomes visible once you have step-level timing on every run.

See your changeover OEE impact after one run.

ProChangeover records a timestamped step-by-step breakdown of every changeover. Run it once and you have actual versus planned for every step. Run it ten times and you know exactly which step to fix first.

After your first run you'll have:

  • Timestamped sign-off record

    Audit-ready from run one

  • Gantt timeline of every task

    See exactly where time was lost

  • A repeatable standard

    Not dependent on whoever showed up today

7-day free trial · Self-serve setup · No IT project required