You tracked the total. You don't know what drove it.
Every changeover ends with one number: total duration. It goes on the shift report, into a spreadsheet, onto a whiteboard. That number tells you whether the run was slow. It tells you nothing about why.
Not having task-level data doesn't hurt once. It hits at every review, every debrief, every week you can't explain the variance.
1. After the slow run
58 minutes vs. a 45-minute target. The total is in the shift report.
The debrief runs 20 minutes. The foreman blames the tooling. Maintenance blames the handoff. The shift leader blames the operator. Everyone is wrong in a different direction. The meeting ends with an action item that fixes nothing, because the evidence to identify the real cause was never captured.
2. The comparison you can't make
Team 1 averages 41 min, Team 2 averages 58 min on the same changeover.
You've confirmed a gap exists. You haven't confirmed where it is. "Investigate Team 2" is the full extent of the action plan — which means no specific task gets standardised, no specific step gets addressed, and the 17-minute gap persists into next month.
3. Week over week, nothing changes
You cut the mould swap standard from 12 minutes to 10. Performance doesn't improve.
Was the standard wrong, or was compliance the real issue? Without data showing which tasks are drifting and by how much, you can't tell what you're actually fixing. You optimise on the wrong variable, the bottleneck moves, and the total stays the same.
Why total duration fails as a diagnostic
Most manufacturing plants capture changeover data at the wrong level. Total duration is a useful summary metric. It is a useless diagnostic. You can't improve what you can't decompose.
Every second you spend in a debrief asking "does anyone know why it ran long?" is time spent reconstructing data from memory that should have been captured automatically during the run.
The problem isn't effort. It's resolution. You're recording at the wrong level.
What you have vs. what you need
You have: total duration. You're missing: time per task. A slow total is a symptom; the task that caused it is the diagnosis.

You have: a slow run. You're missing: the step that made it slow. You can't write a corrective action for "the run was slow".

You have: a pattern you suspect. You're missing: data to confirm and act on it. Suspecting a bottleneck and proving it are two different action plans.
When the run ends, the Gantt is ready.
Operators mark tasks complete on a tablet as they work through the changeover. Every tap is a timestamp, attributed to a role — the run record builds itself as the work happens. There's nothing to fill in after the fact.
When the run ends, a task-level Gantt timeline is generated automatically. You see exactly which step ran long, which role was on it, how the idle gaps emerged, and how every task compared to its standard. The result is a complete root cause record for every slow run — produced without asking anyone to fill anything in.
Critical path visible. See which tasks drove the total and where the next SMED reduction cycle should focus.
Idle gaps surfaced. Waiting time between handoffs shows up as hatched gaps — immediately actionable.
Role attribution clear. Every task bar is coloured by role — so you know whether the delay was an Internal, External, or Post-run problem.
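Mechanically, everything in that Gantt can be derived from nothing more than the completion timestamps the operators' taps produce. A minimal sketch of that derivation — the class and field names here are hypothetical, not the product's actual data model:

```python
from dataclasses import dataclass

@dataclass
class TaskEvent:
    """One task on the run record: started/ended in minutes from run start."""
    name: str
    role: str        # "Internal", "External", or "Post"
    start: float
    end: float
    standard: float  # the task's standard duration, in minutes

def analyse(events):
    """Return each task's variance vs. standard, plus idle gaps between tasks."""
    events = sorted(events, key=lambda e: e.start)
    over_std = {e.name: (e.end - e.start) - e.standard for e in events}
    gaps = [(a.name, b.name, b.start - a.end)
            for a, b in zip(events, events[1:]) if b.start > a.end]
    return over_std, gaps

# Illustrative run: the mould swap overruns its 12-minute standard by 8 minutes,
# and a 5-minute idle gap opens before "Set pressure".
run = [
    TaskEvent("Belt clean", "Internal", 0, 6, 6),
    TaskEvent("Mould swap", "External", 6, 26, 12),
    TaskEvent("Set pressure", "Internal", 31, 37, 6),
]
over, gaps = analyse(run)
print(over["Mould swap"])  # 8
print(gaps)                # [('Mould swap', 'Set pressure', 5)]
```

The point of the sketch: no one enters "idle gap" or "over standard" anywhere. Both fall out of the timestamps.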
[Mockup Gantt — Line 3 · SKU-A100 → SKU-B200. Run completed 14:22, Wed 26 Feb. 58 min total against a 45-min target (+13 min over). Task bars by role: Belt clean (Internal), Temp ramp (Internal), Mould swap (External, +8 min over standard), an idle gap between handoffs, Set pressure (Internal), First-piece QC (Post). Legend: Internal / External / Post / Idle gap / Target (45 min). Product may differ from mockup.]
Track changeover performance across runs, teams, and lines.
Each run adds to the run history. Same changeover type, eight runs, two teams — the improvement is visible and attributable without aggregating anything manually.
The breakdown chart shows how Internal, External, and Post tasks contributed to each run. The trend line shows where the improvement came from — in this case, External tasks dropped from 42% of run time to 26% as the mould swap standard was tightened and compliance improved.
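The split behind that breakdown chart is plain arithmetic over the same task records. A sketch, assuming each task carries a role and a duration in minutes (the figures below are illustrative):

```python
def role_breakdown(tasks):
    """Percentage of run time per role, from (role, minutes) pairs."""
    total = sum(minutes for _, minutes in tasks)
    per_role = {}
    for role, minutes in tasks:
        per_role[role] = per_role.get(role, 0) + minutes
    return {role: round(100 * m / total) for role, m in per_role.items()}

# Example early run: External work is 42% of a 62-minute total.
run1 = [("Internal", 26), ("External", 26), ("Post", 10)]
print(role_breakdown(run1))  # {'Internal': 42, 'External': 42, 'Post': 16}
```

Computing this per run and plotting the series is all the trend chart is — no manual aggregation step exists to skip.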
[Mockup charts — Breakdown per run: Internal / External / Post split across 8 runs against the 45-min target. Total duration per run: Run 1, 61 min; Run 2, 58; Run 3, 55; Run 4, 52; Run 5, 49; Run 6, 46; Run 7, 44; Run 8, 43. The total falls from 61 min to 43 min, crossing under target at Run 7.]
The run record is the source. Stack runs into a history and every comparison you need is there: team against team, shift against shift, task against task, line against line, this week against last month.
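Team-against-team, for instance, is a one-line aggregation over that history — sketched here with hypothetical field names and illustrative numbers:

```python
from collections import defaultdict
from statistics import mean

def average_by_team(history):
    """Mean total duration per team, from (team, total_minutes) run records."""
    by_team = defaultdict(list)
    for team, total in history:
        by_team[team].append(total)
    return {team: mean(totals) for team, totals in by_team.items()}

history = [("Team 1", 42), ("Team 2", 57), ("Team 1", 40), ("Team 2", 59)]
print(average_by_team(history))  # {'Team 1': 41, 'Team 2': 58}
```

Shift-against-shift, line-against-line, and week-against-month are the same operation with a different grouping key.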
You don't build the report — you decide what to fix.
Your next slow run will be the last one you can't explain.
Every run generates a complete task-level Gantt automatically. Configure once, and every changeover from that point on produces the data you need for root cause analysis and trend tracking.