Welcome — Quality & Lean Tools

Your Complete CI Toolkit

Master 9 essential Quality & Lean tools — from data analysis to waste elimination to error-proofing

What You'll Learn

This interactive guide covers the nine most powerful tools used by Lean, Six Sigma, and Continuous Improvement practitioners worldwide. Each tool is explained in depth — what it is, when and how to use it, worked examples, and common mistakes to avoid. You'll finish with a scored knowledge test and a personalised completion certificate.

7 QC Tools
Check sheets, Pareto, histograms, scatter diagrams, control charts, fishbone & stratification — Kaoru Ishikawa's essential seven
8 Wastes (TIM WOODS)
The eight categories of waste that exist in every process. Identify and eliminate them to unlock hidden capacity
PDCA Cycle
Plan-Do-Check-Act — the Deming Cycle for iterative, continuous improvement in any organisation
DMAIC (Six Sigma)
Define-Measure-Analyze-Improve-Control — rigorous data-driven methodology for complex problems
Kanban System
Visual workflow management. Limit WIP, improve flow, pull work based on real demand
Value Stream Mapping
See the full flow of value. Identify waste and delays across the entire end-to-end process
A3 Problem Solving
Toyota's one-page structured thinking tool. PDCA on a single sheet — clear, disciplined, powerful
Poka-Yoke (Error-Proofing)
Design processes so mistakes are impossible or immediately obvious. Prevention over detection
Kaizen
The philosophy and practice of continuous small improvements — every person, every day, every process
How These Tools Work Together
Start with Kaizen — the mindset that all improvement is possible.
Use 8 Wastes to identify where opportunity lies. VSM to map the whole picture.
PDCA for simple problems. DMAIC when complexity demands rigour.
7 QC Tools throughout — to collect data, find root causes, and verify results.
A3 to document and communicate. Poka-Yoke to lock in improvements.
Kanban to manage the ongoing flow of work.
⚠️ The Most Important Insight
In most organisations, only 5–10% of activities are value-added from the customer's perspective. The other 90–95% is waste — waiting, rework, movement, duplication. These tools help you see it, measure it, and eliminate it. A 20% reduction in waste typically translates to 30–50% faster delivery, 15–25% cost reduction, and 60–80% fewer defects.
Tool 1 of 9

7 Quality Control Tools

Developed by Kaoru Ishikawa — simple enough for anyone, powerful enough for any problem

What are the 7 QC Tools?

A set of basic statistical and graphical tools for quality improvement

Ishikawa claimed that 95% of quality problems in factories could be solved using these seven tools alone. They require no advanced statistics — just consistent data collection and structured thinking. Each tool is covered in depth below.

1. Check Sheet
2. Pareto Chart
3. Histogram
4. Scatter Diagram
5. Control Chart
6. Fishbone (Ishikawa)
7. Stratification

✓ Check Sheet — Structured Data Collection

A check sheet is a structured form for collecting and analysing data in real time, at the location where the data is generated. It converts raw observations into meaningful counts with minimal effort.

When to Use
• Track how often defects occur
• Record types and locations of problems
• Confirm process steps are completed
• Collect data before drawing a Pareto chart
• Monitor machine downtime causes
Common Mistakes
• Collecting data without a clear question to answer
• Not defining categories before collection
• Inconsistent recording across shifts
• Too many categories (keep to 5–8 max)
• Not using the data after collection
Worked Example — Invoice Errors
Question: What types of errors appear on customer invoices?

After 2 weeks of recording, the check sheet shows:
Wrong price: |||| |||| ||| (13) · Missing PO: |||| |||| (9) · Wrong address: |||| (4) · Duplicate: || (2) · Other: | (1)

Total: 29 errors in 2 weeks. This data is now ready for a Pareto chart to prioritise action.
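The tallying above can be sketched in a few lines of Python — a minimal illustration using the counts from this worked example, not a prescribed tool:

```python
from collections import Counter

# Tally of the invoice-error check sheet from the worked example.
# In practice each error is ticked at the point of work as it occurs;
# here the two weeks of observations are simulated as a flat list.
observations = (
    ["Wrong price"] * 13 + ["Missing PO"] * 9 +
    ["Wrong address"] * 4 + ["Duplicate"] * 2 + ["Other"] * 1
)

tally = Counter(observations)
total = sum(tally.values())

for category, count in tally.most_common():
    print(f"{category:15s} {'|' * count} ({count})")
print(f"Total: {total} errors")
```

The sorted counts are exactly the input a Pareto chart needs next.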

Pareto Chart — The 80/20 Rule in Action

A Pareto chart is a bar chart sorted in descending order, with a cumulative line. It visually shows which causes account for the majority of problems — typically 20% of causes produce 80% of effects (Pareto Principle).

When to Use
• Prioritise which problems to tackle first
• Focus improvement effort where it has most impact
• Communicate "vital few vs trivial many" to management
• Validate whether a fix addressed the main cause
• Justify resource allocation decisions
Common Mistakes
• Using fewer than 20–30 data points
• Pareto on opinions, not real data
• Solving bars 3–7 while ignoring bar 1
• Not running a second Pareto after fixing bar 1
• Forgetting the 80/20 is a guideline, not a law
Pro Tip: The Second Pareto
After fixing the #1 cause, run a new Pareto. The ranking will shift — what was bar 2 is now bar 1. Each cycle of Pareto → Fix → Re-Pareto drives ever-deeper improvement. This is how teams achieve 60–80% defect reduction.
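The table behind a Pareto chart is easy to build by hand. A minimal Python sketch, reusing the invoice-error counts from the check sheet example:

```python
# Build the data table behind a Pareto chart: sort causes in
# descending order, then add the cumulative percentage line.
# Counts come from the invoice-error check sheet (29 errors).
counts = {"Wrong price": 13, "Missing PO": 9, "Wrong address": 4,
          "Duplicate": 2, "Other": 1}

total = sum(counts.values())
cumulative = 0
for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    cumulative += n
    print(f"{cause:15s} {n:3d}  {100*n/total:5.1f}%  cum {100*cumulative/total:5.1f}%")
```

Here the top two causes (wrong price, missing PO) account for roughly 76% of all errors — the "vital few" to attack first.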

📉 Histogram — Understanding Variation

A histogram shows how data is distributed across a range of values. It reveals the shape, centre, and spread of a process — and immediately shows whether the process is centred within specification limits.

Histogram Shapes to Know
Bell curve (normal): Process in control, natural variation
Skewed right: Outliers pulling high, investigate upper tail
Bimodal (two peaks): Two different populations — e.g., two shifts, two machines
Flat (uniform): No dominant value, process may be unstable
Cliff edge: Data may have been sorted or screened — e.g., out-of-spec values inspected out before recording
Common Mistakes
• Using too few data points (need at least 30, ideally 100+)
• Choosing wrong bin width (too wide hides patterns, too narrow creates noise)
• Not adding specification limits to the chart
• Treating bimodal as normal variation (investigate the two peaks!)
• No axis labels or units
Connecting Histogram to Process Capability
Once you have a histogram, overlay the specification limits (LSL and USL). If the histogram fits comfortably inside the limits — good capability. If it touches or exceeds them — defects are being produced. Cp and Cpk formalise this: Cp measures width, Cpk measures centring. Target is Cpk ≥ 1.33 (4-sigma).
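Cp and Cpk can be computed directly from the data behind the histogram. A sketch with hypothetical measurements and spec limits (the values 9.0–11.0 are illustrative, not from the text):

```python
import statistics

def cp_cpk(data, lsl, usl):
    """Process capability: Cp compares process spread to the
    specification width; Cpk also penalises an off-centre process."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical measurements against spec limits 9.0–11.0
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
cp, cpk = cp_cpk(data, lsl=9.0, usl=11.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}  (target: Cpk >= 1.33)")
```

Because this sample happens to be centred at the spec midpoint, Cp and Cpk coincide; an off-centre process would show Cpk < Cp.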

⚫ Scatter Diagram — Testing Relationships

A scatter diagram plots two variables against each other to test whether a relationship (correlation) exists. It cannot prove causation, but it validates hypotheses from fishbone analysis.

Correlation Patterns
Strong positive: Both rise together — likely relationship
Strong negative: One rises as other falls
No correlation: Scattered randomly — X does not predict Y
Curved: Non-linear relationship (e.g., too little and too much both cause problems)
Stratified clusters: Multiple groups present — stratify by source
Common Mistakes
• Fewer than 30 data pairs
• Assuming correlation = causation
• Not stratifying when data mixes sources
• Ignoring outliers (investigate them — they often explain the most)
• Using ordinal data (numbers that mean rank, not measure)
Hypothesis Flow
Fishbone: "We think temperature causes more defects" → Scatter diagram: Plot temperature (X) vs defect count (Y) for 60 days → If strong positive correlation: Hypothesis supported — investigate temperature control. Always confirm with at least 3 independent data sources before taking action.
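The strength of the relationship in a scatter diagram can be quantified with Pearson's r. A self-contained sketch — the 60-day temperature study above is simulated here with just 10 hypothetical pairs, far fewer than the 30+ the guidance above recommends:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: near +1 is strong positive,
    near -1 strong negative, near 0 means X does not predict Y linearly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical daily readings: temperature (X) vs defect count (Y)
temps   = [18, 19, 21, 22, 24, 25, 27, 28, 30, 31]
defects = [ 2,  3,  3,  4,  5,  6,  8,  8, 10, 11]
r = pearson_r(temps, defects)
print(f"r = {r:.2f}")   # close to +1 suggests a strong positive relationship
```

A high r supports the hypothesis but, as noted above, does not prove causation — confounders must still be ruled out.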

Control Chart (SPC) — Monitoring Process Stability

A control chart plots process data over time with statistically calculated upper and lower control limits (UCL/LCL = ±3 sigma from the mean). It distinguishes normal "common cause" variation from abnormal "special cause" variation that requires investigation.

Special Cause Rules (Act on These)
1 point beyond UCL or LCL
8 consecutive points same side of centre
6 points in a row trending up or down
2 of 3 consecutive points in outer third
• Any of these = investigate immediately. Something changed.
Common Mistakes
• Reacting to every data point (tampering — makes things worse!)
• Using specification limits as control limits (they are different!)
• Not investigating special causes when they occur
• Recalculating limits too often
• Ignoring trends because "nothing crossed the line"
⚠️ Specification Limits ≠ Control Limits
Control limits are calculated from the data — they show what the process IS doing (3-sigma bounds of natural variation).
Specification limits show what the customer REQUIRES.

You can have a stable process (no special causes) that still produces defects — if the process is centred outside spec. This is a capability problem, not a control problem. Requires process redesign, not just monitoring.
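Two of the special-cause rules above can be checked mechanically. A simplified sketch — real individuals charts usually estimate sigma from the average moving range rather than the overall standard deviation, and the data values here are hypothetical:

```python
import statistics

def control_limits(data):
    """3-sigma control limits calculated from the data itself."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    return mean - 3 * sigma, mean, mean + 3 * sigma

def special_causes(data):
    """Flag rule 1 (point beyond a limit) and the run rule
    (8 consecutive points on the same side of centre)."""
    lcl, centre, ucl = control_limits(data)
    signals = []
    run, prev_side = 0, None
    for i, x in enumerate(data):
        if x > ucl or x < lcl:
            signals.append((i, "beyond 3-sigma limit"))
        side = x > centre
        run = run + 1 if side == prev_side else 1
        prev_side = side
        if run == 8:
            signals.append((i, "8 consecutive points same side of centre"))
    return signals

stable = [50, 51, 49, 50, 52, 48, 50, 51, 49, 50]
shifted = stable + [53, 54, 53, 54, 55, 53, 54, 55]  # process mean shifted up
print(special_causes(stable))
print(special_causes(shifted))
```

Note that the shifted series triggers the run rule even though no single point crosses a limit — exactly the "nothing crossed the line" trap listed in the common mistakes above.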

Fishbone Diagram (Ishikawa / Cause-and-Effect)

A fishbone diagram organises potential causes of a problem into categories, structured like a fish skeleton. The "head" is the effect (problem), the "bones" are categories of causes. Forces structured team thinking and prevents jumping to solutions.

The 6M Categories (Manufacturing)

Man
Training, attention, fatigue, skill, motivation
Machine
Equipment, tooling, calibration, wear, setup
Method
Procedures, instructions, techniques, sequence
Material
Raw materials, components, suppliers, storage
Measurement
Gauges, test methods, sampling, interpretation
Mother Nature
Environment, temperature, humidity, vibration
Service version: 4P or 4S categories
For service processes: Use People, Process, Policy, Place — or Suppliers, Systems, Surroundings, Skills.
The 6M categories are for manufacturing. Service fishbones work equally well with modified categories.

After the fishbone: Don't try to fix all causes. Use a Pareto chart or multi-voting to identify the 1–2 most likely root causes to test first. Then verify with data before acting.

📑 Stratification — Finding Hidden Patterns

Stratification means separating data into groups based on a factor (shift, machine, operator, material lot, day of week) to identify whether that factor affects the outcome. Often a histogram or scatter diagram that "looks normal" reveals dramatic differences when stratified.

Classic Stratification Discoveries
• Night shift produces 3× more defects than day shift
• Machine B accounts for 70% of all breakdowns
• Supplier A materials fail at 5× the rate of Supplier B
• Monday morning defects are double any other time
• One operator is responsible for 80% of quality rejects
Common Mistakes
• Analysing total data without asking "does the subgroup matter?"
• Not recording the stratification variable during data collection
• Blaming individuals without investigating the system cause
• Stopping at one stratification layer (stratify the stratified data too!)
• Not having enough data per subgroup for reliable comparison
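In code, stratification is often just a group-by. A sketch with hypothetical inspection records, echoing the night-shift discovery above:

```python
from collections import defaultdict

# Hypothetical inspection records: (shift, units_inspected, defects).
# The stratification variable (shift) must be recorded at collection time.
records = [
    ("Day",   200,  4), ("Day",   210,  5), ("Day",   190,  3),
    ("Night", 200, 13), ("Night", 205, 15), ("Night", 195, 14),
]

totals = defaultdict(lambda: [0, 0])     # shift -> [inspected, defects]
for shift, inspected, defects in records:
    totals[shift][0] += inspected
    totals[shift][1] += defects

for shift, (inspected, defects) in totals.items():
    print(f"{shift:5s}: {100 * defects / inspected:.1f}% defect rate")
```

The combined data would show a modest overall rate; only the grouped view reveals that the night shift's rate is several times higher — the cue to investigate the system cause, not the people.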

Quick Reference: Which Tool When?

  • Need to collect data? → Check Sheet
  • Need to prioritise problems? → Pareto Chart
  • Need to understand variation? → Histogram
  • Need to test if X causes Y? → Scatter Diagram
  • Need to monitor a process over time? → Control Chart
  • Need to find root causes? → Fishbone Diagram
  • Suspect a hidden subgroup effect? → Stratification
  • Complex problem? → Use tools in sequence: Check Sheet → Pareto → Fishbone → Scatter → Control Chart
Tool 2 of 9

8 Wastes — TIM WOODS

The eight categories of non-value-adding activity hiding in every process

What is Waste (Muda)?

Any activity that consumes resources but creates no value for the customer

In Lean thinking, waste (muda) is any step in a process that the customer would not pay for if they knew it was happening. Taiichi Ohno at Toyota originally identified seven wastes. An eighth — unused talent — was added as Lean spread beyond manufacturing. The mnemonic TIM WOODS makes them easy to remember.


T
TIM WOODS
Transportation
Moving materials, products, information, or people more than necessary. Every move is a risk of damage, delay, or loss — and adds zero value.
Examples: Moving files between offices. Shipping parts to distant storage then back. Routing emails through three managers before action. Walking to a printer at the end of a corridor.
I
Inventory
Any excess raw materials, WIP, or finished goods beyond what is immediately needed. Inventory hides problems, ties up cash, risks obsolescence, and requires space and handling.
Examples: 6 months' supply of components sitting on shelves. Inbox with 400 unprocessed emails. Finished goods waiting in a warehouse. 50 reports printed but never read.
M
Motion
Unnecessary movement of people during their work — reaching, bending, walking, searching. Motion is different from transportation: motion is people moving; transportation is product moving.
Examples: Walking to a shared printer 30 times a day. Reaching for tools not stored at point of use. Scrolling through multiple screens to find customer data. Standing up to get supplies that should be within reach.
W
Waiting
Time when work is idle — waiting for approval, materials, information, machines, or the next process step. Waiting is often invisible but is typically 60–80% of total lead time in service processes.
Examples: Waiting for a manager's signature. Invoice on hold pending PO number. Machine idle while operator fetches tooling. Customer on hold for 20 minutes. Batch waiting for enough work to justify a run.
O
Overproduction
Producing more than needed, faster than needed, or earlier than needed. Ohno called this the "root of all wastes" — overproduction creates all the other wastes (inventory, transportation, storage, rework of excess).
Examples: Printing 500 reports when 50 are read. Producing to forecast when actual demand is unknown. Sending CC emails to 20 people when 2 need it. Making 1,000 parts per day when the customer takes 800.
O
Overprocessing
Doing more work, adding more features, or applying more precision than the customer requires. Using a 10-tonne press to stamp paper. Running a 5-page report when a single number is needed.
Examples: Multiple approval layers for routine low-risk decisions. Gold-plating a report nobody reads in full. Cleaning parts that immediately get dirty in the next step. Re-entering data already captured in a prior system.
D
Defects
Any product or service that doesn't meet requirements — requiring rework, scrap, inspection, or customer complaints. Defects are waste multiplied: the original defect plus all the handling that follows.
Examples: Wrong invoices requiring re-issue. Parts out of tolerance. Incorrect data entered and needing correction. Incorrect customer address causing failed delivery and re-shipping.
S
Skills (Unused Talent)
Not using people's full potential — their ideas, creativity, knowledge, and capabilities. Often considered the greatest waste of all because it compounds over time.
Examples: Skilled engineer doing manual data entry. No suggestion system for frontline workers. Micro-management preventing independent problem-solving. Experts asked to follow scripts rather than apply judgement.

Waste Walk Checklist — Go to the Gemba

Print this checklist and walk your process. Tick every waste you observe:

  • 🚶 Walk the process end-to-end. Watch what actually happens, not what the SOP says
  • ⏱️ Time how long each step takes. Then time how long work sits idle between steps
  • Ask: "Does this step change the form, fit, or function of the product for the customer?"
  • Look for materials, files, or work-in-progress that has been waiting more than 1 hour
  • Count how many times a document, part, or piece of information changes hands
  • Ask workers: "What frustrates you about this process?" (They know the waste)
  • Quantify each waste found: How often? How much time? Estimated cost per year?
  • Prioritise: Which wastes are biggest? Easiest to fix? Most impactful to the customer?
⚠️ The Hidden Cost
Studies across service and manufacturing consistently find only 5–10% of total process time is value-adding. In a 10-day order-to-delivery process, the actual work may take only 2 hours — the other 9.75 days is waste. Before benchmarking against competitors, benchmark against your own current-state map.
Tool 3 of 9

PDCA — Plan, Do, Check, Act

The Deming Cycle — the heartbeat of continuous improvement

What is PDCA?

An iterative scientific method for improving processes, products, or services

Developed by Walter Shewhart and popularised by W. Edwards Deming, PDCA is the foundation of Lean, ISO standards, and Kaizen. It is deliberately simple — a reminder that improvement is a cycle, not a project with an end date. The key insight: small, fast cycles beat large, slow projects.

↻ Repeat continuously — never stop improving
P
Plan
Define the problem
Analyse root causes
Set a measurable target
Design the solution
Plan the test
D
Do
Test on small scale
Implement pilot
Collect data
Train the team
Document as you go
C
Check
Compare to target
Analyse results
What worked?
What didn't?
Capture learnings
A
Act
If success: standardise
Share with others
If not: adjust plan
Start next cycle
Never stop

PDCA in Action — Real Example

PLAN — Week 1
Problem: Order processing takes 5 days. Customer expects 2.
Analysis: Value stream mapping shows 80% of time is waiting for 3-level approval chain.
Root cause: All orders regardless of size require VP sign-off.
Solution: Delegate authority — orders under £500 need only 1 approval.
Target: Reduce to 2 days for 80% of orders within 4 weeks.
DO — Weeks 2–3
Pilot: Test with 30 orders on one team.
Training: Supervisors briefed on new authority limits.
Data collection: Logged processing time for each order in pilot.
Communication: Updated procedure document, team notified.
CHECK — Week 4
Result: Average processing time fell from 5 days to 2.3 days.
Target status: 85% of orders now processed in ≤2 days — exceeded 80% target!
Gap found: Orders over £500 still average 4.2 days (still 3 approvals required).
Learning: Single approval layer works for routine transactions.
ACT — Week 5
Standardise: New approval policy made official, rolled out to all teams.
Share: Results presented to management. Other departments enquiring about same approach.
Next PDCA cycle: Begin new cycle targeting the high-value order approval process.
Continuous: Monthly monitoring established — any drift above 2.5 days triggers review.

PDCA Implementation Checklist

  • Plan: Can you state the problem in one sentence with a number?
  • Plan: Do you have a baseline measurement and a specific, time-bound target?
  • Do: Test on the smallest scale possible before rolling out
  • Do: Document changes made in real time — memory is unreliable
  • Check: Use objective data — "it feels better" is not a check
  • Check: Compare to the original baseline, not to a different period
  • Act: If it worked, write a standard. Don't rely on people remembering
  • Act: If it didn't work, analyse why — failure is data for the next cycle
Five PDCA Success Rules
1. Start small. Pilot before rollout. Proof before scale.
2. Never skip the Check. The temptation is PLAN → DO → ACT. Skipping Check means you never learn.
3. Cycle fast. PDCA cycles should be days or weeks. Months means the problem has already grown.
4. Measure before and after. Without a baseline, you can't prove improvement.
5. Document everything. PDCA builds organisational memory — but only if it's written down.
Tool 4 of 9

DMAIC — Six Sigma Methodology

Data-driven rigour for complex, high-stakes process improvement

What is DMAIC?

Define–Measure–Analyze–Improve–Control — the Six Sigma project framework

DMAIC is the structured problem-solving methodology at the heart of Six Sigma. More rigorous and slower than PDCA, it is used when a problem is complex, has multiple potential causes, requires statistical validation, and has significant financial or quality stakes. A DMAIC project typically takes 3–6 months and is led by a Green Belt or Black Belt.

D
Define
Problem statement
Project charter
Voice of customer
SIPOC map
Scope & team
M
Measure
Current baseline
Data collection plan
Measurement system
Process capability
Sigma level
A
Analyze
Statistical analysis
Root cause ID
Hypothesis testing
Fishbone/Pareto
Value stream map
I
Improve
Generate solutions
Pilot test
DOE (if needed)
Implement best
Verify improvement
C
Control
Control plan
SPC charts
SOP updates
Training
Sustain gains

What Happens in Each Phase

DEFINE — Answer: What are we trying to fix and why does it matter?
Write a Problem Statement: "Process X has Y defect rate causing Z cost." Never define the solution in the Define phase. Create a Project Charter with scope, team, timeline, and business case. Map the SIPOC (Suppliers→Inputs→Process→Outputs→Customers).
MEASURE — Answer: How bad is it really, and can we trust our measurement?
Establish the baseline (current DPMO, sigma level, cycle time). Validate the measurement system (Gauge R&R — is the measurement system itself reliable?). If you can't trust the data, the analysis phase is meaningless.
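DPMO and the sigma level it implies can be computed as follows — a sketch using the conventional 1.5-sigma shift built into Six Sigma conversion tables; the counts (29 defects across 500 units with 5 opportunities each) are illustrative, not from the text:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Approximate short-term sigma level, applying the conventional
    1.5-sigma shift assumed by Six Sigma conversion tables."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

d = dpmo(defects=29, units=500, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}, sigma level ≈ {sigma_level(d):.2f}")
```

A true Six Sigma process corresponds to 3.4 DPMO; the baseline here is the number the Improve phase must beat.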
ANALYZE — Answer: What is causing the problem?
Use the 7 QC tools. Generate hypotheses, then test them with data. Never assume. Use hypothesis testing (t-tests, ANOVA, regression) to validate statistical significance. Find the "vital few" X's that drive most of the Y variation.
IMPROVE — Answer: What is the best solution, and does it actually work?
Generate creative solutions (Kaizen, TRIZ, Design of Experiments). Run a pilot. Verify improvement against the baseline from Measure. Calculate financial benefit. Get stakeholder buy-in before full rollout.
CONTROL — Answer: How do we ensure the gains don't fade?
The most underinvested phase. Create a Control Plan. Update SOPs. Set up SPC charts with reaction plans. Train all affected operators. Hand over to the process owner. Define a 90-day monitoring period. 70% of improvement gains are lost within 6 months without a robust Control phase.
✓ When to Use DMAIC
• Complex problems with multiple potential causes
• High-cost, high-impact problems (£50K+ annual impact)
• Statistical analysis is required
• Prior attempts at fixing have failed
• Need to prove to stakeholders that the fix will stick
Typical duration: 3–6 months
✗ When NOT to Use DMAIC
• Simple, obvious problems (use PDCA or Just Do It)
• Need results in 2 weeks
• Little historical data available
• Process doesn't exist yet (use DMADV)
• The fix is already obvious and agreed
DMAIC is powerful but heavyweight — don't overcomplicate simple problems
PDCA vs DMAIC — When to Use Which
PDCA: Fast, light, iterative. 2–4 weeks per cycle. Anyone can lead it. Best for daily improvement, Kaizen events, simple problems.
DMAIC: Rigorous, data-intensive, 3–6 months. Requires a trained belt. Best for complex, costly, persistent problems.

The 95/5 rule: Use PDCA 95% of the time. Reserve DMAIC for the critical 5% that genuinely needs deep statistical analysis.

DMAIC Project Checklist

  • Define: Problem statement written — measurable, time-bound, no solution implied
  • Define: Project charter signed by sponsor, scope clear, team assigned
  • Measure: Baseline data collected over minimum 4–8 weeks
  • Measure: Measurement system validated (Gauge R&R conducted)
  • Analyze: Root causes tested with data — not assumed
  • Analyze: Statistical significance confirmed for identified X–Y relationships
  • Improve: Pilot conducted before full rollout
  • Control: Control plan documented, SPC charts established, SOPs updated
Tool 5 of 9

Kanban System

Visual workflow management — limit WIP, improve flow, pull based on demand

What is Kanban?

看板 — "signboard" in Japanese. A visual pull system that controls work-in-progress

Kanban was developed by Taiichi Ohno at Toyota in the 1940s, inspired by how supermarkets restock shelves — only when items are taken, not based on forecast. Today it is used in software development, service operations, logistics, and any knowledge work environment. The core principle: pull work based on actual demand, not predicted demand.

The 6 Core Kanban Practices

  • 1. Visualise Work: Every piece of work is visible on the board. Nothing is hidden in someone's inbox or head.
  • 2. Limit WIP: Set a maximum number of items allowed in each column. This is the key practice — everything else follows from it.
  • 3. Manage Flow: Watch how work moves. Stuck cards reveal bottlenecks. Fix bottlenecks before starting new work.
  • 4. Make Policies Explicit: Clear, written rules for what "In Progress" means, what "Done" means, when to pull, who owns what.
  • 5. Implement Feedback Loops: Regular cadences (daily standups, retrospectives, reviews) to inspect and adapt.
  • 6. Improve Collaboratively: Use metrics to drive improvement. Small changes, frequent experiments, continuous evolution.
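Practices 1 and 2 can be sketched in code: a board that refuses to pull work into a full column. Card names echo the example board below; the class itself is a minimal illustration, not a real Kanban tool:

```python
class KanbanBoard:
    """Minimal sketch of 'visualise work' and 'limit WIP':
    pulling a card into a full column is refused."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                 # e.g. {"In Progress": 2}
        self.columns = {name: [] for name in wip_limits}

    def pull(self, card, to_column):
        limit = self.wip_limits[to_column]
        if limit is not None and len(self.columns[to_column]) >= limit:
            return False        # WIP limit reached: finish something first
        for cards in self.columns.values():          # remove from current column
            if card in cards:
                cards.remove(card)
        self.columns[to_column].append(card)
        return True

board = KanbanBoard({"To Do": 5, "In Progress": 2, "Done": None})
for card in ["Review SLA data", "Update P-003", "Survey analysis"]:
    board.pull(card, "To Do")
board.pull("Review SLA data", "In Progress")
board.pull("Update P-003", "In Progress")
print(board.pull("Survey analysis", "In Progress"))  # refused: WIP limit is 2
```

The refused pull is the system working as designed — the team must finish an in-progress item before starting another.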

Example Kanban Board

TO DO (WIP Max: 5)
Review supplier SLA data
Owner: J. Chen
High
Update QMS procedure P-003
Owner: M. Patel
Medium
Customer survey analysis
Owner: L. Torres
Low
IN PROGRESS (WIP Max: 2)
Root cause — line 4 rejects
Started: 2 days ago
High
5S audit — warehouse zone B
⚠️ Blocker: Access key
Medium
REVIEW (WIP Max: 2)
Poka-yoke design for press #2
Awaiting: Engineering sign-off
High
DONE
Kaizen event — packaging line
Completed: 3 days ago
✓ Done
Control chart setup — KPI-07
Completed: 1 week ago
✓ Done

The Four Kanban Metrics That Matter

CYCLE TIME
How long one item takes from "In Progress" to "Done." Reduce by removing blockers, limiting WIP, eliminating handoffs.
Target: Shorter and more consistent
LEAD TIME
Total time from request to delivery — includes queue time. This is what the customer experiences. Reduce queue time by limiting WIP.
Lead time = queue time + cycle time
THROUGHPUT
Items completed per time period (e.g., 12 tasks/week). Little's Law: Throughput = WIP ÷ Cycle Time. Lower WIP → higher throughput.
The goal is not busy — it's done
WIP (WORK IN PROGRESS)
Items currently in progress. High WIP causes context switching, delays, and quality problems. Limit WIP to improve everything else.
Stop starting. Start finishing.
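Little's Law from the throughput card can be checked with simple arithmetic — the numbers here are illustrative:

```python
# Little's Law (steady state): WIP = Throughput × Cycle Time,
# so average cycle time = WIP ÷ throughput.
wip = 12            # items currently in progress (hypothetical)
throughput = 4      # items finished per week (hypothetical)

cycle_time = wip / throughput
print(f"Average cycle time ≈ {cycle_time:.1f} weeks")

# Halving WIP at the same throughput halves the average cycle time:
print(f"With WIP = {wip // 2}: ≈ {(wip // 2) / throughput:.1f} weeks")
```

This is why limiting WIP is the key practice: it is the lever that directly shortens how long each item takes.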
Start Kanban in 30 Minutes
Step 1: Post three columns on a wall: To Do | In Progress | Done.
Step 2: Write each active work item on a sticky note. Place it in the correct column.
Step 3: Set a WIP limit for "In Progress" — start with current average, then reduce by 20%.
Step 4: Hold a 10-minute daily standup: What moved yesterday? What's blocked? What's next?
Step 5: After 2 weeks, measure average cycle time. Then find and fix the bottleneck.

Pro tip: Start on a physical wall before going digital. Sticky notes are more engaging and easier to experiment with.
Tool 6 of 9

Value Stream Mapping (VSM)

See the full flow — from customer demand to delivery. Every step, every delay, every handoff

What is VSM?

A pencil-and-paper tool for mapping the flow of materials and information

Value Stream Mapping is a visual tool developed at Toyota to document every step, delay, inventory point, and information flow between a customer request and the delivery of the finished product or service. Unlike a process flowchart, VSM includes time data — it shows not just what happens, but how long each step takes and how long work waits between steps. The gap between the two reveals where your improvement opportunity lies.

Example Current-State Value Stream

A simplified service process showing value-added time (VA) and non-value-added wait time (NVA):

Customer
Order placed
⏳ Wait
NVA: 2 days
Queue
Order Review
VA: 0.5h
1 person
⏳ Wait
NVA: 1 day
Approval queue
Approval
VA: 0.25h
Manager
⏳ Wait
NVA: 0.5 day
Processing
Fulfilment
VA: 1h
2 people
Delivery
To customer
VALUE-ADDED TIME
1.75 hours
NON-VALUE-ADDED
3.5 days
PROCESS EFFICIENCY
≈2.0%
What VSM Reveals
• The entire end-to-end flow in one view
• Exactly where work waits and for how long
• Total lead time vs actual value-added time
• Information flow gaps that cause delays
• Inventory/queue build-up points
• Bottleneck operations (the constraint)
• Push vs pull handoff points
Common VSM Mistakes
• Mapping from memory instead of actual observation
• Mapping the ideal process, not the real one
• Only mapping the "happy path" (no exceptions, no rework loops)
• Not including information flow (just material/task flow)
• Making the map but never creating a Future State
• A team of one (VSM requires cross-functional input)

How to Build a Value Stream Map

  • Define the scope: Select one product family or service. Start at customer order, end at delivery.
  • Go to the Gemba: Walk the actual process. Observe what really happens. Don't use desk research alone.
  • Map process steps: Draw each step as a box. Include: cycle time, number of people, reliability %.
  • Map inventory/queues: Draw triangles between steps to show how much work waits (number of items, time waiting).
  • Map information flow: Show how orders, schedules, and instructions travel. Where are the gaps?
  • Calculate totals: Sum all VA time. Sum all NVA wait time. Calculate process efficiency = VA ÷ Total Lead Time.
  • Draw the Future State: Design the ideal flow — what waste can be eliminated? What steps can be removed?
  • Create the action plan: Identify the 3–5 biggest improvements to get from current state to future state.
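The "calculate totals" step for the example map above can be computed directly. One assumption to make explicit: days of waiting are counted as 24 elapsed hours, the usual lead-time convention (a working-hours convention gives a higher percentage):

```python
# Totals for the current-state map above.
# Days of waiting are treated as 24 elapsed hours.
va_hours  = 0.5 + 0.25 + 1.0      # order review + approval + fulfilment
nva_hours = (2 + 1 + 0.5) * 24    # the three waits, in elapsed hours

lead_time = va_hours + nva_hours
efficiency = 100 * va_hours / lead_time
print(f"VA = {va_hours} h, lead time = {lead_time} h, "
      f"efficiency = {efficiency:.1f}%")
```

Under 2.1% efficiency for a process that feels busy end-to-end — the typical VSM shock described below the checklist.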
⚠️ The Most Common VSM Shock
The first time a team calculates their process efficiency, they expect something like 60–70%. The actual number is almost always under 10%, and often under 3% as in the example above. This is not failure — it is opportunity. Every percentage point of wait time eliminated is money recovered, speed improved, and customers served better.
Tool 7 of 9

A3 Problem Solving

Toyota's one-page structured thinking tool — PDCA discipline on a single sheet

What is an A3?

A structured problem-solving report on a single A3-sized sheet of paper

The A3 gets its name from the paper size (297×420mm). Toyota mandated that all significant problems be addressed on a single A3 sheet — not because paper was scarce, but because the constraint forces rigorous, structured thinking. If you can't fit it on one page, you don't understand the problem well enough yet. The A3 is simultaneously a thinking tool, a communication tool, and a record of organisational learning.

The A3 Template

📌 1. Background / Problem Statement
Why is this problem important? What is the business context? State the problem without implying a solution. Include relevant data.
2. Current Condition
What is happening now? Show with data, diagrams, and visual mapping. Quantify the gap between current and target state.
3. Goal / Target State
What measurable outcome do we need? Specific, time-bound target. "Reduce defect rate from 8% to 2% by Q3."
4. Root Cause Analysis
What is actually causing this? Use fishbone, 5 Whys, data analysis. Test hypotheses. Never skip this — jumping to solutions without root cause is the #1 improvement mistake.
5. Countermeasures
What specific actions will address the root causes? List owner, action, deadline for each. Why will this work? Connect each countermeasure to a root cause.
6. Effect Confirmation
How will we know it worked? What data will we collect? Set go/no-go criteria before implementation — not after.
7. Follow-Up / Standardisation Actions
If countermeasures were effective: How do we standardise and sustain? Update SOPs, train team, establish monitoring. What is the next PDCA cycle? What learnings should be shared with other departments?

5 Whys — The Heart of A3 Root Cause Analysis

The 5 Whys technique digs through symptoms to find the actual root cause. Ask "Why?" five times:

Problem: Machine stopped.
Why 1: The circuit breaker tripped. → Why?
Why 2: The motor was overloaded. → Why?
Why 3: The bearing wasn't lubricated. → Why?
Why 4: The lubrication pump wasn't working. → Why?
Why 5 — Root Cause: The pump inlet filter was clogged. There is no scheduled cleaning procedure for this filter.
Countermeasure Goes to the Root
Fixing the circuit breaker (Why 1) = symptom fix. It will happen again.
Real fix: Create a scheduled maintenance procedure for cleaning the pump inlet filter. This prevents the chain of events.

This is why A3 insists on root cause analysis before action. A large share of "failed improvements" fail because the countermeasure was aimed at the wrong level of the causal chain.
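The 5 Whys chain above can be written down as data; a minimal sketch showing why the countermeasure targets the last link in the chain, not the first:

```python
# The 5 Whys chain from the machine-stoppage example above
whys = [
    "Circuit breaker tripped",
    "Motor overloaded",
    "Bearing not lubricated",
    "Lubrication pump not working",
    "Pump inlet filter clogged: no scheduled cleaning procedure",
]

symptom = whys[0]        # fixing this is a patch; the failure recurs
root_cause = whys[-1]    # the countermeasure must address this level

print(f"Symptom fix (recurs): {symptom}")
print(f"Root-cause fix: {root_cause}")
```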

A3 Discipline Checklist

  • Problem statement contains a number — never just words
  • Current condition shown visually (diagram, chart, or map — not just text)
  • Root cause analysis is data-based — not assumed or guessed
  • Every countermeasure is connected to a specific root cause
  • Effect confirmation criteria defined BEFORE countermeasures are implemented
  • Owner and deadline assigned to every action
  • A3 reviewed with the team — not completed in isolation
  • If improvement achieved: SOP updated and team trained before project is closed
Tool 8 of 9

Poka-Yoke — Error-Proofing

Design mistakes out of the process. Prevention is always cheaper than detection

What is Poka-Yoke?

ポカヨケ — "Mistake-proofing" developed by Shigeo Shingo at Toyota

Poka-Yoke (poh-kah-yoh-kay) means "inadvertent error prevention" in Japanese. Developed by Shigeo Shingo in the 1960s, it is the practice of designing processes, equipment, and environments so that mistakes either cannot be made — or are immediately detected and corrected before they cause defects. The philosophy: humans are not perfect, but systems can compensate for human imperfection.

Three Types of Poka-Yoke — Click to explore each:

🛑
Prevention
Makes the mistake physically impossible to make
🔔
Detection
Identifies the mistake immediately after it occurs
⚠️
Warning
Alerts the operator before or during a potential error

🛑 Prevention Poka-Yoke — The Gold Standard

Prevention poka-yoke makes it physically or logically impossible to make the error. It is the most powerful type because no human attention is required.

Physical Prevention Examples
• USB plug only fits one way (asymmetric design)
• Wrong-shaped nozzle on diesel vs petrol pump
• Guide pins on assembly fixtures (part can only go in one orientation)
• Fixed-quantity containers (physically cannot hold more than the standard amount)
• Equipment interlock — door must be closed before machine starts
Digital/Service Prevention
• Mandatory fields in CRM — can't save without customer reference
• Dropdown replaces free-text to prevent spelling variation
• System blocks order if credit limit exceeded
• Date field accepts only valid date formats
• Confirmation screen showing "You are sending to 500 people — confirm?"
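The digital examples above can be sketched as simple prevention logic: an order-entry validator that refuses to accept bad input, so the mistake never becomes a defect downstream. All names here (`submit_order`, `VALID_REGIONS`, the field names) are illustrative, not a real system's API:

```python
# Prevention poka-yoke for order entry: invalid input cannot enter the process.
VALID_REGIONS = {"EMEA", "APAC", "AMER"}  # dropdown instead of free text

def submit_order(customer_ref: str, region: str, amount: float,
                 credit_limit: float) -> dict:
    """Block the order at entry time rather than detecting the error later."""
    if not customer_ref:                  # mandatory field
        raise ValueError("Customer reference is required")
    if region not in VALID_REGIONS:       # constrained choice, no spelling variation
        raise ValueError(f"Unknown region: {region!r}")
    if amount > credit_limit:             # hard block, not a warning
        raise ValueError("Order exceeds credit limit")
    return {"ref": customer_ref, "region": region, "amount": amount}

# A valid order passes; an invalid one cannot be saved at all.
order = submit_order("CUST-042", "EMEA", 1200.0, credit_limit=5000.0)
```

Note the design choice: each check raises rather than warns, which is what makes this prevention rather than detection.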

The Poka-Yoke Design Process

  • Identify the error — use defect data, customer complaints, and process observation
  • Ask: "At what step does this error occur? What causes the operator to make the mistake?"
  • Try prevention first: Can the physical design make the error impossible?
  • If prevention isn't feasible: Design detection that catches the error at the same step it occurs
  • Test the poka-yoke with the actual operator — does it work in real conditions?
  • Measure before and after: How much did the defect rate fall?
  • Ensure the poka-yoke itself doesn't create new problems (false positives, throughput impact)
The Poka-Yoke Mindset
"The worker is not wrong — the process is wrong."

When a mistake occurs, the instinct is to blame the person. Poka-Yoke assumes the opposite: people make predictable errors under predictable conditions. The job of the process designer is to create conditions where those errors cannot occur. Toyota estimates that poka-yoke devices prevent approximately 70% of all potential defects.
Tool 9 of 9

Kaizen — Continuous Improvement

改善 — "Change for the better." The philosophy and practice that underlies all Lean thinking

What is Kaizen?

The belief that every process can be improved, by everyone, every day, in every area

Kaizen (改善) literally means "change for the better." It is both a philosophy — the belief that improvement is always possible — and a practice: the structured methodology of making small, continuous improvements involving everyone in the organisation. Kaizen is how Toyota implements on the order of a million employee improvement ideas per year. Not a million big changes: a million small ones. 1% better every day ≈ 37× better after one year.
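The compounding claim is easy to verify; a quick arithmetic check, not Toyota data:

```python
# 1% better every day, compounded over a year
daily_gain = 1.01
days = 365
improvement = daily_gain ** days
print(f"{improvement:.1f}x better after one year")  # ≈ 37.8x
```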

Kaizen vs Innovation
Innovation (Kaikaku):
• Large, infrequent step changes
• Requires significant investment
• Top-down, project-driven
• High risk, high reward
• Gains can erode without Kaizen

Kaizen:
• Small, frequent improvements
• Low cost, sometimes no cost
• Bottom-up, everyone involved
• Low risk, cumulative effect
• Sustains and builds on innovation
Why Kaizen Fails Without Culture
• Leaders say "that's not your job"
• Suggestions go into a black hole (no response)
• Improvement effort is not rewarded
• Mistakes are punished instead of learned from
• Kaizen events happen but gains aren't sustained
• "We tried that before" thinking
• Improvement seen as extra work, not core work

The Kaizen Event (Blitz)

A Kaizen event is a focused 3–5 day workshop where a cross-functional team improves a specific process area intensively.

Day 1
Observe & map current state
Day 2
Analyse waste & root causes
Day 3
Design future state & test
Day 4
Implement & refine
Day 5
Standardise & present results
Typical Kaizen Event Results
A well-run 5-day Kaizen event on a single process area typically achieves:
30–60% reduction in cycle time or lead time
40–70% reduction in floor space or file storage
50–80% reduction in inventory or WIP
20–50% reduction in defect rate

These aren't outliers — they are typical first-event results, because first events are attacking waste that has accumulated for years.

Everyday Kaizen — Building the Habit

  • Every time something annoys you in a process, ask: "Why does this work this way?"
  • If you spot waste, write it down — don't just accept it as "how things are"
  • Share one improvement idea per week with your team (just one — small is fine)
  • When implementing a change, always measure before and after to confirm it worked
  • If a change didn't work, document why — failure is data for the next attempt
  • Share what worked across teams — avoid everyone solving the same problem independently
  • Celebrate small improvements publicly — recognition reinforces the behaviour
🌟 The Kaizen Mindset in Three Sentences
No process is ever perfect. Every person is the expert on their own work and can improve it. Improvement is not a project with an end date — it is how we work every day.
Knowledge Test — 15 Questions

Test Your Knowledge

Answer all 15 questions to receive your completion certificate. Immediate feedback on every answer.

Select your answer for each question. Each question shows whether you are correct and explains the reasoning. Complete all 15 to see your score and certificate.

Question 1 of 15 — 7 QC Tools
Which of the 7 QC tools would you use first when starting to investigate a quality problem — before any analysis?
A
Check Sheet — to systematically collect and count occurrences
B
Control Chart — to monitor the process over time
C
Fishbone Diagram — to identify root causes
D
Scatter Diagram — to test correlations
✓ The Check Sheet comes first. You need to collect structured data before you can draw Pareto charts, fishbones, or any other analysis tool. "Collect data before you analyse" is the foundation of all quality work.
Question 2 of 15 — 7 QC Tools
A Pareto chart shows that 3 defect types account for 78% of all defects. What is the correct next action?
A
Try to fix all defect types simultaneously to maximise improvement
B
Focus exclusively on the top 1–2 defect types — address the vital few first
C
Collect more data before taking any action
D
Ignore the top causes and start with the easiest to fix
✓ The Pareto principle (80/20 rule) tells you to focus resources on the "vital few" causes that produce the majority of problems. Fixing all defect types simultaneously spreads effort thin and typically yields less improvement than addressing the top 1–2 causes in depth.
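The "vital few" in the explanation above can be found mechanically: sort defect counts in descending order and accumulate until the running share crosses the threshold. The defect names and counts below are made up for illustration:

```python
# Pareto analysis: which defect types make up ~80% of the total?
defects = {"scratches": 120, "misalignment": 85, "missing parts": 60,
           "wrong colour": 18, "labelling": 12, "other": 5}

total = sum(defects.values())
cumulative = 0
vital_few = []
for name, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(name)
    if cumulative / total >= 0.8:
        break

print(vital_few)  # the top categories covering at least 80% of defects
```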
Question 3 of 15 — 8 Wastes
In the TIM WOODS acronym for 8 wastes, what does "O" (Overproduction) represent, and why is it considered the most serious waste?
A
Using outdated machines — makes all other equipment inefficient
B
Too many managers — creates bureaucratic delay
C
Producing more than needed — creates all other wastes (inventory, transport, storage, rework of excess)
D
Overtime hours — increases cost without adding value
✓ Taiichi Ohno called overproduction the "root of all wastes" because it directly creates all other waste categories. When you produce more than needed, you must store it (Inventory), move it (Transportation), and often rework it when the design changes. It ties up cash and hides problems.
Question 4 of 15 — 8 Wastes
A skilled engineer spends 4 hours per day doing manual data entry that could be automated. Which waste is most prominently at work?
A
Motion — unnecessary physical movement
B
Waiting — the data has to wait to be entered
C
Skills (Unused Talent) — expert capability applied to low-value work
D
Overprocessing — more work than necessary
✓ While Overprocessing and Motion are present, the dominant waste is Unused Talent (Skills). Using an engineer's expert capability for work that requires no engineering skill wastes both the engineer's potential and the organisation's investment in their development. This is the 8th waste added to the original 7.
Question 5 of 15 — PDCA
A team implements a fix (Do phase) and the problem improves. They immediately roll it out company-wide. What critical mistake did they make?
A
They moved too slowly — should have rolled out sooner
B
They didn't use DMAIC instead of PDCA
C
They skipped the Check phase — no objective data confirms the fix actually worked
D
They should have involved more people in the Do phase
✓ Skipping the Check phase is the most common PDCA failure. "It feels better" is not Check. Check requires objective data comparing results against the original baseline and the stated target. Without Check, you might be rolling out a partial fix or — worse — a coincidental improvement that has nothing to do with your action.
Question 6 of 15 — PDCA
Which of the following best describes the Act phase of PDCA when a pilot was successful?
A
Close the project and move on to the next problem
B
Standardise the improvement, update SOPs, share learnings, then start the next PDCA cycle
C
Continue monitoring for 12 months before making the change permanent
D
Celebrate and return to the original process to confirm it was actually worse
✓ In the Act phase, success means standardise and spread. Write the new way into the SOP, train the team, share with other departments who face similar issues, and immediately begin the next PDCA cycle. PDCA never ends — the Act of one cycle feeds the Plan of the next.
Question 7 of 15 — DMAIC
During the Analyze phase of DMAIC, the team has identified 12 potential root causes. What is the correct approach?
A
Address all 12 simultaneously with separate action owners
B
Use data and statistical testing to identify the vital few X's that most strongly drive the problem
C
Select the two easiest causes to fix regardless of their impact
D
Ask management to select the most important causes based on experience
✓ The Analyze phase must use data, not opinions or convenience, to confirm which causes are statistically significant drivers of the problem. This is what distinguishes DMAIC from PDCA — every hypothesis must be tested. Typically 2–3 root causes account for 80% of the problem variation.
Question 8 of 15 — DMAIC
Which DMAIC phase is most commonly under-invested, causing improvement gains to fade within 6 months?
A
Define — teams rush the problem statement
B
Measure — baseline data collection is incomplete
C
Analyze — insufficient statistical testing
D
Control — insufficient standardisation and monitoring after improvement
✓ The Control phase is consistently under-invested. Once improvement is demonstrated, teams are pulled to the next project. Without a Control Plan, SPC monitoring, updated SOPs, and operator training, the process drifts back. Research shows 70% of improvement gains are lost within 6 months without a robust Control phase.
Question 9 of 15 — Kanban
A Kanban board has 2 items in "To Do," 8 items in "In Progress," and 1 item in "Done" today. The WIP limit for "In Progress" is 3. What does this indicate?
A
The team is highly productive — lots of work in progress
B
The WIP limit is being violated — there is a bottleneck, throughput is suffering, and quality is at risk
C
The WIP limit was set too low and should be increased to 10
D
This is normal — WIP limits are guidelines, not rules
✓ 8 items in a column with a WIP limit of 3 is a serious violation. WIP limits are rules, not suggestions. High WIP causes context switching (quality drops), long lead times, and hidden bottlenecks. The correct response is to stop starting new work and focus on finishing the current 8. Little's Law: Lead Time = WIP ÷ Throughput — high WIP = long waits.
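Little's Law from the explanation above can be checked with the numbers in the question. The throughput figure is an assumption for illustration, since the question only states WIP:

```python
# Little's Law: average lead time = WIP / throughput
wip = 8                 # items in "In Progress" (from the question)
throughput = 1.0        # assumed: the team finishes about 1 item per day

lead_time = wip / throughput
print(f"Average lead time: {lead_time:.0f} days")

# Respecting the WIP limit of 3 cuts the average wait dramatically
lead_time_at_limit = 3 / throughput
print(f"At the WIP limit: {lead_time_at_limit:.0f} days")
```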
Question 10 of 15 — Value Stream Mapping
A value stream map shows a total lead time of 12 days, with 45 minutes of actual value-added work. What is the process efficiency, and what does this mean?
A
About 50% — the process is moderately efficient
B
About 25% — below average but correctable
C
About 0.3% — 99.7% of elapsed time is non-value-adding waste
D
About 10% — typical for a service process
✓ Process efficiency = value-added time ÷ total lead time. Using elapsed calendar time: 45 min ÷ (12 × 24 × 60 min) = 45 ÷ 17,280 ≈ 0.26%, i.e. about 0.3%. Even counting only 8-hour working days (45 ÷ 5,760 ≈ 0.78%), the result is still well under 1%. This is not unusual — most service processes have efficiencies under 5%. The other 99%+ is waiting, checking, rework, approval, and handoffs. Every one of those is a Lean improvement target.
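The efficiency arithmetic can be sanity-checked on both bases: elapsed calendar time (which yields the roughly 0.3% in option C) and 8-hour working days:

```python
# Process efficiency = value-added time / total lead time
value_added_min = 45
lead_time_days = 12

calendar_min = lead_time_days * 24 * 60   # elapsed time: 17,280 min
working_min = lead_time_days * 8 * 60     # 8-hour working days: 5,760 min

calendar_eff = value_added_min / calendar_min
working_eff = value_added_min / working_min
print(f"Calendar basis:    {calendar_eff:.2%}")   # ~0.26%
print(f"Working-day basis: {working_eff:.2%}")    # ~0.78%
```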
Question 11 of 15 — A3 Problem Solving
A team writes in their A3 problem statement: "We need to implement a new IT system to fix our order processing." What is wrong with this statement?
A
It doesn't include a budget figure
B
It defines the solution, not the problem — the problem statement should describe the current gap, not prescribe the answer
C
It's too short — problem statements should be at least one page
D
IT systems cannot be included in an A3
✓ A problem statement must describe the current state gap — what is happening and how much it deviates from the target — without implying a solution. "Order processing takes 8 days; customer expectation is 2 days, causing 15% order cancellation rate" is a correct problem statement. Defining the solution at the problem statement stage prevents root cause analysis from happening.
Question 12 of 15 — A3 / 5 Whys
Using 5 Whys, a team discovers the root cause of repeated machine failure is "no scheduled maintenance procedure for a critical component." What type of countermeasure should they implement?
A
Retrain the operator who failed to maintain the component
B
Replace the component with a more durable one
C
Create and implement a scheduled preventive maintenance procedure for that component
D
Install a sensor that detects when the machine fails
✓ The countermeasure must address the root cause — which is the absence of a maintenance procedure, not the operator's knowledge, the component's quality, or detection after failure. The correct fix is a scheduled maintenance procedure that makes the component failure predictable and preventable. Addressing symptoms (retraining, replacing, detecting) would leave the root cause intact.
Question 13 of 15 — Poka-Yoke
A USB port that only accepts the plug in one orientation is an example of which type of Poka-Yoke?
A
Prevention — physical design makes the wrong insertion impossible
B
Detection — a sensor detects when the wrong plug is inserted
C
Warning — a light flashes when plugged in the wrong way
D
Inspection — a person checks the connection after insertion
✓ This is Prevention — the strongest form of Poka-Yoke. The asymmetric physical design makes the error mechanically impossible without any human attention, alerting system, or inspection. Prevention Poka-Yokes work 100% of the time because they remove human behaviour from the equation entirely.
Question 14 of 15 — Kaizen
What is the Kaizen philosophy's primary reason for prioritising small, frequent improvements over large, infrequent ones?
A
Small improvements require less management approval
B
Large improvements create too much paperwork
C
Small improvements generate faster feedback, involve everyone, compound over time, and sustain gains better than large step-changes
D
Large improvements only work in manufacturing, not services
✓ Kaizen's power comes from compounding: 1% better per day = 37× better in a year. But the deeper reason is cultural — when everyone makes small improvements every day, it builds capability, engagement, and organisational learning. Large innovation projects that succeed often regress without Kaizen to sustain and extend the gains. Small + frequent + everyone = sustainable improvement.
Question 15 of 15 — Tool Selection
Your team has a defect rate of 12% that has persisted for 2 years despite multiple attempts to fix it. Data exists going back 18 months. Which approach is most appropriate?
A
Another PDCA cycle — the previous ones weren't done properly
B
A Kaizen event — 5 days of intensive focus will solve it
C
A full DMAIC project — the problem is complex, persistent, and has sufficient data for statistical analysis
D
A Pareto chart — prioritise the top defect types and fix those
✓ A persistent, complex, high-impact problem with 18 months of data is exactly the situation DMAIC is designed for. Prior PDCA failures suggest the root cause has not been properly identified — which is precisely what the Analyze phase of DMAIC addresses through statistical hypothesis testing. The Pareto chart is part of the Analyze phase, not a standalone solution.