Result Validation & Troubleshooting Different Results
Learn to validate your LCA results, understand why different studies give different answers, troubleshoot unexpected results, and handle uncertainty properly.
"Why do different LCA studies of the same product give different results?" and "How do I validate my LCA results?" are questions that reveal the real-world complexity of LCA. This guide provides systematic approaches to validation, troubleshooting, and uncertainty management.
Understanding Result Variability
Why LCA Studies Differ
Same product, different results is normal. Major sources of variation:
| Source | Typical Variation | Example |
|---|---|---|
| System boundary | 20-50%+ | Including/excluding use phase |
| Functional unit | 10-100% | Per kg vs. per function |
| Geographic scope | 10-50% | US vs. EU electricity |
| Database choice | 10-30% | ecoinvent vs. GaBi |
| LCIA method | 5-30% | ReCiPe vs. CML |
| Allocation | 20-50% | Mass vs. economic |
| Data vintage | 10-30% | 2015 vs. 2023 data |
| Technology assumptions | 20-50%+ | Current vs. future tech |
When Differences Are a Problem
Acceptable variation: Results differ but conclusions are consistent
- "Both studies show manufacturing dominates"
- "Both rank Product A better than B"
Problematic variation: Results lead to opposite conclusions
- "Study 1 says A is better; Study 2 says B is better"
- Requires investigation to understand why
FAQ: Validation and Comparison
"Why do different LCA studies of the same product give different results?"
Root cause investigation checklist:
1. Check system boundaries
Study A: Cradle-to-gate = 50 kg CO₂-eq
Study B: Cradle-to-grave = 150 kg CO₂-eq
Not comparable! B includes use phase (electricity).
2. Check functional units
Study A: 1 kg of product = 5 kg CO₂-eq
Study B: 1 unit (0.5 kg) = 3 kg CO₂-eq
Study A, rescaled per unit (0.5 kg) = 2.5 kg CO₂-eq, so A is actually lower (see the sketch after this checklist)
3. Check geographic context
Study A: Made in Norway (hydro electricity)
Study B: Made in Poland (coal electricity)
Same product, very different manufacturing impact.
4. Check data sources
Study A: Uses ecoinvent 3.8 Cut-off
Study B: Uses GaBi 2022 databases
Different background data = different results.
5. Check methodological choices
Study A: Economic allocation for co-products
Study B: Mass allocation for co-products
Allocation can shift 30-50% of impacts.
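A minimal Python sketch of the functional-unit normalization in item 2, using the hypothetical numbers from the checklist above:

```python
def normalize_gwp(gwp: float, reported_amount: float, target_amount: float) -> float:
    """Rescale a GWP result from its reported amount to a target amount."""
    return gwp * (target_amount / reported_amount)

# Study A reports 5 kg CO2-eq per 1 kg of product.
# Study B reports 3 kg CO2-eq per 1 unit, where 1 unit = 0.5 kg.
study_a_per_unit = normalize_gwp(gwp=5.0, reported_amount=1.0, target_amount=0.5)

print(f"Study A: {study_a_per_unit:.1f} kg CO2-eq per unit")  # 2.5
print(f"Study B: {3.0:.1f} kg CO2-eq per unit")               # 3.0
```

Only after both results are expressed per the same functional unit does the comparison become meaningful.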
When comparing studies: Create a methodology comparison table showing boundary, FU, database, allocation, and LCIA method for each study. Differences will become obvious.
"How do I validate my LCA results?"
Validation strategy: Multiple checks
Level 1: Sanity checks
- Are magnitudes reasonable?
- Do results make physical sense?
- Is there anything obviously wrong?
Level 2: Benchmark comparison
- Compare to published studies
- Compare to EPD benchmarks
- Compare to industry averages
Level 3: Internal consistency
- Does hotspot analysis make sense?
- Are life cycle stage contributions logical?
- Does sensitivity analysis behave as expected?
Level 4: External review
- Peer review (internal or external)
- Critical review (for ISO compliance)
- Stakeholder feedback
"Why does my land use impact differ between SimaPro and openLCA?"
Common causes of software differences:
1. LCIA method version
SimaPro: ReCiPe 2016 v1.1
openLCA: ReCiPe 2016 v1.03
Different characterization factors!
2. Flow mapping differences
ecoinvent flow: "Occupation, forest, natural"
SimaPro mapping: → Forest land occupation
openLCA mapping: → May differ or be missing
3. Characterization factor coverage
Land use has many sub-categories
Not all flows may be characterized
Check "uncharacterized flows" in results
4. Background data linking
Default providers may differ
"Electricity, medium voltage" may link differently
Check linked processes explicitly
Debugging steps:
- Export results at the inventory level (before characterization)
- Compare key elementary flows
- Check if the same flows exist in both
- Apply characterization manually to isolate the difference
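A minimal sketch of the last debugging step, assuming you have exported the inventory and the characterization factors from each tool. Flow names follow the ecoinvent convention mentioned above; all amounts and factor values are illustrative placeholders, not real method data:

```python
# Elementary flows exported at the inventory level (flow name -> amount in m2*a)
inventory = {
    "Occupation, forest, natural": 1.2,
    "Occupation, annual crop": 0.4,
}

# Characterization factors as implemented in one tool's LCIA method
cfs = {
    "Occupation, forest, natural": 1.0,
    # "Occupation, annual crop" missing -> an uncharacterized flow
}

score = 0.0
for flow, amount in inventory.items():
    if flow in cfs:
        score += amount * cfs[flow]
    else:
        print(f"Uncharacterized flow: {flow} ({amount} m2*a)")

print(f"Manually characterized land use score: {score}")
```

Running this against both tools' exports quickly shows whether the discrepancy comes from the inventory, the factors, or unmapped flows.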
"How do I handle uncertainty in my results?"
Types of uncertainty:
| Type | Source | Handling |
|---|---|---|
| Parameter | Data variability | Monte Carlo, sensitivity |
| Scenario | Choice of assumptions | Scenario analysis |
| Model | Methodological choices | Compare methods |
Practical uncertainty approaches:
Sensitivity analysis (minimum): Test key parameters ±20-50%:
- Does the conclusion change?
- What's the break-even point?
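A minimal sketch of a one-at-a-time ±20% sensitivity test; the two-parameter model and its characterization factors are illustrative:

```python
baseline = {"electricity_kwh": 10.0, "steel_kg": 2.0}
factors = {"electricity_kwh": 0.4, "steel_kg": 2.0}  # kg CO2-eq per unit

def total_gwp(params: dict) -> float:
    return sum(amount * factors[name] for name, amount in params.items())

base = total_gwp(baseline)
for name in baseline:
    for delta in (-0.2, 0.2):
        perturbed = {**baseline, name: baseline[name] * (1 + delta)}
        change = (total_gwp(perturbed) - base) / base
        print(f"{name} {delta:+.0%}: total GWP {change:+.1%}")
```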
Scenario analysis: Test alternative assumptions:
- Best case / worst case
- Current vs. future technology
- Different geographic contexts
Monte Carlo simulation (advanced):
- Requires uncertainty data for inputs
- Runs 1,000+ iterations
- Produces probability distributions
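A minimal Monte Carlo sketch, assuming lognormal uncertainty on two inputs of the same illustrative model; real studies draw the distributions from database uncertainty (pedigree) data:

```python
import math
import random
import statistics

random.seed(42)  # reproducible illustration

def total_gwp(elec_kwh: float, steel_kg: float) -> float:
    return elec_kwh * 0.4 + steel_kg * 2.0  # illustrative factors

samples = []
for _ in range(10_000):
    # Lognormal inputs: medians 10 kWh and 2 kg, modest geometric spread
    elec = random.lognormvariate(math.log(10), 0.2)
    steel = random.lognormvariate(math.log(2), 0.1)
    samples.append(total_gwp(elec, steel))

samples.sort()
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"Median: {statistics.median(samples):.1f} kg CO2-eq")
print(f"95% CI: {lo:.1f} - {hi:.1f} kg CO2-eq")
```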
Reporting uncertainty:
Result: 100 kg CO₂-eq
Sensitivity range: 80-130 kg CO₂-eq (key parameters ±20%)
Monte Carlo 95% CI: 75-140 kg CO₂-eq
For comparative claims: For comparative assertions disclosed to the public, ISO 14044 requires an analysis of uncertainty and sensitivity to evaluate whether differences are significant. "A is better than B" is only defensible when the difference exceeds the combined uncertainty, i.e., when the confidence intervals do not substantially overlap.
Validation Benchmarks
Quick Sanity Checks
Carbon footprint ballpark values:
| Product | Typical GWP (kg CO₂-eq) |
|---|---|
| 1 kg steel | 1.5 - 2.5 |
| 1 kg aluminum | 10 - 15 |
| 1 kg plastic (generic) | 2 - 4 |
| 1 kg paper | 1 - 2 |
| 1 kWh electricity (EU) | 0.3 - 0.5 |
| 1 tkm truck | 0.05 - 0.15 |
| 1 kg beef | 20 - 50 |
| 1 kg vegetables | 0.5 - 2 |
If your results are 10× different, investigate!
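A minimal sketch of an automated ballpark check against a few rows of the table above (ranges copied from the table; product keys and the example result are illustrative):

```python
# Typical GWP ranges in kg CO2-eq per reference unit
BENCHMARKS = {
    "steel, 1 kg": (1.5, 2.5),
    "aluminum, 1 kg": (10.0, 15.0),
    "electricity EU, 1 kWh": (0.3, 0.5),
}

def sanity_check(product: str, result: float) -> str:
    low, high = BENCHMARKS[product]
    if low <= result <= high:
        return "within typical range"
    ratio = result / high if result > high else low / result
    return f"{ratio:.0f}x outside typical range - investigate!"

# A result of 2200 for 1 kg of steel suggests a kg/tonne mix-up:
print(sanity_check("steel, 1 kg", 2200.0))
```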
Contribution Analysis Check
Typical life cycle stage contributions:
| Product Type | Usually Dominant Stage |
|---|---|
| Energy-using products | Use phase (60-90%) |
| Food products | Agriculture (50-80%) |
| Metals/materials | Raw material production (60-80%) |
| Chemicals | Production + feedstock (50-70%) |
| Textiles | Production + use (washing) |
| Buildings | Use phase (heating/cooling) |
| Packaging | Production (if single-use) |
Red flags:
- Transport dominates (>30%) for heavy products → Check distances
- End-of-life dominates → Check waste treatment assumptions
- A minor input dominates → Check for data entry errors
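A minimal sketch that flags the first red flag above from a stage-level breakdown; the stage results are illustrative:

```python
# Stage-level GWP results in kg CO2-eq
stage_gwp = {
    "raw materials": 8.0,
    "production": 4.0,
    "transport": 7.0,
    "use": 1.0,
    "end of life": 0.5,
}

total = sum(stage_gwp.values())
for stage, value in sorted(stage_gwp.items(), key=lambda kv: -kv[1]):
    share = value / total
    note = "  <- check distances/modes" if stage == "transport" and share > 0.30 else ""
    print(f"{stage:>13}: {share:6.1%}{note}")
```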
Troubleshooting Common Problems
Problem 1: Results are way too high
Possible causes:
- Unit error (kg vs. tonnes, kWh vs. MWh)
- Wrong process selected (per unit vs. per kg)
- Scaling error in custom process
- Double-counting (same flow entered twice)
Fix: Verify units at each step, check process reference unit
Problem 2: Results are way too low
Possible causes:
- Missing important processes
- Wrong functional unit scaling
- Incomplete inventory
- Zero values for key flows
Fix: Mass balance check, compare inventory to known benchmarks
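A minimal sketch of the mass balance check suggested in the fix above; flow names, amounts, and the 5% tolerance are illustrative:

```python
# Inputs should roughly equal product plus waste (all in kg)
inputs_kg = {"polymer": 1.05, "additives": 0.02}
outputs_kg = {"product": 1.00, "process waste": 0.06}

total_in = sum(inputs_kg.values())
total_out = sum(outputs_kg.values())
imbalance = abs(total_in - total_out) / total_in

print(f"in: {total_in:.2f} kg, out: {total_out:.2f} kg, gap: {imbalance:.1%}")
if imbalance > 0.05:  # a 5% tolerance is a common rule of thumb
    print("Mass balance gap >5%: look for missing flows or unit errors")
```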
Problem 3: One category dominates unexpectedly
Possible causes:
- Uncharacterized flows defaulting to one category
- Unusual background process
- Data quality issue in database
Fix: Trace the contribution back through process tree
Problem 4: Negative results
Possible causes:
- Recycling credits (intended)
- System expansion credits
- Data error (negative input)
Fix: Review all credit-giving processes, verify system expansion logic
Problem 5: Toxicity categories give unexpected results
Possible causes:
- Toxicity methods are uncertain
- Minor flows can have large characterization factors
- Missing characterization for some flows
Fix: Report toxicity results with appropriate caveats
Systematic Validation Protocol
Pre-Analysis Validation
Before running LCIA:
☐ All material inputs have mass balance (in ≈ out + waste)
☐ Energy inputs are reasonable for process type
☐ Transport distances are realistic
☐ All flows are linked to database processes
☐ No orphan flows or missing links
☐ Units are consistent throughout
Post-Analysis Validation
After getting results:
☐ Overall magnitude is plausible (benchmark check)
☐ Contribution analysis makes sense
☐ No single minor input dominates unexpectedly
☐ Compare to at least one published reference
☐ Sensitivity analysis shows expected behavior
☐ Alternative LCIA method gives consistent ranking
Documentation Checklist
For transparency:
☐ Data sources listed for all inputs
☐ Data quality indicators applied
☐ Assumptions documented
☐ Limitations acknowledged
☐ Sensitivity/uncertainty discussed
☐ Comparison to literature provided
When to Seek External Review
Critical review is required for:
- Comparative assertions disclosed to public (ISO 14044)
- EPDs and verified claims
- Policy-supporting studies
Critical review is recommended for:
- High-stakes internal decisions
- First LCA study in a new domain
- Studies with unusual methodological choices
Finding a reviewer:
- Academic institutions
- Certified LCA practitioners
- LCA consulting firms
- Program operators (for EPDs)
Comparing to Literature
How to Use Published LCA Studies
Step 1: Find comparable studies
- Same product category
- Similar geography (ideally)
- Recent (within 5 years preferably)
Step 2: Create comparison table
| Study | System Boundary | FU | GWP Result | Database |
|---|---|---|---|---|
| This study | Cradle-grave | 1 unit | 15 kg CO₂-eq | ecoinvent 3.10 |
| Smith 2022 | Cradle-grave | 1 unit | 18 kg CO₂-eq | ecoinvent 3.8 |
| Jones 2021 | Cradle-gate | 1 kg | 5 kg CO₂-eq | GaBi |
Step 3: Explain differences
- If within ±30%, likely normal variation
- If >50% different, investigate methodology differences
- Document reasons for deviations
Resources for Benchmarking
| Product Type | Benchmark Source |
|---|---|
| Construction | EPD databases (EPD International, IBU) |
| Food | Poore & Nemecek 2018, GFLI |
| Electronics | GreenDelta data, Apple Product Environmental Reports |
| Packaging | Industry associations |
| Chemicals | Plastics Europe, sector EPDs |
| General | Meta-analyses in academic literature |
Key Takeaways
- Different results are normal—understand why before concluding error
- Systematic validation catches problems—use checklists
- Benchmarking provides reality check—compare to published values
- Uncertainty is inherent—report it, don't hide it
- Software differences happen—trace to characterization and flow mapping
- Documentation enables review—transparent methods build credibility
Validation Flowchart
```
Results obtained
       │
       ▼
Sanity check: magnitude reasonable?
       │ No → check units, fix errors, re-run
       ▼ Yes
Contribution analysis sensible?
       │ No → trace contributions, fix errors, re-run
       ▼ Yes
Compare to benchmark
       │ >50% off → investigate methodology differences,
       │            then report with caveats
       ▼ within range
Sensitivity analysis
       │
       ▼
Conclusions robust?
       │ No → report with caveats
       ▼ Yes
Valid result
```
Next Steps
The final lesson in this track is our Community Resources Guide—connecting you with forums, training programs, and the broader LCA community for continued learning.