Lesson 9 of 10 · Intermediate

Result Validation & Troubleshooting Different Results

Learn to validate your LCA results, understand why different studies give different answers, troubleshoot unexpected results, and handle uncertainty properly.

25 min · Updated Jan 15, 2025

Prerequisites:

impact-assessment-fundamentals

Result Validation & Troubleshooting

"Why do different LCA studies of the same product give different results?" and "How do I validate my LCA results?" are questions that reveal the real-world complexity of LCA. This guide provides systematic approaches to validation, troubleshooting, and uncertainty management.

Understanding Result Variability

Why LCA Studies Differ

Different results for the same product are normal. Major sources of variation:

| Source | Typical Variation | Example |
|---|---|---|
| System boundary | 20-50%+ | Including/excluding use phase |
| Functional unit | 10-100% | Per kg vs. per function |
| Geographic scope | 10-50% | US vs. EU electricity |
| Database choice | 10-30% | ecoinvent vs. GaBi |
| LCIA method | 5-30% | ReCiPe vs. CML |
| Allocation | 20-50% | Mass vs. economic |
| Data vintage | 10-30% | 2015 vs. 2023 data |
| Technology assumptions | 20-50%+ | Current vs. future tech |

When Differences Are a Problem

Acceptable variation: Results differ but conclusions are consistent

  • "Both studies show manufacturing dominates"
  • "Both rank Product A better than B"

Problematic variation: Results lead to opposite conclusions

  • "Study 1 says A is better; Study 2 says B is better"
  • Requires investigation to understand why

FAQ: Validation and Comparison

"Why do different LCA studies of the same product give different results?"

Root cause investigation checklist:

1. Check system boundaries

Study A: Cradle-to-gate = 50 kg CO₂-eq
Study B: Cradle-to-grave = 150 kg CO₂-eq

Not comparable! B includes use phase (electricity).

2. Check functional units

Study A: 1 kg of product = 5 kg CO₂-eq
Study B: 1 unit (0.5 kg) = 3 kg CO₂-eq

Study A rescaled to one 0.5 kg unit = 2.5 kg CO₂-eq (differs from Study B's 3 kg!)

3. Check geographic context

Study A: Made in Norway (hydro electricity)
Study B: Made in Poland (coal electricity)

Same product, very different manufacturing impact.

4. Check data sources

Study A: Uses ecoinvent 3.8 Cut-off
Study B: Uses GaBi 2022 databases

Different background data = different results.

5. Check methodological choices

Study A: Economic allocation for co-products
Study B: Mass allocation for co-products

Allocation can shift 30-50% of impacts.
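The boundary and functional-unit checks above lend themselves to a quick script. A minimal sketch, assuming hypothetical study records (the field names and numbers are illustrative, not a standard schema):

```python
# Hypothetical study records; fields and values are illustrative only.
studies = {
    "Study A": {"boundary": "cradle-to-gate", "gwp_kg_co2e": 5.0, "fu_mass_kg": 1.0},
    "Study B": {"boundary": "cradle-to-gate", "gwp_kg_co2e": 3.0, "fu_mass_kg": 0.5},
}

def per_unit_gwp(study, unit_mass_kg):
    """Rescale a result to a common functional unit of `unit_mass_kg`."""
    return study["gwp_kg_co2e"] / study["fu_mass_kg"] * unit_mass_kg

# Flag boundary mismatches before comparing numbers at all.
boundaries = {s["boundary"] for s in studies.values()}
if len(boundaries) > 1:
    print("Boundary mismatch, results are not directly comparable:", boundaries)

for name, study in studies.items():
    print(f"{name}: {per_unit_gwp(study, 0.5):.2f} kg CO2-eq per 0.5 kg unit")
```

With the numbers from step 2 above, Study A comes out at 2.5 kg CO₂-eq per 0.5 kg unit versus 3.0 for Study B: a real difference, but much smaller than the raw figures suggest.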

"How do I validate my LCA results?"

Validation strategy: Multiple checks

Level 1: Sanity checks

  • Are magnitudes reasonable?
  • Do results make physical sense?
  • Is there anything obviously wrong?

Level 2: Benchmark comparison

  • Compare to published studies
  • Compare to EPD benchmarks
  • Compare to industry averages

Level 3: Internal consistency

  • Does hotspot analysis make sense?
  • Are life cycle stage contributions logical?
  • Does sensitivity analysis behave as expected?

Level 4: External review

  • Peer review (internal or external)
  • Critical review (for ISO compliance)
  • Stakeholder feedback

"Why does my land use impact differ between SimaPro and openLCA?"

Common causes of software differences:

1. LCIA method version

SimaPro: ReCiPe 2016 v1.1
openLCA: ReCiPe 2016 v1.03

Different characterization factors!

2. Flow mapping differences

ecoinvent flow: "Occupation, forest, natural"
SimaPro mapping: → Forest land occupation
openLCA mapping: → May differ or be missing

3. Characterization factor coverage

Land use has many sub-categories
Not all flows may be characterized
Check "uncharacterized flows" in results

4. Background data linking

Default providers may differ
"Electricity, medium voltage" may link differently
Check linked processes explicitly

Debugging steps:

  1. Export results at the inventory level (before characterization)
  2. Compare key elementary flows
  3. Check if the same flows exist in both
  4. Apply characterization manually to isolate the difference
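Step 4, applying characterization manually, is just a weighted sum of elementary flows. A sketch with made-up flow amounts and characterization factors (the real values come from your exported inventory and LCIA method):

```python
# Exported inventory (elementary flow -> amount); numbers are illustrative.
inventory = {
    "Occupation, forest, natural": 12.0,   # m2*a
    "Occupation, arable land": 3.0,        # m2*a
    "Carbon dioxide, fossil": 8.5,         # kg
}

# Characterization factors for one land-use category (illustrative values).
land_use_cfs = {
    "Occupation, forest, natural": 1.0,
    "Occupation, arable land": 1.2,
}

# Score = sum of amount x factor; flows without a factor contribute nothing.
characterized = sum(amount * land_use_cfs.get(flow, 0.0)
                    for flow, amount in inventory.items())
uncharacterized = [flow for flow in inventory if flow not in land_use_cfs]

print(f"Land use score: {characterized:.1f}")
print("Flows without a factor:", uncharacterized)
```

If the two tools report different scores but the same inventory, the difference must sit in the factors or the flow mapping; listing uncharacterized flows, as here, often exposes the mapping gap directly.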

"How do I handle uncertainty in my results?"

Types of uncertainty:

| Type | Source | Handling |
|---|---|---|
| Parameter | Data variability | Monte Carlo, sensitivity |
| Scenario | Choice of assumptions | Scenario analysis |
| Model | Methodological choices | Compare methods |

Practical uncertainty approaches:

Sensitivity analysis (minimum): Test key parameters ±20-50%:

  • Does the conclusion change?
  • What's the break-even point?

Scenario analysis: Test alternative assumptions:

  • Best case / worst case
  • Current vs. future technology
  • Different geographic contexts

Monte Carlo simulation (advanced):

  • Requires uncertainty data for inputs
  • Runs 1,000+ iterations
  • Produces probability distributions

Reporting uncertainty:

Result: 100 kg CO₂-eq
Sensitivity range: 80-130 kg CO₂-eq (key parameters ±20%)
Monte Carlo 95% CI: 75-140 kg CO₂-eq
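A Monte Carlo run of the kind reported above can be sketched in a few lines. This is a toy model, assuming only two parameters with normally distributed uncertainty (means, spreads, and emission factors are all invented for illustration):

```python
import random
import statistics

random.seed(42)  # reproducible draws

# Illustrative inputs: (mean, relative standard deviation), not real data.
params = {"electricity_kwh": (200.0, 0.10), "steel_kg": (50.0, 0.20)}
factors = {"electricity_kwh": 0.4, "steel_kg": 2.0}  # kg CO2-eq per unit

def one_run():
    """Draw each parameter from its distribution and total the footprint."""
    return sum(random.gauss(mean, mean * rel_sd) * factors[p]
               for p, (mean, rel_sd) in params.items())

results = sorted(one_run() for _ in range(5000))
lo = results[int(0.025 * len(results))]   # 2.5th percentile
hi = results[int(0.975 * len(results))]   # 97.5th percentile
print(f"Mean: {statistics.mean(results):.0f} kg CO2-eq, 95% CI: {lo:.0f}-{hi:.0f}")
```

Dedicated LCA software does this against correlated database uncertainties; the point of the sketch is only that the output is a distribution and an interval, not a single number.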

Validation Benchmarks

Quick Sanity Checks

Carbon footprint ballpark values:

| Product | Typical GWP (kg CO₂-eq) |
|---|---|
| 1 kg steel | 1.5 - 2.5 |
| 1 kg aluminum | 10 - 15 |
| 1 kg plastic (generic) | 2 - 4 |
| 1 kg paper | 1 - 2 |
| 1 kWh electricity (EU) | 0.3 - 0.5 |
| 1 tkm truck | 0.05 - 0.15 |
| 1 kg beef | 20 - 50 |
| 1 kg vegetables | 0.5 - 2 |

If your results are 10× different, investigate!
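The 10× rule of thumb is easy to automate against a benchmark table. A minimal sketch (benchmark keys and the tolerance are assumptions, taken from the ballpark table above):

```python
# Ballpark GWP ranges in kg CO2-eq per reference unit, from the table above.
benchmarks = {
    "steel_per_kg": (1.5, 2.5),
    "aluminum_per_kg": (10.0, 15.0),
    "electricity_eu_per_kwh": (0.3, 0.5),
}

def sanity_check(product, result, tolerance=10.0):
    """Flag results more than `tolerance` times outside the ballpark range."""
    lo, hi = benchmarks[product]
    if result < lo / tolerance or result > hi * tolerance:
        return f"INVESTIGATE: {result} is >{tolerance}x outside {lo}-{hi}"
    return "plausible"

print(sanity_check("steel_per_kg", 2.1))    # within the published range
print(sanity_check("steel_per_kg", 210.0))  # suspicious: kg vs. tonne mix-up?
```

A flagged value does not prove an error, but a 100-fold deviation from published steel footprints almost always traces back to a unit or scaling mistake.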

Contribution Analysis Check

Typical life cycle stage contributions:

| Product Type | Usually Dominant Stage |
|---|---|
| Energy-using products | Use phase (60-90%) |
| Food products | Agriculture (50-80%) |
| Metals/materials | Raw material production (60-80%) |
| Chemicals | Production + feedstock (50-70%) |
| Textiles | Production + use (washing) |
| Buildings | Use phase (heating/cooling) |
| Packaging | Production (if single-use) |

Red flags:

  • Transport dominates (>30%) for heavy products → Check distances
  • End-of-life dominates → Check waste treatment assumptions
  • A minor input dominates → Check for data entry errors
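Red-flag rules like these can run automatically over a contribution analysis. A sketch, assuming stage shares exported as fractions of total GWP (the numbers are invented):

```python
# Life cycle stage shares of total GWP for one product (illustrative values).
contributions = {"raw materials": 0.35, "manufacturing": 0.20,
                 "transport": 0.38, "use": 0.05, "end of life": 0.02}

dominant = max(contributions, key=contributions.get)
flags = []
if contributions.get("transport", 0.0) > 0.30:
    flags.append("Transport >30% of total: check distances and modes")
if dominant == "end of life":
    flags.append("End-of-life dominates: check waste treatment assumptions")

for flag in flags:
    print(flag)
```

Here the transport share of 38% trips the first rule, which for a heavy product would prompt a look at assumed distances before anything else.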

Troubleshooting Common Problems

Problem 1: Results are way too high

Possible causes:

  • Unit error (kg vs. tonnes, kWh vs. MWh)
  • Wrong process selected (per unit vs. per kg)
  • Scaling error in custom process
  • Double-counting (same flow entered twice)

Fix: Verify units at each step, check process reference unit

Problem 2: Results are way too low

Possible causes:

  • Missing important processes
  • Wrong functional unit scaling
  • Incomplete inventory
  • Zero values for key flows

Fix: Mass balance check, compare inventory to known benchmarks

Problem 3: One category dominates unexpectedly

Possible causes:

  • Uncharacterized flows defaulting to one category
  • Unusual background process
  • Data quality issue in database

Fix: Trace the contribution back through process tree

Problem 4: Negative results

Possible causes:

  • Recycling credits (intended)
  • System expansion credits
  • Data error (negative input)

Fix: Review all credit-giving processes, verify system expansion logic

Problem 5: Toxicity categories give unexpected results

Possible causes:

  • Toxicity methods are uncertain
  • Minor flows can have large characterization factors
  • Missing characterization for some flows

Fix: Report toxicity results with appropriate caveats

Systematic Validation Protocol

Pre-Analysis Validation

Before running LCIA:

☐ All material inputs have mass balance (in ≈ out + waste)
☐ Energy inputs are reasonable for process type
☐ Transport distances are realistic
☐ All flows are linked to database processes
☐ No orphan flows or missing links
☐ Units are consistent throughout
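The mass-balance item in this checklist is simple arithmetic and worth scripting for every unit process. A sketch with illustrative flows (the 5% threshold is a judgment call, not a standard):

```python
# Inputs and outputs of one unit process in kg (illustrative values).
inputs_kg = {"polymer": 1.05, "additives": 0.02}
outputs_kg = {"product": 1.00, "process waste": 0.06}

total_in = sum(inputs_kg.values())
total_out = sum(outputs_kg.values())
imbalance = abs(total_in - total_out) / total_in

# A few percent imbalance is usually tolerable (moisture, rounding);
# large gaps point to missing flows or unit errors.
if imbalance > 0.05:
    print(f"Mass balance off by {imbalance:.1%}: check for missing flows")
else:
    print(f"Mass balance OK (imbalance {imbalance:.1%})")
```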

Post-Analysis Validation

After getting results:

☐ Overall magnitude is plausible (benchmark check)
☐ Contribution analysis makes sense
☐ No single minor input dominates unexpectedly
☐ Compare to at least one published reference
☐ Sensitivity analysis shows expected behavior
☐ Alternative LCIA method gives consistent ranking

Documentation Checklist

For transparency:

☐ Data sources listed for all inputs
☐ Data quality indicators applied
☐ Assumptions documented
☐ Limitations acknowledged
☐ Sensitivity/uncertainty discussed
☐ Comparison to literature provided

When to Seek External Review

Critical review is required for:

  • Comparative assertions disclosed to the public (ISO 14044)
  • EPDs and verified claims
  • Policy-supporting studies

Critical review is recommended for:

  • High-stakes internal decisions
  • First LCA study in a new domain
  • Studies with unusual methodological choices

Finding a reviewer:

  • Academic institutions
  • Certified LCA practitioners
  • LCA consulting firms
  • Program operators (for EPDs)

Comparing to Literature

How to Use Published LCA Studies

Step 1: Find comparable studies

  • Same product category
  • Similar geography (ideally)
  • Recent (within 5 years preferably)

Step 2: Create comparison table

| Study | System Boundary | FU | GWP Result | Database |
|---|---|---|---|---|
| This study | Cradle-to-grave | 1 unit | 15 kg CO₂-eq | ecoinvent 3.10 |
| Smith 2022 | Cradle-to-grave | 1 unit | 18 kg CO₂-eq | ecoinvent 3.8 |
| Jones 2021 | Cradle-to-gate | 1 kg | 5 kg CO₂-eq | GaBi |

Step 3: Explain differences

  • If within ±30%, likely normal variation
  • If >50% different, investigate methodology differences
  • Document reasons for deviations
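The ±30%/50% screening in step 3 reduces to a percent-difference check. A sketch using the figures from the comparison table above:

```python
def pct_diff(mine, reference):
    """Relative difference of my result vs. a published reference value."""
    return (mine - reference) / reference * 100

# Values from the comparison table above (kg CO2-eq per unit).
this_study, smith_2022 = 15.0, 18.0
d = pct_diff(this_study, smith_2022)

if abs(d) <= 30:
    verdict = "likely normal variation"
elif abs(d) > 50:
    verdict = "investigate methodology differences"
else:
    verdict = "borderline: document possible causes"
print(f"{d:+.0f}% vs Smith 2022: {verdict}")
```

The 15 vs. 18 comparison lands at about -17%, comfortably inside the normal-variation band; Jones 2021 would first need rescaling to the same functional unit and boundary before the check means anything.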

Resources for Benchmarking

| Product Type | Benchmark Source |
|---|---|
| Construction | EPD databases (EPD International, IBU) |
| Food | Poore & Nemecek 2018, GFLI |
| Electronics | GreenDelta data, Apple EPRs |
| Packaging | Industry associations |
| Chemicals | Plastics Europe, sector EPDs |
| General | Meta-analyses in academic literature |

Key Takeaways

  1. Different results are normal—understand why before concluding error
  2. Systematic validation catches problems—use checklists
  3. Benchmarking provides reality check—compare to published values
  4. Uncertainty is inherent—report it, don't hide it
  5. Software differences happen—trace to characterization and flow mapping
  6. Documentation enables review—transparent methods build credibility

Validation Flowchart

Results obtained
       │
       ▼
Sanity check: Reasonable magnitude?
       │          │
      No          Yes
       │          │
       ▼          ▼
Check units,     Contribution analysis sensible?
fix errors            │          │
       │            No          Yes
       │             │           │
       └──────────►──┘           ▼
                         Compare to benchmark
                              │          │
                          >50% off    Within range
                              │          │
                              ▼          ▼
                      Investigate    Sensitivity analysis
                      differences           │
                              │             ▼
                              │    Conclusions robust?
                              │         │        │
                              │        No        Yes
                              │         │        │
                              │         ▼        ▼
                              └──►Report caveats   Valid result

Next Steps

The final lesson in this track is our Community Resources Guide—connecting you with forums, training programs, and the broader LCA community for continued learning.