Lesson 1 of 5 · Advanced

Uncertainty Analysis in LCA

Understand the sources of uncertainty in Life Cycle Assessment and learn methods to quantify and communicate uncertainty in your results.

30 min · Updated Jan 15, 2025

Prerequisites: What is LCA · Life Cycle Inventory Analysis · Impact Assessment Fundamentals

Every LCA result carries uncertainty. Data may be incomplete, models may oversimplify reality, and future scenarios may unfold differently than assumed. Understanding and communicating this uncertainty is essential for responsible LCA practice.

Why Uncertainty Matters

Consider two products being compared:

  • Product A: GWP = 45 kg CO₂ eq
  • Product B: GWP = 50 kg CO₂ eq

Is Product A really better? That depends on the uncertainty in both values. If the uncertainty ranges overlap significantly, the apparent 10% difference may not be meaningful.

Types of Uncertainty

Parameter Uncertainty

Uncertainty in input data values—the most common type in LCA:

  • Material quantities may be estimated rather than measured
  • Energy consumption varies between production batches
  • Emission factors come from literature with inherent variability
  • Database values represent averages across facilities/regions

Example: You estimate electricity use at 2.5 kWh per product. The actual value could range from 2.0 to 3.0 kWh depending on efficiency variations.

Model Uncertainty

Uncertainty in how the system is represented:

  • Simplified process flows may omit minor inputs
  • Linear scaling may not reflect real process behavior
  • System boundaries exclude some processes
  • Allocation choices among multiple products introduce modeling assumptions

Example: Your model assumes linear scaling of emissions with production volume, but reality may involve economies of scale.

Scenario Uncertainty

Uncertainty about future conditions or alternative situations:

  • Product lifetime assumptions
  • End-of-life pathways
  • Background system evolution (e.g., electricity grid)
  • User behavior variations

Example: You assume a 10-year product lifetime, but actual use could range from 5 to 15 years.

Characterization Factor Uncertainty

Uncertainty in LCIA conversion factors:

  • Scientific uncertainty in environmental fate models
  • Spatial variability in impact pathways
  • Temporal assumptions (e.g., 100-year GWP horizon choice)

Quantifying Parameter Uncertainty

Probability Distributions

Express uncertain parameters as probability distributions rather than single values:

| Distribution | Use When | Parameters |
|---|---|---|
| Normal | Symmetric variation around mean | Mean, standard deviation |
| Lognormal | Positive values with right skew | Geometric mean, geometric SD |
| Triangular | Known min, max, and most likely | Minimum, mode, maximum |
| Uniform | Only bounds known | Minimum, maximum |

Lognormal is most common in LCA because:

  • Many quantities are strictly positive
  • Variation is often proportional to magnitude
  • Multiplicative errors are common in measurement
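
As a minimal sketch, all four distributions can be sampled with NumPy (the parameter values here are illustrative; note that NumPy's lognormal takes the mean and SD of the underlying normal, i.e. the logs of the geometric mean and geometric SD):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
normal = rng.normal(loc=2.5, scale=0.25, size=n)             # mean, SD
lognormal = rng.lognormal(np.log(2.5), np.log(1.2), size=n)  # ln(geo mean), ln(geo SD)
triangular = rng.triangular(2.0, 2.5, 3.0, size=n)           # min, mode, max
uniform = rng.uniform(2.0, 3.0, size=n)                      # min, max
```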

The Pedigree Matrix

The pedigree matrix systematically assesses data quality across multiple dimensions:

| Indicator | 1 (Best) | 2 | 3 | 4 | 5 (Worst) |
|---|---|---|---|---|---|
| Reliability | Verified data | Non-verified | Qualified estimate | Non-qualified estimate | Unknown |
| Completeness | All sites | >50% sites | <50% sites | One site | Theoretical |
| Temporal | <3 years | <6 years | <10 years | <15 years | >15 years |
| Geographic | Same area | Same region | Similar region | Different region | Unknown |
| Technological | Same tech | Similar tech | Related tech | Older tech | Unknown |

Each score maps to an uncertainty factor. The combined factors determine the parameter's overall uncertainty.

Example pedigree assessment:

| Dimension | Score | Reasoning |
|---|---|---|
| Reliability | 2 | Data from supplier, not independently verified |
| Completeness | 2 | Data from 3 of 5 production lines |
| Temporal | 1 | Collected this year |
| Geographic | 1 | Same production facility |
| Technological | 1 | Same process configuration |
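
As a sketch of how such scores become quantitative uncertainty, here is an aggregation rule in the style of earlier ecoinvent versions, with hypothetical uncertainty factors for the scores above (the real factors depend on the pedigree version and database you use):

```python
import math

# Hypothetical uncertainty factors for the scores above (Rel=2, Comp=2,
# Temp=1, Geo=1, Tech=1); look up actual values in your database's pedigree table.
factors = [1.05, 1.02, 1.00, 1.00, 1.00]

# Combine the ln-squared factors into one geometric SD; a basic
# (process-type) uncertainty factor is usually included as well.
sd_g95 = math.exp(math.sqrt(sum(math.log(f) ** 2 for f in factors)))
print(f"Combined uncertainty factor: {sd_g95:.3f}")
```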

Correlation Considerations

Parameters may be correlated:

  • If electricity use increases, so might cooling water use
  • Material efficiency affects both input mass and waste output
  • Regional data shares common infrastructure

Ignoring correlations can underestimate or overestimate total uncertainty.
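
Where correlations matter, one approach (a sketch, not the only option) is to sample correlated underlying normals and transform them into lognormals:

```python
import numpy as np

rng = np.random.default_rng(1)
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])  # assumed electricity/cooling-water correlation
# Correlated standard normals via Cholesky factorization
z = rng.standard_normal((10_000, 2)) @ np.linalg.cholesky(corr).T
electricity = np.exp(np.log(2.5) + np.log(1.2) * z[:, 0])     # kWh, geo SD 1.2
cooling_water = np.exp(np.log(10.0) + np.log(1.3) * z[:, 1])  # hypothetical litres
print(np.corrcoef(electricity, cooling_water)[0, 1])  # ~0.8 (slightly lower after exp)
```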

Conducting a Data Quality Assessment (DQA)

A Data Quality Assessment (DQA) is a systematic evaluation of the quality of the data used in your LCA. While Monte Carlo simulation (covered below) provides statistical uncertainty propagation, DQA is often the more practical choice because:

  • It's required or recommended by most LCA standards and PCRs
  • It doesn't require specialized software or statistical expertise
  • It can be completed with the pedigree matrix scores you have likely already assigned
  • Results are easier to explain to non-technical stakeholders
  • It identifies where to focus data improvement efforts

Step-by-Step DQA Process

Step 1: Identify Key Data Points

Focus your DQA on data that matters most. Use contribution analysis to identify:

  • Processes contributing >5% to any impact category
  • Foreground data (your primary data) vs. background data (database values)
  • Data where you had to make assumptions or use proxies

You don't need to assess every parameter—prioritize based on influence on results.

Step 2: Score Each Data Point Using Pedigree Matrix

For each key data point, assign scores (1-5) across all five pedigree dimensions:

| Indicator | 1 (Best) | 2 | 3 | 4 | 5 (Worst) |
|---|---|---|---|---|---|
| Reliability | Verified measurement | Verified estimate | Non-verified data | Qualified estimate | Non-qualified estimate |
| Completeness | All relevant sites/periods | >50% of sites | <50% of sites | Single site only | Theoretical/stoichiometric |
| Temporal | <3 years old | 3-6 years | 6-10 years | 10-15 years | >15 years or unknown |
| Geographic | Same area | Larger area including site | Similar area | Slightly similar area | Unknown or very different |
| Technological | Same process | Included processes | Similar processes | Related processes | Unknown or very different |

Step 3: Document Your Reasoning

For each score, briefly document why you assigned that value. This is critical for:

  • Reproducibility and transparency
  • Third-party review
  • Identifying improvement opportunities

Example DQA documentation:

| Parameter | Value | Source | Rel | Comp | Temp | Geo | Tech | Notes |
|---|---|---|---|---|---|---|---|---|
| Electricity (assembly) | 2.5 kWh/unit | Meter data, 2024 | 1 | 2 | 1 | 1 | 1 | Verified meter, 3 of 5 lines |
| Steel (housing) | 0.8 kg/unit | BOM + purchasing | 2 | 1 | 1 | 2 | 1 | BOM verified, supplier in region |
| Transport (inbound) | 450 km | Estimated avg | 3 | 3 | 2 | 2 | 2 | Mix of suppliers, estimated |
| PCB manufacturing | ecoinvent 3.9 | Database | 2 | 2 | 2 | 3 | 2 | Generic Asia data |

Step 4: Calculate Data Quality Indicators

Aggregate scores into summary indicators. Common approaches:

Simple average (most common):

DQR = (Rel + Comp + Temp + Geo + Tech) / 5

Weighted average (when some dimensions matter more):

DQR = (w₁×Rel + w₂×Comp + w₃×Temp + w₄×Geo + w₅×Tech) / Σw

PEF studies use specific weights: Reliability (3), Completeness (2), Temporal (2), Geographic (2), Technological (2).
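
A minimal sketch of both aggregations (the dqr helper is hypothetical; the weights follow the PEF weighting listed above):

```python
def dqr(rel, comp, temp, geo, tech, weights=(1, 1, 1, 1, 1)):
    """Weighted data quality rating; equal weights give the simple average."""
    scores = (rel, comp, temp, geo, tech)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Electricity row from the example table: scores 1, 2, 1, 1, 1
print(dqr(1, 2, 1, 1, 1))                           # simple average -> 1.2
print(dqr(1, 2, 1, 1, 1, weights=(3, 2, 2, 2, 2)))  # PEF weights -> ~1.18
```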

Step 5: Interpret and Report Results

Create a summary showing data quality by life cycle stage or process:

| Life Cycle Stage | Avg DQR | Primary Data % | Key Gaps |
|---|---|---|---|
| Raw materials | 2.8 | 20% | Upstream supplier data |
| Manufacturing | 1.6 | 85% | Good primary data |
| Distribution | 3.2 | 40% | Distance estimates |
| Use phase | 2.4 | 60% | Energy assumptions |
| End of life | 3.5 | 10% | Generic EoL scenarios |

Interpretation guidelines:

  • DQR 1.0–2.0: High quality data, high confidence in results
  • DQR 2.0–3.0: Acceptable quality, moderate confidence
  • DQR 3.0–4.0: Lower quality, results should be interpreted cautiously
  • DQR >4.0: Poor quality, consider as screening-level only

Using DQA to Guide Improvement

DQA isn't just documentation—it's a roadmap for improving your study:

  1. Prioritize data collection: Focus on high-contribution processes with poor DQR scores
  2. Justify scope decisions: Poor data quality in a low-contribution process may justify exclusion
  3. Support sensitivity analysis: Test parameters with high uncertainty scores
  4. Set improvement targets: Aim to achieve DQR <3.0 for processes contributing >10% to results

DQA vs. Monte Carlo: When to Use Which

| Consideration | DQA | Monte Carlo |
|---|---|---|
| Required expertise | Basic | Statistical knowledge |
| Software needs | Spreadsheet | LCA software with MC capability |
| Time investment | Moderate | Higher |
| Output type | Quality scores, qualitative | Probability distributions, confidence intervals |
| Best for | Most studies, EPDs, screening | Comparative assertions, high-stakes decisions |
| Satisfies ISO 14044 | Yes | Yes |

Use DQA when:

  • Conducting product footprints or EPDs
  • Time/budget is limited
  • Audience prefers qualitative assessment
  • Monte Carlo isn't supported by your tools

Add Monte Carlo when:

  • Making public comparative assertions
  • Small differences between alternatives need statistical validation
  • Client or standard specifically requires it
  • You need confidence intervals for decision-making

Propagating Uncertainty: Monte Carlo Simulation

Monte Carlo simulation propagates parameter uncertainties through the LCA model to estimate result uncertainty.

How Monte Carlo Works

  1. Define distributions for uncertain parameters
  2. Sample random values from each distribution
  3. Calculate LCA results with sampled values
  4. Repeat many times (1,000-10,000 iterations)
  5. Analyze the distribution of results

Monte Carlo Process

A minimal runnable sketch in Python with NumPy, using the distributions above (the GWP model on the last line of the loop is a hypothetical placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
results = []
for _ in range(10_000):  # N iterations
    # Lognormal(geometric mean, geometric SD); Triangular(min, mode, max)
    electricity_use = rng.lognormal(np.log(2.5), np.log(1.2))   # kWh
    transport_distance = rng.triangular(400, 500, 800)          # km
    emission_factor = rng.lognormal(np.log(0.42), np.log(1.3))  # kg CO2 eq/kWh
    # ... sample any remaining uncertain parameters ...
    # Calculate GWP with sampled values (transport term is hypothetical)
    results.append(electricity_use * emission_factor + 1e-4 * transport_distance)

# Analyze the distribution of stored results
results = np.array(results)
print(f"mean={results.mean():.2f}, median={np.median(results):.2f}")
print(f"5th/95th percentiles: {np.percentile(results, [5, 95]).round(2)}")
```

Interpreting Monte Carlo Results

Results typically include:

| Statistic | Meaning |
|---|---|
| Mean | Average result across simulations |
| Median | 50th percentile (middle value) |
| Standard deviation | Spread of results |
| Coefficient of variation | SD/mean (relative spread) |
| Confidence interval | Range containing specified probability (e.g., 95%) |

For skewed distributions, median is often more representative than mean.
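
Given an array of stored iteration results, these statistics take a few lines of NumPy (the results array below is a synthetic placeholder):

```python
import numpy as np

# Placeholder results; in practice these come from your simulation
results = np.random.default_rng(2).lognormal(np.log(45), np.log(1.2), 10_000)
mean, median, sd = results.mean(), np.median(results), results.std()
lo, hi = np.percentile(results, [2.5, 97.5])  # 95% confidence interval
print(f"mean={mean:.1f}, median={median:.1f}, CV={sd / mean:.2f}, 95% CI=[{lo:.1f}, {hi:.1f}]")
```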

Comparative Probability

For product comparisons, Monte Carlo reveals the probability that one option outperforms another:

In 7,500 of 10,000 simulations, Product A had lower GWP than Product B.
→ 75% probability that A is better

This is more informative than comparing point estimates.
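
With paired iteration results for both products (the arrays below are synthetic placeholders), the probability is just the fraction of runs in which A wins:

```python
import numpy as np

rng = np.random.default_rng(3)
gwp_a = rng.lognormal(np.log(45), np.log(1.15), 10_000)  # placeholder results, kg CO2 eq
gwp_b = rng.lognormal(np.log(50), np.log(1.15), 10_000)
print(f"P(A < B) = {np.mean(gwp_a < gwp_b):.0%}")
```

Note that parameters shared by both products (e.g., the same electricity dataset) should be sampled once per iteration so the comparison is paired; sampling them independently inflates the apparent overlap.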

Sensitivity Analysis

While Monte Carlo quantifies total uncertainty, sensitivity analysis identifies which parameters matter most.

Local Sensitivity Analysis

Vary one parameter at a time while holding others constant:

  1. Baseline calculation with default values
  2. Vary one parameter (e.g., ±10%, ±20%)
  3. Record change in results
  4. Repeat for each parameter

Sensitivity ratio = (% change in result) / (% change in parameter)

Parameters with high sensitivity ratios deserve careful data collection.
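
A one-at-a-time sketch, assuming a hypothetical gwp() model function:

```python
baseline = {"electricity": 2.5, "transport": 450, "steel": 0.8}  # hypothetical values

def gwp(p):
    # Placeholder linear model; substitute your actual LCA calculation
    return 0.42 * p["electricity"] + 0.001 * p["transport"] + 2.1 * p["steel"]

base = gwp(baseline)
for name, value in baseline.items():
    perturbed = {**baseline, name: value * 1.10}     # vary one parameter by +10%
    ratio = ((gwp(perturbed) - base) / base) / 0.10  # sensitivity ratio
    print(f"{name}: {ratio:.2f}")
```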

Contribution to Variance

From Monte Carlo results, decompose the total variance into per-parameter contributions. For independent parameters in an approximately linear model:

Total variance ≈ Var(electricity) + Var(transport) + Var(materials) + ...

Parameters contributing most to variance should be prioritized for data improvement.
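
One common sample-based estimate uses the squared correlation between each sampled input and the result (a sketch; it assumes a roughly linear model with independent inputs):

```python
import numpy as np

rng = np.random.default_rng(4)
elec = rng.lognormal(np.log(2.5), np.log(1.2), 10_000)  # sampled inputs
dist = rng.triangular(400, 500, 800, 10_000)
gwp = 0.42 * elec + 0.001 * dist                        # hypothetical model

for name, x in (("electricity", elec), ("transport", dist)):
    r = np.corrcoef(x, gwp)[0, 1]
    print(f"{name}: ~{r**2:.0%} of result variance")
```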

Tornado Diagrams

Visualize sensitivity across parameters:

                                    Baseline
Parameter A     |------------|======|============|
Parameter B         |---------|======|--------|
Parameter C           |-------|======|-----|
Parameter D              |----|======|---|
                    -30%      0%    +30%

The widest bars represent the most sensitive parameters.

Scenario Analysis

For scenario uncertainty, define and analyze discrete alternatives:

| Scenario | Description | Probability |
|---|---|---|
| Base case | Expected conditions | 50% |
| Optimistic | Best-case assumptions | 25% |
| Pessimistic | Worst-case assumptions | 25% |

Calculate results for each scenario and report the range.
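
A small sketch combining the scenarios above into a range and a probability-weighted mean (the per-scenario GWP values are hypothetical):

```python
# (probability, GWP in kg CO2 eq) per scenario; GWP values are hypothetical
scenarios = {"base": (0.50, 47.0), "optimistic": (0.25, 39.0), "pessimistic": (0.25, 58.0)}

values = [v for _, v in scenarios.values()]
weighted = sum(p * v for p, v in scenarios.values())
print(f"Range: {min(values)}-{max(values)} kg CO2 eq; weighted mean: {weighted:.1f}")
```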

Common Scenario Dimensions

  • Energy mix: Current grid vs. future renewable vs. fossil-heavy
  • End-of-life: Landfill vs. recycling vs. incineration
  • Transport: Average distance vs. local vs. international
  • Lifetime: Expected vs. short vs. extended use

Communicating Uncertainty

Good Practices

Do:

  • Report confidence intervals, not just point estimates
  • Show sensitivity analysis results
  • Acknowledge data quality limitations
  • Use probability statements for comparisons
  • Visualize uncertainty ranges

Don't:

  • Report excessive decimal precision (implies false accuracy)
  • Make strong claims when uncertainty is high
  • Hide unfavorable uncertainty findings
  • Ignore model and scenario uncertainty

Visualization Options

Error bars: Show confidence intervals on bar charts

Box plots: Show distribution characteristics

Probability density functions: Show full distribution shape

Heat maps: Show probability of outperformance across categories

Narrative Communication

Instead of: "Product A has GWP of 45.2 kg CO₂ eq"

Write: "Product A has an estimated GWP of 45 kg CO₂ eq (95% confidence interval: 38-54 kg CO₂ eq)"

Or: "There is approximately 75% probability that Product A has lower climate impact than Product B, based on Monte Carlo analysis with 10,000 iterations."

Software Implementation

openLCA

  1. Add uncertainty to flows: Right-click amount → Define uncertainty
  2. Run Monte Carlo: Calculate → Monte Carlo simulation
  3. Set iterations and review distribution results

SimaPro

  1. Define uncertainty in process records
  2. Use built-in Monte Carlo analysis
  3. Access contribution to variance analysis

Brightway

Python scripting enables sophisticated uncertainty analysis:

```python
from brightway2 import *

# Assumes an existing project with a database whose exchanges carry
# stats_arrays uncertainty distributions (as ecoinvent does)
act = Database("ecoinvent 3.9").random()  # placeholder: pick your activity
mc = MonteCarloLCA({act: 1}, ("IPCC 2013", "climate change", "GWP 100a"))
scores = [next(mc) for _ in range(1000)]  # run Monte Carlo iterations
# Analyze results with numpy/scipy, e.g. np.percentile(scores, [5, 50, 95])
```

Key Takeaways

  1. All LCA results carry uncertainty—acknowledge and quantify it
  2. Parameter uncertainty is most common; use pedigree matrix for assessment
  3. Monte Carlo simulation propagates uncertainty through the model
  4. Sensitivity analysis identifies which parameters matter most
  5. Report results with confidence intervals, not false precision
  6. For comparisons, probability statements are more meaningful than point estimates

Practice Exercise

You're comparing two packaging options. Run a Monte Carlo analysis (500 iterations) varying:

  • Material quantity (±15%)
  • Transport distance (±30%)
  • End-of-life scenario (50% recycling vs. 30% recycling)

Questions:

  1. What is the 95% confidence interval for each option's GWP?
  2. What is the probability that Option A outperforms Option B?
  3. Which parameter contributes most to variance?

What's Next?

The next lesson provides a detailed tutorial on implementing Monte Carlo simulation, including practical guidance on setting up distributions, running simulations, and interpreting results.


Further Reading

  • Heijungs, R., & Huijbregts, M.A.J. (2004). A Review of Approaches to Treat Uncertainty in LCA. iEMSs 2004 International Congress.
  • Weidema, B.P., et al. (2013). Overview and Methodology: Data Quality Guideline for the ecoinvent Database Version 3. ecoinvent Report.
  • Groen, E.A., et al. (2014). Methods for Uncertainty Propagation in Life Cycle Assessment. Environmental Modelling & Software.