Methodology

This page provides a high-level overview of SEPI’s approach. Detailed technical documentation and public benchmark outputs will be released in line with the validated publication roadmap and expert review process. This Trust Layer is designed for clarity, comparability, and institutional readability.

Positioning and publication scope

SEPI is not an ESG rating. It is a comparability-first benchmarking framework based on publicly available corporate information, with documented assumptions, explicit limitations, and versioning discipline.

METHODOLOGY STATUS: Trust Layer summary (v0.9). Detailed technical documentation and benchmark outputs are released under the validated publication roadmap.

Method architecture

A high-level view of how SEPI translates disclosures into comparable, audit-ready benchmarking outputs.

  • Input: public corporate disclosures and structured sources (comparability first)

  • Standardisation: KPI definitions, units, and comparability-first normalisation (documented assumptions)

  • Outputs: benchmark outputs released under the validated roadmap (versioning and audit trail)

  • Scenarios: scenario-based weighting and robustness checks, at a high level (explicit limitations)

Key methodological choices

Roadmap gating

Public benchmark outputs are released only when validated through the expert review and publication roadmap.

Dual reference points

Interpretation uses both a Swiss baseline and a sector reference to separate national context from peer performance.

Distance-to-benchmark

Results are communicated as distances to Swiss and sector references to support decision-making without “blame & shame”.
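
As an illustration of the distance-to-reference idea, the following is a hedged sketch only: the function, the normalised KPI scale, and all numbers are hypothetical and are not SEPI definitions or outputs.

```python
# Hypothetical illustration of distance-to-reference signals.
# A company's normalised KPI value is compared against two reference
# points: a Swiss baseline and a sector reference. The output is a
# signed distance for each, not a rating or a rank.

def distance_signals(company_value, swiss_baseline, sector_reference):
    """Return signed distances to both reference points.

    Positive means above the reference (higher environmental pressure),
    negative means below it. All values are assumed to be on the same
    normalised scale.
    """
    return {
        "vs_swiss": company_value - swiss_baseline,
        "vs_sector": company_value - sector_reference,
    }

# Hypothetical normalised KPI value and references:
signals = distance_signals(company_value=1.8,
                           swiss_baseline=1.0,
                           sector_reference=2.1)
# The reader sees that the company sits above the national baseline
# but below its sector peers; there is no single score or label.
```

Reporting two signed distances rather than one aggregate score is what allows national context and peer performance to be read separately.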

For governance, versioning principles and the correction mechanism, see Governance. For publication timing and scope expansion, see Roadmap.

Earlier drafts are superseded. Public releases follow the validated roadmap (first public digital release planned for December 2026).

  • Key takeaway: SEPI is designed for boards, investors, and public institutions seeking decision-grade comparability.

    • Primary use: peer benchmarking and hotspot identification.

    • Outputs are distance-to-reference signals (not ratings).

    • No public company results are released at this stage.

    See also: Outputs, Roadmap

  • Key takeaway: SEPI is disclosure-driven: it translates publicly available corporate information into comparable signals.

    • Sources include annual reports, sustainability reports, and structured datasets.

    • Narrative claims are not treated as performance substitutes.

    • Key assumptions are documented for traceability.

    See also: Governance, QA, audit trail & versioning

  • Key takeaway: SEPI focuses on measurable environmental pressure categories designed for cross-company comparability.

    • KPI definitions include units and scope boundaries.

    • Headline categories remain stable; refinements are versioned.

    • Sector context is handled via reference points, not “storytelling”.

    See also: Normalisation principles, Outputs

  • Key takeaway: Indicators are normalised to enable meaningful cross-sector comparison without false precision.

    • Consistent units and definitions across companies.

• Documented assumptions reduce interpretive ambiguity.

• The method avoids overstating accuracy when underlying data are limited.

    See also: QA, audit trail & versioning, Limitations
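
A minimal sketch of what comparability-first normalisation can look like in practice. The unit names, conversion factors, and figures below are illustrative assumptions, not SEPI definitions.

```python
# Hypothetical normalisation step: bring a KPI reported in mixed units
# onto one consistent unit before any comparison is made.

UNIT_TO_TONNES = {           # illustrative conversion factors
    "t": 1.0,                # metric tonnes
    "kt": 1_000.0,           # kilotonnes
    "kg": 0.001,             # kilograms
}

def normalise_emissions(value, unit):
    """Convert a reported emissions figure to metric tonnes.

    Unknown units raise an error rather than being guessed, echoing
    the principle of not overstating accuracy when data are limited.
    """
    if unit not in UNIT_TO_TONNES:
        raise ValueError(f"undocumented unit: {unit!r}")
    return value * UNIT_TO_TONNES[unit]

# Reported figures from three hypothetical disclosures:
reports = [(12.5, "kt"), (9_800, "t"), (4_200_000, "kg")]
in_tonnes = [normalise_emissions(v, u) for v, u in reports]
# All three values are now on one common scale (tonnes) and can be
# compared without mixing units.
```

Failing loudly on an undocumented unit, instead of silently imputing one, is one concrete way a method can avoid false precision.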

  • Key takeaway: SEPI uses scenario-based weighting to improve robustness and reduce gaming risk.

    • Multiple scenarios reflect different decision contexts.

    • Robustness checks are applied at a high level.

    • Final public documentation follows expert review.

    See also: Governance, Roadmap
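
The scenario-weighting idea can be sketched as follows. This is a hedged illustration under stated assumptions: the scenario names, weight vectors, and KPI profile are hypothetical, not SEPI's published weights.

```python
# Hypothetical scenario-based weighting: the same normalised KPI
# profile is aggregated under several weight scenarios, and the
# spread across scenarios serves as a robustness check.

SCENARIOS = {  # illustrative weight vectors, each summing to 1
    "baseline":     {"ghg": 0.5, "water": 0.25, "waste": 0.25},
    "water_stress": {"ghg": 0.3, "water": 0.5,  "waste": 0.2},
    "circularity":  {"ghg": 0.3, "water": 0.2,  "waste": 0.5},
}

def scenario_scores(profile):
    """profile: dict of normalised KPI values (higher = more pressure)."""
    return {
        name: sum(weights[k] * profile[k] for k in weights)
        for name, weights in SCENARIOS.items()
    }

profile = {"ghg": 1.2, "water": 0.8, "waste": 1.5}
scores = scenario_scores(profile)
# If a company's position shifts sharply between scenarios, that
# sensitivity is itself reported rather than hidden in one number.
```

Because no single weight vector is privileged, optimising disclosures against one scenario does not guarantee a favourable result overall, which is what reduces gaming risk.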

  • Key takeaway: Missing data is explicitly handled through documented coverage rules to preserve comparability.

    • Coverage thresholds and missingness penalties are applied consistently.

    • No netting across unrelated KPIs (“no double counting” discipline).

    • Assumptions and exceptions are logged and versioned.

    See also: Governance, QA, audit trail & versioning
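
A coverage rule of this kind might look like the sketch below. The threshold, penalty value, and KPI names are hypothetical assumptions for illustration only.

```python
# Hypothetical coverage rule: a company is only benchmarked when
# enough of the required KPIs are actually disclosed, and missing
# values incur a documented penalty rather than silent imputation.

COVERAGE_THRESHOLD = 0.7    # illustrative: 70% of KPIs must be present
MISSINGNESS_PENALTY = 0.05  # illustrative penalty per missing KPI

def assess_coverage(kpis):
    """kpis: dict mapping KPI name to a value, or None if not disclosed."""
    total = len(kpis)
    present = sum(1 for v in kpis.values() if v is not None)
    coverage = present / total
    if coverage < COVERAGE_THRESHOLD:
        # Below threshold: excluded from comparison, never guessed.
        return {"comparable": False, "coverage": coverage}
    penalty = (total - present) * MISSINGNESS_PENALTY
    return {"comparable": True, "coverage": coverage, "penalty": penalty}

# Hypothetical disclosure with one missing KPI:
example = {"ghg": 120.0, "water": 3.4, "waste": None, "land": 0.9}
result = assess_coverage(example)
# 3 of 4 KPIs present: coverage 0.75, above the threshold, so the
# company is comparable with one documented missingness penalty.
```

Applying the same threshold and penalty to every company, and logging each exception, is what keeps partially disclosed profiles comparable rather than silently distorted.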

  • Key takeaway: SEPI follows a versioning discipline with documented assumptions and change logs.

    • Definitions, assumptions, and updates are traceable.

    • Reproducibility and comparability are treated as design constraints.

    • Corrections follow a documented mechanism.

    See also: Governance, Roadmap

  • Key takeaway: SEPI states limitations explicitly to avoid over-interpretation and false certainty.

    • Interpretation depends on scope, coverage, and sector context.

    • Public disclosures constrain precision and completeness.

    • Outputs are benchmarking signals, not due diligence.

    See also: Governance, Outputs