Scoring & Methodology

Anatomy of the CDP questionnaire: every module explained

Alba Ortiz · 9 min read

How the CDP questionnaire is structured

Most teams approach the CDP questionnaire as a single 400-page document and miss the underlying logic. The 2024 corporate questionnaire is split into 13 modules, lettered C1 to C13, that follow a deliberate flow: introduction, identification of risks and opportunities, governance, business strategy, then environmental performance broken down by topic (climate, forests, water, plastics, biodiversity), closing with verification and sign-off. Understanding this architecture is the first step to a defensible response and a higher score.

How CDP scores the response: consistency across modules

CDP scoring rewards consistency across modules. A target reported in C7.53 must reconcile with the inventory in C7.5 to C7.8 and the initiatives in C7.55. A risk identified in C3.1 must trace back to a process described in C2.2.2. Scorers cross-check; inconsistencies cost points. Build the response so the same numbers and statements appear wherever they are referenced.

The questionnaire is also cumulative across themes. C8 (forests), C9 (water), and C10 (plastics) reuse the governance and risk language established in C2 and C4. A weak governance section in C4 reduces scoring across every environmental theme that follows.
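The reconciliation that scorers perform can be mimicked internally before submission. A minimal sketch, assuming a simple dictionary keyed by question ID; the question numbers are real, but the field names (`base_year`, `base_year_emissions_tco2e`, and so on) are illustrative, not CDP's schema:

```python
# Hypothetical pre-submission check: does the target in 7.53 reconcile
# with the inventory base year in 7.5? Field names are invented for
# illustration; only the question IDs come from the questionnaire.

response = {
    "7.5":  {"base_year": 2019, "base_year_emissions_tco2e": 120_000},
    "7.53": {"base_year": 2019, "base_year_emissions_tco2e": 120_000,
             "target_year": 2030, "reduction_pct": 42},
}

def check_target_reconciles(resp: dict) -> list[str]:
    """Return a list of mismatches between the target (7.53) and inventory (7.5)."""
    issues = []
    inv, tgt = resp["7.5"], resp["7.53"]
    if inv["base_year"] != tgt["base_year"]:
        issues.append("Base year in 7.53 does not match 7.5")
    if inv["base_year_emissions_tco2e"] != tgt["base_year_emissions_tco2e"]:
        issues.append("Base-year emissions in 7.53 do not match 7.5")
    return issues

print(check_target_reconciles(response))  # → [] when the figures agree
```

Running a check like this over every cross-referenced figure is far cheaper than losing points to an inconsistency a scorer finds first.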

C1 and C2: introduction and DIRO

C1 captures organisational context: language, currency, fiscal year, revenue (1.4.1), reporting boundary (1.5), countries of operation (1.7), and value chain mapping (1.24). The commodity questions (1.22, 1.24.2) anchor everything that follows in C8 forests. Skip them and the forests module collapses.

C2 introduces the DIRO process: dependencies, impacts, risks, and opportunities. Question 2.2.2 is the most consequential single question in the early modules. It asks for a full description of the identification and assessment process: who runs it, how often, what tools and frameworks are used, how outputs feed governance and strategy. Vague answers (“we use a risk register”) cap the score for the entire C2 to C3 chain.

Question 2.3 asks about priority locations across the value chain, and 2.5 about water pollutants. Both reward specificity tied to the inventory data later in C7 and C9.

C3 and C4: risks, opportunities, governance

C3 lists the actual environmental risks (3.1.1) and opportunities (3.6.1) with financial quantification (3.1.2, 3.6.2). The most common scoring loss here is providing risks without the financial metric. Even a rough order-of-magnitude estimate scores; "not yet quantified" does not.

C4 is the module most underestimated by first-time respondents. Questions 4.1.1 (board oversight), 4.1.2 (specific positions and oversight detail), 4.3.1 (highest senior management responsibility), 4.5 and 4.5.1 (monetary incentives) demand specific positions, frequency of board reviews, committees with terms of reference, and explicit linkage of compensation to environmental KPIs. Generic statements like "the board oversees climate" do not score; the specific committee, the agenda frequency, and the named role do.

For practical guidance on building this evidence trail, see the verification and assurance guide.

C7: climate, the heaviest module

C7 is where most of the climate score is decided. The flow is methodological:

  • 7.1 to 7.4: methodology, Scope 2 approach, exclusions
  • 7.5: base year and base year emissions
  • 7.6 to 7.8: gross global Scope 1, 2, and 3 emissions
  • 7.9: verification and assurance status with statements attached
  • 7.10: year-over-year comparison with reasons for change
  • 7.15 to 7.23: breakdowns by gas, country, division, subsidiary
  • 7.29 to 7.30: energy spend, energy related activities, fuel mix
  • 7.45: emissions intensity
  • 7.53 to 7.54: targets (absolute, intensity, low carbon energy, methane, net zero)
  • 7.55: emissions reduction initiatives, including estimated savings
  • 7.74: low carbon products
  • 7.79: carbon credits cancelled

The single biggest scoring shift in C7 comes from C7.55. Listing initiatives without quantified savings barely scores. Listing each initiative with implementation stage, estimated annual CO2e abated, payback, and lifetime moves the company up a band. The same applies to C7.53: a target with base year, target year, scope, percentage, and SBTi validation status fully reported scores three to four times what an unqualified statement does.
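The C7.55 fields above reduce to simple arithmetic once the underlying data exists. A hedged sketch with invented numbers and field names, assuming payback is computed as upfront investment divided by annual monetary saving:

```python
# Illustrative C7.55 initiative record with the fields the article lists:
# stage, estimated annual CO2e abated, payback, and lifetime.
# All figures and field names are invented for the example.

def payback_years(investment: float, annual_saving: float) -> float:
    """Simple payback period: upfront cost divided by annual monetary saving."""
    return investment / annual_saving

initiative = {
    "description": "LED retrofit across distribution centres",  # hypothetical
    "stage": "Implementation",
    "annual_co2e_abated_t": 850,
    "investment_eur": 400_000,
    "annual_saving_eur": 160_000,
    "lifetime_years": 10,
}
initiative["payback_years"] = payback_years(
    initiative["investment_eur"], initiative["annual_saving_eur"]
)
print(initiative["payback_years"])  # → 2.5
```

An initiative row populated like this, rather than a bare description, is the difference the paragraph above describes.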

For deeper analysis of C7, see the scoring methodology guide and the Scope 3 data collection playbook.

C8 and C9: forests and water

C8 mirrors C7 but for forest commodities: cattle, cocoa, coffee, palm oil, rubber, soy, timber. The DIRO process from C2 is applied per commodity. Traceability levels (country, region, mill, plot) are scored progressively. C8.17 covers ecosystem restoration, increasingly weighted as nature disclosure expands.

C9 is the water security module. C9.2 captures volumes withdrawn, discharged, and consumed by source and basin. C9.2.4 specifically asks about withdrawals from water-stressed areas. C9.15 covers water targets. The same governance and verification logic from C4 and C7.9 applies.
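The three C9.2 volumes are linked: consumption is conventionally reported as withdrawals minus discharges, per source and basin. A minimal sketch with invented basin figures; the field names are illustrative, not CDP's schema:

```python
# Water balance behind C9.2: consumed = withdrawn - discharged.
# Volumes in megalitres; the basin data below is invented for the example.

def consumed_megalitres(withdrawn: float, discharged: float) -> float:
    """Water consumed in a basin: withdrawals minus discharges (megalitres)."""
    return withdrawn - discharged

basin = {"withdrawn_ml": 1200.0, "discharged_ml": 950.0}
print(consumed_megalitres(basin["withdrawn_ml"], basin["discharged_ml"]))  # → 250.0
```

Checking that the three reported volumes satisfy this identity per basin is the water-module equivalent of the C7 target reconciliation.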

C10 to C13: plastics, biodiversity, sign off

C10 is the newest theme: plastics-related targets (10.1), packaging weights and end-of-life (10.5, 10.6). C11 covers biodiversity actions (11.2), indicators (11.3), and operations near important areas (11.4). Both modules are still maturing in scoring weight but matter for sector materiality.

C13 is the sign-off: any verification statements beyond the climate-, forests-, and water-specific ones already declared, plus the named approver and contact details. Skipping C13 puts the entire submission at risk.

Where most companies lose points

The pattern across hundreds of responses is consistent. Companies lose points in:

  • C2.2.2 with vague DIRO process descriptions
  • C4.1.2 and C4.5 with generic governance and incentive statements
  • C7.8 with screening-only Scope 3 in material categories
  • C7.55 with unquantified initiatives
  • C7.9 with absent or limited verification

Fixing those five is usually worth a one-band score improvement. To see how Dcycle structures the canonical data layer that feeds every C7, C8, and C9 question consistently, request a demo.

CDP · Sustainability · Compliance · Carbon Footprint

Collect once. Use everywhere.

See how Dcycle can cut your reporting time by 70% and give your auditors what they need, the first time.

See Dcycle in action