Why C7 decides most of your CDP climate score
Module C7 carries the bulk of the climate questionnaire weight. Within C7, the inventory section (7.5 to 7.8), the verification section (7.9), and the targets and initiatives section (7.53 to 7.55) account for most of the available points. Companies that approach C7 systematically tend to score B or above; those that treat it as a forms exercise rarely escape C.
This guide walks through the C7 flow in the same order CDP asks the questions, highlighting what scorers reward at each step.
7.1 to 7.4: methodology setup
7.1 confirms whether this is a first response, structural changes, or methodology changes. 7.2 captures the standards used (GHG Protocol Corporate Standard, ISO 14064-1, sector specific protocols). 7.3 asks about Scope 2 reporting; both location based and market based figures are required if any electricity is contracted. 7.4 covers exclusions; transparency about excluded sources scores better than omission.
7.5: base year
The base year determines target credibility. Pick a year with complete, verified data and a stable organisational boundary. Document the recalculation policy: which events trigger a recalculation (acquisition, divestment, a methodology change shifting emissions by more than 5 percent), and how it is performed. A vague base year undermines every target in 7.53.
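The recalculation trigger described above reduces to a significance test. A minimal sketch, assuming a 5 percent threshold; the function name and figures are illustrative, not CDP prescribed:

```python
def recalculation_required(base_year_tco2e: float,
                           structural_change_tco2e: float,
                           threshold: float = 0.05) -> bool:
    """Return True if a structural change (acquisition, divestment,
    methodology change) shifts base year emissions beyond the
    significance threshold, triggering a base year recalculation."""
    return abs(structural_change_tco2e) / base_year_tco2e > threshold

# Illustrative: a divestment removing 6,000 tCO2e from a 100,000 tCO2e base year
print(recalculation_required(100_000, -6_000))  # True: 6% exceeds the 5% threshold
```

The same check applies symmetrically to additions and removals, which is why the sketch takes the absolute value of the change.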
7.6 to 7.8: gross emissions
- 7.6 Scope 1: combustion, fugitive, process, mobile. Break down by source where possible.
- 7.7 Scope 2: location based and market based, both reported. The market based figure must reflect the contractual instruments listed in 7.30.14.
- 7.8 Scope 3: all 15 categories assessed for relevance. For relevant categories, provide a full inventory with methodology and data quality. Spend based estimates are acceptable in year one but penalised in material categories without an activity based transition plan. See the Scope 3 data collection playbook for category by category guidance.
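The Scope 3 requirement above, that all 15 categories are assessed even when not all are relevant, lends itself to a simple completeness check before submission. A hypothetical sketch (the assessment labels and structure are illustrative, not a CDP schema):

```python
SCOPE3_CATEGORIES = range(1, 16)  # GHG Protocol Scope 3 categories 1-15

def unassessed_categories(assessments: dict[int, str]) -> list[int]:
    """Return the Scope 3 categories missing a relevance assessment.
    Every category needs an entry, e.g. 'relevant' or 'not relevant',
    before the 7.8 response is complete."""
    return [c for c in SCOPE3_CATEGORIES if c not in assessments]

assessments = {1: "relevant", 2: "not relevant", 3: "relevant"}
print(unassessed_categories(assessments))  # categories 4 through 15 still missing
```

A check like this catches the common failure mode of silently skipping categories rather than explicitly marking them not relevant.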
7.9: verification
This is the single biggest scoring lever in C7. Limited assurance for Scope 1 and 2 raises the score by approximately one band. Reasonable assurance for Scope 1 and 2, with limited assurance for material Scope 3 categories, is the Leadership benchmark. Attach the signed verification statement from an accredited body (UKAS, ANAB, ENAC depending on geography). The full guide is in the CDP verification and assurance guide.
7.10: year on year comparison
7.10 and 7.10.1 ask why emissions changed: organic growth, divestment, decarbonisation initiatives, methodology changes. Each driver should be quantified. 7.10.2 confirms whether the comparison is location based or market based. The narrative must reconcile with the figures in 7.6 to 7.8.
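Quantified drivers should reconcile exactly to the reported delta, since scorers cross check the narrative against 7.6 to 7.8. A minimal sketch of that reconciliation, with illustrative driver names and figures:

```python
def drivers_reconcile(prior_tco2e: float, current_tco2e: float,
                      drivers: dict[str, float], tolerance: float = 1.0) -> bool:
    """Check that the quantified change drivers sum to the actual
    year on year delta, within a small rounding tolerance (tCO2e)."""
    explained = sum(drivers.values())
    return abs((current_tco2e - prior_tco2e) - explained) <= tolerance

drivers = {
    "organic growth": +4_200,
    "divestment": -1_500,
    "decarbonisation initiatives": -2_100,
    "methodology change": +400,
}
print(drivers_reconcile(50_000, 51_000, drivers))  # True: drivers sum to +1,000
```

If the check fails, the usual culprit is a driver estimated independently of the inventory rather than derived from it.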
7.15 to 7.23: breakdowns
7.15 to 7.17 break down Scope 1 by gas type (using GWP values from IPCC AR6 where possible) and by country and division. 7.20 to 7.23 do the same for Scope 2 and capture subsidiary level data. These breakdowns carry little scoring weight on their own but feed downstream uses (CSRD ESRS E1, customer audits), so doing them once well saves repeated work.
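The gas type breakdown is a GWP weighted sum. A sketch using IPCC AR6 GWP100 values (CO2 = 1, fossil CH4 = 29.8, N2O = 273; verify the factors against the AR6 tables for your actual gas mix before reporting):

```python
GWP100_AR6 = {"CO2": 1.0, "CH4_fossil": 29.8, "N2O": 273.0}

def co2e_tonnes(gas_tonnes: dict[str, float]) -> float:
    """Convert per gas emissions (tonnes of each gas) into tCO2e
    using AR6 100 year global warming potentials."""
    return sum(t * GWP100_AR6[g] for g, t in gas_tonnes.items())

print(co2e_tonnes({"CO2": 10_000, "CH4_fossil": 50, "N2O": 2}))
# 10,000 + 50 * 29.8 + 2 * 273 = 12,036.0 tCO2e
```

Keeping the GWP table explicit in code also documents which assessment report the inventory uses, which 7.15 asks for.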
7.29 to 7.30: energy
The energy section is the most error prone in C7. 7.30.1 covers total consumption in MWh. 7.30.6 and 7.30.7 cover fuel use by application and type. 7.30.9 covers electricity, heat, steam, and cooling generated and consumed. 7.30.14 captures the zero or near zero emission factor sources (PPAs, certified renewable energy, on site generation) used in the market based Scope 2 figure. 7.30.16 breaks consumption down by country.
The most common error: 7.30.14 failing to reconcile with the 7.7 market based figure. Scorers verify the maths. The instruments listed must cover at least the renewable share claimed in the market based figure.
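That reconciliation can be checked before submission. A hypothetical sketch, assuming the simple case where the market based reduction is driven entirely by electricity instruments (field names are illustrative):

```python
def instruments_cover_claim(total_electricity_mwh: float,
                            market_based_tco2e: float,
                            location_based_tco2e: float,
                            certificate_mwh: float) -> bool:
    """Rough consistency check: the MWh of zero emission instruments
    declared in 7.30.14 (PPAs, certificates, on site generation) must
    at least equal the MWh implied by the market based reduction in 7.7."""
    if location_based_tco2e == 0:
        return True
    reduction_share = 1 - market_based_tco2e / location_based_tco2e
    claimed_renewable_mwh = reduction_share * total_electricity_mwh
    return certificate_mwh >= claimed_renewable_mwh

# Illustrative: 20,000 MWh total; market based 4,000 tCO2e vs location based
# 8,000 tCO2e implies a 50% renewable share (10,000 MWh); 10,500 MWh declared
print(instruments_cover_claim(20_000, 4_000, 8_000, 10_500))  # True
```

Real grids have varying emission factors, so this is a screening check, not a substitute for the full instrument by instrument reconciliation.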
7.45: intensity
Combined Scope 1 and 2 per unit revenue is the default. Sector specific intensities (per tonne produced, per square metre, per FTE) are encouraged. Multiple intensities are scored independently, so pick at least one that aligns with how investors and customers benchmark your sector.
7.53 to 7.54: targets
7.53.1 absolute targets: base year, target year, scope, percentage, methodology, SBTi validation status. 7.53.2 intensity targets: same fields plus intensity unit. 7.54.1 low carbon energy targets. 7.54.2 other climate targets including methane. 7.54.3 net zero target with full coverage and removals strategy.
SBTi validation is the highest scoring signal here. The validation process takes 6 to 12 months, so it cannot be left until the cycle before submission. See the SBTi targets guide for the validation roadmap.
7.55: initiatives
7.55.1 totals by development stage. 7.55.2 details for each implementation initiative: estimated annual CO2e saved, payback period, lifetime, scope affected, methodology used to estimate. 7.55.3 covers investment mechanisms (internal carbon price, dedicated capex, ESG linked finance).
This is where many companies leave points on the table. A list of qualitative initiatives without quantified savings barely scores. The same list with annual abatement, payback, and methodology can shift the C7 score by a band on its own.
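The quantification scorers look for reduces to a handful of fields per initiative. A sketch computing simple payback and lifetime abatement for a hypothetical initiative (figures illustrative):

```python
def initiative_metrics(capex: float, annual_saving_currency: float,
                       annual_abatement_tco2e: float,
                       lifetime_years: float) -> dict[str, float]:
    """Simple payback period and lifetime abatement for one
    emissions reduction initiative (7.55.2 style fields)."""
    return {
        "payback_years": capex / annual_saving_currency,
        "lifetime_abatement_tco2e": annual_abatement_tco2e * lifetime_years,
    }

print(initiative_metrics(capex=500_000, annual_saving_currency=125_000,
                         annual_abatement_tco2e=300, lifetime_years=10))
# {'payback_years': 4.0, 'lifetime_abatement_tco2e': 3000}
```

Capturing these fields per initiative, rather than as a qualitative list, is exactly the difference the paragraph above describes.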
7.74 and 7.79: low carbon products and credits
7.74 captures low carbon products (those with at least 50 percent lower emissions than a sector benchmark or peer comparator). 7.79 covers project based carbon credits cancelled in the reporting year, with project type, geography, and standard. Credits do not substitute for reductions toward absolute targets, but reporting them scores for transparency.
What to prioritise
If you have one cycle to improve: verification (7.9) and quantified initiatives (7.55.2). If you have two cycles: add SBTi validation (7.53). If you have three: full Scope 3 with primary data in material categories. To see how Dcycle structures the canonical data layer that feeds every C7 subquestion consistently, request a demo.
A good C7 response is not the result of weeks of effort before the deadline. It is the by-product of a continuous data architecture that already does most of the work. Build that once and the questionnaire becomes a render of what you already know, not a yearly fire drill.