GRESB Scoring Insights 2: Value beyond GRESB Scores

Chris Pyke
Chief Innovation Officer

The first article in this series addressed "Why GRESB Scores?" It described scoring as a mechanism for simplification, differentiation, and flexibility, concluding that scoring is necessary, but necessarily imperfect.

In this short article, I want to put GRESB Scores in context and describe the value of the GRESB Assessment process beyond scoring.

Standardized data definitions

Any good systematic assessment starts with a set of definitions—some call this a “controlled vocabulary.” The community using the assessment needs to understand what it is talking about and exactly how it proposes to measure it.

GRESB does this by codifying a set of definitions in the Real Estate and Infrastructure Standards & Reference Guides. These are publicly available, annually updated documents that describe a specific set of questions and a range of valid responses. GRESB calls questions indicators, and each typically has a ‘yes or no’ structure with answer choices and requirements for supporting documentation.

Indicator RM1 provides a relatively simple example: “Does the entity have an Environmental Management System?” Answer choices include ISO 14001 and the EU Eco-Management and Audit Scheme. This superficially simple structure makes an otherwise underspecified question answerable and, critically, comparable across responses.

GRESB uses a similar approach for performance metrics. GRESB Real Estate is tailored to the circumstances associated with property ownership and management. Consequently, indicators like EN1 Energy Consumption have specific provisions for whole building, common areas, shared services, and tenant spaces. Participants report standardized energy consumption information for these divisions, breaking out fuels, district energy, and purchased electricity. The Reference Guide provides detailed specifications for this information.
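The energy-reporting breakdown above can be pictured as a nested record: one entry per building division, each broken out by energy source. The keys and figures below are illustrative, not the Reference Guide's actual field names or units of account.

```python
# Hypothetical energy-consumption record, broken out by building
# division and energy source (values, say, in MWh); keys are
# illustrative, not the Reference Guide's actual field names.
en1_report = {
    "whole_building":  {"fuels": 120.0, "district_energy": 40.0, "electricity": 310.0},
    "common_areas":    {"fuels": 10.0,  "district_energy": 5.0,  "electricity": 45.0},
    "shared_services": {"fuels": 8.0,   "district_energy": 2.0,  "electricity": 30.0},
    "tenant_spaces":   {"fuels": 25.0,  "district_energy": 0.0,  "electricity": 180.0},
}

def total_consumption(report: dict) -> float:
    """Sum reported energy across all divisions and sources."""
    return sum(v for division in report.values() for v in division.values())

print(total_consumption(en1_report))  # 775.0
```

Keeping the divisions separate, rather than reporting one aggregate number, is what lets the benchmark compare like with like across different ownership and management arrangements.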

The definitions and requirements associated with indicators and metrics are the foundation of each GRESB Standard. They are the essential prerequisites for systematic data collection and comparability.

Independent materiality assessment

Every indicator and metric in the GRESB Assessment is assigned a point value as part of the overall scoring process. The points assigned to each element are effectively an independent assessment of materiality, made by the volunteer expert members of the GRESB Foundation. The points reflect their judgment of the relative value or importance of each element for assessing real asset management and performance.

These points can be considered a default or starting materiality value. They are applied consistently across GRESB Participants to enable comparability. As with any materiality prioritization, perspectives on the relative importance of each element vary. This is not surprising, as investors, managers, and other stakeholders typically have different priorities and views on what is material. GRESB point allocations provide one independent perspective on materiality and serve as a baseline for comparing alternative priorities (i.e., different views on materiality).
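The idea of points as a default materiality weighting, against which alternative priorities can be compared, reduces to a weighted sum. The point values and achievement fractions below are invented for illustration; they are not actual GRESB allocations.

```python
# Hypothetical sketch: indicator points as default materiality weights,
# compared against an investor's alternative weighting. All numbers
# are illustrative, not actual GRESB point allocations.
responses = {"RM1": 1.0, "EN1": 0.8, "LE1": 0.5}  # fraction of points achieved

default_points  = {"RM1": 2.0, "EN1": 10.0, "LE1": 3.0}  # baseline materiality
investor_points = {"RM1": 1.0, "EN1": 12.0, "LE1": 2.0}  # alternative priorities

def weighted_score(responses: dict, points: dict) -> float:
    """Score out of 100 under a given materiality weighting."""
    earned = sum(responses[k] * points[k] for k in points)
    return 100 * earned / sum(points.values())

print(round(weighted_score(responses, default_points), 1))   # 76.7
print(round(weighted_score(responses, investor_points), 1))  # 77.3
```

The same underlying responses yield different scores under different weightings, which is exactly why a consistent baseline is needed for comparability.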

Centralized data collection

A standardized language for indicators and performance metrics provides the raw material for reporting. The next step is to use technology to ensure that information is collected systematically across even large or complex portfolios.

This technology-enabled process separates a systematic assessment from a sea of emails, spreadsheets, and documents. An investor or manager may not see value in GRESB’s point allocations or scores, but they may appreciate the ability to request and collect information in something better than email and spreadsheets. Moreover, centralized data collection can enforce specific security rules and access controls, ensuring that potentially sensitive business information remains private.

Validation and corrections

Sustainability data can be complicated. Definitions need to be followed. Circumstances vary. People make mistakes. This is why GRESB validates data.

Each year, the process identifies thousands of errors and omissions across indicators and metrics. Validation is followed by an opportunity for corrections, which participants use hundreds of times per year. This process will never be perfect, as it reflects communication among thousands of people about hundreds of thousands of real estate assets. However, the end result is better for it, with more complete and accurate information.

Data aggregation and reporting

After the data is in, validated, and corrected, it’s time to get it back out. This often starts with the annual Benchmark Report, closely followed by the Data Exporter, a tool to aggregate and securely distribute scores, rankings, indicators, and performance metrics. This is a process unto itself: multi-level data from dozens, sometimes hundreds, of reports must be organized into a format suitable for a simple, “flat” spreadsheet. The result gives investors and managers data to analyze and compare individual entities and whole portfolios.
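Turning multi-level data into flat, spreadsheet-ready rows is a standard flattening step, sketched below under invented names: the entities, fields, and values are hypothetical, not the Data Exporter's actual format.

```python
# Hypothetical sketch of flattening nested benchmark data into flat
# rows suitable for a spreadsheet; names and values are illustrative.
nested = {
    "Fund A": {"score": 82, "metrics": {"EN1": 640.0, "RM1": "Yes"}},
    "Fund B": {"score": 69, "metrics": {"EN1": 512.0, "RM1": "No"}},
}

def flatten(data: dict) -> list[dict]:
    rows = []
    for entity, record in data.items():
        row = {"entity": entity, "score": record["score"]}
        # Prefix metric names so each value lands in its own column.
        row.update({f"metric_{k}": v for k, v in record["metrics"].items()})
        rows.append(row)
    return rows

for row in flatten(nested):
    print(row)
```

Each row then maps directly onto one spreadsheet line, with one column per score or metric, which is what makes side-by-side comparison across entities straightforward.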

Takeaway

The GRESB Standards and reporting platform do more than score. They provide widely accepted and publicly available definitions for indicators and performance metrics. These definitions are associated with a consensus-based assessment of materiality, expressed as points. In turn, GRESB provides a systematic, secure mechanism to collect this information from hundreds or thousands of organizations without a chaotic mess of emails, spreadsheets, and documents. This information is subject to validation and correction, improving data quality. Finally, the data is aggregated and made available for export and further analysis.

Each step creates value beyond scores.
