When I worked at the electric utility growing ratepayer-funded electricity and water resource management programs, I was impressed by the rows of cubicles housing dozens of PhDs schooled in evaluation, measurement, and verification: the EM&V team. Some of my favorite people were on that team. The group was run as an internal consultancy, conducting evaluations of the programs and facilitating evaluations done by third-party evaluators. About 10% of every program’s total budget went toward these program evaluations, and market evaluations also helped set new energy savings goals for the company every so often. The EM&V group’s work often led to insights on how to design future programs with greater impact, though usually 4 or more years after a program had been completed.
At the same time, the group I directly supported was responsible for collecting the quarterly and annual impact metrics. This work was pure data collection, both numbers and stories. The templates were well established, and each reporting season we assembled the reports. It was “plug and chug” work. Each report was a labor of love, but the path to the end product was well worn, and rarely did I see insights from these reports change a programmatic or business decision. Instead, the reports were instrumental in holding my company accountable to stakeholders and regulators. Maybe in a different universe, shareholders would have looked at these reports as well, but I digress.
About 2 years into my role with the impact reporting group, the state regulators asked the company to participate in a statewide effort among regulators, utilities, and other stakeholders to define program performance metrics and key performance indicators. It was a deep dive into the most nuanced workplace semantics I think I’ve ever seen. The excitement was strong. For the first time, programs would be measured regularly, not just evaluated and verified at the end of the program cycle. This new framework would allow these major publicly traded corporations to report social and environmental impact aggregated at scale, overseen by a regulator, for the benefit of the public good. Never mind the 2021-era SEC rulings on ESG; the year was 2009 and the regulator was the CPUC.
When I built the reporting and impact infrastructure for my company some 10 years later, I took these lessons with me and scaled them down in budget and resources. A seed-stage private company can hardly manage a CRM system rollout gracefully, let alone a 100-person statewide team developing a matrix of metrics that would appease dozens of stakeholders and ratepayer advocates. So we started with a Theory of Change. We met for a long retreat in Chicago, where we converged on the vision we saw for our mission-driven company. We debated questions like: what were we responsible for within the vision we aligned around, and what was outside our control? What could we hold ourselves accountable to that needed to be measured? And then we asked those questions 3 more times to filter out the aspirational do-gooder impulses and leave just the impact performance metrics.
The end result is something I’m forever proud of. The Theory of Change has been a north star for the organization for 4 years and is still going strong. Mapped to the Theory of Change are KPIs that we report quarterly and annually. The KPIs themselves are additive: some are always reported, and others are planned to be added as our margins allow. Balancing intent with available resources was a tough nut to crack, but we did it.
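For illustration only, here is a minimal sketch of how such an additive KPI registry could be represented, with each KPI mapped to a Theory of Change outcome and tiered by whether it is always reported or added as margins allow. The outcome names, KPI names, and tiers below are hypothetical, not drawn from our actual framework.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    CORE = "always reported"           # reported every period
    ASPIRATIONAL = "as margins allow"  # added when resources permit


@dataclass
class KPI:
    name: str
    outcome: str   # the Theory of Change outcome this KPI maps to
    cadence: str   # "quarterly" or "annual"
    tier: Tier


# Hypothetical registry; real outcomes and KPIs would come from the Theory of Change.
KPI_REGISTRY = [
    KPI("households_served", outcome="expanded access", cadence="quarterly", tier=Tier.CORE),
    KPI("avoided_emissions_tCO2e", outcome="reduced emissions", cadence="annual", tier=Tier.CORE),
    KPI("supplier_diversity_pct", outcome="inclusive growth", cadence="annual", tier=Tier.ASPIRATIONAL),
]


def kpis_for_report(cadence: str, include_aspirational: bool = False) -> list[KPI]:
    """Select the KPIs due in a given reporting period."""
    return [
        k for k in KPI_REGISTRY
        if k.cadence == cadence and (include_aspirational or k.tier is Tier.CORE)
    ]
```

The point of the tiering is simply that the reporting surface can grow without reworking the framework: the mapping to outcomes stays fixed, and new KPIs switch on as resources permit.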
Up next: TCFD and other climate-related reporting frameworks.