Carbon Offsets Round 1 - What Did We Learn?
Round 1 of carbon offsets extended from 1988 to roughly 2015. We’ve learned a lot about the practicalities of designing and implementing carbon offsets, and unfortunately much of that experience suggests that the idealized theory of offsets often fails in practice.
This thought tentatively identifies some of the key lessons of Round 1, based specifically on the experience of the Climatographers. It is not intended to be a “final” list of lessons. In some cases you’ll see that we’ve inserted a link to materials in the Climate Web that are particularly germane to a given lesson.
It is impossible to simultaneously maximize the two primary goals of offsets: “cost containment” and “climate benefit.” There is a fundamental conflict between the two objectives, and cost containment almost always wins.
Without constant attention to “willful blindness” and “capture of the system by stakeholders,” you quickly end up selling the equivalent of the emperor’s (invisible) new clothes.
The primary threat to the environmental integrity of carbon markets has been the definition, interpretation, and implementation of the “additionality” criterion. While fraud and double-counting have certainly occurred, they haven’t compared in magnitude to the challenge posed by additionality.
“Additional” emissions reductions or carbon removals have to be traceable to the workings of, and the incentives created by, the carbon market they are being sold into. If the reductions or removals would have occurred in the absence of those market-driven incentives, they’re not additional. It is not enough to characterize “additional” projects as projects that “wouldn’t have happened anyway,” or that “weren’t business as usual”; these ideas are vague and often lead to gaming.
It is almost never possible to know with certainty whether a particular ton of reduced emissions or carbon removal is additional. Because determining additionality involves a prediction about the future, it is inherently uncertain and requires a judgment call.
Policy makers have generally failed to recognize that the key decisions relating to carbon offsets, including when it comes to additionality, are policy decisions, not technical decisions that can be delegated to technocrats or stakeholders. For example, “testing” for additionality is an example of hypothesis testing, which requires determining the acceptable balance between “false positives” and “false negatives,” given that they are inversely correlated.
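The trade-off just described can be sketched numerically. In this purely illustrative example, each project carries a hypothetical “additionality score” and a known ground truth (all data invented for the sketch); moving the approval threshold shows how false positives and false negatives trade off against each other, which is exactly the balance that has to be set as a policy decision.

```python
# Illustrative sketch only: hypothetical scores and invented projects.
# A stricter additionality screen trades false positives (non-additional
# projects approved) against false negatives (additional projects rejected).

def screen(projects, threshold):
    """Approve projects whose hypothetical additionality score meets the
    threshold; return the counts of both error types."""
    false_pos = sum(1 for score, additional in projects
                    if score >= threshold and not additional)
    false_neg = sum(1 for score, additional in projects
                    if score < threshold and additional)
    return false_pos, false_neg

# (score, truly_additional) pairs -- invented for illustration only.
projects = [(0.9, True), (0.8, False), (0.6, True),
            (0.5, False), (0.3, True), (0.2, False)]

lenient = screen(projects, 0.25)  # (2, 0): non-additional tons slip in
strict = screen(projects, 0.85)   # (0, 2): genuine projects are rejected
```

Neither threshold is “correct”; choosing between them is a value judgment about which error matters more, not a technical calculation.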
In the face of policy and market uncertainties, offset developers are motivated to get the lowest-risk, lowest-cost reduction and sequestration tons accepted into offset markets. In practice those tend to be non-additional tons.
Even when the first example of a particular kind of offset project is additional, the big question is what comes next. Project developers can often find very similar projects and get them approved using the same methodology, even if they’re not additional. The single most important missing step in today’s process for approving a new carbon offset methodology is assessing how many non-additional tons could slip into the market through that same methodology.
The verification process associated with carbon offsets does not extend to verifying their additionality. Additionality testing involves building an a priori counterfactual case that verifiers accept; it can rarely, if ever, be empirically tested later on.
Additionality fatigue after 30 years of voluntary carbon offsets is similar to COVID-19 lockdown fatigue. Understandable, but in no way invalidating the need to tackle the underlying problem.
How “permanence” is defined for offset purposes can dramatically change the economics of carbon offsets from some sectors, and even eliminate them as a source of offsets. Permanence is ultimately a policy decision.
Like additionality, leakage usually cannot be empirically measured when evaluating carbon offsets. Handling leakage is ultimately a policy decision.
A key problem for voluntary offset markets has been that any offset approved by any of the offset standards organizations can claim to be as good as any other offset. There has been no way for offsets of demonstrably higher quality to differentiate themselves in the marketplace.
An alternative is needed to the “lowest common denominator” approach to approving carbon offsets. A “quality score” for offsets would give consumers much more information, and create a self-reinforcing incentive for better market performance.
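One way to picture such a quality score, offered purely as a hypothetical sketch with invented criteria, weights, and ratings, is a weighted combination of the quality dimensions discussed above (additionality, permanence, leakage):

```python
# Hypothetical illustration: a simple weighted "quality score" that would
# let higher-quality offsets differentiate themselves in the marketplace.
# The criteria, weights, and example ratings below are all invented.

WEIGHTS = {"additionality": 0.5, "permanence": 0.3, "leakage": 0.2}

def quality_score(ratings):
    """Combine per-criterion ratings (0 to 1) into a single 0-100 score."""
    return round(100 * sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS), 1)

# Two made-up projects rated against the three criteria.
forestry = {"additionality": 0.4, "permanence": 0.5, "leakage": 0.7}
methane = {"additionality": 0.9, "permanence": 1.0, "leakage": 0.9}

print(quality_score(forestry))  # prints 49.0
print(quality_score(methane))   # prints 93.0
```

Under a scheme like this, a 49-point credit and a 93-point credit would no longer trade as interchangeable commodities; how the criteria are defined and weighted would, of course, itself be a policy decision.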