I read an interesting article today. It's here:
http://dcgeoconsortium.org/2016/05/18/who-may-geoengineer-self-defense-civil-disobedience-and-revolution-part-one/
and is written by Patrick Taylor Smith, PhD, an Assistant Professor in the Department of Political Science at the National University of Singapore. He is writing a book titled “A Leap Into Darkness: Domination and the Normative Structure of International Politics,” and researches climate change and climate engineering.
I read it, disagreed with some of it, and thought it would be an interesting exercise to unpick it. I've been careful to avoid ad hominem attacks and, I hope, focussed on the ideas. My comments appear in square brackets throughout...
__________________________________________________________________________________________
Much of the discussion about the appropriateness or usefulness of
geoengineering—particularly dangerous and risky geoengineering strategies [strong bias
demonstrated already] like sulfate
aerosol injection—has relied upon a shared assumption about who will
end up deploying these new tools. That is, we’ve (mostly) assumed that fairly
wealthy, high-emitting states, private actors based in those countries, or
international institutions dominated by those states will be the ones to
finally inject sulfates or fertilize the ocean. This is entirely reasonable [agreed]. Rich and high
emitting states have the resources (or contain private agents with the
resources) to engage in geoengineering research and, potentially, deployment.
Powerful states will have the political wherewithal to either ignore the
entreaties of global governance institutions and civil society, or to gain
their assent. From a practical perspective, the rich and powerful states are
those that are likely to fund the research that would be needed should risky [keeps saying risky,
we'll come back to that] geoengineering strategies ever be deployed, and perhaps even if they
are not.
Yet, there is something odd, from a normative perspective, about this
emphasis. After all, geoengineering is presented as a solution to a problem
that has been—to a great extent—created by rich and powerful, high emitting
nations. There is something unsavory—or as Stephen Gardiner has put it, morally
corrupting—about the idea that rich countries would geoengineer in order to allow
them to retain a greater proportion of the benefits they’ve accrued from
emitting in the first place [there is no discussion of alternative
outcomes here - inaction is also an ethical choice, and there are
scenarios where not acting would be morally bankrupt; the issue is a
tacit rejection of the severity of climate change]. Of course, one can
try to justify risky [again] geoengineering as a
way of reducing the negative impacts of climate change on the poor,
marginalized, and low-emitting. But again, this is an odd argument for those responsible for
those impacts to make: “I’ve caused a terrible threat to hang over your head
and I’ll remove it through a strategy that is risky [again] for you but more
convenient for me.” So, the idea that rich countries could justify risky [again, and this is the
serious one of course, as it strongly implies a less risky strategy is
available to us] climate strategies by appealing to the protection of the people their
policies endanger is problematic. [It is also true that those who want to
continue to burn fossil fuels use a similar, much more repugnant version of
this argument, citing the need for fossil fuels for development and claiming that
conventional mitigation therefore harms the Global South].
This kind of worry doesn’t apply if those victims themselves decide to
geoengineer. They are simply defending themselves, or so the thought goes.
Let’s consider a scenario (borrowed liberally from Oliver Morton in The
Planet Remade); imagine a fairly wealthy but low emitting island
nation that will suffer catastrophic flooding. Adaptation measures are either
unavailable or prohibitively expensive. So, what can we say if this nation—that
is not responsible for climate change but nonetheless suffering from its ill
effects—decides to engage in an act of self-defense, protecting its territorial
integrity and political autonomy from the actions of more powerful nations?
Consider an analogous case. Suppose that a nation builds a dam which it knows
will destroy all of the arable land of a neighboring country. It seems pretty
clear that the flooded state can—assuming that it met all of the conditions
of just war—engage in a military action to destroy the dam. In other words,
risky [again!] geoengineering could
very well be a response to a set of bad consequences that—under different
circumstances—would justify going to war. And if something as potentially risky
and dangerous as military action could be justified, then it seems hard to deny
that similarly risky [again] geoengineering could
be as well. [This
is a very odd analogy that really requires some thought. The beauty of Oliver's
book is that its analogies are so sharp. Firstly, the 'flooded country' must be
upstream of the dam? Secondly, destruction of the dam has consequences
(confusingly, flooding) that military action against geoengineering would not have -
any comparison with the termination effect is moot due to the time necessary to
build up the risk, surely. I think this presupposes the naive, and roundly
rejected, idea of a 'Pinatubo-like release'. This is an accidental straw man
argument]
There is something appealing about this scenario. The weak and powerless
get to take matters into their own hands and defend themselves from the
predations and exploitation of the rich and powerful. Setting aside any
contingent issues about proportionality, effectiveness, or necessity, I want to
suggest that there are nonetheless some problems with thinking about geoengineering
this way. Consider two [or, with my additions below, four or five] different scenarios.
1. Accident: I am attacked by a ninja assassin. I defend myself by firing a gun at the assassin, but I miss and the bullet goes through the wall, striking an innocent bystander.
2. Redirection: I see that a ninja assassin is about to attack me, but I change the number on my apartment so that the assassin attacks my innocent neighbor.
3. Massacre: I worry, legitimately, about collateral damage so I hide, hoping someone has called the police. The assassin kills me and is seen doing so by my neighbour. The assassin then kills my neighbour.
Also, no matter how one feels, for completeness the following scenario should be added:
4. Murder: I shoot the ninja, missing both bystander and neighbour. I've called this murder deliberately, as it must be that there are consequences in any scenario, even were geoengineering to be deemed "successful".
Finally, this outcome has been explicitly ruled out:
5. Change of heart: The ninja changes
their mind, and everyone renounces violence.
Unfortunately, I think that's probably correct.
While fully working out the difference between the two examples would
take a lot more argument than a single blog post, it seems pretty clear to me
that Accident is much more easily justified or defended than Redirection.
And this is not merely due to risk; after all, I know that firing a gun in an
apartment complex is a dangerous thing to do and that changing my address might
not actually work. The difference, or so it seems to me, is how I use the
death of the innocent bystander. In one case (Redirection), the death
is—in some sense—a necessary part of defending myself and in the
other (Accident) it seems like a merely contingent feature of the case.
In Redirection, I seem to be allying myself with the ninja assassin in
order to kill my neighbor. That does not seem to be true in Accident.
The conclusion we can draw is that even when there is an uncontroversial and
obvious case of self-defense, you are not allowed to do just anything in
order to save yourself.
And there, I think, is the flaw in the logic, both in this thought experiment and
in the piece throughout. Not firing the gun comes with inherent risk, not just to
those responsible, but also to those who are not. Geoengineering is only ever
defined as risky in a context where the risks from inaction are ignored.
Intervention is risky, by definition, but the trolley experiments (which, I agree,
are a very useful tool for thinking this way) do capture the risk of
inaction. The above does not.
Potentially dangerous [ok, they are now potentially dangerous,
which is better] geoengineering activities—like iron fertilization or sulfate aerosol
injections—will [might] inflict harm on others in the course of saving our island
nation. And these people will be disproportionately those who are also
suffering and suffering innocently from climate change [it is absolutely true that the least responsible are the most at risk, from both geoengineering and climate change, but this doesn't, ipso facto, make geoengineering risky] (note, if you
redirected the ninja assassin to kill another ninja assassin
that is coming to kill you, that might be okay, but that isn’t the case here).
So, is dangerous [again] geoengineering more
like Accident or Redirection? I leave it to the reader to make their own
judgment, but I want to point out two things. First, it is interesting that the
potential permissibility of dangerous geoengineering might ride on a fairly
subtle distinction in moral philosophy; trolley problems are not so impractical
or useless. Second, I think there is a strong case to be made that dangerous [one more] geoengineering is a
redirection (see my commentary in Ethics, Policy, and the
Environment for a somewhat longer case). The key feature of sulfate injections—for example—is
that the very mechanisms that make it so effective as a potential shield also
create the negative impacts. The bullet hitting the bystander in Accident plays
no role in making the gun a useful tool for defending myself, but the very
cooling effects that make SRM useful are also what make it dangerous; they seem
very closely linked.
[It is hard to imagine SRM not having deleterious effects. However, this is
the same trap George Monbiot fell into. Risks from SRM must be placed in the context
of the effects of climate change. George compares an incorrect analogy for SRM (A) with the
current climate (in this case the Sahel, B): A leads to B, which is bad. He really should
have looked at post-Pinatubo Sahelian rainfall (C) against a 50-year scenario
(D). The answer is not as clear cut].
Of course, I could be wrong about that. But the fact that dangerous [last one] geoengineering looks
like a redirection of a threat against an innocent population would, if true,
seriously undermine any claim that it can be used in self-defense.
I believe that the risks around geoengineering should only be discussed in the
context of realistic counterfactuals. If you walked into a doctor's
surgery and said 'I've got a cold' and the doctor prescribed cutting your arm
off at the elbow with a pen knife without anaesthetic, you'd probably question
that judgement. If you're pinned under a rock in a slot canyon in Utah,
slowly dying, cutting your arm off might be the best solution. The problem I have
with this piece is just this: the planet has more than just a head cold.
I do not believe it has been adequately demonstrated that SRM is more risky and
dangerous than, for example, RCP8.5. In fact, I suspect that such a claim is indefensible at
the current time. If anyone proves otherwise, that would be a major blow to the
legitimacy of SRM geoengineering with stratospheric aerosols as an idea.
Threats of military action around geoengineering, for example, are equally valid
under RCP8.5, aren't they?