Monday, 7 December 2015

Second thoughts on Ecomodernism

Earlier this year, I made some enthusiastic comments on David Brin’s blog about the Ecomodernists, who had recently published their manifesto.

Why? I have been seeking ways in which people might maintain many of the advantages of a technological civilization, jettison some (or a lot of) the more toxic aspects, and also start repairing some of the damage we’ve inflicted on the biosphere. Ecomodernism seemed attractive because it offered a pathway to realise some of these aims.

But now I’m not so sure.

Several things made me uneasy, including two informed critiques by George Monbiot and Chris Smaje at The Dark Mountain project. George Monbiot accuses them of subjecting the poor to “remote and confident generalisations,” noting that historically this has meant that the poor have “suffered gravely.” Smaje offers a long critique, highlighting the downsides of the nineteenth-century style of (neo)liberal modernism that the Ecomodernists seem to be offering.

One pivotal issue is the claim that we can protect nature and wilderness areas better by ‘decoupling’ human activities from nature, increasing (material) living standards while decreasing the damage to the planet.

This claim seems attractive but is currently unproven.

In fact, a recent paper disputes the claim that as nations develop economically, they ‘decouple’ at all. This claim has been supported in the past by some standard measures used by governments that seem to show that “some developed countries have increased the use of natural resources at a slower rate than economic growth (relative decoupling) or have even managed to use fewer resources over time (absolute decoupling).”

However, the study, by Thomas Wiedmann and his team, shows the opposite: that “achievements in decoupling in advanced economies are smaller than reported or even non-existent.” In fact, their research “confirms that pressure on natural resources does not relent as most of the human population becomes wealthier” (my emphasis).

This seems to me to dent the credibility of some ecomodern claims.

Finally, a word about manifestos. These can be useful for political movements, but seem worse than useless if you’re trying to sort through difficult issues objectively. This is because manifestos too often degenerate into dogmas, which devotees end up defending tooth and claw, often by ignoring or denigrating potentially threatening evidence.

And the survival of civilization seems to me too important for this sort of thing to happen.

Wednesday, 2 December 2015

Technology will save us all, right?

There’s a big problem with most debates over technology. Whenever discussions about genetic engineering, future energies, nanotechnology, applied neuroscience, etc. come up, it’s assumed that EITHER you belong in a group that embraces any kind of technology, especially if it's shiny, OR you belong in a group (often, very unhelpfully labelled ‘Luddites’) who’d like to reject all technology beyond the Stone Age and live in a cold, dripping cave knapping flints.

(Please don't misunderstand the last point: I'm well aware of the sophistication of Stone Age/traditional cultures, and think that we have things to learn from them. It's just that in this sort of debate it's assumed that there are two and only two sides, and they're polarized).

I’ve never really identified with either of these extremes, but have instead tried to evolve a realistic view about the kinds of risks and promises that new technology brings. One thing that’s become very clear to me is that progress cannot mean technological progress only.

Take the Luddites. First, the fact that many Luddites were hanged (including a boy of twelve) often gets papered over in these discussions. Second, the Luddites did not reject technology out of hand, but were concerned about its alienating and disempowering effects.

I’ve read a lot of stuff recently about how the coming age of robotics and AI will free us from work. Well, I think it depends on how it’s done. If AI is introduced into factories and other workplaces, making lots of workers redundant, then it is plainly not going to benefit anyone but the employers.

If, on the other hand, AI is introduced alongside social and political reform (say, a guaranteed national income, and/or with a mind to using AI applications to empower small businesses and individual workers), then it might liberate us from punishing working hours, at least for some jobs.

This example suggests that expecting a new technology automatically to ‘save us,’ to raise living standards, or to take away the pain of toil is very naïve. So my position is similar to the one Nicholas Agar outlined in his book The Sceptical Optimist (Oxford, 2015).

Agar surveys the debates, and concludes that ‘declarations that technological progress is good or bad may be effective as rallying calls,' but do not provide us with a way of making informed choices. Instead, the benefits and dangers of new technology should be intelligently balanced against each other.

I think he's right, but it seems to me that the main problem here is that the stance that individuals and various factions take on technology is more often dictated by shared values than by a balancing of danger vs. opportunity.

So a Transhumanist will tend to embrace any kind of human-enhancing technology, simply because it is high technology, while a Deep Ecologist will tend to reject things like GM crops, nanotech and nuclear power.

I think that in practice it's pretty much impossible to make judgments that are divorced from your values. So maybe if you really want to make an informed judgment about technology, you need to figure out what your values actually are and be honest about them.

And my own values, these days, tend to revolve around whether a technology will genuinely enhance well-being and the health of the planet, as opposed to assuming that any innovation, for its own sake, is 'progress' and will magically solve all our problems.