
Transhumanism

c/o Barry Ritholtz

"I feel extremely uneasy about the prospect that [the calculations above] might become recognised among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying 'If you want to make an omelette, you must be willing to break a few eggs,' which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom's argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.



Here, then, are a few reasons I find longtermism to be profoundly dangerous. Yet there are additional, fundamental problems with this worldview that no one, to my knowledge, has previously noted in writing. For example, there's a good case to make that the underlying commitments of longtermism are a major reason why humanity faces so many unprecedented risks to its survival in the first place. Longtermism might, in other words, be incompatible with the attainment of 'existential security', meaning that the only way to genuinely reduce the probability of extinction or collapse in the future might be to abandon the longtermist ideology entirely.



To understand the argument, let's first unpack what longtermists mean by our 'longterm potential', an expression that I have so far used without defining. We can analyse this concept into three main components: transhumanism, space expansionism, and a moral view closely associated with what philosophers call 'total utilitarianism'.



The first refers to the idea that we should use advanced technologies to reengineer our bodies and brains to create a 'superior' race of radically enhanced posthumans (which, confusingly, longtermists place within the category of 'humanity'). Although Bostrom is perhaps the most prominent transhumanist today, longtermists have shied away from using the term 'transhumanism', probably because of its negative associations. Susan Levin, for example, points out that contemporary transhumanism has its roots in the Anglo-American eugenics movement, and transhumanists such as Julian Savulescu, who co-edited the book Human Enhancement (2009) with Bostrom, have literally argued for the consumption of 'morality-boosting' chemicals such as oxytocin to avoid an existential catastrophe (which he calls 'ultimate harm'). As Savulescu writes with a colleague, 'it is a matter of such urgency to improve humanity morally ... that we should seek whatever means there are to effect this.' Such claims are not only controversial but for many quite disturbing, and hence longtermists have attempted to distance themselves from such ideas, while nonetheless championing the ideology.



Transhumanism claims that there are various 'posthuman modes of being' that are far better than our current human mode. We could, for instance, genetically alter ourselves to gain perfect control over our emotions, or access the internet via neural implants, or maybe even upload our minds to computer hardware to achieve 'digital immortality'. As Ord urges in The Precipice, think of how awesome it would be to perceive the world via echolocation, like bats and dolphins, or magnetoreception, like red foxes and homing pigeons. 'Such uncharted experiences,' Ord writes, 'exist in minds much less sophisticated than our own. What experiences, possibly of immense value, could be accessible, then, to minds much greater?' Bostrom's most fantastical exploration of these possibilities comes from his evocative 'Letter from Utopia' (2008), which depicts a techno-Utopian world full of superintelligent posthumans awash in so much 'pleasure' that, as the letter's fictional posthuman writes, 'we sprinkle it in our tea.'



The connection with longtermism is that, according to Bostrom and Ord, failing to become posthuman would seemingly prevent us from realising our vast and glorious potential, which would be existentially catastrophic. As Bostrom put it in 2012, 'the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe.' Similarly, Ord asserts that 'forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential.' "



https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo





Topic - Transhumanism - Vegman70 11:07:21 11/2/21 (5)
