2 ways

I only see 2 ways this is going.

Number 1: Humanity destroys itself.

Number 2: Some kind of Butlerian Jihad.  Preventing the emergence of the kind of technology that might destroy humanity would require something like surveillance, but in a form that has not existed hitherto.  I can only conceive of this working if something like an ASI were maintaining it.  That's right: we would need a superhuman AI to prevent the emergence of superhuman AI.  

Does this mean that technology needs to be utterly eliminated?  No.  Does it mean that it must be forever frozen at a certain stage of development?  Again, I think the answer is no.  But it does mean that technological advance would have to be slowed down to the point where other forms of progress - moral, political, economic, cultural - could at least hope to keep pace.  

Look at the archaeological record: there are stone tools - arrowheads, hand axes - that take centuries, or millennia, to change.  They can be dated by very minor alterations.  This is the pace of technological advance that has been customary for humans for well over 99% of the time that we've been here - it is our norm.  Indeed, we have evolved and adapted for precisely this pace of advancement.  I'm not saying that our only hope is to slow things down quite this much.  But there may be some moderate position between humanity's norm and the current breakneck speed.
