More recently, Musk posted on X that "it increasingly appears that humanity is a biological bootloader for digital superintelligence." In other words, our purpose in this eschatological scheme is to give rise to superintelligent AI, which many advocates of The Mindset expect will then initiate a "colonization explosion" into the universe and harvest what some call our "cosmic endowment" of negentropy — "negentropy" being simply the opposite of "entropy."

Similarly, Daniel Faggella, founder of Emerj Artificial Intelligence Research and host of "The Trajectory" podcast, contends that "the great (and ultimately, only) moral aim of artificial general intelligence should be the creation of Worthy Successor — an entity with more capability, intelligence, ability to survive and … moral value than all of humanity." He defines a "worthy successor" as "a posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself." As he put it in a recent Facebook post, "imo whatever carries the most sentience SHOULD be the one running the show." In other words, if AIs were to "carry" more "sentience" than us — whatever that means exactly — then we ought to let them rule the world.

I talk to the people who believe that stuff all the time, and increasingly, many of them believe that it would be good to wipe out humanity, that an AI future would be a better one, and that we are merely a disposable, temporary container for the birth of AI.