Elon Musk, xAI and a Terminator Future

This is the recording of a Twitter Spaces that Elon Musk did on July 12.

Some bullets:

Elon Musk's new company, xAI, will build a system that he says would be safe because it is maximally curious and maximally truth-seeking.

“The overarching goal of xAI is to build a good AGI [artificial general intelligence] with the overarching purpose of just trying to understand the universe.”

“From an AI safety standpoint … a maximally curious AI, one that is trying to understand the universe, is I think going to be pro-humanity.”

Elon was a signatory to a statement from the Center for AI Safety that said, among other things, that dealing with the risk of extinction from artificial intelligence should be a global priority on par with mitigating the risk of pandemics and nuclear war.

“If I could press pause on AI, or really advanced AI, digital superintelligence, I would. It doesn’t seem like that is realistic, so xAI is essentially going to build an AI … in a good way … hopefully.”

“It’s actually important for us to worry about a Terminator future in order to avoid a Terminator future.”

“If you program a certain reality [into an AI] you have to say what morality you are programming. Whose decision is that?” Musk said, adding that once an AI is programmed with a specific moral standpoint, it would be easier to prompt it into reversing that standpoint. (This is known as the “Waluigi effect”, named after Luigi’s mischievous arch-rival in the Super Mario video game franchise.)

“I think to a superintelligence, humanity is much more interesting than not [having] humanity … When you look at the various planets in our solar system, the moons and asteroids, really probably all of them combined are not as interesting as humanity.”

“As with everything, I think we’re very open to critical feedback and welcome that … Actually, one of the things that I like about Twitter is that there’s plenty of negative feedback on Twitter, which is helpful for ego compression.”