2 Comments

I am curious whether you think AI could be a tool for predicting and preventing the crises you discussed.


I think the category describing superhuman AGI doesn't do a good job. The most realistic scenario (and the actual problem) with AI isn't that we're going to give it a bad goal to pursue. ("The Paperclipper" was originally meant to show that not every goal a superintelligent AI might pursue is automatically good just because it knows better: a universe full of paperclips, without any humans or anyone experiencing anything at all, is obviously bad.) The problem is that we don't actually know how to give AI any goals at all. See, e.g., https://moratorium.ai/#inner-alignment
