There are a lot of vested interests in maintaining the status quo, and there’s no reason to expect AI to develop a benefit/cost function that leads to the destruction of humanity or civilization.
I worry more about an AI whose cost function favours only a small subset of humanity. There's also the possibility of one that's simply broken, I guess.