I feel like calling it a “metric” is borderline clickbait since P(doom) isn’t measurable. It isn’t even Bayesian in any meaningful sense: there’s no observable evidence you could actually update it against.
Still, it’s an interesting article and discussion point.
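To be fair, a genuinely Bayesian estimate would at least have to update on evidence via Bayes’ rule. Purely as a sketch of what that would require (E here stands for some hypothetical observable evidence, not anything from the article):

    P(doom | E) = P(E | doom) * P(doom) / P(E)

The problem is that nobody can say what P(E | doom) even means for a one-shot event we’ve never observed, which is why the number can’t really be updated, only asserted.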
I’m in the low-P(doom) camp. There are a lot of vested interests in maintaining the status quo, and there’s no reason to expect AI to develop a benefit/cost function that leads to the destruction of humanity or civilization.
If anything, I think we’ll end up in an AI-driven utopia, where most of the work necessary to live is done by machines.
> There are a lot of vested interests in maintaining the status quo, and there’s no reason to expect AI to develop a benefit/cost function that leads to the destruction of humanity or civilization.
I worry about an AI whose cost function favours only a small subset of humanity. There’s also the case where the cost function is just broken, I guess.
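Purely as a toy sketch of what I mean (the names and setup are made up for illustration, not anyone’s actual proposal):

    # Toy "cost function": welfare only counts for a favoured subset,
    # so an optimizer maximizing this is indifferent to everyone else.
    def misaligned_utility(welfare: dict[str, float], favoured: set[str]) -> float:
        return sum(v for person, v in welfare.items() if person in favoured)

    # misaligned_utility({"a": 1.0, "b": -100.0}, favoured={"a"}) == 1.0

Everything outside the favoured set contributes zero, so an optimizer will happily trade it away for an arbitrarily small gain inside the set.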