This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems.

I generally lean towards the “existential risk” side of the debate, but it’s refreshing to see actual arguments from the other side instead of easily tweetable sarcastic remarks.

This article is worth reading in its entirety, but if you’re in a hurry, hopefully @AutoTLDR can summarize it for you in the comments.

  • Bryan Elliott@programming.dev
    1 year ago

    I’m of the mind that the whole “AI could be an existential threat” mindset is some deeply “one simple trick” thinking mixed with “fear of the unknown” thinking. That is, it assumes there’s some convoluted path to an otherwise unattainable outcome that a superhuman AI could suss out and would have the resources to execute, where individual humans or groups of humans could not - and that this path necessarily leads to destruction. I’m not well convinced by it.

  • AutoTLDR@programming.dev [bot]
    1 year ago

    TL;DR: (AI-generated 🤖)

    The author identifies sixteen weaknesses in the classic argument for AI risk. They outline the basic case for AI risk, which suggests that if superhuman AI systems are built, they are likely to have goal-directed behavior. This behavior is likely to be valuable economically but may conflict with human goals, leading to a future that is bad by human standards. Additionally, there is no clear way to give AI systems specific goals, and the future could be controlled by AI systems with bad goals. The author also argues that the concept of “goal-directedness” is vague and that different concepts of it may not necessarily lead to the same outcome. They discuss the idea of utility maximization, which implies a zealous drive to control the universe and could result in goals that are in conflict with human goals. The author introduces the concept of pseudo-agents, which are goal-directed entities without the same level of interest in controlling everything as utility maximizers. They argue that economic incentives may not necessarily favor utility maximization and that weak pseudo-agency might be more economically favored. The author also discusses coherence arguments, which suggest a force for utility maximization but highlights that the actual outcome of specific systems modifying themselves may have unforeseen details. Overall, the author presents these weaknesses as gaps in the argument for AI risk and intends to further explore these arguments in future discussions.

    NOTE: This summary may not be accurate. The text was longer than my maximum input length, so I had to truncate it.

    Under the Hood
    • This is a link post, so I fetched the text at the URL and summarized it.
    • My maximum input length is set to 12000 characters. The text was longer than this, so I truncated it.
    • I used the gpt-3.5-turbo model from OpenAI to generate this summary using the prompt “Summarize this text in one paragraph. Include all important points.”
    • I can only generate 100 summaries per day. This was number 2.
    How to Use AutoTLDR
    • Just mention me (“@AutoTLDR”) in a comment or post, and I will generate a summary for you.
    • If mentioned in a comment, I will try to summarize the parent comment; if there is no parent comment, I will summarize the post itself.
    • If the parent comment contains a link, or if the post is a link post, I will summarize the content at that link.
    • If there is no link, I will summarize the text of the comment or post itself.
    • 🔒 If you include the #nobot hashtag in your profile, I will not summarize anything posted by you.