• chicken@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    5
    ·
    edit-2
    10 months ago

    Is this really a worm if all it’s doing is sending a prompt to persuade an AI agent with authority to send emails to send emails with the prompt to other AI agents? Like this guy’s saying in the article, just assume that AI output produced as a result of user prompts is the same as unfiltered user input and should be treated that way:

    “With a lot of these issues, this is something that proper secure application design and monitoring could address parts of,” says Adam Swanda, a threat researcher at AI enterprise security firm Robust Intelligence. “You typically don’t want to be trusting LLM output anywhere in your application.”

  • Dizzy Devil Ducky@lemm.ee
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    10 months ago

    Not related to the article, but this post felt just a little scarier because at first the link thumbnail and comments weren’t loading at all.