• Cruxifux@feddit.nl · 46 points · 26 days ago

    “I panicked” had me laughing so hard. Like, implying that the robot can panic, and that panicking can make it fuck shit up when flustered. Idk why that’s so funny to me.

    • Feathercrown@lemmy.world · 27 points · 26 days ago

      It’s interesting that it can “recognize” the actions as clearly illogical afterwards, as if made by someone panicking, but will still make them in the first place. Or, a possibly funnier option, it’s mimicking all the stories of people panicking in this situation. Either way, it’s a good lesson to learn about how AI operates… especially for this company.

      • abbotsbury@lemmy.world · 10 points · 26 days ago

        > It’s interesting that it can “recognize” the actions as clearly illogical afterwards, as if made by someone panicking, but will still make them in the first place

        Yeah, I don’t use LLMs often, but I use ChatGPT occasionally, and sometimes when asking technical/scientific questions it will have glaring contradictions that are just completely wrong for no reason. One time when this happened, I told it that it fucked up and to check its work, and it corrected itself immediately. I tried again to see if I could get it to overcorrect or something, but it didn’t go for it.

        So as weird as it sounds, I think adding “also make sure to always check your replies for logical consistency” to its base prompt would improve things.
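
        For what it's worth, a minimal sketch of what that might look like with the OpenAI Python SDK; the model name and the exact wording of the self-check clause are just placeholders, not anyone's actual base prompt:

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Hypothetical base prompt with the self-check clause bolted on.
        SYSTEM_PROMPT = (
            "You are a helpful assistant. "
            "Also make sure to always check your replies for logical consistency."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "Why does ice float on water?"},
            ],
        )
        print(response.choices[0].message.content)
        ```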

        • Swedneck@discuss.tchncs.de · 9 points · 26 days ago

          and just like that we’re back to computers doing precisely what we tell them to do, nothing more and nothing less.

          one day there’s gonna be a sapient LLM and it’ll just be a prompt of such length that it qualifies as a full genome

        • Feathercrown@lemmy.world · 2 points · 25 days ago

          This unironically works; it’s basically the same reason chain-of-reasoning models produce better outputs.
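
          And a rough sketch of the same trick done explicitly in two passes (draft, then ask the model to review its own draft), which is what the "you fucked up, check your work" exchange above amounts to. Again OpenAI Python SDK with a placeholder model name:

          ```python
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def ask_with_self_check(question: str) -> str:
              # Pass 1: get a draft answer.
              draft = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=[{"role": "user", "content": question}],
              ).choices[0].message.content

              # Pass 2: have the model check its own draft for
              # contradictions, like the correction nudge described above.
              follow_up = (
                  "Check your answer above for logical consistency. "
                  "If you find a contradiction, reply with a corrected "
                  "answer; otherwise repeat the answer unchanged."
              )
              return client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[
                      {"role": "user", "content": question},
                      {"role": "assistant", "content": draft},
                      {"role": "user", "content": follow_up},
                  ],
              ).choices[0].message.content

          print(ask_with_self_check("Why does ice float on water?"))
          ```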