cross-posted from: https://lemmy.sdf.org/post/41849856

If an LLM can’t be trusted with a fast food order, I can’t imagine what it is reliable enough for. I was really expecting this to be the easy use case for these things.

It sounds like most orders still worked, so I guess we’ll see if other chains come to the same conclusion.

  • Lugh@futurology.todayM · 2 days ago

    I’m still surprised at the rate LLMs make simple mistakes. I was recently using ChatGPT to research biographical details about James Joyce’s life, and it gave me several basic facts (places he lived & was educated at) at variance with what is clearly stated in the Wikipedia article about him.

    • SanctimoniousApe@lemmings.world · 2 days ago

      That’s because there’s no “thinking” behind LLMs - it’s just pattern-matching on extreme steroids. They work by looking at all the text & such that they’ve been fed, and coming up with something that looks like an amalgamation of all the stuff they’ve seen on the subject you asked about. There’s absolutely minimal (if any) reasoning or logic involved in what they do.
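      As a toy illustration of that “pattern matching” idea, here is a minimal bigram sketch (nothing like a real transformer, and the training text is made up): each next word is drawn purely from counts of what followed it in the text the model was fed, with no reasoning or fact-checking anywhere in the loop.

      ```python
      # Toy "pattern matcher": a bigram model that picks each next word purely
      # from counts of what followed it in the text it was fed. There are no
      # facts and no logic here, only "what usually comes next?" -- real LLMs
      # do this with neural networks over tokens, but the idea is similar.
      import random
      from collections import defaultdict

      training_text = "the cat sat on the mat and the dog sat on the rug"

      # Count which word follows which in the training text.
      follows = defaultdict(list)
      words = training_text.split()
      for current, nxt in zip(words, words[1:]):
          follows[current].append(nxt)

      # Generate: repeatedly sample a plausible-looking next word from the counts.
      word = "the"
      output = [word]
      for _ in range(8):
          candidates = follows.get(word)
          if not candidates:
              break
          word = random.choice(candidates)
          output.append(word)

      print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
      ```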

      • CanadaPlus@lemmy.sdf.orgOP · 1 day ago

        Arguably, thinking is extreme pattern matching. And they can make original things that were definitely not in their training data.

        The problem seems to be more about alignment. They’re rewarded for generating human-looking text, nothing more, and we have no obvious alternative to training them that way. So, of course they’re a bit imprecise at any other task.
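        To make that concrete, here is a simplified sketch of the reward being described (the numbers and token choices are hypothetical, and it ignores RLHF and other fine-tuning): during training the model is scored only on how much probability it puts on the token a human actually wrote next, so sounding plausible and being correct earn exactly the same kind of credit.

        ```python
        # Simplified next-token training signal (cross-entropy on one step).
        # The loss only rewards assigning high probability to whatever token the
        # human-written text contained next; nothing in it checks whether the
        # resulting sentence is factually true or completes the task correctly.
        import math

        # Hypothetical model output: probabilities for candidate next tokens.
        predicted_probs = {"Dublin": 0.55, "Paris": 0.30, "Trieste": 0.15}
        actual_next_token = "Dublin"  # what the training text really said

        loss = -math.log(predicted_probs[actual_next_token])
        print(f"loss = {loss:.3f}")  # lower = more "human-looking", not more true
        ```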

        • webghost0101@sopuli.xyz · 1 day ago

          I think it’s one of the systems thinking emerges from, though not the only one.

          I do regularly feel like I have an LLM-analogue component within my consciousness; as someone with AuDHD, I will sometimes say the exact opposite word from the meaning I intend.

          I am also known to use certain sentences wrongly because I apparently misunderstood their meaning: it’s an auto-copy of things I heard others say in a similar-seeming context, so my brain believes I can say them to stall socially and buy more time to think about what I really want to say.