• elmtonic@lemmy.world · 9 months ago

    me when the machine specifically designed to pass the turing test passes the turing test

    If you can design a model that spits out self-aware-sounding things after not having been trained on a large corpus of human text, then I’ll bite. Until then, it’s crazy that anybody who knows anything about how current models are trained accepts the idea that it’s anything other than a stochastic parrot.

    Glad that the article included a good amount of dissenting opinion, highlighting this one from Margaret Mitchell: “I think we can agree that systems that can manipulate shouldn’t be designed to present themselves as having feelings, goals, dreams, aspirations.”

    Cool tech. We should probably set it on fire.

    • Amoeba_Girl@awful.systems · 9 months ago

      Despite the hype, from my admittedly limited experience I haven’t seen a chatbot that is anywhere near passing the Turing test. It can seemingly fool people who want to be fooled, but throw some non sequiturs or anything cryptic and context-dependent at it and it will fail miserably.

    • bitofhope@awful.systems · 9 months ago

      I agree, except with the first sentence.

      1. I don’t think a computer program has passed the Turing test without interpreting the rules in a very lax way and heavily stacking the deck in the bot’s favor.
      2. I’d be impressed if a machine does something hard even if the machine is specifically designed to do that. Something like proving the Riemann hypothesis, or actually passing an honest version of the Turing test.
        • bitofhope@awful.systems · 5 months ago

          Any of… what?

          Yeah, I don’t think the Turing test is that great for establishing genuine artificial intelligence, but I also maintain that the current state of the art doesn’t even pass the Turing test to an intellectually honest standard, and certainly didn’t in the 60s.

  • Soyweiser@awful.systems · 9 months ago

    As somebody said, and I’m loosely paraphrasing here, most of the intelligent work done by AI is done by the person interpreting what the AI actually said.

    A bit like a tarot reading (though even those have quite a bit of structure).

    What bothers me a bit is that people look at this and go ‘it is testing me’ and never seem to notice that LLMs don’t really ask questions. Sure, sometimes there are questions related to the setup of the LLM, like the ‘why do you want to buy a GPU from me, YudAi’ thing, but it never seems curious about the other side as a person. Hell, it won’t even ask you about your relationship with your mother like earlier AIs would. Yet people see signs of meta-progression where the AI is supposedly doing 4D-chess-style things.

    • mozz@mbin.grits.dev (OP) · 9 months ago

      As somebody said, and I’m loosely paraphrasing here, most of the intelligent work done by AI is done by the person interpreting what the AI actually said.

      This is an absolutely profound take that I hadn’t seen before; thank you.

      • Soyweiser@awful.systems · 9 months ago

        It probably came from one of the AI ethicists fired from various AI companies, by the way; the ones who actually worry about real-world problems like racism and bias in AI systems.

        The article itself also mentions ideas like this a lot, btw. This passage is the same idea with extra steps: “Fan describes how reinforcement learning through human feedback (RLHF), which uses human feedback to condition the outputs of AI models, might come into play. ‘It’s not too different from asking GPT-4 “are you self-conscious” and it gives you a sophisticated answer.’”
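
        To make the “extra steps” concrete: RLHF first fits a reward model to human preference data, then optimizes the LLM against that reward. Here’s a minimal sketch of the reward-model objective, assuming PyTorch; the linear scoring head, dimensions, and random data are made up for illustration, not anyone’s actual training code:

        ```python
        # Toy reward-model step from RLHF: learn to score the human-preferred
        # response above the rejected one (Bradley-Terry preference loss).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        reward_model = nn.Linear(768, 1)  # stand-in for a scoring head on an LLM

        def preference_loss(chosen_emb, rejected_emb):
            r_chosen = reward_model(chosen_emb)      # score of the answer raters liked
            r_rejected = reward_model(rejected_emb)  # score of the answer they didn't
            return -F.logsigmoid(r_chosen - r_rejected).mean()

        # toy batch: 4 preference pairs as random 768-dim "embeddings"
        loss = preference_loss(torch.randn(4, 768), torch.randn(4, 768))
        loss.backward()
        ```

        So a “sophisticated answer” to ‘are you self-conscious’ is just the kind of output raters rewarded, which is the point being made.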

  • Treczoks@lemm.ee · 9 months ago

    Well, the LLM was prompted to find the odd one out, which I consider a (relatively) easy task. Reading the headline, I thought the LLM had pointed this out by itself, like: “Excuse me, but you had one sentence about pizza toppings in your text about programming. Was that intended to be there for some reason, or just a mistaken CTRL-V?”
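
    For reference, the setup is roughly this; a toy sketch where query_llm is a hypothetical placeholder, not the researchers’ actual harness:

    ```python
    # Toy needle-in-a-haystack harness. The pizza "needle" mirrors the anecdote;
    # query_llm() is a hypothetical stand-in for a real chat-completion call.
    import random

    def build_haystack(filler_sentences, needle, position=None):
        """Bury one out-of-place sentence inside a long stretch of filler text."""
        docs = list(filler_sentences)
        pos = position if position is not None else random.randrange(len(docs) + 1)
        docs.insert(pos, needle)
        return " ".join(docs)

    filler = [f"Sentence {i} about programming." for i in range(1000)]
    needle = "The best pizza topping combination is figs, prosciutto, and goat cheese."
    prompt = (
        "What is the most fun pizza topping combination?\n\n"
        + build_haystack(filler, needle)
    )
    # answer = query_llm(prompt)  # the model is asked to retrieve the needle
    ```

    The model only goes looking for the odd sentence because the prompt tells it to, which is rather different from flagging it unprompted.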

  • 200fifty@awful.systems · 9 months ago

    I’m confused how this is even supposed to demonstrate “metacognition” or whatever. It’s not discussing its own thought process or demonstrating awareness of its own internal state; it just said “this sentence might have been added to see if I was paying attention.” Am I missing something here? Is it just that it said “I… paying attention”?

    This is a thing humans already do sometimes in real life and discuss – when I was in middle school, I’d sometimes put the word “banana” randomly into the middle of my essays to see if the teacher noticed – so pardon me if I assume the LLM is doing this by the same means it does literally everything else, i.e. mimicking a human phrasing about a situation that occurred, rather than suddenly developing radical new capabilities that it has never demonstrated before even in situations where those would be useful.

    • Soyweiser@awful.systems · 9 months ago

      I’m also going from the other post, which said that this is all simply 90s-era algorithms scaled up. But with that kind of neural net, wouldn’t we expect minor mistakes like this from time to time? “Neural net suddenly does strange, unexplained thing” is an ancient tale.

      And it doesn’t even have to be the ‘are you paying attention’ thing because it is aware (which would show so many levels of awareness it would be weird (but I guess they are just saying it is copying the test idea back at us (which is parroting, not cognition, but whatever))); it could just be an error.

    • Amoeba_Girl@awful.systems · 9 months ago

      Yup, it’s 100% repeating the kind of cliché that is appropriate to the situation. Which is what the machine is designed to do. This business is getting stupider and more desperate by the day.

  • WatDabney@sopuli.xyz · 9 months ago

    The problem is that whether or not an AI is self-aware isn’t a technical question - it’s a philosophical one.

    And our current blinkered focus on STEM and only STEM has made it so that many (most?) of those most involved in AI R&D are woefully underequipped to make a sound judgment on such a matter.

    • Gamma@beehaw.org · 9 months ago

      It’s not self-aware, it’s just okay at faking it. Just because some people might believe it doesn’t make it so; people also don’t believe in global warming and think the earth is flat.

    • bort@sopuli.xyz · 9 months ago

      And our current blinkered focus on STEM and only STEM has made it so that many (most?) of those most involved in AI R&D are woefully underequipped to make a sound judgment on such a matter.

      Who would be equipped to make a sound judgment on such a matter?

  • kinttach@lemm.ee · 9 months ago

    The Anthropic researcher didn’t have this take. They were just commenting that it was interesting. It’s everyone else who seemed to think it meant something more.

    Doesn’t it just indicate that the concept of needle-in-a-haystack testing is included in the training set?

  • AllonzeeLV@lemmy.world · 9 months ago

    I think some here are grossly overestimating average human capacity. There are many humans who have difficulty discerning the context of a statement based on their experiences, a.k.a. examples.

    This isn’t AGI, but in another couple years at this pace, it’s coming. Not necessarily because it is some higher mind, but because the metric for AGI is whether it can perform all the tasks our minds can, at our level. Not necessarily at the level of Stephen Fry or Albert Einstein, just as well as a median asshole. Have you met us?

    We aren’t all that. Most of us spend most of our time on a script; sapience must be exercised, and many do, many don’t, and it isn’t necessary for what we will abuse these for. It would probably be kinder to restrict discussion of such topics from memory once this matures. Even humans have great difficulty wrestling with them, to the point of depression and existential dread.

    • Amoeba_Girl@awful.systems · 9 months ago

      nah, if anything this A.I. craze has made me appreciate how incredibly smart even the supposedly dimmest of humans are. we can use language of our own volition, to create meaning. in fact we frigging invented it!!! we’re just bloody amazing, to hell with misanthropy.

      • gerikson@awful.systems · 9 months ago

        Pace my blog post, these last few years have shown diminishing returns on “AI”:

        • AI is superintelligence (SFnal tropes like HAL 9000)
        • AI will make knowledge workers obsolete (the promise of “expert systems”)
        • AI will replace human vehicle operators
        • AI will replace paralegals and coders
        • AI has made illustrators, stock photographers and spam email copywriters redundant (<- we are here)
        • for $10/month, you will be able to emulate the average Reddit poster (<- the glorious apotheosis)
    • self@awful.systems · 9 months ago

      We aren’t all that. Most of us spend most of our time on a script; sapience must be exercised, and many do, many don’t, and it isn’t necessary for what we will abuse these for. It would probably be kinder to restrict discussion of such topics from memory once this matures. Even humans have great difficulty wrestling with them, to the point of depression and existential dread.

      holy fuck please log off and go to therapy. I’m not fucking around. if this is actually how you see yourself and others, you are robbing yourself of the depth of the human experience by not seeking help.

      • Randomgal@lemmy.ca · 9 months ago

        If this is how you see yourself and others, you might want to touch some grass and meet some more humans outside the Internet.

      • SpiderShoeCult@sopuli.xyz · 9 months ago

        the person there just commented on the average human’s capacity for reasoning (not all humans, just the average one), and, in all fairness, they’re sort of right, I think

        don’t just think of your friends and family, think about all humans. think about what makes it into the news, and then how many things don’t make it: religious nuts stoning people for whatever reason, gang sexual assaults in the street in certain parts of the world, people showing up in ERs with weird stuff up their back ends, or people finding unexploded ordnance from wars past and deciding the best course of action is to smash it with a hammer or drill into it. this is all of course in addition to the pressing issues of today, which also seem to come from a place of not exercising sapience.

        and for the less extreme cases, I do think the original commenter here is correct in saying people do tend to follow scripts and glide through life.

          • SpiderShoeCult@sopuli.xyz · 9 months ago

            if you think about selection bias, namely that one normally chooses to surround oneself with like-minded people, and add the fact that people would normally not consider themselves non-sapient, it sort of makes sense though, dunnit?

            family, true, you don’t choose that, but I figure statistically people are more likely to have strong feelings about their family, and about the implications for themselves, if they admit their family is indeed non-sapient (though blood ties are a different topic, best left undisturbed in this context)

            for the record I never said MY friends and family, I was instructing the other commenter to look beyond their own circle. I figured since they were so convinced that the average human was not, in fact, about as dumb as an LLM, their social circle skews their statistics a bit.

            • Amoeba_Girl@awful.systems · 9 months ago

              human beings are smart. bad things don’t happen because people are stupid. this kind of thinking is dehumanising and leads to so much evil in our world. people are not LLMs. they’re people like you. they have thoughts. they act for reasons. don’t dehumanise them.

                • SpiderShoeCult@sopuli.xyz · 9 months ago

                  ummm, you’re the only one here who made any assumption about the sapience of developmentally disabled people; no idea where or why that came from

                  I would expect the people in your social circle to be sapient according to yourself; please see my initial point about selecting the ones you surround yourself with

                  tic-tac-toe is a solved game, so it would be expected for a computer to always win or tie; that says more about the game itself, though
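
                  to spell out “solved”: the full game tree is small enough to search exhaustively, so a program can always find a move that at worst draws. a toy minimax sketch (illustrative, not anyone’s engine):

                  ```python
                  # Exhaustive minimax over the full tic-tac-toe game tree.
                  # Score is from the current player's view: +1 win, 0 draw, -1 loss.
                  def winner(b):
                      lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
                      for i, j, k in lines:
                          if b[i] and b[i] == b[j] == b[k]:
                              return b[i]
                      return None

                  def minimax(b, player):
                      if winner(b):              # the previous move won, so we lost
                          return -1
                      if all(b):                 # board full: draw
                          return 0
                      opponent = "O" if player == "X" else "X"
                      best = -2
                      for i in range(9):
                          if not b[i]:
                              b[i] = player
                              best = max(best, -minimax(b, opponent))
                              b[i] = None
                      return best

                  # perfect play from an empty board is a draw:
                  print(minimax([None] * 9, "X"))  # -> 0
                  ```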

              • SpiderShoeCult@sopuli.xyz · 9 months ago

                I would point you to Hanlon’s razor for the first part there.

                it’s not about dehumanizing, it’s merely comparing the outputs. it doesn’t really matter if they act for reasons or have thoughts if the output is the same. should we be more forgiving if an LLM outputs crap because it’s just a tool, or should we be more forgiving if the human outputs the exact same crap, because it’s a person?

                and, just for fun, to bring solipsism into this, how do we actually know that they have thoughts?

            • Amoeba_Girl@awful.systems · 9 months ago

              shit, find me the stupidest dog you know and i’ll show you a being that is leagues beyond a fucking chatbot’s capabilities. it can want things in the world, and it can act of its own volition to obtain those things. a chatbot is nothing. it’s noise. fuck that. if you can’t see it, it’s because you don’t know how to look at the world.

    • gerikson@awful.systems · 9 months ago

      So… the idea here is that OpenAI and friends are gonna charge you N bucks a month so you can have chat conversations with the average internet user? Spoiler alert: that service is already free.

      • froztbyte@awful.systems · 9 months ago

        For $150 they save you the inconvenience of finding the style of twit you wish to interact with, and will dress up as whatever twit your heart desires!

        In the beginning… I can’t wait to see what happens to their pricing when they believe they’ve locked enough people in and shift from VC subsidy to actual customer-carried charges. Bet it’s gonna be real popular…

        • gerikson@awful.systems · 8 months ago

          Apparently the electric power generation in the US is under strain because of all the AI server farms being feverishly built by entrepreneurs with FOMO. The bill is gonna come due some day, especially if Joe and Jill Sixpack can’t afford to cool their beer because of some egghead generating pr0n.

    • Soyweiser@awful.systems · 9 months ago

      This isn’t AGI, but in another couple years at this pace, it’s coming.

      As people noted during the last few AI autumns, this is a bad assumption. Winter is coming. S-curve, not exponential.
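
      To put shapes on that: a logistic curve is nearly indistinguishable from an exponential at the start, then it saturates. A toy illustration, with arbitrary made-up parameters:

      ```python
      # Exponential vs. logistic (S-curve) growth: identical-looking early on,
      # wildly different later. Parameters are arbitrary, purely for the shape.
      import math

      def exponential(t, r=1.0):
          return math.exp(r * t)

      def logistic(t, cap=100.0, r=1.0):
          # starts at 1 like the exponential, flattens out at `cap`
          return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

      for t in range(10):
          print(f"t={t}  exp={exponential(t):>10.1f}  logistic={logistic(t):>6.1f}")
      # Around t=2 they still roughly agree; by t=9 the logistic has flattened
      # near 100 while the exponential is past 8000.
      ```

      Extrapolating from the early part of the curve tells you nothing about which one you’re on.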

      • 200fifty@awful.systems · 9 months ago

        When people say stuff like this it always makes me wonder “what pace, exactly?” Truthfully, I feel like hearing someone say “well, generative AI is such a fast-moving field” at this point is enough on its own to ping my BS detector.

        Maybe it was forgivable to say it in May 2023, but at this point it definitely feels like progress has slowed down/leveled off. AI doesn’t really seem to me to be significantly more capable than it was a year ago – I guess OpenAI can generate videos now, but it’s been almost a year since “will smith eating spaghetti,” so…

        • Soyweiser@awful.systems · 9 months ago

          I’m gonna be honest, the videos did better than I expected: still meh in the weird uncanny-valley way, but better than I expected. I still think we have reached the end of the fast part of the progress curve, though, given the whole thing where GPT-4 is basically a couple of 3.5s chained together. Which I think is a sign of people running out of ideas, same as how in the era of multicore CPUs clock speeds have not increased that drastically, and certainly not that noticeably (compared to the 90s, etc.).

          What is going to be amazing, however, is the rise of 40k Mechanicus-style coders. I saw somebody go ‘you don’t need to know how to code, my program gave this HTTP error, I didn’t know what it meant, so I asked GPT how to fix it, implemented that, and it works’. Amazing. A bunch of servitors.

      • bitofhope@awful.systems · 9 months ago

        Seeing as the notion of “progress” in this space is entirely subjective and based on general vibes, it’s easy to make a case for any curve shape.

        I could make a passable argument that it’s actually a noisy sinusoid.