Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

    • UlyssesT [he/him]@hexbear.net

      Crude reductionist beliefs such as humans being nothing more than “meat computers” and/or “stochastic parrots” have certainly contributed to the belief that a sufficiently elaborate LLM treat printer would be at least as valid a person as any actual living human being.

    • Nevoic@lemm.ee

      I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

      I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

      Even if we find the limit to LLMs and figure out that sentience can’t arise (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produces sentience.

      That’s not something that I’ve bought into yet.

      • sooper_dooper_roofer [none/use name]@hexbear.net

        > To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

        How is that plausible? The human brain has more processing power than a snake’s, which has more than a bacterium’s (equivalent of a) brain, yet those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests; will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written.

        chatGPT : freshman-year “hello world” program
        human being : amoeba
        (the “:” symbol means the first thing is being analogized to the second)

        A human is a sentience made up of trillions of unicellular consciousnesses.
        chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

        Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells and nephrons and so on are basically little tiny specialized amoebas. ChatGPT doesn’t… do anything. It has no will.

      • Dirt_Owl [comrade/them, they/them]@hexbear.net

        Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.

        It just screams marketing scam to me. I’m not against the idea of AI, although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense.

        • Nevoic@lemm.ee

          Notice the distinction in my comments between an LLM and other algorithms; that’s a key point that you’re ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn’t have to be LLMs.

    • VILenin [he/him]@hexbear.net (OP, mod)

      > Have I lost it

      Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro.”

    • UlyssesT [he/him]@hexbear.net

      For fuck’s sake, it’s just an algorithm. It’s not capable of becoming sentient.

      If I call you a meat computer, or a stochastic parrot, or say “ape” enough times, the algorithm will by comparison seem closer to sentient. smuglord

  • AmarkuntheGatherer@lemmygrad.ml

    The half-serious jokes about sentient AI made by dumb animals on reddit are no closer to the mark than an attempt to piss on the sun. AI can’t be advancing at a pace greater than we think, unless we think it’s not advancing at all. There is no god damn AI. It’s a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time; it’s got no grasp on anything, no comprehension, let alone a promise of sentience.
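
    (For illustration only: the “print out the next word each time” part is, at bottom, just sampling from a probability distribution. A minimal sketch in Python, assuming a made-up vocabulary and made-up probabilities rather than anything from a real model:)

    ```python
    import random

    # Toy sketch of next-word sampling. A hypothetical "model" assigns a
    # probability to each candidate word given the text so far, and one word
    # is drawn at random according to those weights. Real LLMs do the same
    # thing over tens of thousands of tokens, with the probabilities computed
    # by a neural network instead of hard-coded like this.
    def next_word(context: str) -> str:
        candidates = {"the": 0.4, "a": 0.3, "treat": 0.2, "sentience": 0.1}  # made up
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights, k=1)[0]

    text = "There is no"
    for _ in range(5):
        text += " " + next_word(text)
    print(text)  # stochastic output, e.g. "There is no the treat a the sentience"
    ```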

    There are plenty of things and people that get to me, but few are as good at it as idiot tech bros, with their delusions and their extremely warped perspective.

    • UlyssesT [he/him]@hexbear.net

      > There is no god damn AI. It’s a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time; it’s got no grasp on anything, no comprehension, let alone a promise of sentience.

      Some (in this thread included) believe that by denigrating living human beings and calling them “meat computers,” the LLMs seem that much closer to being sapient, and they thus offer a false dichotomy: either agree with that take or you’re a faith-healing, crystal-touching New Age mystic. morshupls

  • MerryChristmas [any]@hexbear.net

    He may be a sucker but at least he is engaging with the topic. The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear is just as frustrating as any of the bazinga takes on reddit-logo. No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.

    The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don’t look for ways to utilize, subvert and counter these technologies while they’re still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

    Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

    • Wheaties [she/her]@hexbear.net

      > Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      As it stands, the capitalists already have the old means of information warfare – this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do predictive text, but with filters installed by communists rather than the PR arm of a company? That won’t be nearly as convincing as just talking and organizing with people in real life.

      Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it’s still just a programme on a server. Computers are very, very fragile. I’m just not too worried about it.

    • UlyssesT [he/him]@hexbear.net

      > The sheer lack of curiosity toward so-called “artificial intelligence” here on hexbear

      Define what you mean by “curiosity.” Is it also a “lack of curiosity” for people to dunk on and heckle NFT peddlers instead of entertaining their proposals?

      > is just as frustrating

      Even granting its extremes, which I don’t agree with myself, I disagree here. No, it is not just as frustrating.

      > No material analysis

      Then bring some. Don’t just say Hexbears suck because they’re not “curious” enough about the treat printers.

      > is straight up reactionary

      And hating on leftists in favor of your unspecified “curiosity” position is what, exactly, by comparison?

      > Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance

      What does your “curiosity” propose that is actual resistance and not playing into their hands or even buying into the marketing hype?

    • GreenTeaRedFlag [any]@hexbear.net

      It’s a glorified Speak & Spell, not one bit of benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept within the confines of people who understand thoroughly what it is and what it can and cannot do? Yes. We shouldn’t be using this, for the same reason you don’t use a gag dictionary for a research project. Grow up.

      • oregoncom [he/him]@hexbear.net

        It has potential for making propaganda. Automated astroturfing more sophisticated than what we currently see being done on Reddit.

        • GreenTeaRedFlag [any]@hexbear.net

          Astroturfing only works when your views tie into the mainstream narrative. Besides, there’s no competing with the people who have access to the best computers, the most coders, and backdoors into every platform. The smarter move is to back the workers who are having their jobs threatened over this.

    • VILenin [he/him]@hexbear.net (OP, mod)

      Oh my god it’s this post again.

      No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance in leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.

      > And if we don’t interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

      Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.

      > Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? As if nobody here has been extremely opposed to the usage of “AI” to undermine working-class power? This is bad faith bullshit and you know it.

      • UlyssesT [he/him]@hexbear.net

        I see it as low-key crybully shit to come here, dunk on Hexbears and call them names for not being “curious” enough about LLMs, and act like some disadvantaged aggrieved party while also standing closer to the billionaires’ current position than anywhere near those they’re raging at here.

  • Justice@lemmygrad.ml

    I said it at the time when chatGPT came along, and I’ll say it now and keep saying it until or unless the android army that executes me is built:

    ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking beings.

    I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some “real” argument for different types and stages of AI, and my only preemptive response to them is basically “keep your industry-specific terminology inside your specific industries.” The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because… frankly, they’re full of shit and it’s annoying.

    • UlyssesT [he/him]@hexbear.net

      The LLM marketing hype campaign has very successfully changed the overall perceived definition of what “AI” is and what “AI” could be.

      Arguably it makes actual general AI as a concept harder to develop because financing and subsidies will likely keep going downstream toward LLM projects instead of attempts to emulate general intelligence.

      • sooper_dooper_roofer [none/use name]@hexbear.net

        The average person was always an NPC who goes by optics instead of fundamentals.

        “Good people” to them means clean, resourced, wealthy, privileged;
        “bad people” means poor, distraught, dirty, refugee, etc.

        So it only makes sense that an algorithm box with the optics of a real voice and proper English grammar and syntax would be perceived as “AI”.

        • UlyssesT [he/him]@hexbear.net

          That’s very insightful, and you’re right. I assume that an upcoming LLM product with a posh British waifu accent politely telling nerds how special they are would likely make fucking bank and maybe even be seen as the first ascended artificial being. soypoint-1 brrrrrrrrrrrr soypoint-2

          EDIT: I’m not wild about calling any human being an “NPC” though, just because that dehumanizing shit is a common techbro and chud concept.

  • Monk3brain3 [any, he/him]@hexbear.net

    Whenever the tech industry needs a boost, some new bullshit comes up: crypto, self-driving, and now AI, which is literally called AI for marketing purposes but is basically an advanced algorithm.

      • UlyssesT [he/him]@hexbear.net

        Even so much as doubting the billionaires’ hype circus becomes a struggle session, not only about the potential self-aware ascendancy of the treat printers, but also about the preceding rhetorical denigration of living beings, because it makes the treat printers sound a lot more elegant and special if we’re all meat computers this and stochastic parrots that so-true

  • janny [they/them]@hexbear.net

    Complete nonsense.

    As we all know, many idol-worshiping peoples have encountered gnomes, and through worshiping and offering tribute to these gnomes, the gnomes become hosts for powerful dark gods who reward their followers generously but are known to be fickle and demanding.

    Silicon Valley is infamous for its bizarre polycules and their Ottoman-harem-esque power struggles. Somehow, Sam Altman offended Aella_Girl’s polycule, which happened to control the board of OpenAI.

    Aella_girl is openly in sexual congress with a series of garden gnomes whom she likely worships and has married, as it is known that gnomes usually demand a wife or your firstborn child.

    Proof: https://cashmeremag.com/reddit-gonewild-aella-gnome-cam-53817/

    She likely used the powers of this dark god to remove Sam Altman from the board but failed to meet its escalating demands or otherwise disappointed this entity, and as a result failed to remove him. It is known that when one disappoints a gnome or stops worshiping it, one’s fortunes fall into a rapid decline, so if this happens to her then we know what likely happened. Either that or Sam Altman is also in contact with a dark entity of some sort.