(This is basically an expanded version of a comment on the weekly Stubsack - Iā€™ve linked it above for convenienceā€™s sake.)

This is pure gut instinct, but Iā€™m starting to get the feeling this AI bubbleā€™s gonna destroy the concept of artificial intelligence as we know it.

On the artistic front, thereā€™s the general tidal wave of AI-generated slop (which Iā€™ve come to term ā€œthe slop-namiā€) which has come to drown the Internet in zero-effort garbage, interesting only when the artā€™s utterly insane or its prompter gets publicly humiliated, and, to quote Line Goes Up, ā€œderivative, lazy, ugly, hollow, and boringā€ the other 99% of the time.

(And all while the AI industry steals artistsā€™ work, destroys their livelihoods and shamelessly mocks their victims throughout.)

On the ā€œintelligenceā€ front, the bubbleā€™s given us public and spectacular failures of reasoning/logic like Google gluing pizza and eating onions, ChatGPT sucking at chess and briefly losing its shit, and so much more - even in the absence of formal proof LLMs canā€™t reason, its not hard to conclude theyā€™re far from intelligent.

All of this is, of course, happening whilst the tech industry as a whole is hyping the ever-loving FUCK out of AI, breathlessly praising its supposed creativity/intelligence/brilliance and relentlessly claiming that theyā€™re on the cusp of AGI/superintelligence/whatever-the-fuck-theyā€™re-calling-it-right-now, they just need to raise a few more billion dollars and boil a few more hundred lakes and kill a few more hundred species and enable a few more months of SEO and scams and spam and slop and soulless shameless scum-sucking shitbags senselessly shitting over everything that was good about the Internet.


The publicā€™s collective consciousness was ready for a lot of futures regarding AI - a future where it took everyoneā€™s jobs, a future where it started the apocalypse, a future where it brought about utopia, etcetera. A future where AI ruins everything by being utterly, fundamentally incompetent, like the one weā€™re living in now?

Thatā€™s a future the public was not ready for - sci-fi writers werenā€™t playing much with the idea of ā€œincompetent AI ruins everythingā€ (Paranoia is the only example I know of), and the tech press wasnā€™t gonna run stories about AIā€™s faults until it became unignorable (like that lawyer who got in trouble for taking ChatGPT at its word).

Now, of course, the publicā€™s had plenty of time to let the reality of this current AI bubble sink in, to watch as the AI industry tries and fails to fix the unfixable hallucination issue, to watch the likes of CrAIyon and Midjourney continually fail to produce anything even remotely worth the effort of typing out a prompt, to watch AI creep into and enshittify every waking aspect of their lives as their bosses and higher-ups buy the hype hook, line and fucking sinker.


All this, I feel, has built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence™), no matter how many server farms you build or oceans of water you boil.

Especially so on the creativity front - publicly rejecting AI, like what Procreate and Schoolism did, earns you an instant standing ovation, whilst openly shilling it (like PC Gamer or The Bookseller) or showcasing it (like Justine Moore, Proper Prompter or Luma Labs) gets you publicly and relentlessly lambasted. To quote Baldur Bjarnason, the ā€œE-number additive, but for creative workā€ connotation of ā€œAIā€ is more-or-less a permanent fixture in the publicā€™s mind.

I donā€™t have any pithy quote to wrap this up, but to take a shot in the dark, I expect weā€™re gonna see a particularly long and harsh AI winter once the bubble bursts - one fueled not only by disappointment in the failures of LLMs, but by widespread public outrage at the massive damage the bubble inflicted, with AI funding facing heavy scrutiny as the public comes to treat any research into the field as done with potentially malicious intent.

  • YourNetworkIsHaunted@awful.systems Ā· 1 month ago

    I guess the other relevant example in film would be Wargames. At least nobody is recommending hooking ChatGPT up to the nuclear launch system.

    I honestly hope youā€™re right about the coming AI winter. I was watching a report from a major South Korean arms show yesterday and one of the themes that the defense industry appears to be taking from the war in Ukraine is that more independent and autonomous weapon systems are going to be useful tools to counteract jamming and to reduce manpower requirements. Nobody appears to be putting OpenAI into the kill chain, and I think on balance autonomous systems are a more ethical way to address the problem than other options. I canā€™t believe weā€™re seeing cluster bombs and land mines making such a comeback. All the same, a strong push against AI more generally would help make sure we donā€™t end up combining the IFF of a land mine with the trigger discipline of the CIAā€™s predator drones.

  • Masonicon@awful.systems Ā· 25 days ago (edited)

    apparently, the only way AIs can have human-level intelligence and creativity (if not superintelligence) is to use human brains wired together, photonics, quantum computing, or some combination thereof, instead of conventional silicon chips, for processing data