Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

  • fullsquare@awful.systems · 6 days ago

    anyone else spent their saturday looking for gas turbine datasheets? no?

    anyway, the bad, no good, haphazard power engineering of crusoe

    neoclouds need a lot of power on top of their silicon, power they can’t get because they can’t get a substation big enough, or maybe the provider denied it, so they decided that homemade is just as fine. in order to turn some kind of fuel (could be methane, or maybe not, who knows) into electricity they need gas turbines, and a couple of weeks back there was a story that crusoe got their first aeroderivative gas turbines from GE: https://www.tomshardware.com/tech-industry/data-centers-turn-to-ex-airliner-engines-as-ai-power-crunch-bites this means these are old, refurbished, modified jet engines put in a chassis with a generator, fan removed. in total they booked 29 turbines from GE (LM2500 series), plus some PE6000s from another company called proenergy*, and probably others (?), for an alleged 4.5GW total. for neoclouds, generators of this type have major advantages:

    1. they exist, and the backlog isn’t horrific: the first units delivered were contracted in december 2024, so about 10 months, and onsite construction is limited (sometimes less than a month)
    2. they’re compact and reasonably powerful, and can be loaded on a trailer in parts and just delivered wherever
    3. at the same time they’re small enough that piecewise installation is reasonable (34.4MW per unit, so just from GE about 1GW total spread across 29)
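    the per-unit number lines up with the press figure; a quick back-of-envelope in python, using only the values quoted above:

```python
# sanity check: 29 GE turbines at the LM2500-class rating quoted above
units = 29
mw_per_unit = 34.4  # MW per turbine, simple cycle

total_mw = units * mw_per_unit
print(f"{total_mw:.1f} MW")  # 997.6 MW, i.e. "just under 1GW"
```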

    and that’s about it for advantages. these choices are fucking weird, really. the state of the art in turning gas into electricity is to first take as big a gas turbine as practical, which might be 100MW, 350MW, there are even bigger ones. this is because the efficiency of gas turbines increases with size: a big part of the losses comes from gas slipping through the gap between blades and stator/rotor. the bigger the turbine, the bigger the cross-sectional area occupied by blades (~ r^2), and so the gap (~ r) matters less. this effect alone is responsible for differences in efficiency of a couple of percent just for the gas turbine: for GE’s aeroderivative 35MW-ish turbine (LM2500) we’re looking at 39.8% efficiency, while another GE aeroderivative turbine (LMS100) at 115MW gets 43.9%. our neocloud disruptors stop there, with their just-under-40%-efficient turbines (and probably lower*), while the exhaust is well over 500C and can be used to boil water, which is what any serious power plant does in combined cycle. that additional steam turbine gives about a third of the total generated energy, bringing total efficiency to some 60-63%.

    so right off the bat, crusoe throws away about a third of the usable energy, or alternatively, for the same amount of power, they burn 50-70% more gas, if they even use gas and not, for example, diesel. they specifically didn’t order turbines with this extra heat recovery mechanism: based on the datasheet https://www.gevernova.com/content/dam/gepower-new/global/en_US/downloads/gas-new-site/products/gas-turbines/gev-aero-fact-sheets/GEA35746-GEV-LM2500XPRESS-Product-Factsheet.pdf they would get over 1.37GW, while the GE press announcement talked about “just under 1GW”, which matches only the oldest type of turbine there (guess: cheapest), or maybe some mix with even older ones than what is shown. this is not what a serious power generating business would do, because for them every fraction of a percent matters. while it might be possible to add heat recovery steam boilers and steam turbine units there later, that means extra installation time (capex per MW turns out to be similar) and more backlog, and requires more planning and real estate and foresight, and if they had that they wouldn’t be there in the first place, would they. even then, efficiencies only get to maybe 55%, because it turns out that the heat exchangers required for the professional stuff are huge and can’t be loaded on a trailer, so they have to go with less
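    the “50-70% more gas” claim is just the efficiency ratio; sketching it in python with the numbers quoted above (the 62% combined-cycle value is my pick from the 60-63% range, so treat it as illustrative):

```python
# extra fuel burned by a simple-cycle plant vs a combined-cycle one,
# for the same electrical output: fuel use scales as 1/efficiency
simple_cycle = 0.398    # LM2500 simple-cycle efficiency, as quoted
combined_cycle = 0.62   # assumed mid-range combined-cycle efficiency (60-63%)

extra_fuel = combined_cycle / simple_cycle - 1
print(f"{extra_fuel:.0%} more fuel")  # ~56% more, inside the quoted 50-70% range
```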

    so it sorta gets them power short term, and financially it doesn’t look good long term, but maybe they know that and don’t care, because they know they won’t be there to pay the gas bills. but also, if these glorified gensets are only used during outages or otherwise not at full capacity, then it doesn’t matter that much. also, gas turbines need to run hot to run efficiently, but the hottest possible temperature with normal fuels would melt any material we can make blades of, so the solution is to take in double or triple the air needed and dilute the hot gases this way, which also makes perfect conditions for nitric oxide synthesis, which means smog downwind. now there are SCRs which are supposed to deal with that, but it didn’t stop musk from poisoning the people of memphis when he did a very similar thing

    * proenergy takes the same jet engine that GE does and turns it into the PE6000, which is probably mostly the same stuff as the LM6000, except the GE version is 51MW and the proenergy one is 48MW. i don’t know whether it’s derated or just less efficient, but for the same gas consumption it would be 37.5%
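    that 37.5% follows from scaling the quoted ~39.8% simple-cycle efficiency by the power ratio, assuming identical fuel flow (my assumption, matching the “same gas consumption” framing above, and assuming the LM6000 sits near the LM2500’s quoted figure):

```python
# if the PE6000 burns the same gas as the 51MW GE version but only makes 48MW,
# its efficiency scales down by the power ratio
ge_mw, proenergy_mw = 51.0, 48.0
ge_eff = 0.398  # assumption: LM6000 near the LM2500's quoted efficiency

proenergy_eff = ge_eff * proenergy_mw / ge_mw
print(f"{proenergy_eff:.1%}")  # 37.5%
```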

    • Seminar2250@awful.systems · 6 days ago

      https://awful.systems/post/5776862/8966942 😭

      also this guy is a bit of a doofus, e.g. https://bugs.launchpad.net/calibre/+bug/853934, where he is a dick to someone reporting a bug, and https://bugs.launchpad.net/calibre/+bug/885027, where someone points out that you can execute anything as root because of a security issue, and he argues like a total shithead

      You mean that a program designed to let an unprivileged user
      mount/unmount/eject anything he wants has a security flaw because it allows
      him to mount/unmount/eject anything he wants? I’m shocked.

      Implement a system that allows an application to mount/unmount/eject USB
      devices connected to the system securely, then make sure that system is
      universally adopted on every linux install in the universe. Once you’ve done that, feel free to
      re-open this ticket.

      i would not invite this person to my birthday

      • Sailor Sega Saturn@awful.systems · 6 days ago

        I was vaguely aware of the calibre vulnerabilities but this is the first I’ve actually read the thread and it’s wild.

        There were like 11 or so Proof of Concept exploits over the course of that bug? And he was just kicking and screaming the whole time about how fine his mount-stuff-anywhere-as-root (!!?) code was.

        I’m always fascinated when people are so close to getting something, like in that first paragraph you quoted. In any normal software project you could just put that paragraph in the bug report and the owners would take it seriously, rather than use it as an excuse for why their software has to be insecure.

    • flaviat@awful.systems · 6 days ago

      Does this mean calibre’s use case is a digital equivalent of a shelf of books you never read?

  • gerikson@awful.systems · 5 days ago

    Yud explains, over 3k words, that not only is he smarter than everyone else, he is also saner, and no, there’s no way you can be as sane as him

    Eliezer’s Unteachable Methods of Sanity

    (side note - it’s weird that LW, otherwise so anxious about designing their website, can’t handle fucking apostrophes correctly)

    • Soyweiser@awful.systems · 5 days ago

      Ah, prophet-maxxing. ‘they have no hope of understanding and I have no hope of explaining in 30 seconds’

      The first and oldest reason I stay sane is that I am an author, and above tropes. Going mad in the face of the oncoming end of the world is a trope.

      This guy wrote this (note I don’t think there is anything wrong with looking like a nerd (I mean I have a mirror somewhere, so I don’t want to be a hypocrite on this), but looking like one and saying you are above tropes is something, there is also HPMOR)

      • blakestacey@awful.systems · 5 days ago

        No one point out that “keeping your head while all about you are losing theirs” is also a trope.

    • blakestacey@awful.systems · 5 days ago

      Handshake meme of Yud and Rorschach praising Harry S Truman

      From the comments:

      I got Claude to read this text and explain the proposed solution to me

      Once you start down the Claude path, forever will it dominate your destiny…

  • gerikson@awful.systems · 6 days ago

    2 links from my feeds with crossover here

    Lawyers, Guns and Money: The Data Center Backlash

    Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric

    Unfortunately Techdirt’s Mike Masnick is a signatory to some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other usual suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how this wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.

    • David Gerard@awful.systems (mod) · 6 days ago

      i am pretty sure i am shredding the Resonant Computing Manifesto for Monday

      and of course Anil Dash signed it

      • blakestacey@awful.systems · 6 days ago

        The people who build these products aren’t bad or evil.

        No, I’m pretty sure that a lot of them just are bad and evil.

        With the emergence of artificial intelligence, we stand at a crossroads. This technology holds genuine promise.

        [citation needed]

        [to a source that’s not laundered slop, ya dingbats]

        • Soyweiser@awful.systems · 5 days ago

          to a source that’s not laundered slop, ya dingbats

          Ha, that’s easy. Read Singularity Sky by Charles Stross and see all the wonders the Festival brings.

      • swlabr@awful.systems · 7 days ago

        Yud’s whole project is a pipeline intended to create zizians, if you believe that Yud is serious about his alignment beliefs. If he isn’t serious then it’s just an unfortunate consequence that he is not trying to address in any meaningful way.

    • fullsquare@awful.systems · 7 days ago

      just one rationalist got lost in the wilderness? that’s nothing, tell me when all of them are gone

    • BigMuffN69@awful.systems · 5 days ago

      Most insane part about this is that after he assaulted the treasurer(?) of his foundation while trying to siphon funds for an apparent terror act, the naive chucklefucks still went and said “we don’t think his violent tendencies are an indication he might do something violent”

      Like idk maybe update on the fact he just sent one of his own to the hospital??

    • BioMan@awful.systems · 7 days ago

      Is it better for these people to be collected in one place under the singularity cult, or dispersed into all the other religions, cults, and conspiracy theories that they would ordinarily be pulled into?

  • blakestacey@awful.systems · 6 days ago

    From Lila Byock:

    A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.

    The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

    • froztbyte@awful.systems · 6 days ago

      The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.

      I try to avoid having to even see the outputs of these fucking systems, but you just made me realize that there’s going to be more than a few of them that will “leak” (read: preferentially deliver, by way of training focus) the kinks of its particular owner. I mean it’s already happening for the textual replies on twitter, soothing the felon’s ever so bruised ego. the chance of it not shipping beyond that is pretty damn zero :|

      god I hate all of this

  • blakestacey@awful.systems · 7 days ago

    Dr. Casey Fiesler reports,

    I was poking around Google Scholar for publications about the relationship between chatbots and wellness. Oh how useful: a systematic literature review! Let’s dig into the findings.

    […]

    Did you guess “that paper does not actually exist”?

    Did you also guess that NOT A SINGLE PAPER IN THEIR REFERENCES APPEARS TO EXIST? […] When I was searching in various places to confirm that those citations were fabricated, Google’s AI overview just kept the con going.

    Jill Walker Rettberg in the comments:

    There’s a peer reviewed published paper in AI & Society called Cognitive Imperialism and Artificial Intelligence which is clearly mostly AI-generated. Citations are real but almost all irrelevant. I emailed the editors weeks ago but it’s still up there and getting cited.

  • Soyweiser@awful.systems · 5 days ago

    New preprint just dropped, and I noticed some seemingly pro-AI people talking about it and concluding that people who have more success with genAI have better empathy, are more social, and have theory of mind. (I will not put those random people on blast; I also have not read the paper itself (aka, I didn’t do the minimum actually required research, so be warned), just wanted to give people a heads up on it.)

    But yes, that does describe the AI pushers: social people who have good empathy and theory of mind. (Also, oh god, genAI runs on fairy rules, you just gotta believe it is real. (I’m joking a bit here, it is prob fine; it helps if you understand where a model is coming from and realize its limitations, and the research seems to be talking about humans + genAI vs just genAI.))

    • ShakingMyHead@awful.systems · 5 days ago

      So, I’m not an expert study-reader or anything, but it looks like they took some questions from the MMLU, modified them in some unspecified way, and put them into 3 categories (AI, human, AI-human), and after accounting for skill, determined that people with higher theory of mind had a slightly better outcome than people with lower theory of mind. They determined this based on what the people being tested wrote to the AI, but what they wrote isn’t in the study.
      What they didn’t do is state that people with higher theory of mind are more likely to use AI or anything like that. The study also doesn’t mention empathy at all, though I guess it could be inferred.

      Not that any of that actually matters, because how they determined how much “theory of mind” each person had was to ask Gemini 2.5 and GPT-4o.

    • swlabr@awful.systems · 5 days ago

      I clicked as I was curious as to what markers of AI use would appear. I immediately realised the problem: if it is written with AI then I wouldn’t want to read it, and thus wouldn’t be able to tell. Luckily the author’s profile cops to being “AI assisted”, which could mean a lot of things that just boil down to “slop forward”.

      • lagrangeinterpolator@awful.systems · 5 days ago

        The most obvious indication of AI I can see is the countless paragraphs that start with a boldfaced “header” with a colon. I consider this to be terrible writing practice, even for technical/explanatory writing. When a writer does this, it feels as if they don’t even respect their own writing. Maybe their paragraphs are so incomprehensible that they need to spoonfeed the reader. Or, perhaps they have so little to say that the bullet points already get it across, and their writing is little more than extraneous fluff. Yeah, much larger things like sections or chapters should have titles, but putting a header on every single paragraph is, frankly, insulting the reader’s intelligence.

        I see AI output use this format very frequently though. Honestly, this goes to show how AI appeals to people who only care about shortcuts and bullshitting instead of thinking things through. Putting a bold header on every single paragraph really does appeal to that type.

      • swlabr@awful.systems · 7 days ago

        hi hi I am budweiser jabrony please join my new famous and good website ‘tapering incorrectness dot com’ where we speculate about which OSI layers have the most consciousness (zero is not a valid amount of consciousness) also give money and prima nocta. thanks

      • Soyweiser@awful.systems · 7 days ago

        TCP/IP knew what it did, with its authoritarian desire to see packets in order. Reject authority, embrace UDP!

        But yes, they are using ‘layer’ wrong

        • bitofhope@awful.systems · 7 days ago

          Well, that and “core”. I could consider social media and even chatbots parts of internet infrastructure, but they both depend on a framework of underlying protocols and their implementation details. Without social media or chatbots the internet would still be the internet, which is not the case for, say, the Internet Protocol.

          • YourNetworkIsHaunted@awful.systems · 7 days ago

            Also I would contend they’re misusing “infrastructure”. Social media and chatbots are kinds of services that are provided over the internet, but they aren’t a part of the infrastructure itself any more than the world’s largest ball of twine is part of the infrastructure of the Interstate Highway System.

            • froztbyte@awful.systems · 6 days ago

              Heh yeah, “infrastructure” in the same way that moneyed bayfuckers are “builders”

              It is also a useful study in just how little they fucking get about how anything works, and what models of reasoning they apply to what they perceive. Depressing, but useful

              • YourNetworkIsHaunted@awful.systems · 6 days ago

                It legitimately feels like at least half of these jokers have the same attitude towards IT and project management that sovereign citizens do to the law. SovCits don’t understand the law as a coherent series of rules and principles applied through established procedures etc, they just see a bunch of people who say magic words that they don’t entirely understand and file weird paperwork that doesn’t make sense and then end up getting given a bunch of money or going to prison or whatever. It’s a literal cargo cult version of the legal system, with the slight hiccup that the rest of the world is trying to actually function.

                Similarly, the Silicon Valley Business Idiot set sees the tech industry as one where people say the right things and make the buttons look pretty and sometimes they get bestowed reality-warping sums of money. The financial system is sufficiently divorced from reality that the market doesn’t punish the SVBIs for their cargo cult understanding of technology, but this does explain a lot of the discourse and the way people like Thiel, Andreesen, and Altman talk about their work and why the actual products are so shite to use.

    • BlueMonday1984@awful.systems (OP) · 7 days ago

      the article headline: “Chatbots are now rivaling social networks as a core layer of internet infrastructure”

      Counterpoint: “vibe coding” is rotting internet infrastructure from the inside, AI scrapers are destroying the commons through large-scale theft, and chatbots are drowning everything else through nonstop lying

  • froztbyte@awful.systems · 12 days ago

    (e, cw: genocide and culturally-targeted hate by the felon bot)

    world’s most divorced man continues outperforming black holes at sucking

    404 also recently did a piece on his ego-maintenance society-destroying vainglory projects

    imagine what it’s like in his head. era-defining levels of vacuous.

    • bitofhope@awful.systems · 7 days ago

      From the replies

      I wonder what prompted it to switch to Elon being worth less than the average human, while simultaneously saying it’d vaporize millions if doing so could prolong his life in a different sub-thread

      It’s odd to me that people still expect any consistency from chatbots. These bots can and will give different answers to the same verbatim question. Am I just too online if I have involuntarily encountered enough AI output to know this?

  • scruiser@awful.systems · 10 days ago

    Another day, another instance of rationalists struggling to comprehend how they’ve been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy

    A very long, detailed post, elaborating very extensively the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and the like, denying it in a half-assed way that doesn’t really engage with the fact that Anthropic has lied and broken “AI safety commitments” to rationalists/lesswrongers/EAs shamelessly and repeatedly:

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=tBTMWrTejHPHyhTpQ

    I feel confused about how to engage with this post. I agree that there’s a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is “spun” in uncharitable ways.

    https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy?commentId=CogFiu9crBC32Zjdp

    I think it’s sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.

    I would find this all hilarious, except a lot of the regulation and some of the “AI safety commitments” would also address real ethical concerns.

    • lagrangeinterpolator@awful.systems · 9 days ago

      If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don’t think they understand that, given their penchant for 10k word blog posts.

      One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far, they have had 11 or 12 test flights (I don’t care to count the exact number by this point), and not a single one of them has delivered anything into orbit. Fans generally tend to cling on to a few parlor tricks like the “chopstick” stuff. They seem to have forgotten that their goal was to land people on the moon. This goal had already been accomplished over 50 years ago with the 11th flight of the Apollo program.

      I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.

      I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.

      • rook@awful.systems · 8 days ago

        I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc.

        I suspect that part of the problem is that there is a company in there that’s doing a pretty amazing job of reusable rocketry at lower prices than everyone else, under the guidance of a skilled leader who is also technically competent, except that leader is gwynne shotwell, who is ultimately beholden to an idiot manchild who wants his flying cybertruck just the way he imagines it, and cannot be gainsaid.

    • gerikson@awful.systems · 9 days ago

      This would be worrying if there was any risk at all that the stuff Anthropic is pumping out is an existential threat to humanity. There isn’t so this is just rats learning how the world works outside the blog bubble.

      • scruiser@awful.systems · 9 days ago

        I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren’t really so relevant anymore; they served their role in early incubation.

    • froztbyte@awful.systems · 12 days ago

      that being a hung banner (rather than wall-mount or so) borders on being a tacit acknowledgement that they know their shit is unpopular and would get vandalised in a fucking second if it were easy (or easier!) to get to

      even then, I suspect that banner will not stay unscathed for long