• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 21 hours ago

    Ok, so if you agree the AI is not the source of those problems, then it’s not clear what you’re arguing about. Nobody is arguing for using the AI for problems you keep mentioning, and you keep ignoring that. I’ve given you concrete examples of how this tool is useful for me, you’ve just ignored that and continued arguing about the straw man you want to argue about.

    The slop has always been there, and AI isn’t really changing anything here.

    • piggy [they/them]@hexbear.net · 20 hours ago (edited)

      Nobody is arguing for using the AI for problems you keep mentioning, and you keep ignoring that.

      This is absolutely not true. Almost every programmer I know has had their company try to “AI” their documentation or “AI” some process, only to fail spectacularly, because the data the AI is working from is either missing or not of high enough quality. I have several friends at the Lead/EM level who have had to take too much time out of their schedules to talk a middle manager down from sinking resources into AI boondoggles.

      I’ve had to talk people off of this ledge myself, and a lead who works under me (I’m technically a platform architect across 5 platform teams) decided to try it anyway, burned a couple of days on a test run, and guess what: the results were garbage.

      Beyond that, the problem is that AI is a useful tool for IGNORING the problems.

      I’ve given you concrete examples of how this tool is useful for me, you’ve just ignored that and continued arguing about the straw man you want to argue about.

      I started this entire comment thread with an actual critique, a point, that you have, in very debate-bro fashion, consistently called a strawman. If I were feeling less charitable, I could call the majority of your arguments non sequiturs to mine. I have never argued that AI isn’t useful to somebody. In fact, I’m arguing that it’s dangerously useful for decision makers in the software industry based on how they WANT to make software.

      Say a piece of software is a car, and a middle manager wants that car to have a wonderful proprietary light bar on it, and wants to use AI to build that light bar. The AI might actually build the light bar, in a narrow sense, to the basic specs the decision maker feels might sell well on the market. However, the light bar adds 500 lbs of weight, so when the driver gets in the car the front suspension is on the floor, and the wiring loom is now a ball of yarn. But the car ends up being just shitty enough to sell, and that’s the important thing.

      And remember, the AI doesn’t complain about resources or order of operations when you ask it to make a light bar at the same time as a cool roof rack, a kick-ass sound system, and a more powerful engine. And hey, if the car doesn’t work after one of these, we can just ask it to regenerate the car design and then have another AI test it! And you know what, it might even be fine to keep 1 or 2 nerds around, just in case we have to painfully take the car apart only to discover we’re overloading the alternator from both ends.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 20 hours ago

        I’m talking about our discussion here. AI can be misused just like any tool, there’s nothing surprising or interesting about that. What I’m telling you is that from my experience, it can also be a useful tool when applied properly.

        I started this entire comment thread with an actual critique, a point, that you have in very debate bro fashion have consistently called a strawman.

        I’ve addressed your point repeatedly in this discussion.

        In fact I’m arguing that it’s dangerously useful for decision makers in the software industry based on how they WANT to make software.

        And I’m once again going to point out that this has been happening for a very long time. If you’ve ever worked at a large corporation, then you’ve seen that they take a monkeys-at-typewriters approach to software development. These companies don’t care about code quality one bit; they just want fungible developers whom they can hire and fire at will. I’ve seen far more nightmarish code produced in these conditions than any AI could ever hope to make.

        The actual problem isn’t AI, it’s the capitalist mode of production and the alienation of workers. That’s the actual source of these problems, and that’s why they exist regardless of whether people use AI or not.

        • piggy [they/them]@hexbear.net · 20 hours ago (edited)

          The way that you’re applying the tool “properly” is ultimately the same way that middle managers want to apply it; the only difference is that you know what you’re doing, acting as a quality filter, and you know where the code goes and how to run it. AI can’t solve the first part (quality), but there are people working on a wholesale solution for the latter two. And they’re getting their data from people like you!

          In terms of the productive process, there’s not as much daylight between the two use cases as you seem to think there is.

          • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 20 hours ago

            If people figure out how to automate the entire coding pipeline, then more power to them. I don’t see this happening in the near future myself. In the meantime, I’m going to use tools that make my life better. Also, I’m not sure why you’d assume people are getting data from me, given that I run models locally with ollama. I find deepseek-coder works perfectly fine in a local setup.
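            For reference, a minimal sketch of the kind of local setup described above (the model tag and prompts here are examples, not the commenter's actual configuration; check the Ollama model library for current tags):

```shell
# Install ollama first (https://ollama.com), then pull the model weights
# to the local machine; prompts never leave localhost after this point.
ollama pull deepseek-coder:6.7b

# One-off prompt from the CLI
ollama run deepseek-coder:6.7b "Write a Python function that reverses a string"

# Ollama also serves a local HTTP API on port 11434, which editor plugins
# can point at instead of a hosted third-party service:
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:6.7b",
  "prompt": "Explain this error: IndexError: list index out of range",
  "stream": false
}'
```

            Because inference runs entirely against the local daemon, no prompt or completion data is sent to a model vendor, which is the point being made about training data.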

                • piggy [they/them]@hexbear.net · 20 hours ago (edited)

                  StackOverflow copypasta wasn’t a productive process that was seeking to remove the developer from the equation, though.

                  This isn’t about a tech-scaling strategy of training high-quality, high-productivity engineers vs. “just throwing bodies at it” anymore. This is about the next level of “just throwing bodies at it”: “just throwing compute at it”.

                  This is something technically feasible within the next decade unless, inshallah, these models collapse from ingesting their own awful data, rather than improving.

                  • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP · 20 hours ago

                    StackOverflow copypasta very much did remove the developer from the equation. People would just mindlessly string code together without bothering to understand what they were doing or why the code worked. It has become a common practice in the industry at this point, and huge codebases have been erected using this method.

                    Every large corporation uses this method because they want fungible devs. Since developers with actual skill don’t want to be treated as fungible cogs, the selection pressures ensure that people who can’t get jobs with better conditions end up working in these places. They’re just doing it for a paycheck, and they basically bang their heads against the keyboard till something resembling working code falls out. I’ll also remind you of the whole outsourcing craze, which had basically the exact same goal corps now want to accomplish with AI.

                    There’s absolutely nothing new happening here that hasn’t been going on for literally decades. What you’re describing is already very much feasible, and it’s already happening at scale.