PR by Xuan-Son Nguyen for `llama.cpp`: > This PR provides a big jump in speed for WASM by leveraging SIMD instructions for `qX_K_q8_K` and `qX_0_q8_0` dot product functions. > > …
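For context on what the PR is speeding up: the `qX_0_q8_0` dot products in llama.cpp operate on blocks of 32 int8 values with a per-block scale, accumulating int8 products into an int32 before applying the scales. The following is a minimal scalar sketch of that idea, not the PR's actual code (llama.cpp stores scales as fp16, and the PR's gain comes from replacing the inner loop with WASM SIMD128 intrinsics):

```c
#include <stdint.h>
#include <stddef.h>

#define QK8_0 32  /* values per quantized block, as in llama.cpp's q8_0 */

/* Simplified q8_0-style block: a per-block scale plus 32 int8 values.
   (llama.cpp stores the scale as fp16; float is used here for clarity.) */
typedef struct {
    float  d;            /* block scale */
    int8_t qs[QK8_0];    /* quantized values */
} block_q8;

/* Scalar reference dot product between two rows of quantized blocks.
   A SIMD version would do the int8 multiply-accumulate below with
   vector instructions instead of this scalar loop. */
static float vec_dot_q8(const block_q8 *x, const block_q8 *y, size_t nblocks) {
    float sum = 0.0f;
    for (size_t i = 0; i < nblocks; i++) {
        int32_t acc = 0;                      /* int32 accumulator per block */
        for (int j = 0; j < QK8_0; j++)
            acc += (int32_t)x[i].qs[j] * (int32_t)y[i].qs[j];
        sum += x[i].d * y[i].d * (float)acc;  /* apply both block scales */
    }
    return sum;
}
```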
What a wonderful way to engage with my post. You win bud. You’re the smartest.
Amazing counterpoint you’ve mustered there when presented with the simple fact that all the problems you’re describing have already been happening long before AI showed up on the scene. Way to engage in good faith dialogue. Bravo!
I’ve never said that AI is the cause of those problems; those are words you’re putting in my mouth. I’ve said that AI is being used as a solution to those problems in the industry, when in reality using AI to solve those problems exacerbates them while allowing companies to reap “productive” output.
For some reason programmers can understand “AI Slop” but if the AI is generating code instead of stories, images, audio and video it’s no longer “AI Slop” because we’re exalted in our communion with the machine spirits! Our holy logical languages could never encode the heresy of slop!
Ok, so if you agree the AI is not the source of those problems, then it’s not clear what you’re arguing about. Nobody is arguing for using the AI for problems you keep mentioning, and you keep ignoring that. I’ve given you concrete examples of how this tool is useful for me, you’ve just ignored that and continued arguing about the straw man you want to argue about.
The slop has always been there, and AI isn’t really changing anything here.
Nobody is arguing for using the AI for problems you keep mentioning, and you keep ignoring that.
This is absolutely not true. Almost every programmer I know has had their company try to “AI” their documentation or “AI” some process, only to fail spectacularly because the data the AI works from is either missing or isn’t of high enough quality. I have several friends at the Lead/EM level who have had to take too much time out of their schedules to talk a middle manager down from sinking resources into AI boondoggles.
I’ve had to talk people off of this ledge, and a lead who works under me (I’m technically a platform architect across 5 platform teams) actually decided to try it anyway and burned a couple of days on a test run, and guess what: the results were garbage.
Beyond that, the problem is that AI is a useful tool for IGNORING the problems.
I’ve given you concrete examples of how this tool is useful for me, you’ve just ignored that and continued arguing about the straw man you want to argue about.
I started this entire comment thread with an actual critique, a point, that you, in very debate-bro fashion, have consistently called a strawman. If I were feeling less charitable I could call the majority of your arguments non sequiturs to mine. I have never argued that AI isn’t useful to somebody. In fact I’m arguing that it’s dangerously useful for decision makers in the software industry based on how they WANT to make software.
If a piece of software is a car, imagine a middle manager who wants that car to have a wonderful proprietary light bar on it and wants to use AI to build such a light bar on his wonderful car. The AI might actually build the light bar, in a narrow sense, to the basic specs the decision maker feels might sell well on the market. However, the light bar adds 500 lbs of weight, so when the driver gets in the car the front suspension is on the floor, and the wiring loom is also now a ball of yarn. But the car ends up being just shitty enough to sell, and that’s the important thing.
And remember, the AI doesn’t complain about resources or order of operations when you ask it to make a light bar at the same time as a cool roof rack, a kick-ass sound system and a more powerful engine. And hey, if the car doesn’t work after one of these, we can just ask it to regenerate the car design and then just have another AI test it! And you know what, it might even be fine to have 1 or 2 nerds around just in case we have to painfully take the car apart only to discover we’re overloading the alternator from both ends.
I’m talking about our discussion here. AI can be misused just like any tool, there’s nothing surprising or interesting about that. What I’m telling you is that from my experience, it can also be a useful tool when applied properly.
I started this entire comment thread with an actual critique, a point, that you, in very debate-bro fashion, have consistently called a strawman.
I’ve addressed your point repeatedly in this discussion.
In fact I’m arguing that it’s dangerously useful for decision makers in the software industry based on how they WANT to make software.
And I’m once again going to point out that this has been happening for a very long time. If you’ve ever worked at a large corporation, then you’d see that they take a monkeys-at-typewriters approach to software development. These companies don’t care about code quality one bit, and they just want to have fungible developers whom they can hire and fire at will. I’ve seen far more nightmarish code produced in these conditions than any AI could ever hope to make.
The actual problem isn’t AI, it’s the capitalist mode of production and the alienation of workers. That’s the actual source of the problems, and that’s why these problems exist regardless of whether people use AI or not.
The way that you’re applying the tool “properly” is ultimately the same way that middle managers want to apply the tool, the only difference is that you know what you’re doing as a quality filter, where the code goes and how to run it. AI can’t solve the former (quality) but there are people working on a wholesale solution for the latter two. And they’re getting their data from people like you!
In terms of the productive process, there’s not as much daylight between the two use cases as you seem to think there is.
If people figure out how to automate the entire coding pipeline, then power to them. I don’t see this happening in the near future myself. In the meantime, I’m going to use tools that make my life better. Also, not sure why you’d assume people are getting data from me given that I run models locally with ollama. I find deepseek-coder works perfectly fine with a local setup.
For every one of you there’s 1000 junior engineers running Copilot.
Sure, but before that there were 1000 junior engineers mindlessly copy/pasting stuff from StackOverflow till it sort of worked.