I’ve seen this term thrown around a lot lately and I just wanted to read your opinion on the matter. I feel like I’m going insane.
Vibe coding is essentially asking AI to do the whole coding process, and then checking the code for errors and bugs (optional).
That’s a bad vibe if I’ve ever seen one.
This seems like a game you’d play with other programmers, lol.
I can understand using AI to write some potentially verbose or syntactically hellish lines to save time and headaches.
The whole coding process? No. 😭
Seems like a recipe for subtle bugs and unmaintainable systems. Also reminds me of the Eloi from The Time Machine, who don’t know how anything works anymore.
Management is probably salivating at the idea of firing all those expensive engineers who tell them stuff like “you can’t draw three red lines, all perpendicular, in yellow ink.”
I’m also reminded of that AI-for-music guy who was like “No one likes making art!” Soulless husk.
I agree with you.
The reason I wrote this post in the first place is that I heard people at work whom I respect a lot talking about this as the future of programming. Also, the CEO has acknowledged it and is actively riding the “vibe-coding” train.
I’m tired of these “get rich quick the easy way” buzz-words and ideas, and the hustle culture that perpetuates them.
^ this
Using AI leads to code churn and code churn is bad for the health of the project.
If you can’t keep the code comprehensible and maintainable, you end up with a worse-off product where either everything breaks all the time, each new feature takes exponentially longer to release, or all of your programmers burn out and no one wants to touch the thing.
You just get to the point where you have to stop and start the project all over again, while the whole time people are screaming for the thing that was promised to them back at the start.
It’s exactly the same thing that happens when Western managers try to outsource to “cheap” programming labor overseas: it always ends up costing more, taking longer, and ending in disaster.
Nearly every time I ask ChatGPT a question about a well-established tech stack, its responses are erroneous to the point of being useless. It frequently provides examples using fabricated, non-existent functionality, and the code samples are awful.
What’s the point in getting AI to write code that I’m just going to have to completely rewrite?
They can vibe as much as they want, but don’t ever ask me to touch the mess they create.
Once companies recognize the full extent of their technical debt, they will likely need to hire a substantial number of highly experienced software engineers to address the issues, many of which stem from over-reliance on copying and pasting outputs from large language models.
If you don’t write a single line then you aren’t coding
I probably wouldn’t do it. I do have AI help at times, but it’s more for bouncing ideas off of, and occasionally it’ll mention a library or tech stack I haven’t heard of that allegedly accomplishes what I’m looking to do. Then I go research the library or tech stack and decide whether it’s actually worth using.
If it weren’t for the fact that even an AI trained only on factually correct data can conflate those data points into entirely novel “data” that may no longer be factually accurate, I wouldn’t mind the use of AI tools for this or much of anything.
But they can literally just combine everything they know to create something that appears normal and correct, while being absolutely fucked. I feel like using AI to generate code would just give you more work and waste time, because you’ll still need to fucking verify that it didn’t just output a bunch of unusable bullshit.
Relying on these things is absolutely stupid.
Completely agree. My coworkers spend more time prompting ChatGPT for useful text and then fixing that text than it would take them to actually write the thing in the first place. It’s nonsense.
Based on my experience of AI coding, I think this will only work for simple/common tasks, like writing a Python script to download a CSV file and convert it to JSON (something like the sketch below).
As soon as you get anywhere that isn’t all over the internet, it starts to bullshit.
But if you’re working in a domain it’s decent at, why not? I’ve found that in those cases, fixing the AI’s mistakes can be faster than writing the code myself. Actually, I often find it useful for helping me decide how I want to write code, because the AI does something dumb and I go “no, I obviously don’t want it like that”…
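To be clear about what I mean by a simple/common task, here’s a minimal sketch of that CSV-to-JSON script (the URL and output path are just placeholders for illustration):

```python
# Minimal sketch: fetch a CSV over HTTP and dump it as a JSON array.
# CSV_URL and OUT_PATH are placeholders, not real endpoints.
import csv
import io
import json
import urllib.request

CSV_URL = "https://example.com/data.csv"  # placeholder URL
OUT_PATH = "data.json"                    # placeholder output file


def csv_url_to_json(url: str, out_path: str) -> None:
    # Download the CSV as text.
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")

    # Parse rows into dicts keyed by the header row.
    rows = list(csv.DictReader(io.StringIO(text)))

    # Write the result out as a JSON array.
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(rows, f, indent=2)


if __name__ == "__main__":
    csv_url_to_json(CSV_URL, OUT_PATH)
```

That’s roughly the level of task where the output is usually fine after a quick review; anything much past that and you’re back to verifying everything by hand.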
Nah. I’ve only used AI as a last resort, and in my case that has worked out. I can’t see myself using AI for code again.
This sounds like something I’d put on my resume to get a coding job, but I’m not actually a coder.
It’d work, too.