Unpopular opinion: tech-savvy forum users are much more hostile towards the integration of AI than your average idgaf-how-but-i-want-to-get-this-task-done-quickly-and-without-much-effort-employee.
The thing about that is, it doesn’t work.
Even enterprise Claude, even if you’re doing something routine, is still going to need shitloads of “fixing”.
The problem is that the vibe coders who don’t know wtf they’re doing aren’t going to know that there are problems. They’re going to give the prompt, get something back. Try it. “Huh, it doesn’t work!” And try it again.
Maybe they’re smart enough to figure out what the error is, get something new back, try it… find out it doesn’t work….
You see how that goes? Someone who’s experienced knows what they can pull from libraries and what they need to tweak to get it running, and doesn’t need the model for that.
Guess whose code is going to be more stable and more effective with fewer vulnerabilities?
It’s a liability time bomb, and the reason tech-savvy people are more hostile to it is because they’re the ones who know its capabilities.
If you hire someone to vibe-code important stuff, you should be fired or go bankrupt. I was talking more about text-based generation, e.g. writing emails, summarizing reports, filling out spreadsheets, etc.
Never said AI was a good thing though, just saying a lot of people who are not posting in a place like this are regularly using AI and don’t mind it being everywhere.
I guess the guy doing the White House’s official communication also enjoys using AI generation. The amount of crap they put out might really normalize the use of AI images in corporate and political settings. Again, I’m not saying this is a good thing overall; I’m mentioning it in relation to tech companies’ policy of putting those AI buttons everywhere.
Even more unpopular: beyond a certain skill level AI starts to look like a feature again too.
But it’s Windows, so there’s no saving that turd no matter how much you polish it at this point.
Regardless of skill level, for-profit GenAI/LLM AI has a terrible economic (funding focus), political (regulatory capture), social (dataset cleanup, PR floods on FLOSS projects, spam & scams), and ecological (GPU deprecation pace, data centers) impact.
So… even if somehow a person is skilled enough to finally find a good use for models hosted by Anthropic, OpenAI, etc., they unfortunately can’t disentangle it from all those negative externalities.
Another negative is liability.
Let’s say your org provides, I dunno, the code that controls lighting for a smart facility.
Let’s say the lighting doesn’t work, and as a result someone slips and dies. Let’s say it’s 50/50 whether the code is at fault or not.
Your org is now looking at liability in a wrongful death lawsuit.
Even if you can argue that it was being used wrong, it’s still going to cost your org more than it would have cost to pay someone who’s a proper coder to do it.
That’s the whole point I’m making. A “proper coder” can leverage a model without turning it loose on a whole code base and saying “just fuck my shit up”.
You write the logic for a panel and you need to add a centered button in a UI? Done. You need to grab the proper tar flags for data repackaging and transport? Done. You need to actually devise a scalable framework for a finite state machine capturing failure modes? Hooray, you now have time to focus on exactly that, because it’s a bad use case for an LLM. This is what you pay real people for.
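The tar task is a good example of the shape of thing a model handles well: a small, verifiable lookup. A minimal sketch of the kind of one-liner meant here (the exact flags and filenames are illustrative, not from the post):

```shell
# Pack a directory into a gzipped tarball for transport,
# then list the archive contents to sanity-check it before shipping.
mkdir -p demo_src && echo "payload" > demo_src/data.txt
tar -czpf demo.tar.gz demo_src   # c=create, z=gzip, p=preserve permissions, f=output file
tar -tzf demo.tar.gz             # t=list contents without extracting
```

The point being: an experienced person can verify that output at a glance (list the archive, extract it to a temp dir), which is exactly what makes it a safe delegation.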
All of this lays the groundwork for future endeavors which hopefully address those shortcomings. The box is open and it’s unlikely to close, but I agree these models can’t keep sucking as much as they do and still be viable in the long term without changes.