Bruh the horsehair wig.
It’s funny. You ask anyone who uses GenBS in their daily workflow and they all say “it’s really good for these ancillary tasks but not for my core role”. Then you ask someone else whose core role is those ancillary tasks, and whose ancillary tasks are person A’s core role, and they will say the same thing.
Maybe they only assume it has any use whatsoever because they understand their core role but don’t understand their ancillary tasks.
The exceptions are lawyers and advertisers. I think this is because the average lawyer’s core role is to generate BS, and the majority of advertising is completely pointless.
There are good lawyers and advertising workers out there; they are the ones who don’t use GenBS.
AI looks like it’s good for X…if you aren’t familiar with, or don’t regularly do, X.
Same old fuckin’ story of work devaluing. “Hurr i can do your job”
I 100% believe this.
We’ve really looked into Copilot at work to see what it can do, and oh my god, the programmers on our team are excited and terrified at the same time.
How far it has come is astounding, and for the first time, amid all the AI hype and all the AI bashing, I have seen with my own eyes how much we will start relying on these tools.
We just spent a couple of hours playing with the thing to build up a project, with literal vibe-coding joy in our hearts, embracing it rather than bashing it and avoiding it like we normally do. We made projects that were fully tested and absolutely rock-damn solid in hours, projects that would have taken us weeks to make as professionals.
So now we are back looking at our terminal and our IDE and thinking oh my god I want some more of that.
So for a lawyer with an equally technical task at hand, with a lot of research to pull together, cross-referencing and all the stuff that goes with it, I can imagine that once they have tasted it, I don’t know how they will ever go back to not using it.
However, from a programming perspective, it either works or it doesn’t. From a legal perspective, that is something else entirely, so I guess the comparison is not quite equal.
Still, the days are numbered.
The problem is that when you cross-examine an AI hard, it eventually does a 180 and gives you exactly the opposite answer.
However, from a programming perspective, it either works or it doesn’t. From a legal perspective, that is something else entirely, so I guess the comparison is not quite equal.
To me, this is the crux of the matter. It’s why AI is absolutely worthless for engineering (the traditional kind) beyond asking “hey, what’s the part of the standard that says something like XYZ?” and then going directly to the source.
LLMs can’t reason.