Of course I can't tell the difference. That's not the point. And yes, humans can leave stupid comments too.
The difference is I can ping humans on Slack and get clarification.
I don't want reasons because I think comments are neat. If I'm tracking this sort of thing down, something is broken and I'm trying to fix it without breaking anything else.
It only takes screwing this up a couple of times before you learn what a Chesterton's Fence is lol.
You are framing this as an AI problem, but from what I’m hearing, this is just an engineering culture problem.
You should not bet on the ability to ping humans on Slack long-term. Not because AI is going to replace human engineers, but because humans have fallible memories and leave jobs. To the extent that your processes depend on regularly asking other engineers “why the hell did you do this”, your processes are holding you back.
If anything, AI potentially makes this easier. Because it’s really easy to prompt the AI to record why the hell things are done the way they are, whether recording its own “thoughts” or recording the “why” it was given by an engineer.
It's not an engineering culture problem lol, I promise. I have over a decade in this career and I've worked at places with fantastic and rigorous processes and at places with awful ones. The better places Slacked each other a lot.
I don't understand what's so hard to understand about "I need to understand the actual ramifications of my changes before I make them, and no generated robotext is gonna tell me that."
StackOverflow is a tool. You could use it to look for a solution to a bug you're investigating. You could use it to learn new techniques. You could use it to guide you through tradeoffs in different options. You can also use it to copy/paste code you don't understand and break your production service. That's not a problem with StackOverflow.
> "I need to understand the actual ramifications of my changes before I make them and no generated robotext is gonna tell me that"
Who's checking in this robotext?
* Is it some rogue AI agent? Who gave it unfettered access to your codebase, and why?
* Is it you, using an LLM to try to fix a bug? Yeah, don't check it in if you don't understand what you got back or why.
* Is it your peers, checking in code they don't understand? Then you do have a culture problem.
An LLM gives you code. It doesn't free you of the responsibility to understand the code you check in. If the only way you can use an LLM is to blindly accept what it gives you, then yeah, I guess don't use an LLM. But then you also probably shouldn't use StackOverflow. Or anything else that might give you code you'd be tempted to check in blindly.