This is the exact sort of performative garbage that LLMs are great for. I had to do an electrical install, but the installer felt the code required additional work (I don't think he was trying to rip us off; he sincerely believed it, since his is a volume business).
I got ChatGPT to come up with some plausible interpretations of the electrical code that allowed the install to continue, including citations. I don't know how accurate it all was, but I sent the argument off to the installer, and he came back and did the work the next day. Even if it gets audited, the chances of the auditor picking apart the arguments are probably slim to none. He has plausible deniability.
This is also why schools and colleges are struggling. No one expected superficially "high quality" work from average and poor students, and now that instructors have to carefully evaluate everyone's work, they've been caught with their pants down.
Someday superficial AIs will talk to other superficial AIs and they'll deadlock, requiring humans back into the mix. Until then, it's a useful way to do bureaucratic judo.