> I know exactly what the result should be, the LLM is just typing it for me.
This is the mental model people should be working with. The LLM is there to tighten the loop from thought to code. You do still need to test it like an engineer; you just need to use it to make you more efficient.
It so happens that you *can* give an LLM half-baked thoughts and it will sometimes still do a good job, because the right thing is so straightforward. But in general, the vaguer and less clear your own thoughts, the lower the quality of the results, and the more iterations you'll need to refine them.
I know exactly what the result should be, the LLM is just typing it for me.
And it will do the typing while I get up and go to the bathroom (again, I'm getting old).
When I come back, it's done, and tests have been run that prove nothing broke.