LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this will influence how they, and regulators, think about self-driving cars.
It's done in a roundabout way, usually with some variation of "you had a bad experience because you're using the tool incorrectly; get good at prompting".
That's a response to "I don't get good results with LLMs, and therefore conclude that getting good results with them is not possible". I have never seen anyone claim that they make no mistakes if you prompt them correctly.
"LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything."

You take issue with my response of:

"loads of DEVs on here will claim LLMs are infallible"
You're not really making sense. I'm not straw-manning anything; I'm directly addressing the statement that was made. What exactly do you think I'm misrepresenting?
It's entirely valid to say "there are loads of supposed experts who don't see this point, and you're expecting the general public to?". That is clearly my statement.
You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part.
Do pay better attention, please. And your entire last sentence is way over the line. We're not on Reddit.