> When I enter a building, I know that an engineer with a degree, or even a team of them, has meticulously designed it, taking into account the stresses on the ground, the fault lines, the stresses on the construction materials, the expected wear, etc.
You can bet that "AI" is coming for this too. The lawsuits that will result when buildings crumble and kill people because an LLM "hallucinated" will be tragic, but maybe we'll learn from it. But we probably won't.
Have you heard of the Post Office Horizon IT scandal[0]?
> Between 1999 and 2015, more than 900 subpostmasters were wrongfully convicted of theft, fraud and false accounting based on faulty Horizon data, with about 700 of these prosecutions carried out by the Post Office. Other subpostmasters were prosecuted but not convicted, forced to cover illusory shortfalls caused by Horizon with their own money, or had their contracts terminated.
>
> Although many subpostmasters had reported problems with the new software, and Fujitsu was aware that Horizon contained software bugs as early as 1999, the Post Office insisted that Horizon was robust and failed to disclose knowledge of the faults in the system during criminal and civil cases.
(Content warning: the linked article discusses suicide.)
Now think of places where LLMs are being deployed:
- accountancy[1][2]
- management systems similar to Horizon IT
- medical workers using it to pass their coursework (a friend of mine doing a nursing degree in the USA says students are encouraged to use Gemini, and she's already seen someone on the same course use it to complete their medical ethics homework...)
- ordinary people checking drug interactions[3], learning about pickling (and almost getting botulism), or talking to LLMs and getting poisoned by bromide[4]