What I see are people returning to the same categorical errors in medicine.
Drawing medical conditions from an urn occasionally yields a correct diagnosis, even more so if the conditions in the urn are weighted by their prevalence in society. But a disease lottery is not medical practice.
Two essential errors exist in medicine:
1. Delivering a wrong intervention.
2. Failing to deliver an intervention at the right time due to misdiagnosis.
These are the sins of harm and distraction. The metric for judging a system is not whether it gets things right on one occasion, but how often it commits these errors on all the others.
Doctors err because people and institutions are imperfect, biology is messy, and human variability is immense. But the first can be attenuated by a plurality of opinions and greater resources (including time), while statistical systems remain inherently vulnerable to the latter two.
Language models hallucinate and commit both essential errors, all while speaking in a confident and convincing manner. What OpenAI is advertising is a system that nudges vulnerable people to abandon, distrust, ignore, or simply avoid real medical practitioners and rely instead on a statistical system, out of an unjustified, naive trust in the machine.
As a reminder to those concerned with healthcare accessibility: picking the wrong solution to a problem for lack of a right one does not solve the problem. That reasoning was the basis of practices such as bloodletting and lobotomy. Time and again, medical science teaches us the limits of this kind of thinking.