Googling and reading for yourself lets you assess and compare sources and apply critical thinking specific to you and your own condition. Using AI takes all that control away from you and trusts a machine to do the reasoning and assessing, based on huge amounts of data that may not reflect your situation at all.
Googling allows you to choose the sources you trust; AI forces you to trust it as the source.
I know in Europe we have the GDPR, and in theory you can get bad information corrected, but in practice you still need to know who is holding it before you can take any action.
Then there's the laundering of data between brokers. One broker might acquire data via dubious means and then transfer it to another. In some jurisdictions, once that happens, the second company can do what it likes with the data without having to worry about the original source.
Say I'm interested in some condition and want to know more, so I ask a chatbot about it.
It decides that "asking for a friend" means I actually have that condition, and then silently passes that information on to data brokers.
Once it's in the broker network, it's treated as truth.
We lack the proper infrastructure to control our own personal data.
Hell, I doubt there's anyone alive who can even name every data broker, let alone contact them all to police what information they're passing around.