Doctors warn against health misinformation from AI sources
Summary
The Canadian Medical Association says many patients are turning to AI for health advice, and a CMA-commissioned survey found that people who followed AI guidance reported higher rates of adverse effects. Speakers at a CMA event described health misinformation as a public-health concern.
Content
The Canadian Medical Association is warning that more patients are turning to artificial intelligence for health advice and sometimes receive misleading or unsafe information. The CMA highlighted these concerns while presenting a new national survey at an event in Ottawa. Many Canadians lack a primary care provider and often look online for quick answers. Officials noted AI-generated content can appear definitive without accounting for an individual’s medical history.
Key points:
- The CMA commissioned an Abacus Data survey of 5,001 Canadians conducted in early November; because it was an online survey, a conventional margin of error cannot be assigned.
- Sixty-four per cent of respondents said they had encountered online health information they later learned was false or misleading. Twenty-seven per cent said they trust AI to provide accurate health information, about half reported using AI-generated search results, and 38 per cent said they had used ChatGPT for treatment advice.
- Survey respondents who followed health advice from AI were five times more likely to report an adverse reaction or other negative effect on their health.
- Speakers at the CMA event, including the CMA president and public-health guests, described health misinformation as a growing public-health concern and urged improved preparedness and clearer, reliable sources of information.
Summary:
Doctors say the spread of AI-based health misinformation is eroding trust between patients and clinicians and, in the survey, is linked to higher reports of adverse effects. The CMA and guest speakers called for better preparedness and clearer sources of reliable information.
