Machine Learning for MDs Weekly Digest


What’s New in ML for MDs
Welcome to the ML for MDs Newsletter. The mission of ML for MDs is to connect physicians interested in machine learning. This newsletter provides the most relevant news, journal articles, and jobs at the intersection of medicine and machine learning.
Fun Facts
In honor of Sam Altman describing AI as a “printing press moment” during his Congressional testimony:
- Johannes Gutenberg, the inventor of the printing press, died poor; with so few people able to read, he struggled to sell his product.
- The Gutenberg Bible was printed in red ink, leading some to claim it was written in human blood and created by witchcraft.
This Week’s Top Stories
- Public health researchers described possible threats to human health posed by AI, including:
- The control and manipulation of people
- The use of lethal autonomous weapons
- The effects on work and employment
- Researchers have likely been using AI to help write their papers, without acknowledging it, for some time now; the expectation is that this will continue to increase
- ChatGPT increased the productivity of professionals writing business documents (memos, etc.) by 37%, even when users had no prior training with the tool
Weekly Summary
Three articles on ethics and AI reporting from the Lancet, the US government, and a clinical trial reporting consortium
- Ethics in LLMs from The Lancet’s eBioMedicine
- Three key limitations of LLMs:
- Models can’t reliably distinguish facts from misinformation, bias, or harmful content
- Hallucinations, and the difficulty models have identifying their own hallucinations (the author calls LLMs “excellent bullshitters”)
- Models are probabilistic, so they don’t always produce the same answer to the same task or question
- Possibilities of AI
- Documentation
- Clinical trial design/matching
- Data extraction from EHRs
- Translating physician jargon into language patients can more easily understand
- More efficient drug discovery
- Role of AI in healthcare
- Assist rather than replace
- From search engines to response engines
- The author lists useful articles in which tech executives urge caution with AI, from Nature Medicine, the WHO, the NYT, and Time Magazine
- Path forward for Ethical AI in Healthcare
- “There often are no second opportunities to get things right after releasing AI technology prematurely or hastily in the healthtech sector: user and regulator trust are easy to lose and very hard to regain.”
- Author recommends “close collaboration and communication between all involved stakeholder groups: clinician and patient users, technology developers and regulators”
- Two phases:
- First, “more nuanced and risk-conscious experimentation with research-grade generative AI systems accompanied by increased scrutiny of regulatory bodies, and first commercial product offerings that are targeted and regulated for very specific niche applications in health data management such as summarising or creating reports”
- Second, “generative AI models which understand the data they handle… LLMs might become intelligent and trustworthy artificial companions to clinicians and patients”, which the author describes as a “moonshot goal”
- Author provides a table of AI ethics recommendations related to human autonomy, human well-being and safety, transparency/explainability, responsibility/accountability, inclusiveness/equity, and responsiveness/sustainability
- Standards in healthcare AI
- From the National Institute of Standards and Technology
- Three main sections:
- How poorly handled AI in healthcare, including biased AI, can erode public trust
- Identifies three main categories of bias in AI (see figure below, which is a really great list of all the different kinds of biases):
- Statistical
- Human
- Systemic
- Identifies three ways to decrease bias:
- Datasets
- Human intervention
- Testing and evaluation

- AI reporting for research
- CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) is a research reporting framework for clinical trials involving AI
- Developed in conjunction with its companion guideline, SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence)
- A literature review plus expert consensus opinion produced 14 new checklist items, including:
- State that the intervention uses AI and its proposed use in the title or abstract
- Explain the purpose of using AI and its intended users
- Inclusion/exclusion criteria of participants and data
- Describe the AI algorithm and version, how data were obtained and selected, how poor-quality data were handled, any human–AI interaction at the data-input level, the AI’s outputs, and how the AI contributed to decision-making
- Report results of performance-error analyses and how errors were identified (or justify why such analyses weren’t performed)
- Describe restrictions on model access
Community News
- If you haven’t introduced yourself, please do so under the #intros channel.
Thanks for being a part of this community! As always, please let me know if you have questions/ideas/feedback.
Sarah
Sarah Gebauer, MD