The SAFER Workshop brings together researchers from academia, industry, and clinical practice to explore the frontiers of model adaptation and reasoning in medical AI.

Our mission is to build a community dedicated to developing SAFER — Stable Adaptation and Faithful Evaluation of Reasoning — medical foundation models that are transparent, trustworthy, and robust across diverse clinical domains.

Background

The SAFER workshop focuses on faithful reasoning and safe adaptation in medical AI, emphasizing models that integrate imaging and other multimodal clinical data into their reasoning processes.

As medical AI evolves beyond purely performance-driven optimization, SAFER advocates for systems that:

  • Explain their reasoning transparently
  • Ground their outputs in verifiable medical evidence
  • Adapt reliably across imaging domains and modalities

By convening leaders from multiple disciplines, the workshop seeks to catalyze collaboration and establish guiding principles for the next generation of trustworthy medical foundation models.

Key themes include:

  • Hallucination detection and grounding in radiology and surgical vision
  • Trust, transparency, and evaluation of reasoning for image-based diagnosis
  • Stable model adaptation to unseen imaging domains and interventional video streams

Together, we aim to define the challenges and shape the solutions at the intersection of medical foundation models, vision–language systems, and multimodal clinical AI.

Important Dates

  • Paper submissions due: June 19, 2026
  • Notification of paper decisions: July 17, 2026
  • Camera-ready papers due: July 31, 2026
  • Workshop date: TBC