no code implementations • 28 Jul 2024 • Yuan Xue, Nan Du, Anne Mottram, Martin Seneviratne, Andrew M. Dai
We propose to meta-learn a self-supervised patient trajectory forecasting learning rule by meta-training on a meta-objective that directly optimizes the utility of the patient representation for the subsequent clinical outcome prediction.
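A minimal sketch of the general recipe described here, assuming synthetic trajectories and a toy encoder: an inner loop pretrains the encoder with a weighted self-supervised forecasting loss, and an outer (meta) loop searches over those weights to maximize downstream outcome-prediction utility. The names, shapes, and the random-search meta-optimizer are illustrative assumptions, not the paper's learned learning rule.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, T, D = 256, 24, 8                       # patients, time steps, features (synthetic)
traj = torch.randn(N, T, D)                # toy patient trajectories
outcome = (traj[:, -1, 0] > 0).float()     # toy binary clinical outcome

def pretrain_encoder(horizon_weights, steps=50):
    """Inner loop: self-supervised forecasting with a weighted multi-horizon loss."""
    enc = nn.GRU(D, 32, batch_first=True)
    head = nn.Linear(32, D)
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(steps):
        _, h = enc(traj[:, :-2])           # encode the trajectory prefix
        pred = head(h[-1])
        loss = (horizon_weights[0] * (pred - traj[:, -2]).pow(2).mean()
                + horizon_weights[1] * (pred - traj[:, -1]).pow(2).mean())
        opt.zero_grad(); loss.backward(); opt.step()
    return enc

def downstream_utility(enc):
    """Meta-objective: utility of the frozen representation for outcome prediction."""
    with torch.no_grad():
        _, h = enc(traj)
        z = h[-1]
    probe = nn.Linear(32, 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(100):
        loss = bce(probe(z).squeeze(-1), outcome)
        opt.zero_grad(); loss.backward(); opt.step()
    return -loss.item()                    # higher is better

# Meta-training stand-in: random search over the self-supervised objective's parameters.
best_w, best_u = None, -float("inf")
for _ in range(5):
    w = torch.rand(2)
    u = downstream_utility(pretrain_encoder(w))
    if u > best_u:
        best_w, best_u = w, u
print("best horizon weights:", best_w, "meta-objective:", best_u)
```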
1 code implementation • 26 Dec 2022 • Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathanael Schärli, Aakanksha Chowdhery, Philip Mansfield, Blaise Agüera y Arcas, Dale Webster, Greg S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, Vivek Natarajan
To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars.
Ranked #4 on Multiple Choice Question Answering (MCQA) on MedMCQA (Dev Set, Acc-% metric)
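The mechanism named above can be sketched as soft prompt tuning: a few learned prompt embeddings are prepended to the input embeddings of a frozen model and are the only parameters trained. The tiny Transformer, vocabulary, and batch below are stand-ins for illustration under that assumption, not Med-PaLM or its training setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, d_model, prompt_len = 100, 32, 5

# A stand-in "pretrained" language model, kept frozen.
embed = nn.Embedding(vocab, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
lm_head = nn.Linear(d_model, vocab)
for m in (embed, encoder, lm_head):
    for p in m.parameters():
        p.requires_grad_(False)

# The only trainable parameters: a short learned soft prompt.
soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
opt = torch.optim.Adam([soft_prompt], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab, (8, 12))   # toy batch of exemplar token ids
targets = torch.randint(0, vocab, (8,))     # toy next-token targets

for _ in range(20):
    x = embed(tokens)                                         # (B, T, d_model)
    x = torch.cat([soft_prompt.expand(x.size(0), -1, -1), x], dim=1)
    logits = lm_head(encoder(x)[:, -1])                       # predict from last position
    loss = loss_fn(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()
```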
no code implementations • 20 Nov 2022 • Daniel Lopez-Martinez, Alex Yakubovich, Martin Seneviratne, Adam D. Lelkes, Akshit Tyagi, Jonas Kemp, Ethan Steinberg, N. Lance Downing, Ron C. Li, Keith E. Morse, Nigam H. Shah, Ming-Jun Chen
While it is well known in the ML community that deep learning models suffer from instability, the consequences for healthcare deployments are under-characterised.
no code implementations • 6 Jul 2022 • Eric Loreaux, Ke Yu, Jonas Kemp, Martin Seneviratne, Christina Chen, Subhrajit Roy, Ivan Protsyuk, Natalie Harris, Alexander D'Amour, Steve Yadlowsky, Ming-Jun Chen
We propose a joint model of intervention policy and adverse event risk as a means to explicitly communicate the model's assumptions about future interventions.
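One way to read this, as a hedged sketch rather than the published model: share an encoder between an intervention-policy head and a risk head that conditions on the intervention, so the assumed future treatment becomes an explicit input whenever the risk is queried. The data, architecture, and the "risk if untreated" query below are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, D = 512, 16
x = torch.randn(N, D)                                   # patient features (synthetic)
treated = (x[:, 0] + 0.5 * torch.randn(N) > 0).float()  # observed intervention
event = ((x[:, 1] - treated) + 0.5 * torch.randn(N) > 0).float()  # adverse event

shared = nn.Sequential(nn.Linear(D, 32), nn.ReLU())
policy_head = nn.Linear(32, 1)                          # models P(intervention | state)
risk_head = nn.Linear(32 + 1, 1)                        # models P(event | state, intervention)
params = (list(shared.parameters()) + list(policy_head.parameters())
          + list(risk_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(200):
    h = shared(x)
    policy_logit = policy_head(h).squeeze(-1)
    risk_logit = risk_head(torch.cat([h, treated.unsqueeze(-1)], dim=-1)).squeeze(-1)
    loss = bce(policy_logit, treated) + bce(risk_logit, event)   # joint objective
    opt.zero_grad(); loss.backward(); opt.step()

# At deployment, risk can be queried under an explicit intervention assumption,
# e.g. "risk if left untreated":
with torch.no_grad():
    h = shared(x)
    risk_if_untreated = torch.sigmoid(
        risk_head(torch.cat([h, torch.zeros(N, 1)], dim=-1)).squeeze(-1))
```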
1 code implementation • 17 Jul 2021 • Anand Avati, Martin Seneviratne, Emily Xue, Zhen Xu, Balaji Lakshminarayanan, Andrew M. Dai
Most ML approaches focus on generalization performance on unseen data that are similar to the training data (In-Distribution, or IND).
no code implementations • NeurIPS 2020 • Yuan Xue, Nan Du, Anne Mottram, Martin Seneviratne, Andrew M. Dai
The paradigm of 'pretraining' from a set of relevant auxiliary tasks and then 'finetuning' on a target task has been successfully applied in many different domains.
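A minimal sketch of that paradigm, with synthetic data and toy tasks standing in for the auxiliary and target tasks: the same encoder is first trained on an auxiliary objective, then reused and finetuned, with a fresh head and a smaller learning rate, on the target task.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N, D = 512, 16
x = torch.randn(N, D)
aux_y = (x[:, :4].sum(dim=1) > 0).float()     # toy auxiliary-task label
target_y = (x[:, 0] - x[:, 5] > 0).float()    # toy target-task label

encoder = nn.Sequential(nn.Linear(D, 32), nn.ReLU())
bce = nn.BCEWithLogitsLoss()

# 1) Pretraining on the auxiliary task.
aux_head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(aux_head.parameters()), lr=1e-2)
for _ in range(200):
    loss = bce(aux_head(encoder(x)).squeeze(-1), aux_y)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Finetuning the same encoder (fresh head, smaller learning rate) on the target task.
target_head = nn.Linear(32, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(target_head.parameters()), lr=1e-3)
for _ in range(200):
    loss = bce(target_head(encoder(x)).squeeze(-1), target_y)
    opt.zero_grad(); loss.backward(); opt.step()
```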
no code implementations • 6 Nov 2020 • Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains.
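A hedged sketch of the kind of stress test this finding motivates: train several predictors that differ only in random seed, then compare how often they disagree on in-distribution data versus a shifted "deployment" set. The synthetic data, the feature-scaling shift, and the disagreement metric are illustrative choices, not the paper's experiments.

```python
import torch
import torch.nn as nn

def train_model(seed, x, y):
    """Identical training pipeline; only the random seed differs between runs."""
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(x.size(1), 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(300):
        loss = bce(model(x).squeeze(-1), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

torch.manual_seed(0)
x = torch.randn(1024, 10)
y = (x[:, 0] + x[:, 1] > 0).float()            # two redundant predictive features
x_iid = torch.randn(256, 10)                   # in-distribution test set
x_shift = x_iid.clone(); x_shift[:, 1] *= 5.0  # simple "deployment" shift on one feature

models = [train_model(s, x, y) for s in range(3)]

def preds(m, data):
    with torch.no_grad():
        return (m(data).squeeze(-1) > 0).float()

for name, data in [("in-distribution", x_iid), ("shifted", x_shift)]:
    p = torch.stack([preds(m, data) for m in models])
    disagreement = (p.max(dim=0).values != p.min(dim=0).values).float().mean()
    print(f"{name}: fraction of examples where the seeds disagree = {disagreement:.2f}")
```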