Search Results for author: Stephanie L. Hyland

Found 15 papers, 4 papers with code

A Generative Model of Words and Relationships from Multiple Sources

no code implementations • 1 Oct 2015 • Stephanie L. Hyland, Theofanis Karaletsos, Gunnar Rätsch

We propose a generative model which integrates evidence from diverse data sources, enabling the sharing of semantic information.

Link Prediction

Learning Unitary Operators with Help From u(n)

1 code implementation • 17 Jul 2016 • Stephanie L. Hyland, Gunnar Rätsch

A major challenge in the training of recurrent neural networks is the so-called vanishing or exploding gradient problem.
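The title refers to parameterising unitary recurrence matrices through the Lie algebra u(n) of skew-Hermitian matrices. Below is a minimal sketch of that general idea, assuming the standard exponential-map construction (the paper's specific basis and training procedure are not shown in this snippet): the matrix exponential of a skew-Hermitian matrix is unitary, so it preserves the norm of the hidden state and of backpropagated gradients.

```python
import numpy as np
from scipy.linalg import expm

def skew_hermitian(params, n):
    """Build a skew-Hermitian matrix L (L^H = -L) from 2*n*n real parameters."""
    a = params[: n * n].reshape(n, n)      # real parts
    b = params[n * n :].reshape(n, n)      # imaginary parts
    A = a + 1j * b
    return (A - A.conj().T) / 2            # anti-symmetrise: skew-Hermitian by construction

rng = np.random.default_rng(0)
n = 4
params = rng.normal(size=2 * n * n)

L = skew_hermitian(params, n)
U = expm(L)                                # exp of a skew-Hermitian matrix is unitary

# Unitarity means ||Ux|| = ||x||, so repeatedly applying U in a recurrent net
# neither shrinks nor blows up the hidden state or the backpropagated gradient.
print(np.allclose(U @ U.conj().T, np.eye(n)))    # True (up to numerical error)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
print(np.linalg.norm(U @ x), np.linalg.norm(x))  # equal norms
```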

Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs

6 code implementations • ICLR 2018 • Cristóbal Esteban, Stephanie L. Hyland, Gunnar Rätsch

We also describe novel evaluation methods for GANs, in which we generate a synthetic labelled training dataset, train a model on it, evaluate that model on a real test set, and vice versa.
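A minimal sketch of the "train on synthetic, test on real" evaluation described above, and its reverse, assuming both datasets come featurised with matching labels; the logistic-regression classifier and the random placeholder data are illustrative stand-ins, not the paper's setup:

```python
# Sketch of TSTR / TRTS evaluation: score a downstream classifier trained on
# one dataset (synthetic or real) against a test set drawn from the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def cross_dataset_score(X_train, y_train, X_test, y_test):
    """Train on one dataset, evaluate on the other (TSTR when train = synthetic)."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))

# Placeholder arrays standing in for featurised (e.g. flattened) time series.
rng = np.random.default_rng(0)
X_real, y_real = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
X_synth, y_synth = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)

tstr = cross_dataset_score(X_synth, y_synth, X_real, y_real)  # train synthetic, test real
trts = cross_dataset_score(X_real, y_real, X_synth, y_synth)  # train real, test synthetic
print(f"TSTR accuracy: {tstr:.2f}  TRTS accuracy: {trts:.2f}")
```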

Time Series · Time Series Analysis +1

Improving Clinical Predictions through Unsupervised Time Series Representation Learning

no code implementations • 2 Dec 2018 • Xinrui Lyu, Matthias Hueser, Stephanie L. Hyland, George Zerveas, Gunnar Raetsch

In this work, we investigate unsupervised representation learning on medical time series, which bears the promise of leveraging copious amounts of existing unlabeled data in order to eventually assist clinical decision making.

Decision Making · Representation Learning +2

Unsupervised Extraction of Phenotypes from Cancer Clinical Notes for Association Studies

no code implementations • 29 Apr 2019 • Stefan G. Stark, Stephanie L. Hyland, Melanie F. Pradier, Kjong Lehmann, Andreas Wicki, Fernando Perez Cruz, Julia E. Vogt, Gunnar Rätsch

To demonstrate the utility of our approach, we perform an association study of clinical features with somatic mutation profiles from 4,007 cancer patients and their tumors.

Clustering

An Empirical Study on the Intrinsic Privacy of SGD

1 code implementation • 5 Dec 2019 • Stephanie L. Hyland, Shruti Tople

Introducing noise in the training of machine learning systems is a powerful way to protect individual privacy via differential privacy guarantees, but comes at a cost to utility.
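To illustrate the noise-for-privacy trade-off mentioned above, here is a minimal sketch of a generic DP-SGD-style update (clip each per-example gradient, average, add Gaussian noise); this is a standard construction for obtaining differential privacy guarantees, not the paper's analysis of SGD's intrinsic randomness:

```python
import numpy as np

def noisy_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD-style step: clip per-example gradients, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # bound each example's influence
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_mult * clip_norm / len(clipped), size=w.shape)
    return w - lr * (mean_grad + noise)   # noise protects privacy but perturbs the update (utility cost)

# Toy usage: a linear model with squared-error loss on random data.
rng = np.random.default_rng(0)
w = np.zeros(5)
X, y = rng.normal(size=(32, 5)), rng.normal(size=32)
grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]   # per-example gradients
w = noisy_sgd_step(w, grads)
print(w)
```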

Inference Attack · Membership Inference Attack +1

ML4H Abstract Track 2020

no code implementations • 19 Nov 2020 • Emily Alsentzer, Matthew B. A. McDermott, Fabian Falck, Suproteem K. Sarkar, Subhrajit Roy, Stephanie L. Hyland

A collection of the accepted abstracts for the Machine Learning for Health (ML4H) workshop at NeurIPS 2020.

BIG-bench Machine Learning

Looking for Out-of-Distribution Environments in Multi-center Critical Care Data

no code implementations • 26 May 2022 • Dimitris Spathis, Stephanie L. Hyland

Clinical machine learning models show a significant performance drop when tested in settings not seen during training.

RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision

no code implementations • 19 Jan 2024 • Fernando Pérez-García, Harshita Sharma, Sam Bond-Taylor, Kenza Bouzid, Valentina Salvatelli, Maximilian Ilse, Shruthi Bannur, Daniel C. Castro, Anton Schwaighofer, Matthew P. Lungren, Maria Wetscherek, Noel Codella, Stephanie L. Hyland, Javier Alvarez-Valle, Ozan Oktay

We introduce RAD-DINO, a biomedical image encoder pre-trained solely on unimodal biomedical imaging data that obtains performance similar to or greater than state-of-the-art language-supervised biomedical models on a diverse range of benchmarks.

Semantic Segmentation
