Search Results for author: Ge Yang

Found 30 papers, 17 papers with code

Robust Source-Free Domain Adaptation for Fundus Image Segmentation

1 code implementation • 25 Oct 2023 • Lingrui Li, Yanfeng Zhou, Ge Yang

Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled training data in a source domain to a target domain that has only unlabelled data.

Image Segmentation • Medical Image Segmentation +5

Compositional Sculpting of Iterative Generative Processes

1 code implementation • 28 Sep 2023 • Timur Garipov, Sebastiaan De Peuter, Ge Yang, Vikas Garg, Samuel Kaski, Tommi Jaakkola

A key challenge in composing iterative generative processes, such as GFlowNets and diffusion models, is that to realize the desired target distribution, all steps of the generative process need to be coordinated, and satisfy delicate balance conditions.

ADFA: Attention-augmented Differentiable top-k Feature Adaptation for Unsupervised Medical Anomaly Detection

1 code implementation • 29 Aug 2023 • Yiming Huang, Guole Liu, Yaoru Luo, Ge Yang

We then apply differentiable top-k feature adaptation to train the patch descriptor, mapping the extracted feature representations to a new vector space, enabling effective detection of anomalies.

Supervised Anomaly Detection
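The top-k idea in the snippet above can be illustrated with a minimal sketch: patch features are mapped through a (here untrained, randomly initialized) linear adapter into a new vector space, and a patch is scored by its mean distance to the k nearest adapted features from a bank of normal patches. The adapter, bank, and dimensions are all illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical memory bank of normal patch features (N patches, D dims).
memory_bank = rng.normal(size=(200, 16))

# A linear adapter W stands in for the trained feature-adaptation layer
# that maps extracted features into a new vector space.
W = rng.normal(size=(16, 8)) / np.sqrt(16)

def topk_anomaly_score(feature, bank, W, k=5):
    """Score a patch by the mean distance to its k nearest adapted
    normal features; a larger score means more anomalous."""
    z = feature @ W                       # adapt the query feature
    bank_z = bank @ W                     # adapt the memory bank
    dists = np.linalg.norm(bank_z - z, axis=1)
    return np.sort(dists)[:k].mean()      # mean of the k smallest distances

normal_patch = rng.normal(size=16)
anomalous_patch = rng.normal(size=16) + 5.0  # far from the normal bank

s_normal = topk_anomaly_score(normal_patch, memory_bank, W)
s_anom = topk_anomaly_score(anomalous_patch, memory_bank, W)
```

In the paper the top-k selection is made differentiable so the adapter can be trained end to end; here the hard `np.sort` version only illustrates the scoring rule.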

Improving Generalization of Adversarial Training via Robust Critical Fine-Tuning

1 code implementation • ICCV 2023 • Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang

The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.

Adversarial Robustness
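A schematic of the RiFT recipe as described in the abstract: after adversarial training, fine-tune only the module identified as non-robust-critical on a standard (clean) objective while freezing everything else. The module names, the quadratic toy loss, and the update loop below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "adversarially trained" model: two parameter blocks.
params = {"block1": rng.normal(size=3), "block2": rng.normal(size=3)}
frozen_before = params["block1"].copy()

# Assumed to be identified beforehand by a robustness-sensitivity probe.
non_robust_critical = "block2"
lr = 0.1

for _ in range(100):
    # Gradient of the toy clean loss sum(w**2) w.r.t. the tuned module;
    # only this module is updated, the rest of the model stays frozen.
    grad = 2.0 * params[non_robust_critical]
    params[non_robust_critical] -= lr * grad
```

The point of the sketch is the freezing pattern: fine-tuning capacity that is redundant for robustness lets standard accuracy improve without touching the robust-critical weights.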

Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation

no code implementations • 27 Jul 2023 • William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola

Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization.

Few-Shot Learning • Language Modelling

Neural Volumetric Memory for Visual Locomotion Control

no code implementations • CVPR 2023 • Ruihan Yang, Ge Yang, Xiaolong Wang

To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric memory architecture that explicitly accounts for the SE(3) equivariance of the 3D world.

Strong Lensing Source Reconstruction Using Continuous Neural Fields

1 code implementation • 29 Jun 2022 • Siddharth Mishra-Sharma, Ge Yang

From the nature of dark matter to the rate of expansion of our Universe, observations of distant galaxies distorted through strong gravitational lensing have the potential to answer some of the major open questions in astrophysics.

Overcoming the Spectral Bias of Neural Value Approximation

no code implementations • ICLR 2022 • Ge Yang, Anurag Ajay, Pulkit Agrawal

Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm.

Continuous Control • regression +2

Semi-Supervised Segmentation of Mitochondria from Electron Microscopy Images Using Spatial Continuity

1 code implementation • 6 Jun 2022 • Yunpeng Xiao, Youpeng Zhao, Ge Yang

Fully supervised deep learning models developed for this task achieve excellent performance but require substantial amounts of annotated data for training.

Rapid Locomotion via Reinforcement Learning

no code implementations • 5 May 2022 • Gabriel B Margolis, Ge Yang, Kartik Paigwar, Tao Chen, Pulkit Agrawal

Agile maneuvers such as sprinting and high-speed turning in the wild are challenging for legged robots.

reinforcement-learning • Reinforcement Learning (RL)

Bilinear value networks

1 code implementation • 28 Apr 2022 • Zhang-Wei Hong, Ge Yang, Pulkit Agrawal

The dominant framework for off-policy multi-goal reinforcement learning involves estimating a goal-conditioned Q-value function.

Multi-Goal Reinforcement Learning
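A "bilinear value network" in this setting factors the goal-conditioned Q-function into two embedding streams combined by a dot product. The sketch below reduces the two streams to random linear maps (real networks would be learned MLPs), so the dimensions and the exact input split are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions: state, action, goal, and a shared embedding size.
S, A, G, D = 6, 2, 6, 4

# Two stand-in "networks", reduced to linear maps:
# phi embeds (state, action); psi embeds (state, goal).
W_phi = rng.normal(size=(S + A, D))
W_psi = rng.normal(size=(S + G, D))

def bilinear_q(state, action, goal):
    """Goal-conditioned Q-value as a dot product of two embeddings,
    i.e. a low-rank bilinear factorization of Q(s, a, g)."""
    phi = np.concatenate([state, action]) @ W_phi
    psi = np.concatenate([state, goal]) @ W_psi
    return float(phi @ psi)

s, a, g = rng.normal(size=S), rng.normal(size=A), rng.normal(size=G)
q = bilinear_q(s, a, g)
```

The factorized form imposes structure on how the Q-value can vary with the goal, which is the inductive bias the title refers to.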

Invariance Through Latent Alignment

no code implementations • 15 Dec 2021 • Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly Stadie

A robot's deployment environment often involves perceptual changes that differ from what it has experienced during training.

Data Augmentation • Test

Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer

1 code implementation • NeurIPS 2021 • Ge Yang, Edward Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, Jianfeng Gao

Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization ($\mu$P), many optimal HPs remain stable even as model size changes.
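The transfer idea can be caricatured in a few lines: tune a hyperparameter (here a learning rate) on a small proxy model, then reuse it at larger widths with a μP-style per-layer rescaling instead of re-tuning. The 1/width learning-rate scaling and 1/sqrt(fan_in) init below are a schematic of the idea, not the paper's full parametrization.

```python
import numpy as np

base_width = 64   # width of the small proxy model
base_lr = 0.1     # "tuned" once on the proxy

def scaled_lr(width, base_width=base_width, base_lr=base_lr):
    # Hidden-layer learning rate shrinks as width grows, so the
    # optimum found at base_width remains (approximately) optimal
    # at larger widths -- zero-shot transfer, no re-tuning.
    return base_lr * base_width / width

def init_hidden(fan_in, fan_out, rng=np.random.default_rng(0)):
    # Width-aware initialization: standard deviation ~ 1/sqrt(fan_in).
    return rng.normal(scale=1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))

lr_small = scaled_lr(64)     # same as the tuned value
lr_large = scaled_lr(1024)   # rescaled for a 16x wider model
W = init_hidden(1024, 1024)
```

The practical payoff claimed in the paper is that the expensive HP search is run only on the small model.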

Understanding the Generalization Gap in Visual Reinforcement Learning

no code implementations • 29 Sep 2021 • Anurag Ajay, Ge Yang, Ofir Nachum, Pulkit Agrawal

Deep Reinforcement Learning (RL) agents have achieved superhuman performance on several video game suites.

Data Augmentation • reinforcement-learning +1

Learning Task Informed Abstractions

1 code implementation • 29 Jun 2021 • Xiang Fu, Ge Yang, Pulkit Agrawal, Tommi Jaakkola

Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features.

Model-based Reinforcement Learning • reinforcement-learning +1

Advancing biological super-resolution microscopy through deep learning: a brief review

no code implementations • 24 Jun 2021 • Tianjie Yang, Yaoru Luo, Wei Ji, Ge Yang

We conclude with an outlook on how deep learning could shape the future of this new generation of light microscopy technology.

Specificity • Super-Resolution

Deep Neural Networks Learn Meta-Structures from Noisy Labels in Semantic Segmentation

no code implementations • 22 Mar 2021 • Yaoru Luo, Guole Liu, Yuanhao Guo, Ge Yang

When trained with these noisy labels, DNNs provide largely the same segmentation performance as when trained on the original ground-truth labels.

Image Classification • Image Segmentation +2

Learning Latent Landmarks for Generalizable Planning

no code implementations • 1 Jan 2021 • Lunjun Zhang, Ge Yang, Bradly C Stadie

Planning - the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems - is a hallmark of human intelligence.

Continuous Control • reinforcement-learning +1

World Model as a Graph: Learning Latent Landmarks for Planning

1 code implementation • 25 Nov 2020 • Lunjun Zhang, Ge Yang, Bradly C. Stadie

Planning - the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems - is a hallmark of human intelligence.

Continuous Control • Graph Learning +2

Few shot domain adaptation for in situ macromolecule structural classification in cryo-electron tomograms

no code implementations • 30 Jul 2020 • Liangyong Yu, Ran Li, Xiangrui Zeng, Hongyi Wang, Jie Jin, Ge Yang, Rui Jiang, Min Xu

Motivation: Cryo-Electron Tomography (cryo-ET) visualizes the structure and spatial organization of macromolecules and their interactions with other subcellular components inside single cells, in a close-to-native state at sub-molecular resolution.

Classification • Domain Adaptation +2

Plan2Vec: Unsupervised Representation Learning by Latent Plans

1 code implementation • 7 May 2020 • Ge Yang, Amy Zhang, Ari S. Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra

In this paper we introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.

Motion Planning • reinforcement-learning +2

Probabilistic Inference for Camera Calibration in Light Microscopy under Circular Motion

1 code implementation • 30 Oct 2019 • Yuanhao Guo, Fons J. Verbeek, Ge Yang

Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion.

3D Reconstruction • Camera Calibration +1

The Importance of Sampling in Meta-Reinforcement Learning

no code implementations • NeurIPS 2018 • Bradly Stadie, Ge Yang, Rein Houthooft, Peter Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever

Results are presented on a new environment we call 'Krazy World': a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning.

Meta Reinforcement Learning • reinforcement-learning +1

Learning Plannable Representations with Causal InfoGAN

1 code implementation • NeurIPS 2018 • Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart Russell, Pieter Abbeel

Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations.

Representation Learning

Mean Field Residual Networks: On the Edge of Chaos

no code implementations • NeurIPS 2017 • Ge Yang, Samuel Schoenholz

Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on the average when propagating inputs forward or gradients backward.
