1 code implementation • 2 Sep 2024 • Yanfeng Zhou, Lingrui Li, Zichen Wang, Guole Liu, Ziwen Liu, Ge Yang
So far, however, XNet still faces limitations, including performance degradation when images lack high-frequency (HF) information, underutilization of raw images, and insufficient fusion.
1 code implementation • 20 Jul 2024 • Jiaxing Huang, Yanfeng Zhou, Yaoru Luo, Guole Liu, Heng Guo, Ge Yang
A fundamental property of such structures is their topological self-similarity, which can be quantified by fractal features such as fractal dimension (FD).
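Fractal dimension can be estimated with the standard box-counting method; a minimal illustrative sketch (not the paper's exact feature extractor, and with box sizes chosen for demonstration) looks like this:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary 2D mask via box counting.

    For each box size s, count how many s x s boxes contain foreground
    pixels; the FD estimate is the slope of log(count) vs. log(1/s).
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the grid tiles the mask evenly, then pool s x s blocks.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        pooled = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(pooled.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled square is 2-dimensional, so its box-counting estimate is 2.
print(round(box_counting_dimension(np.ones((64, 64), dtype=bool)), 2))  # 2.0
```

For a space-filling region the box count scales as (1/s)^2, so the fitted slope recovers dimension 2 exactly; fractal structures such as vessel trees yield non-integer slopes.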
no code implementations • 1 Jul 2024 • Xuxin Cheng, Jialong Li, Shiqi Yang, Ge Yang, Xiaolong Wang
Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations.
no code implementations • 3 Jun 2024 • Hui Xie, Ge Yang, Wenjuan Gao
With the rapid development of deep learning, Deep Spiking Neural Networks (DSNNs) have emerged as a promising approach due to their unique spike-event processing and asynchronous computation.
no code implementations • 24 May 2024 • Jia He, Bonan Li, Ge Yang, Ziwen Liu
Solving 3D medical inverse problems such as image restoration and reconstruction is crucial in modern medicine.
no code implementations • 1 Apr 2024 • Ri-Zhao Qiu, Ge Yang, Weijia Zeng, Xiaolong Wang
Scene representations using 3D Gaussian primitives have produced excellent results in modeling the appearance of static and dynamic 3D scenes.
no code implementations • 12 Mar 2024 • Ri-Zhao Qiu, Yafei Hu, Ge Yang, Yuchen Song, Yang Fu, Jianglong Ye, Jiteng Mu, Ruihan Yang, Nikolay Atanasov, Sebastian Scherer, Xiaolong Wang
An open problem in mobile manipulation is how to represent objects and scenes in a unified manner, so that robots can use the representation both for navigating in the environment and for manipulating objects.
no code implementations • 26 Feb 2024 • Xuxin Cheng, Yandong Ji, Junming Chen, Ruihan Yang, Ge Yang, Xiaolong Wang
Can we enable humanoid robots to generate rich, diverse, and expressive motions in the real world?
1 code implementation • 25 Oct 2023 • Lingrui Li, Yanfeng Zhou, Ge Yang
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled training data in the source domain to a target domain where only unlabelled data are available.
1 code implementation • NeurIPS 2023 • Timur Garipov, Sebastiaan De Peuter, Ge Yang, Vikas Garg, Samuel Kaski, Tommi Jaakkola
A key challenge in composing iterative generative processes, such as GFlowNets and diffusion models, is that to realize the desired target distribution, all steps of the generative process need to be coordinated, and satisfy delicate balance conditions.
1 code implementation • 29 Aug 2023 • Yiming Huang, Guole Liu, Yaoru Luo, Ge Yang
We then apply differentiable top-k feature adaptation to train the patch descriptor, mapping the extracted feature representations to a new vector space, enabling effective detection of anomalies.
1 code implementation • ICCV 2023 • Kaijie Zhu, Jindong Wang, Xixu Hu, Xing Xie, Ge Yang
The core idea of RiFT is to exploit the redundant capacity for robustness by fine-tuning the adversarially trained model on its non-robust-critical module.
1 code implementation • 27 Jul 2023 • William Shen, Ge Yang, Alan Yu, Jansen Wong, Leslie Pack Kaelbling, Phillip Isola
Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization.
1 code implementation • 11 Jul 2023 • Jian Zhang, Runwei Ding, Miaoju Ban, Ge Yang
It follows the unsupervised setting, in which only normal (defect-free) images are used for training.
no code implementations • CVPR 2023 • Ruihan Yang, Ge Yang, Xiaolong Wang
To solve this problem, we follow the paradigm in computer vision that explicitly models the 3D geometry of the scene and propose Neural Volumetric Memory (NVM), a geometric memory architecture that explicitly accounts for the SE(3) equivariance of the 3D world.
1 code implementation • ICCV 2023 • Yanfeng Zhou, Jiaxing Huang, Chenlong Wang, Le Song, Ge Yang
Perturbations in consistency-based semi-supervised models are often artificially designed.
1 code implementation • 29 Jun 2022 • Siddharth Mishra-Sharma, Ge Yang
From the nature of dark matter to the rate of expansion of our Universe, observations of distant galaxies distorted through strong gravitational lensing have the potential to answer some of the major open questions in astrophysics.
no code implementations • ICLR 2022 • Ge Yang, Anurag Ajay, Pulkit Agrawal
Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm.
1 code implementation • 6 Jun 2022 • Yunpeng Xiao, Youpeng Zhao, Ge Yang
Fully supervised deep learning models developed for this task achieve excellent performance but require substantial amounts of annotated data for training.
no code implementations • 5 May 2022 • Gabriel B Margolis, Ge Yang, Kartik Paigwar, Tao Chen, Pulkit Agrawal
Agile maneuvers such as sprinting and high-speed turning in the wild are challenging for legged robots.
1 code implementation • 30 Apr 2022 • Yaoru Luo, Guole Liu, Yuanhao Guo, Ge Yang
Our understanding of the learning behavior of DNNs trained by noisy segmentation labels remains limited.
1 code implementation • 28 Apr 2022 • Zhang-Wei Hong, Ge Yang, Pulkit Agrawal
The dominant framework for off-policy multi-goal reinforcement learning involves estimating a goal-conditioned Q-value function.
no code implementations • 15 Dec 2021 • Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly Stadie
A robot's deployment environment often involves perceptual changes that differ from what it has experienced during training.
1 code implementation • NeurIPS 2021 • Ge Yang, Edward Hu, Igor Babuschkin, Szymon Sidor, Xiaodong Liu, David Farhi, Nick Ryder, Jakub Pachocki, Weizhu Chen, Jianfeng Gao
Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization ($\mu$P), many optimal HPs remain stable even as model size changes.
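One commonly cited consequence of $\mu$P is that learning rates tuned on a small base model transfer to wider models after a width-dependent rescaling. The sketch below illustrates that rescaling for Adam as it is usually stated (matrix-like hidden weights scale as base_width/width, vector-like parameters keep the base rate); the constants and parameter taxonomy here are illustrative assumptions, not the paper's full prescription:

```python
def mup_adam_lr(base_lr, base_width, width, param_type):
    """Width-dependent Adam learning-rate scaling in the spirit of muP.

    Matrix-like hidden weights get their lr scaled by base_width/width,
    while vector-like parameters (biases, layernorm gains) keep the base
    lr. The complete per-layer rules are given in the paper and the mup
    package; this is a simplified sketch.
    """
    if param_type == "matrix":
        return base_lr * base_width / width
    return base_lr

# Tune at width 128, transfer to width 1024: hidden-weight lr shrinks 8x.
print(mup_adam_lr(1e-3, 128, 1024, "matrix"))  # 0.000125
print(mup_adam_lr(1e-3, 128, 1024, "bias"))    # 0.001
```

The point of the parametrization is that, with this rescaling applied automatically, the *base* learning rate found on the small model remains near-optimal at larger widths, so expensive HP sweeps only need to run once.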
no code implementations • 29 Sep 2021 • Anurag Ajay, Ge Yang, Ofir Nachum, Pulkit Agrawal
Deep Reinforcement Learning (RL) agents have achieved superhuman performance on several video game suites.
no code implementations • ICLR 2022 • Ge Yang, Zhang-Wei Hong, Pulkit Agrawal
We simultaneously learn both components.
1 code implementation • 29 Jun 2021 • Xiang Fu, Ge Yang, Pulkit Agrawal, Tommi Jaakkola
Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features.
no code implementations • 24 Jun 2021 • Tianjie Yang, Yaoru Luo, Wei Ji, Ge Yang
We conclude with an outlook on how deep learning could shape the future of this new generation of light microscopy technology.
no code implementations • 22 Mar 2021 • Yaoru Luo, Guole Liu, Yuanhao Guo, Ge Yang
When trained with these noisy labels, DNNs provide largely the same segmentation performance as when trained on the original ground truth.
no code implementations • 1 Jan 2021 • Lunjun Zhang, Ge Yang, Bradly C Stadie
Planning - the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems - is a hallmark of human intelligence.
1 code implementation • 25 Nov 2020 • Lunjun Zhang, Ge Yang, Bradly C. Stadie
Planning - the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems - is a hallmark of human intelligence.
no code implementations • 30 Jul 2020 • Liangyong Yu, Ran Li, Xiangrui Zeng, Hongyi Wang, Jie Jin, Ge Yang, Rui Jiang, Min Xu
Motivation: Cryo-Electron Tomography (cryo-ET) visualizes structure and spatial organization of macromolecules and their interactions with other subcellular components inside single cells in the close-to-native state at sub-molecular resolution.
1 code implementation • 7 May 2020 • Ge Yang, Amy Zhang, Ari S. Morcos, Joelle Pineau, Pieter Abbeel, Roberto Calandra
In this paper we introduce plan2vec, an unsupervised representation learning approach that is inspired by reinforcement learning.
1 code implementation • 30 Oct 2019 • Yuanhao Guo, Fons J. Verbeek, Ge Yang
Robust and accurate camera calibration is essential for 3D reconstruction in light microscopy under circular motion.
no code implementations • NeurIPS 2018 • Bradly Stadie, Ge Yang, Rein Houthooft, Peter Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever
Results are presented on a new environment we call 'Krazy World': a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning.
1 code implementation • NeurIPS 2018 • Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart Russell, Pieter Abbeel
Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations.
7 code implementations • ICLR 2018 • Bradly C. Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever
We consider the problem of exploration in meta reinforcement learning.
no code implementations • NeurIPS 2017 • Ge Yang, Samuel Schoenholz
Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on the average when propagating inputs forward or gradients backward.
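The exponential forward dynamics described above are easy to see numerically: with the weight variance below its critical value, activation norms in a random deep tanh network decay roughly geometrically with depth. A tiny sketch (width, depth, and sigma_w are illustrative choices, not values from the paper):

```python
import numpy as np

# Propagate a random input through a deep tanh network whose weights are
# drawn i.i.d. with a small variance (sigma_w below the critical value).
rng = np.random.default_rng(0)
width, depth, sigma_w = 512, 30, 0.5  # illustrative values

x = rng.standard_normal(width)
norms = [np.linalg.norm(x)]
for _ in range(depth):
    # Variance sigma_w^2 / width keeps pre-activations O(1) per neuron.
    W = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
    x = np.tanh(W @ x)
    norms.append(np.linalg.norm(x))

# The norm shrinks by roughly a constant factor per layer, so the ratio
# of final to initial norm is vanishingly small after 30 layers.
print(norms[-1] / norms[0])
```

With sigma_w above the critical value the same experiment shows exponential growth instead, which is the motivation for initializing at criticality in mean-field analyses of deep networks.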