no code implementations • 26 May 2023 • Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan
Biological neural systems have evolved to adapt to their ecological environments for efficiency and effectiveness, wherein neurons with heterogeneous structures and rich dynamics are optimized to accomplish complex cognitive tasks.
1 code implementation • 14 May 2023 • ZiHao Wang, Le Ma, Chen Zhang, Bo Han, Yikai Wang, Xinyi Chen, HaoRong Hong, Wenbo Liu, Xinda Wu, Kejun Zhang
Existing studies mainly focus on achieving a real-time emotional fit, while the issue of soft transitions remains understudied, affecting the overall emotional coherence of the music.
no code implementations • 18 Apr 2023 • Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai Yi, Ziwei Liu
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
no code implementations • 15 Feb 2023 • Jinxia Zhang, Xinyi Chen, Haikun Wei, Kanjian Zhang
To solve these problems, we propose a novel lightweight high-performance model for automatic defect detection of PV cells in electroluminescence (EL) images based on neural architecture search and knowledge distillation.
no code implementations • 7 Feb 2023 • Vladimir Feinberg, Xinyi Chen, Y. Jennifer Sun, Rohan Anil, Elad Hazan
Adaptive regularization methods that exploit more than the diagonal entries exhibit state-of-the-art performance for many tasks, but can be prohibitive in terms of memory and running time.
no code implementations • 19 Jan 2023 • Xinyi Chen, Elad Hazan
Selecting the best hyperparameters for a particular optimization instance, such as the learning rate and momentum, is an important but nonconvex problem.
no code implementations • 1 Jun 2022 • Xinyi Chen, Elad Hazan, Tongyang Li, Zhou Lu, Xinzhao Wang, Rui Yang
In the fundamental problem of shadow tomography, the goal is to efficiently learn an unknown $d$-dimensional quantum state using projective measurements.
no code implementations • 19 Nov 2021 • Daniel Suo, Cyril Zhang, Paula Gradu, Udaya Ghai, Xinyi Chen, Edgar Minasyan, Naman Agarwal, Karan Singh, Julienne LaChance, Tom Zajdel, Manuel Schottdorf, Daniel Cohen, Elad Hazan
Mechanical ventilation is one of the most widely used therapies in the ICU.
no code implementations • 15 Oct 2021 • Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan
The theory of deep learning focuses almost exclusively on supervised learning, non-convex optimization using stochastic gradient descent, and overparametrized neural networks.
no code implementations • 16 Jul 2021 • Xinyi Chen, Udaya Ghai, Elad Hazan, Alexandre Megretski
We study online control of an unknown nonlinear dynamical system that is approximated by a time-invariant linear system with model misspecification.
no code implementations • 17 Jun 2021 • Yifei Bi, Xinyi Chen, Caihui Xiao
Adapting the idea of training CartPole with a Deep Q-learning agent, we are able to find a promising result that prevents the pole from falling down.
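The core of such an agent is the Q-learning update. As a minimal self-contained illustration (not the paper's implementation), the tabular form of that update looks like this; the discretization of CartPole's continuous state and the deep network that a DQN uses in place of the table are omitted:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning sketch: an epsilon-greedy policy plus the update
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
# A Deep Q-learning agent replaces the table with a neural network.

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = (0, 1)                      # push cart left / push cart right

Q = defaultdict(float)                # Q[(state, action)] -> value estimate

def choose_action(state):
    """Epsilon-greedy: explore at random, otherwise exploit the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One Q-learning step toward the bootstrapped TD target."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    target = reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# Example transition: +1 reward for every step the pole stays upright.
q_update(state=(0, 0), action=1, reward=1.0, next_state=(0, 1))
```

The state tuples and reward scheme above are illustrative placeholders; in CartPole the reward of +1 per surviving step is what drives the agent to keep the pole upright.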
no code implementations • CVPR 2021 • Junzhe Zhang, Xinyi Chen, Zhongang Cai, Liang Pan, Haiyu Zhao, Shuai Yi, Chai Kiat Yeo, Bo Dai, Chen Change Loy
In contrast to previous fully supervised approaches, in this paper we present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time.
1 code implementation • CVPR 2021 • Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai Yi, Ziwei Liu
In particular, we propose a dual-path architecture to enable principled probabilistic modeling across partial and complete clouds.
2 code implementations • 12 Feb 2021 • Daniel Suo, Naman Agarwal, Wenhan Xia, Xinyi Chen, Udaya Ghai, Alexander Yu, Paula Gradu, Karan Singh, Cyril Zhang, Edgar Minasyan, Julienne LaChance, Tom Zajdel, Manuel Schottdorf, Daniel Cohen, Elad Hazan
We consider the problem of controlling an invasive mechanical ventilator for pressure-controlled ventilation: a controller must let air in and out of a sedated patient's lungs according to a trajectory of airway pressures specified by a clinician.
1 code implementation • RC 2020 • Ivo Verhoeven, Xinyi Chen, Qingzhi Hu, Mario Holubar
The authors provide code for most of the experiments presented in the paper.
no code implementations • AACL 2020 • Xinyi Chen, Jingxian Xu, Alex Wang
Several recent state-of-the-art transfer learning methods model classification tasks as text generation, where labels are represented as strings for the model to generate.
no code implementations • 13 Jul 2020 • Xinyi Chen, Elad Hazan
To complete the picture, we investigate the complexity of the online black-box control problem, and give a matching lower bound of $2^{\Omega(\mathcal{L})}$ on the regret, showing that the additional exponential cost is inevitable.
no code implementations • NeurIPS 2020 • Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran
Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.
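The aggregation idea can be made concrete with the classical AdaBoost scheme (a toy sketch for illustration, not the paper's online agnostic boosting algorithm): reweight the examples after each round so that later weak learners focus on the points misclassified so far, then combine the learners in a weighted majority vote.

```python
import math

# Minimal AdaBoost sketch: aggregate weak learners (1-D threshold "stumps")
# into a weighted majority vote, upweighting misclassified examples after
# each round so that later weak learners concentrate on them.

def stump(threshold, sign):
    """Weak learner: predict `sign` when x >= threshold, else -sign."""
    return lambda x: sign if x >= threshold else -sign

def adaboost(xs, ys, candidates, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n                  # uniform initial example weights
    ensemble = []                      # list of (alpha, weak_learner) pairs
    for _ in range(rounds):
        # Pick the candidate with the lowest weighted training error.
        scored = [(sum(wi for wi, x, y in zip(w, xs, ys) if c(x) != y), c)
                  for c in candidates]
        err, h = min(scored, key=lambda t: t[0])
        if err == 0:                   # perfect weak learner: use it alone
            ensemble = [(1.0, h)]
            break
        if err >= 0.5:                 # no candidate beats random guessing
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Upweight mistakes, downweight correct predictions, renormalize.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1

# Toy 1-D data where no single stump is perfect, so boosting must combine
# several of them into an accurate ensemble.
xs = [0.1, 0.4, 0.6, 0.8]
ys = [-1, 1, -1, 1]
candidates = [stump(t, s) for t in (0.3, 0.5, 0.7) for s in (1, -1)]
predict = adaboost(xs, ys, candidates)
```

After three rounds the weighted vote classifies all four training points correctly, even though every individual stump misclassifies at least one of them.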
no code implementations • 25 Feb 2020 • DES Collaboration, Tim Abbott, Michel Aguena, Alex Alarcon, Sahar Allam, Steve Allen, James Annis, Santiago Avila, David Bacon, Alberto Bermeo, Gary Bernstein, Emmanuel Bertin, Sunayana Bhargava, Sebastian Bocquet, David Brooks, Dillon Brout, Elizabeth Buckley-Geer, David Burke, Aurelio Carnero Rosell, Matias Carrasco Kind, Jorge Carretero, Francisco Javier Castander, Ross Cawthon, Chihway Chang, Xinyi Chen, Ami Choi, Matteo Costanzi, Martin Crocce, Luiz da Costa, Tamara Davis, Juan De Vicente, Joseph DeRose, Shantanu Desai, H. Thomas Diehl, Jörg Dietrich, Scott Dodelson, Peter Doel, Alex Drlica-Wagner, Kathleen Eckert, Tim Eifler, Jack Elvin-Poole, Juan Estrada, Spencer Everett, August Evrard, Arya Farahi, Ismael Ferrero, Brenna Flaugher, Pablo Fosalba, Josh Frieman, Juan Garcia-Bellido, Marco Gatti, Enrique Gaztanaga, David Gerdes, Tommaso Giannantonio, Paul Giles, Sebastian Grandis, Daniel Gruen, Robert Gruendl, Julia Gschwend, Gaston Gutierrez, Will Hartley, Samuel Hinton, Devon L. Hollowood, Klaus Honscheid, Ben Hoyle, Dragan Huterer, David James, Mike Jarvis, Tesla Jeltema, Margaret Johnson, Stephen Kent, Elisabeth Krause, Richard Kron, Kyler Kuehn, Nikolay Kuropatkin, Ofer Lahav, Ting Li, Christopher Lidman, Marcos Lima, Huan Lin, Niall MacCrann, Marcio Maia, Adam Mantz, Jennifer Marshall, Paul Martini, Julian Mayers, Peter Melchior, Juan Mena, Felipe Menanteau, Ramon Miquel, Joe Mohr, Robert Nichol, Brian Nord, Ricardo Ogando, Antonella Palmese, Francisco Paz-Chinchon, Andrés Plazas Malagón, Judit Prat, Markus Michael Rau, Kathy Romer, Aaron Roodman, Philip Rooney, Eduardo Rozo, Eli Rykoff, Masao Sako, Simon Samuroff, Carles Sanchez, Alexandro Saro, Vic Scarpine, Michael Schubnell, Daniel Scolnic, Santiago Serrano, Ignacio Sevilla, Erin Sheldon, J. Allyn Smith, Eric Suchyta, Molly Swanson, Gregory Tarle, Daniel Thomas, Chun-Hao To, Michael A. Troxel, Douglas Tucker, Tamas Norbert Varga, Anja von der Linden, Alistair Walker, Risa Wechsler, Jochen Weller, Reese Wilkinson, Hao-Yi Wu, Brian Yanny, Zhuowen Zhang, Joe Zuntz
We perform a joint analysis of the counts and weak lensing signal of redMaPPer clusters selected from the Dark Energy Survey (DES) Year 1 dataset.
Cosmology and Nongalactic Astrophysics
no code implementations • ICML 2020 • Mark Braverman, Xinyi Chen, Sham M. Kakade, Karthik Narasimhan, Cyril Zhang, Yi Zhang
Building accurate language models that capture meaningful long-term dependencies is a core challenge in natural language processing.
no code implementations • ICLR 2020 • Xinyi Chen, Naman Agarwal, Elad Hazan, Cyril Zhang, Yi Zhang
State-of-the-art models are now trained with billions of parameters, reaching hardware limits in terms of memory consumption.
no code implementations • ICLR 2019 • Naman Agarwal, Brian Bullins, Xinyi Chen, Elad Hazan, Karan Singh, Cyril Zhang, Yi Zhang
Due to the large number of parameters of machine learning problems, full-matrix preconditioning methods are prohibitively expensive.
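To make the cost concrete, here is a naive sketch of the full-matrix AdaGrad-style baseline that such methods approximate (my illustration, not the paper's algorithm): accumulate the outer products of all past gradients and precondition each step by the inverse square root of that matrix, which costs O(d^2) memory and O(d^3) time per step.

```python
import numpy as np

# Naive full-matrix adaptive preconditioning: G_t = sum of g g^T over all
# past gradients, and each step is x <- x - lr * G_t^{-1/2} g. The cubic
# per-step cost of the matrix inverse square root is what makes this
# prohibitive for models with many parameters.

def inv_sqrt(M, eps=1e-8):
    """Compute (M + eps*I)^{-1/2} for a symmetric PSD matrix M."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs / np.sqrt(vals + eps)) @ vecs.T

def full_matrix_adagrad(grad_fn, x0, lr=0.5, steps=50):
    x = np.array(x0, dtype=float)
    G = np.zeros((x.size, x.size))     # running sum of g g^T
    for _ in range(steps):
        g = grad_fn(x)
        G += np.outer(g, g)
        x -= lr * inv_sqrt(G) @ g      # preconditioned step
    return x

# Usage: a badly conditioned quadratic whose curvature is not axis-aligned,
# which a diagonal preconditioner could not capture.
R = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)   # 45-degree rotation
A = R @ np.diag([100.0, 1.0]) @ R.T
x = full_matrix_adagrad(lambda x: A @ x, R @ np.array([1.0, 0.0]))
```

Because the preconditioner whitens the accumulated gradient directions, the iterate converges to the minimizer at the origin despite the 100:1 conditioning of the rotated quadratic.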
no code implementations • NeurIPS 2018 • Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak
Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states at most $\operatorname{O}\!\left(\sqrt {Tn}\right) $ times on the first $T$ measurements.