no code implementations • 19 Sep 2023 • Zhiqiang Shen, Tianhua Tao, Liqun Ma, Willie Neiswanger, Joel Hestness, Natalia Vassilieva, Daria Soboleva, Eric Xing
This paper aims to understand the impact of various data combinations (e.g., web text, Wikipedia, GitHub, books) on the training of large language models using SlimPajama.
no code implementations • 30 Aug 2023 • Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, Lalit Pradhan, Zain Muhammad Mujahid, Massa Baali, Alham Fikri Aji, Zhengzhong Liu, Andy Hock, Andrew Feldman, Jonathan Lee, Andrew Jackson, Preslav Nakov, Timothy Baldwin, Eric Xing
We release two open versions of the model -- the foundation Jais model and an instruction-tuned Jais-chat variant -- with the aim of promoting research on Arabic LLMs.
no code implementations • 2 Jul 2023 • Nanqing Dong, Zhipeng Wang, Jiahao Sun, Michael Kampffmeyer, Yizhe Wen, Shuoying Zhang, William Knottenbelt, Eric Xing
In the era of deep learning, federated learning (FL) presents a promising approach that allows multi-institutional data owners, or clients, to collaboratively train machine learning models without compromising data privacy.
1 code implementation • 22 Jun 2023 • Zeyuan Yin, Eric Xing, Zhiqiang Shen
We present a new dataset condensation framework termed Squeeze, Recover and Relabel (SRe$^2$L) that decouples the bilevel optimization of model and synthetic data during training, to handle varying scales of datasets, model architectures and image resolutions for effective dataset condensation.
no code implementations • 13 Jun 2023 • Lingjing Kong, Biwei Huang, Feng Xie, Eric Xing, Yuejie Chi, Kun Zhang
In this work, we investigate the identification problem for nonlinear latent hierarchical causal models in which observed variables are generated by a set of causally related latent variables, and some latent variables may not have observed children.
1 code implementation • 13 Jun 2023 • Arnav Chavan, Zhuang Liu, Deepak Gupta, Eric Xing, Zhiqiang Shen
We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tuning tasks.
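As a rough illustration of the parameter-efficient idea that GLoRA generalizes, the sketch below adds a trainable low-rank update to a frozen weight matrix; the layer sizes, rank, and scaling factor are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of low-rank adaptation of a frozen layer (illustrative only;
# names, shapes, and the scaling factor are assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 4               # frozen layer size and adapter rank
W_frozen = rng.normal(size=(d_out, d_in))   # pre-trained weight, kept fixed

# Trainable low-rank factors: only 2 * d * r parameters instead of d * d.
A = rng.normal(scale=0.01, size=(rank, d_in))
B = np.zeros((d_out, rank))                 # zero init so training starts from W_frozen

def adapted_forward(x, scale=1.0):
    """Forward pass with the frozen weight plus a low-rank update B @ A."""
    return x @ (W_frozen + scale * (B @ A)).T

x = rng.normal(size=(8, d_in))
print(adapted_forward(x).shape)             # (8, 64)
```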
1 code implementation • 23 May 2023 • Kunhao Liu, Fangneng Zhan, Jiahui Zhang, Muyu Xu, Yingchen Yu, Abdulmotaleb El Saddik, Christian Theobalt, Eric Xing, Shijian Lu
Open-vocabulary segmentation of 3D scenes is a fundamental function of human perception and thus a crucial objective in computer vision research.
1 code implementation • 5 May 2023 • Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, Eric Xing
We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning.
1 code implementation • CVPR 2023 • Aoran Xiao, Jiaxing Huang, Weihao Xuan, Ruijie Ren, Kangcheng Liu, Dayan Guan, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their embeddings, ultimately leading to a generalizable model that can effectively improve 3DSS under various adverse weather conditions.
no code implementations • CVPR 2023 • Kaiwen Cui, Yingchen Yu, Fangneng Zhan, Shengcai Liao, Shijian Lu, Eric Xing
The first is aggregated generative KD that mitigates the discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models.
1 code implementation • CVPR 2023 • Kunhao Liu, Fangneng Zhan, YiWen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
In addition, it transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer.
1 code implementation • 8 Mar 2023 • Kai Zhang, Yutong Dai, Hongyi Wang, Eric Xing, Xun Chen, Lichao Sun
Federated learning is a promising paradigm that allows multiple clients to collaboratively train a model without sharing the local data.
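For context, the sketch below shows the basic federated averaging pattern this sentence describes: clients train on private data and the server only aggregates model parameters. It illustrates the general FL setup, not the specific method of this paper; the toy least-squares task and all sizes are assumptions.

```python
# Minimal federated averaging sketch: clients share model updates, never raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.1, steps=20):
    """A few steps of local least-squares SGD on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients with private datasets of different sizes.
d = 5
w_true = rng.normal(size=d)
clients = []
for n in (50, 120, 80):
    X = rng.normal(size=(n, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n)))

w_global = np.zeros(d)
for round_ in range(10):
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(w_global.copy(), X, y))
        sizes.append(len(y))
    # Server aggregates: weighted average of client models (FedAvg-style).
    weights = np.array(sizes) / sum(sizes)
    w_global = sum(wt * u for wt, u in zip(weights, updates))

print(np.linalg.norm(w_global - w_true))    # should be small
```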
1 code implementation • 16 Dec 2022 • Hanlin Zhang, Yi-Fan Zhang, Li Erran Li, Eric Xing
Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations (or "chain-of-thought", CoT) for in-context learning.
no code implementations • 20 Oct 2022 • Kirill Vishniakov, Eric Xing, Zhiqiang Shen
These include (I) the inability to drop uninformative masked regions in ConvNets as they process data continuously, resulting in low training efficiency compared to ViT models; and (II) the mismatch between erase-based masking and the contrastive-based objective in Siamese ConvNets, which differs from the MIM approach.
1 code implementation • 13 Oct 2022 • Dacheng Li, Hongyi Wang, Eric Xing, Hao Zhang
Scaling up model sizes can lead to fundamentally new capabilities in many machine learning (ML) tasks.
1 code implementation • 5 Jul 2022 • Sang Keun Choe, Willie Neiswanger, Pengtao Xie, Eric Xing
Gradient-based multilevel optimization (MLO) has gained attention as a framework for studying numerous problems, ranging from hyperparameter optimization and meta-learning to neural architecture search and reinforcement learning.
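As a minimal instance of the multilevel structure being studied, the sketch below tunes an L2 penalty (outer level) against the validation loss of a ridge solution (inner level); the closed-form inner solve and finite-difference hypergradient are simplifications for illustration, not the paper's algorithm.

```python
# Simplest bilevel problem: tune a ridge penalty so the inner solution
# does well on validation data.
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)
X_tr, X_val = rng.normal(size=(40, d)), rng.normal(size=(200, d))
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=40)
y_val = X_val @ w_true + 0.5 * rng.normal(size=200)

def inner_solve(lam):
    """Inner level: ridge regression with penalty lam (closed form)."""
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def outer_loss(lam):
    """Outer level: validation loss of the inner solution."""
    w = inner_solve(lam)
    return np.mean((X_val @ w - y_val) ** 2)

lam, lr, eps = 1.0, 0.5, 1e-4
for _ in range(50):
    # Finite-difference hypergradient d(outer_loss)/d(lam).
    g = (outer_loss(lam + eps) - outer_loss(lam - eps)) / (2 * eps)
    lam = max(lam - lr * g, 1e-6)           # keep the penalty positive

print(round(lam, 4), round(outer_loss(lam), 4))
```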
no code implementations • 9 Jun 2022 • Xijie Huang, Zhiqiang Shen, Shichao Li, Zechun Liu, Xianghong Hu, Jeffry Wicaksana, Eric Xing, Kwang-Ting Cheng
In order to deploy deep models in a computationally efficient manner, model quantization approaches have been frequently used.
1 code implementation • 24 Feb 2022 • Kartik Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric Xing, Kangwook Lee, Dimitris Papailiopoulos
Frankle & Carbin conjecture that we can avoid this by training "lottery tickets", i.e., special sparse subnetworks found at initialization that can be trained to high accuracy.
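A minimal sketch of the lottery-ticket recipe referenced here, with a tiny logistic regression standing in for a network: train dense, keep the largest-magnitude weights as a mask, rewind the surviving weights to their initial values, and retrain only those. The model, data, and pruning ratio are illustrative assumptions, not the paper's experiments.

```python
# Lottery-ticket-style pruning on a toy logistic regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 50
w_star = np.zeros(d)
w_star[:5] = rng.normal(size=5) * 3          # sparse ground truth
X = rng.normal(size=(n, d))
y = (X @ w_star + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, mask, steps=300, lr=0.5):
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / n
        w = (w - lr * grad) * mask           # only masked weights are updated
    return w

w_init = rng.normal(scale=0.1, size=d)

# 1) Train the dense model.
w_dense = train(w_init.copy(), np.ones(d))

# 2) Build a sparse "ticket": keep the top 20% of weights by magnitude.
k = int(0.2 * d)
mask = np.zeros(d)
mask[np.argsort(-np.abs(w_dense))[:k]] = 1.0

# 3) Rewind to the original initialization and retrain only the ticket.
w_ticket = train(w_init * mask, mask)

acc = lambda w: np.mean((sigmoid(X @ w) > 0.5) == y)
print(f"dense acc={acc(w_dense):.3f}  ticket acc={acc(w_ticket):.3f}  sparsity={1 - k / d:.0%}")
```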
no code implementations • 30 Jan 2022 • Liu Ziyin, Hanlin Zhang, Xiangming Meng, Yuting Lu, Eric Xing, Masahito Ueda
This work theoretically studies stochastic neural networks, a main type of neural network in use.
1 code implementation • CVPR 2022 • Arnav Chavan, Zhiqiang Shen, Zhuang Liu, Zechun Liu, Kwang-Ting Cheng, Eric Xing
This paper explores the feasibility of finding an optimal sub-model from a vision transformer and introduces a pure vision transformer slimming (ViT-Slim) framework.
2 code implementations • 27 Dec 2021 • Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu, Lingjie Liu, Adam Kortylewski, Christian Theobalt, Eric Xing
With its power in modeling interactions among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years.
no code implementations • 3 Dec 2021 • Zechun Liu, Zhiqiang Shen, Yun Long, Eric Xing, Kwang-Ting Cheng, Chas Leichner
We identify that the NAS task requires synthesized data (we target the image domain here) with enough semantics, diversity, and a minimal domain gap from natural images.
2 code implementations • 2 Dec 2021 • Zhiqiang Shen, Eric Xing
In this study, we present a Fast Knowledge Distillation (FKD) framework that replicates the distillation training phase and generates soft labels using the multi-crop KD approach, while training faster than ReLabel since no post-processing such as RoI align or softmax operations is required.
Ranked #505 on Image Classification on ImageNet
1 code implementation • CVPR 2022 • Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric Xing, Zhiqiang Shen
The nonuniform quantization strategy for compressing neural networks usually achieves better performance than its counterpart, i.e., the uniform strategy, due to its superior representational capacity.
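To make the uniform-versus-nonuniform contrast concrete, the sketch below quantizes a synthetic bell-shaped weight distribution with equally spaced levels and with data-adapted (k-means style) levels; it illustrates the general trade-off, not the quantizer proposed in the paper.

```python
# Uniform vs. nonuniform (k-means style) quantization of synthetic weights.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)          # bell-shaped "weights"
levels = 2 ** 3                          # 3-bit quantization

# Uniform: equally spaced levels over the value range.
lo, hi = w.min(), w.max()
step = (hi - lo) / (levels - 1)
w_uniform = lo + np.round((w - lo) / step) * step

# Nonuniform: centroids adapted to the data with a few Lloyd/k-means iterations.
centroids = np.quantile(w, np.linspace(0.05, 0.95, levels))
for _ in range(20):
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    for j in range(levels):
        if np.any(assign == j):
            centroids[j] = w[assign == j].mean()
assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
w_nonuniform = centroids[assign]

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(f"uniform MSE={mse(w, w_uniform):.5f}  nonuniform MSE={mse(w, w_nonuniform):.5f}")
```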
1 code implementation • 11 Nov 2021 • Bhanu Garg, Li Zhang, Pradyumna Sridhara, Ramtin Hosseini, Eric Xing, Pengtao Xie
We propose a novel machine learning method called Learning From Mistakes (LFM), wherein the learner improves its ability to learn by focusing more on the mistakes during revision.
1 code implementation • 9 Nov 2021 • Zhiqiang Shen, Zechun Liu, Eric Xing
The proposed weight-sharing mechanism via a sliced recursion structure allows us to build a transformer with more than 100 or even 1000 shared layers with ease while keeping a compact size (13-15M parameters), avoiding optimization difficulties when the model is too large.
Ranked #249 on Image Classification on ImageNet
no code implementations • 5 Nov 2021 • Haohan Wang, Bryon Aragam, Eric Xing
Motivated by empirical arguments that are well-known from the genome-wide association studies (GWAS) literature, we study the statistical properties of linear mixed models (LMMs) applied to GWAS.
1 code implementation • 5 Nov 2021 • Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, Eric Xing
Machine learning has demonstrated remarkable prediction accuracy over i.i.d. data, but the accuracy often drops when tested with data from another distribution.
no code implementations • NeurIPS 2021 • Xinshi Chen, Haoran Sun, Caleb Ellington, Eric Xing, Le Song
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports.
no code implementations • 29 Sep 2021 • Han Guo, Bowen Tan, Zhengzhong Liu, Eric Xing, Zhiting Hu
We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Shentong Mo, Xi Fu, Chenyang Hong, Yizhen Chen, Yuxuan Zheng, Xiangru Tang, Yanyan Lan, Zhiqiang Shen, Eric Xing
In this work, we propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call GeneBERT.
1 code implementation • ACL 2021 • Xuehai He, Zhuo Cai, Wenlan Wei, Yichen Zhang, Luntian Mou, Eric Xing, Pengtao Xie
In this paper, we aim to develop a pathological visual question answering framework to analyze pathology images and answer medical questions related to these images.
1 code implementation • ACL 2021 • Meng Zhou, Zechen Li, Bowen Tan, Guangtao Zeng, Wenmian Yang, Xuehai He, Zeqian Ju, Subrato Chakravorty, Shu Chen, Xingyi Yang, Yichen Zhang, Qingyang Wu, Zhou Yu, Kun Xu, Eric Xing, Pengtao Xie
Training complex dialog generation models on small datasets bears high risk of overfitting.
1 code implementation • 17 Jun 2021 • Shuai Lin, Pan Zhou, Zi-Yuan Hu, Shuojia Wang, Ruihui Zhao, Yefeng Zheng, Liang Lin, Eric Xing, Xiaodan Liang
However, since the negatives for a query are uniformly sampled from all graphs, existing methods suffer from a critical sampling bias issue, i.e., the negatives likely share the same semantic structure as the query, leading to performance degradation.
1 code implementation • 20 May 2021 • Nanqing Dong, Michael Kampffmeyer, Irina Voiculescu, Eric Xing
In this work, we provide some theoretical insight into the properties of QNNs by presenting and analyzing a new form of invariance embedded in QNNs for both quantum binary classification and quantum representation learning, which we term negational symmetry.
no code implementations • 30 Jan 2021 • Maruan Al-Shedivat, Liam Li, Eric Xing, Ameet Talwalkar
Meta-learning has enabled learning statistical models that can be quickly adapted to new prediction tasks.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Eric Xing
In this paper, we formally study the generalization error bound for this setup with the knowledge of how the spurious features are associated with the label.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric Xing
Data augmentation is one of the most popular techniques for improving the robustness of neural networks.
1 code implementation • ICLR 2021 • Benedikt Boecking, Willie Neiswanger, Eric Xing, Artur Dubrawski
Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground truth training labels.
no code implementations • NeurIPS 2020 • Hao Zhang, Yuan Li, Zhijie Deng, Xiaodan Liang, Lawrence Carin, Eric Xing
Synchronization is a key step in data-parallel distributed machine learning (ML).
1 code implementation • ICLR 2021 • Maruan Al-Shedivat, Jennifer Gillenwater, Eric Xing, Afshin Rostamizadeh
Federated learning is typically approached as an optimization problem, where the goal is to minimize a global loss function by distributing computation across client devices that possess local data and specify different parts of the global objective.
no code implementations • 6 Oct 2020 • Xuehai He, Zhuo Cai, Wenlan Wei, Yichen Zhang, Luntian Mou, Eric Xing, Pengtao Xie
To deal with the issue that a publicly available pathology VQA dataset is lacking, we create PathVQA dataset.
no code implementations • 28 Sep 2020 • Ben Lengerich, Eric Xing, Rich Caruana
Conversely, the probability of an interaction of $k$ variables surviving Dropout at rate $p$ is $\mathcal{O}((1-p)^k)$.
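A quick numerical check of the stated rate, under the assumption of independent unit-wise dropout: an interaction is only observed when all $k$ of its variables survive, which happens with probability $(1-p)^k$.

```python
# Monte Carlo check that k units all survive Dropout at rate p with
# probability (1 - p)^k under independent unit-wise dropout.
import numpy as np

rng = np.random.default_rng(0)
p, trials = 0.5, 200_000

for k in (1, 2, 3, 4, 5):
    keep = rng.random((trials, k)) > p          # True where a unit survives
    survived = keep.all(axis=1).mean()          # fraction of trials with all k alive
    print(f"k={k}: simulated={survived:.4f}  (1-p)^k={(1 - p) ** k:.4f}")
```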
1 code implementation • 17 Jun 2020 • Xingyi Yang, Nandiraju Gireesh, Eric Xing, Pengtao Xie
To address this problem, we develop methods to generate view-consistent, high-fidelity, and high-resolution X-ray images from radiology reports to facilitate radiology training of medical students.
1 code implementation • 11 May 2020 • Wenmian Yang, Guangtao Zeng, Bowen Tan, Zeqian Ju, Subrato Chakravorty, Xuehai He, Shu Chen, Xingyi Yang, Qingyang Wu, Zhou Yu, Eric Xing, Pengtao Xie
On these two datasets, we train several dialogue generation models based on Transformer, GPT, and BERT-GPT.
no code implementations • ACL 2019 • Baoyu Jing, Zeya Wang, Eric Xing
In this work, we propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports.
1 code implementation • medRxiv 2020 • Xuehai He, Xingyi Yang, Shanghang Zhang, Jinyu Zhao, Yichen Zhang, Eric Xing, Pengtao Xie
In addition, these works require a large number of CT scans, which are difficult to obtain, to train accurate diagnosis models.
no code implementations • 7 Apr 2020 • Emmanouil Antonios Platanios, Maruan Al-Shedivat, Eric Xing, Tom Mitchell
Many machine learning systems today are trained on large amounts of human-annotated data.
2 code implementations • 11 Mar 2020 • Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing
This drawback hinders the model from learning subtle variance and fine-grained information.
6 code implementations • 7 Mar 2020 • Xuehai He, Yichen Zhang, Luntian Mou, Eric Xing, Pengtao Xie
To achieve this goal, the first step is to create a visual question answering (VQA) dataset where the AI agent is presented with a pathology image together with a question and is asked to give the correct answer.
1 code implementation • 20 Dec 2019 • Kevin Tran, Willie Neiswanger, Junwoong Yoon, Eric Xing, Zachary W. Ulissi
These uncertainty estimates are instrumental for determining which materials to screen next, but there is not yet a standard procedure for judging the quality of such uncertainty estimates objectively.
no code implementations • 28 Sep 2019 • Congzheng Song, Shanghang Zhang, Najmeh Sadoughi, Pengtao Xie, Eric Xing
The International Classification of Diseases (ICD) is a list of classification codes for diagnoses.
no code implementations • 25 Sep 2019 • Emmanouil Antonios Platanios, Maruan Al-Shedivat, Eric Xing, Tom Mitchell
Many machine learning systems today are trained on large amounts of human-annotated data.
no code implementations • 25 Sep 2019 • Seojin Bang, Pengtao Xie, Heewook Lee, Wei Wu, Eric Xing
Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system.
1 code implementation • 12 Jun 2019 • Lisa Lee, Benjamin Eysenbach, Emilio Parisotto, Eric Xing, Sergey Levine, Ruslan Salakhutdinov
The SMM objective can be viewed as a two-player, zero-sum game between a state density model and a parametric policy, an idea that we use to build an algorithm for optimizing the SMM objective.
no code implementations • 31 May 2019 • Gregory Plumb, Maruan Al-Shedivat, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, which lack guarantees about their explanation quality.
no code implementations • 29 Mar 2019 • Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Jennifer Chayes, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael. I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Aparna Lakshmiratan, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Kunle Olukotun, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar
Machine learning (ML) techniques are enjoying rapidly increasing adoption.
3 code implementations • 19 Feb 2019 • Seojin Bang, Pengtao Xie, Heewook Lee, Wei Wu, Eric Xing
Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system.
1 code implementation • NeurIPS 2020 • Gregory Plumb, Maruan Al-Shedivat, Angel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar
Most of the work on interpretable machine learning has focused on designing either inherently interpretable models, which typically trade-off accuracy for interpretability, or post-hoc explanation systems, whose explanation quality can be unpredictable.
1 code implementation • 31 Jan 2019 • Willie Neiswanger, Kirthevasan Kandasamy, Barnabas Poczos, Jeff Schneider, Eric Xing
Optimizing an expensive-to-query function is a common task in science and engineering, where it is beneficial to keep the number of queries to a minimum.
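A minimal Bayesian-optimization loop in this spirit, using a scikit-learn Gaussian process surrogate and expected improvement; the 1-D toy objective and all settings are illustrative assumptions, not the machinery developed in the paper.

```python
# Generic Bayesian optimization sketch: surrogate fit + expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_f(x):
    """Pretend-expensive 1-D objective we want to minimize with few queries."""
    return np.sin(3 * x) + 0.3 * (x - 1.0) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 3, size=(3, 1))            # a few initial queries
y = expensive_f(X).ravel()
grid = np.linspace(-2, 3, 400).reshape(-1, 1)  # candidate pool

for it in range(12):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # Expected improvement for minimization.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next[0]))

print(f"best value found: {y.min():.4f} at x = {X[np.argmin(y)][0]:.3f}")
```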
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shuai Lin, Wentao Wang, Zichao Yang, Xiaodan Liang, Frank F. Xu, Eric Xing, Zhiting Hu
That is, the model learns to imitate the writing style of any given exemplar sentence, with automatic adaptations to faithfully describe the content record.
1 code implementation • 1 Jan 2019 • Wanrong Zhu, Zhiting Hu, Eric Xing
Recent years have seen remarkable progress of text generation in different contexts, such as the most common setting of generating text from scratch, and the emerging paradigm of retrieval-and-rewriting.
no code implementations • 24 Nov 2018 • Bowen Tan, Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric Xing
Reinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency.
no code implementations • 20 Nov 2018 • Xiangan Liu, Keyang Xu, Pengtao Xie, Eric Xing
Extractive summarization is very useful for physicians to better manage and digest Electronic Health Records (EHRs).
no code implementations • 16 Nov 2018 • Maruan Al-Shedivat, Lisa Lee, Ruslan Salakhutdinov, Eric Xing
Next, we propose to measure the complexity of each environment by constructing dependency graphs between the goals and analytically computing hitting times of a random walk in the graph.
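For intuition, hitting times of a random walk can be computed analytically by solving a small linear system; the toy path graph below is an assumed example, not an environment from the paper.

```python
# Expected hitting times of a random walk, computed by solving (I - P_oo) h = 1.
import numpy as np

# Random walk on a 5-node path graph 0-1-2-3-4; target node is 4.
P = np.zeros((5, 5))
for i in range(5):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < 5]
    P[i, nbrs] = 1.0 / len(nbrs)

target = 4
others = [i for i in range(5) if i != target]
# h[i] = 1 + sum_j P[i, j] * h[j] for i != target, with h[target] = 0.
A = np.eye(len(others)) - P[np.ix_(others, others)]
h = np.linalg.solve(A, np.ones(len(others)))
print(dict(zip(others, np.round(h, 2))))   # expected steps from each node to node 4
```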
no code implementations • 31 Oct 2018 • Keyang Xu, Mike Lam, Jingzhi Pang, Xin Gao, Charlotte Band, Piyush Mathur, Frank Papay, Ashish K. Khanna, Jacek B. Cywinski, Kamal Maheshwari, Pengtao Xie, Eric Xing
This study presents a multimodal machine learning model to predict ICD-10 diagnostic codes.
1 code implementation • 4 Oct 2018 • Haowen Xu, Hao Zhang, Zhiting Hu, Xiaodan Liang, Ruslan Salakhutdinov, Eric Xing
Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters.
no code implementations • ICLR 2019 • Haowen Xu, Hao Zhang, Zhiting Hu, Xiaodan Liang, Ruslan Salakhutdinov, Eric Xing
Many machine learning problems involve iteratively and alternately optimizing different task objectives with respect to different sets of parameters.
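A generic example of this alternating pattern is low-rank matrix factorization solved by alternating least squares, sketched below; it illustrates the setting only, not the learned optimization schedule proposed in the paper.

```python
# Alternating optimization: solve for U with V fixed, then V with U fixed.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 40, 5
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))    # low-rank target

U = rng.normal(scale=0.1, size=(m, r))
V = rng.normal(scale=0.1, size=(n, r))
lam = 1e-3                                               # small ridge term

for it in range(30):
    # Objective 1: fit U with V held fixed (ridge regression in closed form).
    U = M @ V @ np.linalg.inv(V.T @ V + lam * np.eye(r))
    # Objective 2: fit V with U held fixed.
    V = M.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(r))

err = np.linalg.norm(M - U @ V.T) / np.linalg.norm(M)
print(f"relative error after alternating updates: {err:.2e}")
```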
no code implementations • 5 Aug 2018 • Hongbao Zhang, Pengtao Xie, Eric Xing
In this paper, we propose a probabilistic framework based on deep generative models for MVI.
no code implementations • ECCV 2018 • Xiaodan Liang, Tairui Wang, Luona Yang, Eric Xing
To our knowledge, this is the first successful case of a driving policy learned through reinforcement learning in a high-fidelity simulator that performs better than supervised imitation learning.
no code implementations • 10 Jul 2018 • Rajesh Chidambaram, Michael Kampffmeyer, Willie Neiswanger, Xiaodan Liang, Thomas Lachmann, Eric Xing
Analogously, this paper introduces geometric generalization based zero-shot learning tests to measure the rapid learning ability and the internal consistency of deep generative models.
1 code implementation • ICML 2018 • Jakob Foerster, Gregory Farquhar, Maruan Al-Shedivat, Tim Rocktäschel, Eric Xing, Shimon Whiteson
Lastly, to match the first-order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for estimators of higher-order derivatives.
no code implementations • WS 2018 • Zhiting Hu, Zichao Yang, Tiancheng Zhao, Haoran Shi, Junxian He, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Lianhui Qin, Devendra Singh Chaplot, Bowen Tan, Xingjiang Yu, Eric Xing
The features make Texar particularly suitable for technique sharing and generalization across different text generation applications.
no code implementations • ICML 2018 • Pengtao Xie, Hongbao Zhang, Yichen Zhu, Eric Xing
Variable selection is a classic problem in machine learning (ML), widely used to find important explanatory factors, and improve generalization performance and interpretability of ML models.
no code implementations • ACL 2018 • Pengtao Xie, Eric Xing
The International Classification of Diseases (ICD) provides a hierarchy of diagnostic codes for classifying diseases.
no code implementations • NeurIPS 2018 • Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Xiaodan Liang, Lianhui Qin, Haoye Dong, Eric Xing
The broad set of deep generative models (DGMs) has achieved remarkable advances.
3 code implementations • ICML 2018 • Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Eric Xing, Ruslan Salakhutdinov
Value Iteration Networks (VINs) are effective differentiable path planning modules that can be used by agents to perform navigation while still maintaining end-to-end differentiability of the entire architecture.
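For reference, the recursion that a VIN embeds as a differentiable planning module is ordinary value iteration; the sketch below runs it in plain numpy on an assumed toy gridworld, not as a neural network.

```python
# Plain value iteration on a tiny deterministic gridworld.
import numpy as np

H, W = 5, 5
goal = (4, 4)
obstacles = {(1, 1), (2, 3), (3, 1)}
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
gamma, step_cost = 0.95, -1.0

V = np.zeros((H, W))
for _ in range(100):
    V_new = V.copy()
    for r in range(H):
        for c in range(W):
            if (r, c) == goal or (r, c) in obstacles:
                continue
            q_values = []
            for dr, dc in actions:
                nr, nc = r + dr, c + dc
                # Invalid moves (walls / obstacles) keep the agent in place.
                if not (0 <= nr < H and 0 <= nc < W) or (nr, nc) in obstacles:
                    nr, nc = r, c
                reward = 0.0 if (nr, nc) == goal else step_cost
                q_values.append(reward + gamma * V[nr, nc])
            V_new[r, c] = max(q_values)           # Bellman optimality backup
    V = V_new

print(np.round(V, 2))      # values increase toward the goal at (4, 4)
```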
no code implementations • NAACL 2018 • Mrinmaya Sachan, Eric Xing
The two tasks of question answering and question generation are usually tackled separately in the NLP literature.
no code implementations • CVPR 2018 • Xiaodan Liang, Hongfei Zhou, Eric Xing
Moreover, we demonstrate that a universal segmentation model jointly trained on diverse datasets can surpass the performance of the common fine-tuning scheme for exploiting multiple domain knowledge.
Ranked #56 on Semantic Segmentation on Cityscapes test
no code implementations • 12 Feb 2018 • Chang Liu, Xiangrui Zeng, Ruogu Lin, Xiaodan Liang, Zachary Freyberg, Eric Xing, Min Xu
Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution.
1 code implementation • NeurIPS 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
no code implementations • ECCV 2018 • Luona Yang, Xiaodan Liang, Tairui Wang, Eric Xing
In the spectrum of vision-based autonomous driving, vanilla end-to-end models are not interpretable and suboptimal in performance, while mediated perception models require additional intermediate representations such as segmentation masks or detection bounding boxes, whose annotation can be prohibitively expensive as we move to a larger scale.
no code implementations • 6 Dec 2017 • Christy Li, Dimitris Konomis, Graham Neubig, Pengtao Xie, Carol Cheng, Eric Xing
The hope is that the tool can be used to reduce misdiagnosis.
4 code implementations • ACL 2018 • Baoyu Jing, Pengtao Xie, Eric Xing
To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, (3) develop a hierarchical LSTM model to generate long paragraphs.
no code implementations • 4 Nov 2017 • Yuan Yang, Pengtao Xie, Xin Gao, Carol Cheng, Christy Li, Hongbao Zhang, Eric Xing
Predicting discharge medications right after a patient is admitted is an important clinical decision that provides physicians with guidance on what type of medication regimen to plan for and what changes to the initial medication may occur during an inpatient stay.
no code implementations • EMNLP 2017 • Mrinmaya Sachan, Kumar Dubey, Eric Xing
These axioms are then parsed into rules that are used to improve the state-of-the-art in solving geometry problems.
no code implementations • SEMEVAL 2017 • Mrinmaya Sachan, Eric Xing
As a case study, we explore the task of learning to solve geometry problems using demonstrative solutions available in textbooks.
no code implementations • ACL 2017 • Pengtao Xie, Eric Xing
Reading comprehension (RC), aiming to understand natural texts and answer questions therein, is a challenging task.
no code implementations • 30 May 2017 • Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff Schneider
Afterwards, an RNN is used to compute the conditional distributions of the latent covariates.
no code implementations • ICCV 2017 • Prasoon Goyal, Zhiting Hu, Xiaodan Liang, Chenyu Wang, Eric Xing
In this work, we propose hierarchical nonparametric variational autoencoders, which combine tree-structured Bayesian nonparametric priors with VAEs, to enable infinite flexibility of the latent representation space.
no code implementations • 25 Jul 2016 • Kai Zhang, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, Jieping Ye
Given a matrix $A$ of size $m \times n$, state-of-the-art randomized algorithms take $O(mn)$ time and space to obtain its low-rank decomposition.
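For context, the $O(mn)$-time family referred to here includes the randomized range-finder / randomized SVD, sketched below on a synthetic low-rank matrix; this illustrates the baseline, not the faster algorithm proposed in the paper.

```python
# Randomized range finder + SVD of the small projected matrix (Halko et al. style).
import numpy as np

rng = np.random.default_rng(0)
m, n, true_rank, k = 500, 300, 10, 20

# Build a matrix with known low-rank structure plus small noise.
A = rng.normal(size=(m, true_rank)) @ rng.normal(size=(true_rank, n))
A += 0.01 * rng.normal(size=(m, n))

Omega = rng.normal(size=(n, k))          # random test matrix
Y = A @ Omega                            # sketch of the range of A (O(mnk) work)
Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the sketch
B = Q.T @ A                              # small k x n matrix
U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ U_small                          # approximate rank-k SVD: A ~ U diag(s) Vt

err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.2e}")
```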
no code implementations • ICML 2017 • Willie Neiswanger, Eric Xing
However, we demonstrate that IS will fail for many choices of the target prior, depending on its parametric form and similarity to the false prior.
2 code implementations • ACL 2016 • Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, Eric Xing
Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models.
Ranked #61 on Sentiment Analysis on SST-2 Binary classification
no code implementations • 23 Dec 2015 • Pengtao Xie, Yuntian Deng, Eric Xing
On two popular latent variable models, the restricted Boltzmann machine and distance metric learning, we demonstrate that MAR can effectively capture long-tail patterns, reduce model complexity without sacrificing expressivity, and improve interpretability.
no code implementations • 19 Dec 2015 • Hao Zhang, Zhiting Hu, Jinliang Wei, Pengtao Xie, Gunhee Kim, Qirong Ho, Eric Xing
To investigate how to adapt existing frameworks to efficiently support distributed GPUs, we propose Poseidon, a scalable system architecture for distributed inter-machine communication in existing DL frameworks.
no code implementations • 26 Nov 2015 • Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing
Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.
no code implementations • 23 Nov 2015 • Pengtao Xie, Yuntian Deng, Eric Xing
Recently diversity-inducing regularization methods for latent variable models (LVMs), which encourage the components in LVMs to be diverse, have been studied to address several issues involved in latent variable modeling: (1) how to capture long-tail patterns underlying data; (2) how to reduce model complexity without sacrificing expressivity; (3) how to improve the interpretability of learned patterns.
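To ground the idea, the sketch below computes a toy diversity score (mean squared pairwise cosine similarity) over a set of component vectors; it is an illustrative stand-in for this class of regularizers, not the exact penalty analyzed in the paper.

```python
# Toy angle-based diversity score over an LVM's component vectors.
import numpy as np

rng = np.random.default_rng(0)
K, d = 8, 20

def diversity_penalty(C):
    """Mean squared pairwise cosine similarity between rows of C (lower = more diverse)."""
    U = C / np.linalg.norm(C, axis=1, keepdims=True)
    S = U @ U.T
    return float(np.mean(S[~np.eye(len(C), dtype=bool)] ** 2))

C_random = rng.normal(size=(K, d))                     # arbitrary components
C_ortho = np.linalg.qr(rng.normal(size=(d, K)))[0].T   # K orthonormal components

print(f"random components:     {diversity_penalty(C_random):.4f}")
print(f"orthogonal components: {diversity_penalty(C_ortho):.4f}  (maximally diverse)")
```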
no code implementations • 13 Nov 2015 • William Herlands, Andrew Wilson, Hannes Nickisch, Seth Flaxman, Daniel Neill, Wilbert van Panhuis, Eric Xing
We present a scalable Gaussian process model for identifying and characterizing smooth multidimensional changepoints, and automatically learning changes in expressive covariance structure.
no code implementations • 14 Oct 2015 • Willie Neiswanger, Chong Wang, Eric Xing
We develop a parallel variational inference (VI) procedure for use in data-distributed settings, where each machine only has access to a subset of data and runs VI independently, without communicating with other machines.
no code implementations • 19 Dec 2014 • Pengtao Xie, Eric Xing
In this paper, we propose Cauchy Principal Component Analysis (Cauchy PCA), a very simple yet effective PCA method which is robust to various types of noise.
no code implementations • 18 Dec 2014 • Pengtao Xie, Eric Xing
In large scale machine learning and data mining problems with high feature dimensionality, the Euclidean distance between data points can be uninformative, and Distance Metric Learning (DML) is often desired to learn a proper similarity measure (using side information such as example data pairs being similar or dissimilar).
no code implementations • 27 Oct 2014 • Junier Oliva, Willie Neiswanger, Barnabas Poczos, Eric Xing, Jeff Schneider
Function to function regression (FFR) covers a large range of interesting applications including time-series prediction problems, and also more general tasks like studying a mapping between two separate types of distributions.
no code implementations • 19 Sep 2014 • Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yao-Liang Yu, Eric Xing
Matrix-parametrized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology.
no code implementations • 16 Jan 2014 • Le Song, Han Liu, Ankur Parikh, Eric Xing
Tree-structured graphical models are powerful at expressing long range or hierarchical dependency among many variables, and have been widely applied in different areas of computer science and statistics.
no code implementations • 19 Nov 2013 • Willie Neiswanger, Chong Wang, Eric Xing
This embarrassingly parallel algorithm allows each machine to act independently on a subset of the data (without communication) until the final combination stage.
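A minimal sketch of the combination step under a Gaussian (parametric) approximation of each machine's subposterior, checked against the full-data posterior in a conjugate normal-mean model; this is the simple parametric rule, not the paper's full nonparametric combination procedure.

```python
# Combine per-machine subposterior samples by multiplying Gaussian fits.
import numpy as np

rng = np.random.default_rng(0)
sigma, theta_true, M = 2.0, 1.5, 4
shards = [theta_true + sigma * rng.normal(size=n) for n in (200, 150, 300, 250)]

# Each machine: samples from its subposterior N(shard mean, sigma^2 / n_m)
# (drawn directly here as a stand-in for an independent MCMC run).
sub_samples = [y.mean() + (sigma / np.sqrt(len(y))) * rng.normal(size=5000)
               for y in shards]

# Server: fit a Gaussian to each machine's samples and take their product.
precisions = np.array([1.0 / np.var(s) for s in sub_samples])
means = np.array([np.mean(s) for s in sub_samples])
combined_var = 1.0 / precisions.sum()
combined_mean = combined_var * (precisions * means).sum()

# Reference: the full-data posterior under the same (effectively flat) prior.
all_y = np.concatenate(shards)
print(f"combined:  mean={combined_mean:.4f}  sd={np.sqrt(combined_var):.4f}")
print(f"full data: mean={all_y.mean():.4f}  sd={sigma / np.sqrt(len(all_y)):.4f}")
```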
no code implementations • 10 Nov 2013 • Junier B. Oliva, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric Xing
We study the problem of distribution to real-value regression, where one aims to regress a mapping $f$ that takes in a distribution input covariate $P\in \mathcal{I}$ (for a non-parametric family of distributions $\mathcal{I}$) and outputs a real-valued response $Y=f(P) + \epsilon$.