no code implementations • 11 Jun 2022 • Han Liu, Bingning Wang, Ting Yao, Haijin Liang, Jianjin Xu, Xiaolin Hu
Large-scale pre-trained language models have achieved great success on natural language generation tasks.
no code implementations • 5 Jun 2022 • Han Liu, Siyang Zhao, Xiaotong Zhang, Feng Zhang, Junjie Sun, Hong Yu, Xianchao Zhang
Zero-shot intent classification is a vital and challenging task in dialogue systems, which aims to deal with numerous fast-emerging unacquainted intents without annotated training data.
no code implementations • 1 May 2022 • Ning Wang, Han Liu, Diego Klabjan
We develop an abstractive summarization framework independent of labeled data for multiple heterogeneous documents.
no code implementations • 23 Mar 2022 • Tim Tsz-Kit Lau, Han Liu
On the other hand, in distributionally robust optimization, we seek data-driven decisions that perform well under the most adverse distribution within a certain discrepancy of a nominal distribution constructed from data samples.
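To make the distributionally robust setup above concrete, here is a minimal sketch (my own illustration, not the paper's method): the worst-case expected loss over reweightings of the empirical distribution inside a total-variation ball. The function name and the choice of discrepancy are assumptions for illustration.

```python
import numpy as np

def tv_worst_case(losses, rho):
    """Worst-case expected loss over weights w with sum(w)=1, w>=0 and
    ||w - 1/n||_1 <= rho (a total-variation ball around the empirical
    distribution). Optimal: move up to rho/2 mass from the lowest-loss
    samples onto the highest-loss sample."""
    n = len(losses)
    order = np.argsort(losses)          # ascending by loss
    w = np.full(n, 1.0 / n)
    budget = rho / 2.0                  # total mass we may relocate
    worst = order[-1]                   # highest-loss sample
    for i in order[:-1]:                # strip mass from low-loss samples
        take = min(w[i], budget)
        w[i] -= take
        w[worst] += take
        budget -= take
        if budget <= 0:
            break
    return float(w @ losses), w
```

With `losses = [1, 2, 3, 4]` and `rho = 0.5`, a quarter of the mass moves from the cheapest sample to the most expensive one, raising the expected loss from 2.5 to 3.25.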
no code implementations • 15 Mar 2022 • Guo Ye, Han Liu, Biswa Sengupta
In multi-agent collaboration problems with communication, an agent's ability to encode their intention and interpret other agents' strategies is critical for planning their future actions.
no code implementations • 14 Mar 2022 • Qinjie Lin, Han Liu, Biswa Sengupta
Our results also demonstrate the advantage of the switch transformer model for absorbing expert knowledge and the importance of value distribution in evaluating the trajectory.
no code implementations • 8 Mar 2022 • Can Cui, Han Liu, Quan Liu, Ruining Deng, Zuhayr Asad, Yaohong Wang, Shilin Zhao, Haichun Yang, Bennett A. Landman, Yuankai Huo
Thus, there are still open questions on how to effectively predict brain cancer survival from the incomplete radiological, pathological, genomic, and demographic data (e.g., one or more modalities might not be collected for a patient).
1 code implementation • 7 Mar 2022 • Han Liu, Yubo Fan, Hao Li, Jiacheng Wang, Dewei Hu, Can Cui, Ho Hin Lee, Ipek Oguz
First, we devise a plug-and-play dynamic head and adopt a filter scaling strategy to improve the expressiveness of the network.
no code implementations • 4 Mar 2022 • Han Liu, Xiaoyu Song, Ge Gao, Hehua Zhang, Yu-Shen Liu, Ming Gu
Semantic rule checking on RDFS/OWL data has been widely used in the construction industry.
no code implementations • 21 Feb 2022 • Han Liu, Michelle K. Sigona, Thomas J. Manuel, Li Min Chen, Charles F. Caskey, Benoit M. Dawant
Transcranial MRI-guided focused ultrasound (TcMRgFUS) is a therapeutic ultrasound method that focuses sound through the skull to a small region noninvasively under MRI guidance.
no code implementations • 26 Jan 2022 • Han Liu, Kathryn L. Holloway, Dario J. Englot, Benoit M. Dawant
Epilepsy is the fourth most common neurological disorder and affects people of all ages worldwide.
no code implementations • 25 Jan 2022 • Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, Benoit M. Dawant
Automatic methods to segment the vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning.
no code implementations • 8 Jan 2022 • Reuben Dorent, Aaron Kujawa, Marina Ivory, Spyridon Bakas, Nicola Rieke, Samuel Joutard, Ben Glocker, Jorge Cardoso, Marc Modat, Kayhan Batmanghelich, Arseniy Belkov, Maria Baldeon Calisto, Jae Won Choi, Benoit M. Dawant, Hexin Dong, Sergio Escalera, Yubo Fan, Lasse Hansen, Mattias P. Heinrich, Smriti Joshi, Victoriya Kashtanova, Hyeon Gyu Kim, Satoshi Kondo, Christian N. Kruse, Susana K. Lai-Yuen, Hao Li, Han Liu, Buntheng Ly, Ipek Oguz, Hyungseob Shin, Boris Shirokikh, Zixian Su, Guotai Wang, Jianghao Wu, Yanwu Xu, Kai Yao, Li Zhang, Sebastien Ourselin, Jonathan Shapey, Tom Vercauteren
The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 as provided in the testing set (N=137).
no code implementations • 3 Nov 2021 • Xinlei Zhou, Han Liu, Farhad Pourpanah, Tieyong Zeng, XiZhao Wang
This paper provides a comprehensive review of epistemic uncertainty learning techniques in supervised learning over the last five years.
no code implementations • Findings (EMNLP) 2021 • Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, Xianchao Zhang
Intent classification (IC) and slot filling (SF) are critical building blocks in task-oriented dialogue systems.
no code implementations • 29 Sep 2021 • Mattson Thieme, Ammar Gilani, Han Liu
In this work, we introduce a general methodology for approximating offline algorithms in online settings.
no code implementations • ICLR 2022 • Zhi Zhang, Zhuoran Yang, Han Liu, Pratap Tokekar, Furong Huang
This paper proposes a new algorithm for learning the optimal policies under a novel multi-agent predictive state representation reinforcement learning model.
no code implementations • 13 Sep 2021 • Han Liu, Yubo Fan, Can Cui, Dingjie Su, Andrew McNeil, Benoit M. Dawant
Automatic methods to segment the vestibular schwannoma (VS) tumors and the cochlea from magnetic resonance imaging (MRI) are critical to VS treatment planning.
no code implementations • CVPR 2021 • Xianchao Zhang, Ziyang Cheng, Xiaotong Zhang, Han Liu
In this paper, we propose a novel variant of GAN, Posterior Promoted GAN (P2GAN), which promotes generator with the real information in the posterior distribution produced by discriminator.
1 code implementation • 8 Jun 2021 • Han Liu, Yangyang Guo, Jianhua Yin, Zan Gao, Liqiang Nie
To be specific, in this model, positive and negative reviews are separately gathered and utilized to model the user-preferred and user-rejected aspects, respectively.
1 code implementation • Findings (ACL) 2021 • Zhihan Zhou, Liqian Ma, Han Liu
In this paper, we introduce an event-driven trading strategy that predicts stock movements by detecting corporate events from news articles.
no code implementations • 21 Mar 2021 • Li Wang, Dong Li, Han Liu, Jinzhang Peng, Lu Tian, Yi Shan
Our goal is to train a unified model for improving the performance in each dataset by leveraging information from all the datasets.
no code implementations • 4 Mar 2021 • Bin Wang, Han Liu, Chao Liu, Zhiqiang Yang, Qian Ren, Huixuan Zheng, Hong Lei
We applied BLOCKEYE in several popular DeFi projects and managed to discover potential security attacks that are unreported before.
Cryptography and Security • Computers and Society
1 code implementation • 4 Feb 2021 • Han Liu, Caixia Yuan, Xiaojie Wang, Yushu Yang, Huixing Jiang, Zhongyuan Wang
We propose a novel task, Multi-Document Driven Dialogue (MD3), in which an agent can guess the target document that the user is interested in by leading a dialogue.
no code implementations • 13 Jan 2021 • Han Liu, Vivian Lai, Chenhao Tan
Although AI holds promise for improving human decision making in societally critical domains, it remains an open question how human-AI teams can reliably outperform AI alone and human alone in challenging prediction tasks (also known as complementary performance).
1 code implementation • 11 Dec 2020 • Hyunji Hayley Park, Katherine J. Zhang, Coleman Haley, Kenneth Steimel, Han Liu, Lane Schwartz
We fill in missing typological data for several languages and consider corpus-based measures of morphological complexity in addition to expert-produced typological features.
no code implementations • 3 Nov 2020 • Han Liu, Can Cui, Dario J. Englot, Benoit M. Dawant
Atlas-based methods are the standard approaches for automatic targeting of the Anterior Nucleus of the Thalamus (ANT) for Deep Brain Stimulation (DBS), but these are known to lack robustness when anatomic differences between atlases and subjects are large.
1 code implementation • 15 Aug 2020 • Han Liu, Caixia Yuan, Xiaojie Wang
A major challenge of multi-label text classification (MLTC) is to simultaneously exploit possible label differences and label correlations.
Ranked #1 on Multi-Label Text Classification on AAPD (Micro F1 metric)
1 code implementation • ACL 2020 • Guangfeng Yan, Lu Fan, Qimai Li, Han Liu, Xiaotong Zhang, Xiao-Ming Wu, Albert Y. S. Lam
User intent classification plays a vital role in dialogue systems.
no code implementations • 27 Jun 2020 • Xingguo Li, Tuo Zhao, Xiaoming Yuan, Han Liu
This paper describes an R package named flare, which implements a family of new high dimensional regression methods (LAD Lasso, SQRT Lasso, $\ell_q$ Lasso, and Dantzig selector) and their extensions to sparse precision matrix estimation (TIGER and CLIME).
1 code implementation • 27 Jun 2020 • Jason Ge, Xingguo Li, Haoming Jiang, Han Liu, Tong Zhang, Mengdi Wang, Tuo Zhao
We describe a new library named picasso, which implements a unified framework of pathwise coordinate optimization for a variety of sparse learning problems (e.g., sparse linear regression, sparse logistic regression, sparse Poisson regression and scaled sparse linear regression) combined with efficient active set selection strategies.
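picasso itself is an R/C++ library; as a hedged illustration of the pathwise coordinate optimization idea it implements, here is a minimal NumPy sketch of cyclic coordinate descent for the lasso with warm starts along a decreasing regularization path. Function names and details are my own, not picasso's API.

```python
import numpy as np

def lasso_cd(X, y, lam, beta, n_iters=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, d = X.shape
    col_sq = (X ** 2).sum(axis=0) / n     # per-coordinate curvature
    r = y - X @ beta                      # residual kept up to date
    for _ in range(n_iters):
        for j in range(d):
            r += X[:, j] * beta[j]        # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def lasso_path(X, y, lams):
    """Pathwise strategy: solve from the largest lambda down, warm-starting
    each problem at the previous solution."""
    beta = np.zeros(X.shape[1])
    path = {}
    for lam in sorted(lams, reverse=True):
        beta = lasso_cd(X, y, lam, beta)
        path[lam] = beta.copy()
    return path
```

The warm start is the point of the pathwise scheme: solutions change little between nearby regularization levels, so each subproblem starts close to its optimum.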
no code implementations • 26 Jun 2020 • Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, Larry Wasserman
We describe an R package named huge which provides easy-to-use functions for estimating high dimensional undirected graphs from data.
2 code implementations • ACL 2020 • Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a.
no code implementations • 16 May 2020 • Fraser Young, L. Zhang, Richard Jiang, Han Liu, Conor Wall
With the recent booming of artificial intelligence (AI), particularly deep learning techniques, digital healthcare is one of the prevalent areas that could gain benefits from AI-enabled functionality.
no code implementations • 11 May 2020 • Lane Schwartz, Francis Tyers, Lori Levin, Christo Kirov, Patrick Littell, Chi-kiu Lo, Emily Prud'hommeaux, Hyunji Hayley Park, Kenneth Steimel, Rebecca Knowles, Jeffrey Micher, Lonny Strunk, Han Liu, Coleman Haley, Katherine J. Zhang, Robbie Jimmerson, Vasilisa Andriyanets, Aldrian Obaja Muis, Naoki Otani, Jong Hyuk Park, Zhisong Zhang
In the literature, languages like Finnish or Turkish are held up as extreme examples of complexity that challenge common modelling assumptions.
1 code implementation • 9 May 2020 • Yanran Guan, Han Liu, Kun Liu, Kangxue Yin, Ruizhen Hu, Oliver van Kaick, Yan Zhang, Ersin Yumer, Nathan Carr, Radomir Mech, Hao Zhang
Our tool supports constrained modeling, allowing users to restrict or steer the model evolution with functionality labels.
Graphics
no code implementations • LREC 2020 • Han Liu, Pete Burnap, Wafa Alorainy, Matthew Williams
This paper presents a system developed during our participation (team name: scmhl5) in the TRAC-2 Shared Task on aggression identification.
no code implementations • 19 Mar 2020 • Han Liu, Shantao Liu
EQL, also known as Extremely Simple Query Language, can be widely used in knowledge graphs, precise search, strong artificial intelligence, databases, smart speakers, patent search, and other fields.
no code implementations • 14 Jan 2020 • Vivian Lai, Han Liu, Chenhao Tan
To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans.
no code implementations • 11 Dec 2019 • Hong Luo, Han Liu, Kejun Li, Bo Zhang
An essential criterion for FS image quality control is that all the essential anatomical structures in the section appear complete and salient, with clear boundaries.
1 code implementation • IJCNLP 2019 • Han Liu, Xiaotong Zhang, Lu Fan, Xuandi Fu, Qimai Li, Xiao-Ming Wu, Albert Y. S. Lam
With the burgeoning of conversational AI, existing systems are not capable of handling numerous fast-emerging intents, which motivates zero-shot intent classification.
no code implementations • 27 Sep 2019 • Han Liu, Xianchao Zhang, Xiaotong Zhang, Qimai Li, Xiao-Ming Wu
However, there are two issues in existing possible world based algorithms: (1) They rely on all the possible worlds and treat them equally, but some marginal possible worlds may cause negative effects.
1 code implementation • 26 Sep 2019 • Qimai Li, Xiaotong Zhang, Han Liu, Quanyu Dai, Xiao-Ming Wu
Graph convolutional neural networks (GCN) have been the model of choice for graph representation learning, which is mainly due to the effective design of graph convolution that computes the representation of a node by aggregating those of its neighbors.
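The neighbor-aggregation step described above can be sketched as a single renormalized graph convolution layer (a minimal NumPy illustration in the common Kipf-and-Welling style, not this paper's exact model):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: D^{-1/2}(A+I)D^{-1/2} aggregates each node's
    (self-looped) neighborhood, then a linear map followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric renormalization
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate, transform, ReLU
```

Each output row is a degree-normalized average of a node's own features and its neighbors' features, pushed through a shared weight matrix.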
no code implementations • 25 Sep 2019 • Yunhui Long, Suxin Lin, Zhuolin Yang, Carl A. Gunter, Han Liu, Bo Li
We present a novel approach named G-PATE for training differentially private data generator.
no code implementations • 25 Sep 2019 • Qimai Li, Xiaotong Zhang, Han Liu, Xiao-Ming Wu
Graph convolutional neural networks have demonstrated promising performance in attributed graph learning, thanks to the use of graph convolution that effectively combines graph structures and node features for learning node representations.
no code implementations • 25 Sep 2019 • Boxin Wang, Hengzhi Pei, Han Liu, Bo Li
In particular, we propose a tree based autoencoder to encode discrete text data into continuous vector space, upon which we optimize the adversarial perturbation.
1 code implementation • NeurIPS 2019 • Han Liu, Zhizhong Han, Yu-Shen Liu, Ming Gu
Low-rank metric learning aims to learn better discrimination of data subject to low-rank constraints.
no code implementations • NeurIPS 2016 • Xinyang Yi, Zhaoran Wang, Zhuoran Yang, Constantine Caramanis, Han Liu
We consider the weakly supervised binary classification problem where the labels are randomly flipped with probability $1-\alpha$.
no code implementations • 20 Jun 2019 • Yutai Hou, Zhihan Zhou, Yijia Liu, Ning Wang, Wanxiang Che, Han Liu, Ting Liu
It calculates emission score with similarity based methods and obtains transition score with a specially designed transfer mechanism.
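A hedged sketch of the similarity-based emission idea: emission scores as cosine similarity between token embeddings and per-label prototypes, decoded with standard Viterbi over a transition matrix. The prototype construction and names here are illustrative assumptions; the paper's specially designed transfer mechanism for transitions is not reproduced.

```python
import numpy as np

def emission_scores(tokens, prototypes):
    """Cosine similarity between each token embedding and each label
    prototype (e.g., the mean support embedding per label)."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return t @ p.T                          # shape (seq_len, n_labels)

def viterbi(emission, transition):
    """Best label sequence under additive emission + transition scores."""
    T, L = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)       # best previous label per label
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):           # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With zero transitions, decoding reduces to the per-token argmax of the similarity scores; a learned or transferred transition matrix then injects label-order structure.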
1 code implementation • 4 Jun 2019 • Xiaotong Zhang, Han Liu, Qimai Li, Xiao-Ming Wu
Attributed graph clustering is challenging as it requires joint modelling of graph structures and node attributes.
Ranked #3 on Graph Clustering on Cora (NMI metric)
1 code implementation • ICLR 2020 • Harsh Shrivastava, Xinshi Chen, Binghong Chen, Guanghui Lan, Srinvas Aluru, Han Liu, Le Song
Recently, there has been a surge of interest in learning algorithms directly from data; in this case, in learning to map the empirical covariance to the sparse precision matrix.
no code implementations • 28 May 2019 • Kean Ming Tan, Junwei Lu, Tong Zhang, Han Liu
To address this issue, neuroscientists have been measuring brain activity under natural viewing experiments in which the subjects are given continuous stimuli, such as watching a movie or listening to a story.
1 code implementation • ICLR 2020 • Binghong Chen, Bo Dai, Qinjie Lin, Guo Ye, Han Liu, Le Song
We propose a meta path planning algorithm named \emph{Neural Exploration-Exploitation Trees~(NEXT)} for learning from prior experience for solving new path planning problems in high dimensional continuous state and action spaces.
1 code implementation • CVPR 2019 • Qimai Li, Xiao-Ming Wu, Han Liu, Xiaotong Zhang, Zhichao Guan
However, existing graph-based methods either are limited in their ability to jointly model graph structures and data features, such as the classical label propagation methods, or require a considerable amount of labeled data for training and validation due to high model complexity, such as the recent neural-network-based methods.
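The classical label propagation baseline mentioned above can be sketched in a few lines: iterate F <- alpha*S*F + (1-alpha)*Y with a symmetrically normalized affinity matrix S. This is the standard formulation; parameter names are my own.

```python
import numpy as np

def label_propagation(W, y, labeled, alpha=0.9, n_iters=100):
    """Semi-supervised label propagation on an affinity matrix W.
    y gives labels (only entries in `labeled` are trusted)."""
    n = W.shape[0]
    d = W.sum(axis=1)
    d[d == 0] = 1.0                              # guard isolated nodes
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
    Y = np.zeros((n, int(y.max()) + 1))          # one-hot seed labels
    Y[labeled, y[labeled]] = 1.0
    F = Y.copy()
    for _ in range(n_iters):
        F = alpha * (S @ F) + (1 - alpha) * Y    # diffuse, re-anchor seeds
    return F.argmax(axis=1)
```

Labels diffuse along graph edges from the few labeled nodes, which is exactly the low-complexity regime the snippet contrasts with neural models.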
no code implementations • 6 Dec 2018 • Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Başar
This work appears to be the first finite-sample analysis for batch MARL, a step towards rigorous theoretical understanding of general MARL algorithms in the finite-sample regime.
1 code implementation • NeurIPS 2018 • Qing Wang, Jiechao Xiong, Lei Han, Peng Sun, Han Liu, Tong Zhang
We consider deep policy learning with only batched historical trajectories.
no code implementations • NeurIPS 2018 • Wei Sun, Junwei Lu, Han Liu
In order to test the hypotheses on their topological structures, we propose two adjacency matrix sketching frameworks: neighborhood sketching and subgraph sketching.
no code implementations • 31 Oct 2018 • Yi Zhen, Lei Wang, Han Liu, Jian Zhang, Jiantao Pu
Among these CNNs, the DenseNet had the highest classification accuracy (i.e., 75.50%) based on pre-trained weights when using global ROIs, as compared to 65.50% when using local ROIs.
no code implementations • 30 Oct 2018 • Han Liu, Lei Wang, Yandong Nan, Faguang Jin, Qi Wang, Jiantao Pu
Two CNN-based classification models were then used as feature extractors to obtain the discriminative features of the entire CXR images and the cropped lung region images.
no code implementations • 19 Oct 2018 • Han Liu, Dan Zeng, Qi Tian
Secondly, super-pixel level database is used to train our cloud detection models based on CNN and deep forest.
3 code implementations • 10 Oct 2018 • Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Lei Han, Yang Zheng, Haobo Fu, Tong Zhang, Ji Liu, Han Liu
Most existing deep reinforcement learning (DRL) frameworks consider either discrete action space or continuous action space solely.
no code implementations • 25 Sep 2018 • Chaobing Song, Ji Liu, Han Liu, Yong Jiang, Tong Zhang
Regularized online learning is widely used in machine learning applications.
no code implementations • 21 Sep 2018 • Yuan Cao, Matey Neykov, Han Liu
The goal is to distinguish whether the underlying graph is empty, i.e., the model consists of independent Rademacher variables, versus the alternative that the underlying graph contains a subgraph of a certain structure.
3 code implementations • 19 Sep 2018 • Peng Sun, Xinghai Sun, Lei Han, Jiechao Xiong, Qing Wang, Bo Li, Yang Zheng, Ji Liu, Yongsheng Liu, Han Liu, Tong Zhang
Both TStarBot1 and TStarBot2 are able to defeat the built-in AI agents from level 1 to level 10 in a full game (1v1 Zerg-vs-Zerg game on the AbyssalReef map), noting that level 8, level 9, and level 10 are cheating agents with unfair advantages such as full vision on the whole map and resource harvest boosting.
no code implementations • 17 Sep 2018 • Kean Ming Tan, Zhaoran Wang, Tong Zhang, Han Liu, R. Dennis Cook
Sliced inverse regression is a popular tool for sufficient dimension reduction, which replaces covariates with a minimal set of their linear combinations without loss of information on the conditional distribution of the response given the covariates.
no code implementations • 11 Sep 2018 • Yong Chen, Ming Zhou, Ying Wen, Yaodong Yang, Yufeng Su, Wei-Nan Zhang, Dell Zhang, Jun Wang, Han Liu
Deep Q-learning has achieved a significant success in single-agent decision making tasks.
Multiagent Systems
no code implementations • NeurIPS 2017 • Chris Junchi Li, Mengdi Wang, Han Liu, Tong Zhang
In this paper, we propose to adopt the diffusion approximation tools to study the dynamics of Oja's iteration which is an online stochastic gradient descent method for the principal component analysis.
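Oja's iteration referenced above is simple to state: for each streamed sample, take a stochastic gradient step toward the top eigenvector of the covariance, then renormalize. A minimal sketch follows; the step size and loop structure are illustrative choices, not the paper's analysis setting.

```python
import numpy as np

def oja(X, eta=0.01, n_epochs=20, seed=0):
    """Oja's online PCA: estimate the top eigenvector of the covariance
    of the rows of X by streaming samples one at a time."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            w += eta * x * (x @ w)        # stochastic power-iteration step
            w /= np.linalg.norm(w)        # project back to the sphere
    return w
```

On data with one dominant direction, the iterate aligns (up to sign) with the leading eigenvector of the sample covariance.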
no code implementations • NeurIPS 2016 • Chris Junchi Li, Zhaoran Wang, Han Liu
Despite the empirical success of nonconvex statistical optimization methods, their global dynamics, especially convergence to the desirable local minima, remain less well understood in theory.
no code implementations • 21 Aug 2018 • Jianqing Fan, Han Liu, Zhaoran Wang, Zhuoran Yang
We study the fundamental tradeoffs between statistical accuracy and computational tractability in the analysis of high dimensional heterogeneous data.
no code implementations • ICML 2018 • Qiang Sun, Kean Ming Tan, Han Liu, Tong Zhang
Our proposal is computationally tractable and produces an estimator that achieves the oracle rate of convergence.
no code implementations • ICML 2018 • Hao Lu, Yuan Cao, Zhuoran Yang, Junwei Lu, Han Liu, Zhaoran Wang
We study the hypothesis testing problem of inferring the existence of combinatorial structures in undirected graphical models.
1 code implementation • ICLR 2019 • Carson Eisenach, Haichuan Yang, Ji Liu, Han Liu
In the former, an agent learns a policy over $\mathbb{R}^d$ and in the latter, over a discrete set of actions each of which is parametrized by a continuous parameter.
2 code implementations • 1 Jun 2018 • Carson Eisenach, Han Liu
Compared to the naive interior point method, our method reduces the computational complexity of solving the SDP from $\tilde{O}(d^7\log\epsilon^{-1})$ to $\tilde{O}(d^{6}K^{-2}\epsilon^{-1})$ arithmetic operations for an $\epsilon$-optimal solution.
no code implementations • ICML 2018 • Daniel R. Jiang, Emmanuel Ekwedike, Han Liu
Inspired by recent successes of Monte-Carlo tree search (MCTS) in a number of artificial intelligence (AI) application domains, we propose a model-based reinforcement learning (RL) technique that iteratively applies MCTS on batches of small, finite-horizon versions of the original infinite-horizon Markov decision process.
1 code implementation • 6 May 2018 • Han Liu, Xiangnan He, Fuli Feng, Liqiang Nie, Rui Liu, Hanwang Zhang
In this paper, we develop a generic feature-based recommendation model, called Discrete Factorization Machine (DFM), for fast and accurate recommendation.
4 code implementations • ICML 2018 • Kaiqing Zhang, Zhuoran Yang, Han Liu, Tong Zhang, Tamer Başar
To this end, we propose two decentralized actor-critic algorithms with function approximation, which are applicable to large-scale MARL problems where both the number of states and the number of agents are massively large.
no code implementations • 23 Jan 2018 • Wafa Alorainy, Pete Burnap, Han Liu, Matthew Williams
Offensive or antagonistic language targeted at individuals and social groups based on their personal characteristics (also known as cyber hate speech or cyberhate) has been frequently posted and widely circulated via the World Wide Web.
1 code implementation • ICLR 2018 • Jiechao Xiong, Qing Wang, Zhuoran Yang, Peng Sun, Yang Zheng, Lei Han, Haobo Fu, Xiangru Lian, Carson Eisenach, Haichuan Yang, Emmanuel Ekwedike, Bei Peng, Haoyue Gao, Tong Zhang, Ji Liu, Han Liu
Most existing deep reinforcement learning (DRL) frameworks consider action spaces that are either discrete or continuous space.
no code implementations • NeurIPS 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Princeton Zhaoran Wang, Han Liu
We consider estimating the parametric components of semiparametric multi-index models in high dimensions.
no code implementations • NeurIPS 2017 • Haotian Pang, Han Liu, Robert J. Vanderbei, Tuo Zhao
High dimensional sparse learning has imposed a great computational challenge to large scale data analysis.
no code implementations • 26 Sep 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu
We consider estimating the parametric components of semi-parametric multiple index models in a high-dimensional and non-Gaussian setting.
no code implementations • 20 Sep 2017 • Matey Neykov, Han Liu
In terms of methodological development, we propose two types of correlation based tests: computationally efficient screening for ferromagnets, and score type tests for general models, including a fast cycle presence test.
no code implementations • 20 Sep 2017 • Cong Ma, Junwei Lu, Han Liu
Our framework is based on the Gaussian graphical models, under which ISA can be converted to the problem of estimation and inference of the inter-subject precision matrix.
no code implementations • ICML 2017 • Zhuoran Yang, Krishnakumar Balasubramanian, Han Liu
We consider estimating the parametric component of single index models in high dimensions.
no code implementations • 28 Jul 2017 • Junwei Lu, Matey Neykov, Han Liu
In this paper, we propose a new inferential framework for testing nested multiple hypotheses and constructing confidence intervals of the unknown graph invariants under undirected graphical models.
1 code implementation • ACL 2017 • Cunchao Tu, Han Liu, Zhiyuan Liu, Maosong Sun
Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors.
no code implementations • 4 Jun 2017 • Qiang Sun, Kean Ming Tan, Han Liu, Tong Zhang
Our proposal is computationally tractable and produces an estimator that achieves the oracle rate of convergence.
no code implementations • 23 May 2017 • Ari Seff, Alex Beatson, Daniel Suo, Han Liu
Developments in deep generative models have allowed for tractable learning of high-dimensional data distributions.
no code implementations • 4 Apr 2017 • Haotian Pang, Robert Vanderbei, Han Liu, Tuo Zhao
High dimensional sparse learning has imposed a great computational challenge to large scale data analysis.
no code implementations • 29 Dec 2016 • Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran Wang, Tuo Zhao
We propose a general theory for studying the landscape of nonconvex optimization with underlying symmetric structures for a class of machine learning problems (e.g., low-rank matrix factorization, phase retrieval, and deep linear neural networks).
no code implementations • NeurIPS 2016 • Matey Neykov, Zhaoran Wang, Han Liu
The goal of noisy high-dimensional phase retrieval is to estimate an $s$-sparse parameter $\boldsymbol{\beta}^*\in \mathbb{R}^d$ from $n$ realizations of the model $Y = (\boldsymbol{X}^{\top} \boldsymbol{\beta}^*)^2 + \varepsilon$.
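A common first step for phase retrieval models of this form is spectral initialization: take the top eigenvector of the y-weighted sample covariance, which concentrates around a matrix whose leading eigenvector is the true parameter. The sketch below is a generic initializer under the stated model, not the paper's estimator.

```python
import numpy as np

def spectral_init(X, y):
    """Spectral initializer for phase retrieval: the top eigenvector of
    M = (1/n) * sum_i y_i x_i x_i^T estimates beta up to sign/scale."""
    n = X.shape[0]
    M = (X * y[:, None]).T @ X / n
    vals, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    return vecs[:, -1]                    # eigenvector of the largest one
```

For Gaussian designs, E[y x x^T] = ||beta||^2 I + 2 beta beta^T, so with enough samples the leading eigenvector is well aligned with beta, providing a warm start for iterative refinement.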
no code implementations • NeurIPS 2016 • Alex Beatson, Zhaoran Wang, Han Liu
We study the potential of a “blind attacker” to provably limit a learner’s performance by data injection attack without observing the learner’s training set or any parameter of the distribution from which it is drawn.
no code implementations • 24 Sep 2016 • Ethan X. Fang, Han Liu, Kim-Chuan Toh, Wen-Xin Zhou
This paper studies the matrix completion problem under arbitrary sampling schemes.
no code implementations • 15 Sep 2016 • Xiang Lyu, Will Wei Sun, Zhaoran Wang, Han Liu, Jian Yang, Guang Cheng
We consider the estimation and inference of graphical models that characterize the dependency structure of high-dimensional tensor-valued data.
no code implementations • 10 Aug 2016 • Matey Neykov, Junwei Lu, Han Liu
We propose a new family of combinatorial inference problems for graphical models.
no code implementations • 10 Jul 2016 • Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Mingyi Hong
In particular, we first show that for a family of quadratic minimization problems, the iteration complexity $\mathcal{O}(\log^2(p)\cdot\log(1/\epsilon))$ of the CBCD-type methods matches that of the GD methods in terms of dependency on $p$, up to a $\log^2 p$ factor.
no code implementations • 25 May 2016 • Xingguo Li, Haoming Jiang, Jarvis Haupt, Raman Arora, Han Liu, Mingyi Hong, Tuo Zhao
Many machine learning techniques sacrifice convenient computational structures to gain estimation robustness and modeling flexibility.
no code implementations • 9 May 2016 • Xingguo Li, Raman Arora, Han Liu, Jarvis Haupt, Tuo Zhao
We propose a stochastic variance reduced optimization algorithm for solving sparse learning problems with cardinality constraints.
no code implementations • 29 Apr 2016 • Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang
Sparse generalized eigenvalue problem (GEP) plays a pivotal role in a large family of high-dimensional statistical models, including sparse Fisher's discriminant analysis, canonical correlation analysis, and sufficient dimension reduction.
no code implementations • 16 Mar 2016 • Chris Junchi Li, Mengdi Wang, Han Liu, Tong Zhang
We prove for the first time a nearly optimal finite-sample error bound for the online PCA algorithm.
no code implementations • 30 Dec 2015 • Zhaoran Wang, Quanquan Gu, Han Liu
Based upon an oracle model of computation, which captures the interactions between algorithms and data, we establish a general lower bound that explicitly connects the minimum testing risk under computational budget constraints with the intrinsic probabilistic and combinatorial structures of statistical problems.
no code implementations • 28 Dec 2015 • Junwei Lu, Mladen Kolar, Han Liu
The testing procedures are based on a high dimensional, debiasing-free moment estimator, which uses a novel kernel smoothed Kendall's tau correlation matrix as an input statistic.
no code implementations • NeurIPS 2015 • Huitong Qiu, Fang Han, Han Liu, Brian Caffo
We propose a robust portfolio optimization approach based on quantile statistics.
no code implementations • NeurIPS 2015 • Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu
We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.
no code implementations • NeurIPS 2015 • Daniel Vainsencher, Han Liu, Tong Zhang
We propose a family of non-uniform sampling strategies to provably speed up a class of stochastic optimization algorithms with linear convergence including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA).
no code implementations • NeurIPS 2015 • Wei Sun, Zhaoran Wang, Han Liu, Guang Cheng
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data.
no code implementations • NeurIPS 2015 • Tuo Zhao, Zhaoran Wang, Han Liu
We study the estimation of low rank matrices via nonconvex optimization.
no code implementations • 14 Nov 2015 • Zhuoran Yang, Zhaoran Wang, Han Liu, Yonina C. Eldar, Tong Zhang
To recover $\beta^*$, we propose an $\ell_1$-regularized least-squares estimator.
no code implementations • 30 Oct 2015 • Matey Neykov, Yang Ning, Jun S. Liu, Han Liu
Our main theoretical contribution is to establish a unified Z-estimation theory of confidence regions for high dimensional problems.
no code implementations • NeurIPS 2015 • Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu
This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing.
no code implementations • 23 Apr 2015 • Junwei Lu, Han Liu
We consider the problem of estimating undirected triangle-free graphs of high dimensional distributions.
no code implementations • 18 Mar 2015 • Yan Li, Han Liu, Warren Powell
We propose a sequential learning policy for noisy discrete global optimization and ranking and selection (R&S) problems with high dimensional sparse belief functions, where there are hundreds or even thousands of features, but only a small portion of these features contain explanatory power.
no code implementations • 10 Mar 2015 • Junwei Lu, Mladen Kolar, Han Liu
We develop a novel procedure for constructing confidence bands for components of a sparse additive model.
no code implementations • 4 Mar 2015 • Zhaoran Wang, Quanquan Gu, Han Liu
Many high dimensional sparse learning problems are formulated as nonconvex optimization.
no code implementations • 11 Feb 2015 • Cheng Zhou, Fang Han, Xinsheng Zhang, Han Liu
Theoretically, we develop a theory for testing the equality of U-statistic based correlation matrices.
no code implementations • 9 Feb 2015 • Quanquan Gu, Yuan Cao, Yang Ning, Han Liu
Due to the presence of unknown marginal transformations, we propose a pseudo likelihood based inferential approach.
no code implementations • 5 Feb 2015 • Will Wei Sun, Junwei Lu, Han Liu, Guang Cheng
We propose a novel sparse tensor decomposition method, namely Tensor Truncated Power (TTP) method, that incorporates variable selection into the estimation of decomposition components.
no code implementations • 30 Dec 2014 • Tianqi Zhao, Mladen Kolar, Han Liu
Our de-biasing procedure does not require solving the $L_1$-penalized composite quantile regression.
no code implementations • 30 Dec 2014 • Zhuoran Yang, Yang Ning, Han Liu
We propose a new class of semiparametric exponential family graphical models for the analysis of high dimensional mixed data.
no code implementations • 30 Dec 2014 • Yang Ning, Han Liu
Specifically, we propose a decorrelated score function to handle the impact of high dimensional nuisance parameters.
no code implementations • 30 Dec 2014 • Zhaoran Wang, Quanquan Gu, Yang Ning, Han Liu
We provide a general theory of the expectation-maximization (EM) algorithm for inferring high dimensional latent variable models.
no code implementations • 23 Dec 2014 • Tuo Zhao, Han Liu, Tong Zhang
This is the first result on the computational and statistical guarantees of the pathwise coordinate optimization framework in high dimensions.
no code implementations • 16 Dec 2014 • Ethan X. Fang, Yang Ning, Han Liu
This paper proposes a decorrelation-based approach to test hypotheses and construct confidence intervals for the low dimensional component of high dimensional proportional hazards models.
no code implementations • 6 Dec 2014 • Yang Ning, Tianqi Zhao, Han Liu
We develop a regularized statistical chromatography approach to infer the parameter of interest under the proposed semiparametric generalized linear model without needing to estimate the unknown base measure function.
no code implementations • NeurIPS 2014 • Zhaoran Wang, Huanran Lu, Han Liu
In this paper, we propose a two-stage sparse PCA procedure that attains the optimal principal subspace estimator in polynomial time.
no code implementations • NeurIPS 2014 • Tuo Zhao, Mo Yu, Yiming Wang, Raman Arora, Han Liu
When the regularization function is block separable, we can solve the minimization problems in a randomized block coordinate descent (RBCD) manner.
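When the regularizer is block separable, each randomly chosen block admits a gradient step followed by a closed-form proximal (soft-thresholding) update. A minimal RBCD sketch for the group lasso — the design, step size, and penalty level below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbcd_group_lasso(X, y, groups, lam=0.1, lr=None, n_iters=2000, seed=0):
    """Randomized block coordinate descent for the group lasso (sketch).

    Minimizes 0.5*||y - X beta||^2 + lam * sum_g ||beta_g||_2. The
    regularizer is block separable, so each randomly picked block gets a
    gradient step on the smooth loss followed by group soft-thresholding.
    """
    rng = np.random.default_rng(seed)
    beta = np.zeros(X.shape[1])
    if lr is None:
        lr = 1.0 / np.linalg.norm(X, 2) ** 2    # step size from a Lipschitz bound
    for _ in range(n_iters):
        g = groups[rng.integers(len(groups))]   # pick a random block
        grad_g = X[:, g].T @ (X @ beta - y)     # block gradient of the loss
        z = beta[g] - lr * grad_g
        norm_z = np.linalg.norm(z)              # group soft-thresholding (prox)
        beta[g] = max(0.0, 1 - lr * lam / norm_z) * z if norm_z > 0 else z
    return beta
```

Because the proximal step sets whole blocks exactly to zero, inactive groups are eliminated rather than merely shrunk.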
no code implementations • NeurIPS 2014 • Quanquan Gu, Zhaoran Wang, Han Liu
In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-$k$, and attains a $\sqrt{s/n}$ statistical rate of convergence with $s$ being the subspace sparsity level and $n$ the sample size.
no code implementations • NeurIPS 2014 • Han Liu, Lie Wang, Tuo Zhao
We propose a new method named calibrated multivariate regression (CMR) for fitting high dimensional multivariate regression models.
no code implementations • NeurIPS 2014 • Chao Chen, Han Liu, Dimitris Metaxas, Tianqi Zhao
Though the mode finding problem is generally intractable in high dimensions, this paper unveils that, if the distribution can be approximated well by a tree graphical model, mode characterization is significantly easier.
no code implementations • 14 Nov 2014 • Mengdi Wang, Ethan X. Fang, Han Liu
For smooth convex problems, the SCGD can be accelerated to converge at a rate of $O(k^{-2/7})$ in the general case and $O(k^{-4/5})$ in the strongly convex case.
no code implementations • 22 Aug 2014 • Zhaoran Wang, Huanran Lu, Han Liu
To optimally estimate sparse principal subspaces, we propose a two-stage computational framework named "tighten after relax": in the "relax" stage, we approximately solve a convex relaxation of sparse PCA with early stopping to obtain a desired initial estimator; in the "tighten" stage, we propose a novel algorithm called sparse orthogonal iteration pursuit (SOAP), which iteratively refines the initial estimator by directly solving the underlying nonconvex problem.
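The generic truncate-and-iterate idea behind such "tighten" stages can be sketched as follows — this is plain truncated power iteration with a crude diagonal-thresholding warm start, not the paper's SOAP algorithm itself:

```python
import numpy as np

def truncated_power_iteration(S, s, n_iters=50):
    """Truncated power iteration for a sparse leading eigenvector (sketch).

    Each step multiplies the iterate by S, keeps only the s
    largest-magnitude entries, and renormalizes. The initializer (indicator
    of the s largest diagonal entries of S) plays the role of the
    relaxation-stage warm start.
    """
    d = S.shape[0]
    v = np.zeros(d)
    v[np.argsort(np.diag(S))[-s:]] = 1.0        # diagonal-thresholding init
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        v = S @ v                               # power step
        keep = np.argsort(np.abs(v))[-s:]       # hard-truncate to s entries
        w = np.zeros(d)
        w[keep] = v[keep]
        v = w / np.linalg.norm(w)
    return v
```

On a spiked covariance with an s-sparse leading eigenvector, the iterate converges to that eigenvector up to sign once the support is identified.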
no code implementations • 29 Apr 2014 • Jianqing Fan, Han Liu, Yang Ning, Hui Zou
Theoretically, the proposed methods achieve the same rates of convergence for both precision matrix estimation and eigenvector estimation, as if the latent variables were observed.
no code implementations • 18 Feb 2014 • Fang Han, Han Liu
We propose a new high dimensional semiparametric principal component analysis (PCA) method, named Copula Component Analysis (COCA).
no code implementations • 16 Jan 2014 • Le Song, Han Liu, Ankur Parikh, Eric Xing
Tree structured graphical models are powerful at expressing long range or hierarchical dependency among many variables, and have been widely applied in different areas of computer science and statistics.
no code implementations • 16 Dec 2013 • Robert Vanderbei, Han Liu, Lie Wang, Kevin Lin
For the first approach, we note that the zero vector can be taken as the initial basic (infeasible) solution for the linear programming problem and therefore, if the true signal is very sparse, some variants of the simplex method can be expected to take only a small number of pivots to arrive at a solution.
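The LP formulation in question can be sketched as follows; SciPy's "highs" backend is used here for convenience and does not expose the pivot counts or simplex variants the paper studies:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, b):
    """Recover a sparse signal via the LP form of basis pursuit (sketch).

    min ||x||_1 s.t. Ax = b becomes a standard-form LP by splitting
    x = u - v with u, v >= 0, so that ||x||_1 = sum(u + v); the zero
    vector is then a basic (infeasible) starting point.
    """
    m, n = A.shape
    c = np.ones(2 * n)                          # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                   # encodes A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```

With a random Gaussian design and a sufficiently sparse true signal, the LP solution recovers the signal exactly up to solver tolerance.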
no code implementations • NeurIPS 2013 • Tuo Zhao, Han Liu
We propose a semiparametric procedure for estimating high dimensional sparse inverse covariance matrix.
no code implementations • NeurIPS 2013 • Fang Han, Han Liu
In this paper we focus on the principal component regression and its application to high dimension non-Gaussian data.
no code implementations • 1 Nov 2013 • Huitong Qiu, Fang Han, Han Liu, Brian Caffo
In this manuscript we consider the problem of jointly estimating multiple graphical models in high dimensions.
no code implementations • 14 Oct 2013 • Fang Han, Han Liu
In the non-sparse setting, we show that ECA's performance is highly related to the effective rank of the covariance matrix.
no code implementations • 7 Aug 2013 • Jianqing Fan, Fang Han, Han Liu
Big Data brings new opportunities to modern society and challenges to data scientists.
no code implementations • 1 Jul 2013 • Fang Han, Huanran Lu, Han Liu
In addition, we provide thorough experiments on both synthetic and real-world equity data to show that there are empirical advantages of our method over the lasso-type estimators in both parameter estimation and forecasting.
no code implementations • 30 Jun 2013 • Zhaoran Wang, Fang Han, Han Liu
We study sparse principal component analysis for high dimensional vector autoregressive time series under a doubly asymptotic framework, which allows the dimension $d$ to scale with the series length $T$.
no code implementations • 27 Jun 2013 • Mladen Kolar, Han Liu
Through careful analysis, we establish rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting.
no code implementations • 20 Jun 2013 • Zhaoran Wang, Han Liu, Tong Zhang
In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator.
no code implementations • 29 May 2013 • Fang Han, Han Liu
The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix.
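The rank-based alternative can be sketched as follows; this illustrates the general Kendall's-tau idea (the sin transform is exact under a Gaussian copula) rather than the paper's precise estimator:

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation_matrix(X):
    """Rank-based alternative to Pearson's sample correlation (sketch).

    Compute Kendall's tau for each pair of columns and map it through
    sin(pi/2 * tau), which equals the Pearson correlation under a Gaussian
    copula. Unlike the Pearson estimate, the result is invariant to
    monotone marginal transformations and robust to heavy tails.
    """
    d = X.shape[1]
    R = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2)
    return R
```

Applying a monotone transform such as exp to one variable attenuates the Pearson correlation but leaves this estimate essentially unchanged.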
no code implementations • 10 May 2013 • Han Liu, Lie Wang, Tuo Zhao
We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models.
no code implementations • NeurIPS 2012 • Han Liu, Larry Wasserman, John D. Lafferty
We prove a new exponential concentration inequality for a plug-in estimator of the Shannon mutual information.
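The plug-in idea can be sketched as follows; the paper's estimator and assumptions differ, and the histogram version (with an illustrative bin count) is a simple stand-in:

```python
import numpy as np

def plugin_mutual_information(x, y, bins=16):
    """Histogram plug-in estimator of Shannon mutual information (sketch).

    Estimate the joint distribution on a grid, then plug the empirical
    cell frequencies directly into the definition of I(X; Y).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                   # empirical joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of X
    py = pxy.sum(axis=0, keepdims=True)         # marginal of Y
    mask = pxy > 0                              # skip empty cells: 0*log(0) = 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))
```

The estimate is clearly positive for dependent pairs and close to zero (up to the usual upward plug-in bias) for independent ones.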
no code implementations • 29 Oct 2012 • Mladen Kolar, Han Liu, Eric P. Xing
Many real world network problems often concern multivariate nodal attributes such as image, textual, and multi-view feature vectors on nodes, rather than simple univariate nodal attributes.
no code implementations • NeurIPS 2010 • Han Liu, Xi Chen, Larry Wasserman, John D. Lafferty
In this paper, we propose a semiparametric method for estimating $G(x)$ that builds a tree on the $X$ space just as in CART (classification and regression trees), but at each leaf of the tree estimates a graph.
no code implementations • NeurIPS 2010 • Han Liu, Xi Chen
We propose a new nonparametric learning method based on multivariate dyadic regression trees (MDRTs).
2 code implementations • NeurIPS 2010 • Han Liu, Kathryn Roeder, Larry Wasserman
In this paper, we present StARS: a new stability-based method for choosing the regularization parameter in high dimensional inference for undirected graphs.
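The StARS selection loop can be sketched as follows, with a simple correlation-thresholding graph estimator standing in for the graphical lasso; the subsample size, cutoff, and data are illustrative assumptions:

```python
import numpy as np

def stars_select(X, lambdas, n_subsamples=20, beta=0.05, seed=0):
    """StARS-style stability selection of a sparsity parameter (sketch).

    For each candidate lambda, graphs are estimated on random subsamples
    and the per-edge instability 2*theta*(1 - theta), where theta is the
    edge selection frequency, is averaged. After monotonizing along the
    sparse-to-dense path, the densest graph whose instability stays below
    the cutoff beta is chosen.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b = min(n, int(10 * np.sqrt(n)))            # subsample size ~ 10*sqrt(n)
    instability = []
    for lam in lambdas:
        freq = np.zeros((d, d))
        for _ in range(n_subsamples):
            idx = rng.choice(n, size=b, replace=False)
            C = np.corrcoef(X[idx].T)
            freq += np.abs(C) > lam             # edges selected on this subsample
        theta = freq / n_subsamples
        iu = np.triu_indices(d, k=1)
        instability.append((2 * theta * (1 - theta))[iu].mean())
    order = np.argsort(lambdas)[::-1]           # sparsest graphs first
    inst = np.maximum.accumulate(np.asarray(instability)[order])
    ok = np.flatnonzero(inst <= beta)
    return lambdas[order[ok[-1]]] if ok.size else lambdas[order[0]]
```

The same loop works with any base graph estimator, which is the point of the stability-based criterion: only the edge selection indicators are needed.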
no code implementations • NeurIPS 2009 • Han Liu, Xi Chen
This paper studies the forward greedy strategy in sparse nonparametric regression.
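The forward greedy strategy can be sketched in its linear-model analogue; the paper studies the richer nonparametric (additive-model) setting:

```python
import numpy as np

def forward_greedy(X, y, n_steps):
    """Forward greedy variable selection for sparse regression (sketch).

    At each step, add the variable most correlated with the current
    residual, then refit by least squares on the selected set.
    """
    selected, residual, beta = [], y.copy(), None
    for _ in range(n_steps):
        scores = np.abs(X.T @ residual)         # correlation with residual
        scores[selected] = -np.inf              # never re-select a variable
        selected.append(int(np.argmax(scores)))
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta    # refit, then update residual
    return selected, beta
```

The refitting step distinguishes forward greedy (orthogonal matching pursuit style) from plain matching pursuit, which only deflates the residual.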
no code implementations • NeurIPS 2008 • Han Liu, Larry Wasserman, John D. Lafferty
We propose new families of models and algorithms for high-dimensional nonparametric learning with joint sparsity constraints.