no code implementations • NeurIPS 2012 • Qiang Liu, Jian Peng, Alexander T. Ihler
Crowdsourcing has become a popular paradigm for labeling large datasets.
no code implementations • 26 Feb 2013 • Qiang Liu, Alexander Ihler
The marginal maximum a posteriori probability (MAP) estimation problem, which calculates the mode of the marginal posterior distribution of a subset of variables with the remaining variables marginalized, is an important inference problem in many models, such as those with hidden variables or uncertain parameters.
no code implementations • NeurIPS 2013 • Qiang Liu, Alexander T. Ihler, Mark Steyvers
We study the problem of estimating continuous quantities, such as prices, probabilities, and point spreads, using a crowdsourcing approach.
no code implementations • NeurIPS 2013 • Qiang Cheng, Qiang Liu, Feng Chen, Alexander T. Ihler
The KL divergence is optimized using the belief propagation algorithm, with complexity exponential in only the cluster size of the graph.
no code implementations • 4 Sep 2014 • Wei Ping, Qiang Liu, Alexander Ihler
In this work, we propose the marginal structured SVM (MSSVM) for structured prediction with hidden variables.
no code implementations • NeurIPS 2014 • Qiang Liu, Alexander Ihler
Distributed learning of probabilistic models from multiple data repositories with minimum communication is increasingly important.
no code implementations • CIKM 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising creates a pressing need for more accurate click-through prediction of ads.
4 code implementations • 1 Jan 2015 • Qiang Liu, Feng Yu, Shu Wu, Liang Wang
The explosion of online advertising creates a pressing need for more accurate click-through prediction of ads.
no code implementations • 3 Feb 2015 • Hongwei Li, Qiang Liu
Crowdsourcing provides a popular paradigm for data collection at scale.
no code implementations • 14 Mar 2015 • Jason D. Lee, Yuekai Sun, Qiang Liu, Jonathan E. Taylor
We devise a one-shot approach to distributed sparse regression in the high-dimensional setting.
no code implementations • 25 Mar 2015 • Dengyong Zhou, Qiang Liu, John C. Platt, Christopher Meek, Nihar B. Shah
There is a rapidly increasing interest in crowdsourcing for data labeling.
no code implementations • NeurIPS 2015 • Wei Ping, Qiang Liu, Alexander Ihler
Marginal MAP inference involves making MAP predictions in systems defined with latent variables or missing information.
no code implementations • NeurIPS 2015 • Qiang Liu, John W. Fisher III, Alexander T. Ihler
We propose a simple Monte Carlo based inference method that augments convex variational bounds by adding importance sampling (IS).
no code implementations • 10 Feb 2016 • Qiang Liu, Jason D. Lee, Michael. I. Jordan
We derive a new discrepancy statistic for measuring differences between two probability distributions based on combining Stein's identity with the reproducing kernel Hilbert space theory.
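For an RBF kernel, this kernelized Stein discrepancy has a closed-form V-statistic estimate built from the target's score function. The following is a minimal 1-D numpy sketch, not the authors' implementation; the fixed bandwidth `h` and the toy Gaussian comparison are illustrative assumptions.

```python
import numpy as np

def ksd_sq(x, score, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy (1-D)."""
    d = x[:, None] - x[None, :]              # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))           # RBF kernel k(x_i, x_j)
    dkx = -d / h**2 * k                      # d k / d x_i
    dky = d / h**2 * k                       # d k / d x_j
    dkxy = (1.0 / h**2 - d**2 / h**4) * k    # d^2 k / d x_i d x_j
    s = score(x)                             # score(x) = d/dx log p(x) of the target
    u = (s[:, None] * s[None, :] * k         # Stein kernel u_p(x_i, x_j)
         + s[:, None] * dky
         + s[None, :] * dkx
         + dkxy)
    return float(u.mean())

# Target N(0, 1) has score(x) = -x; compare on-target vs. shifted samples.
rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, 500)
bad = rng.normal(2.0, 1.0, 500)
```

The V-statistic is nonnegative because the Stein kernel `u` is positive semi-definite, and it grows as the samples drift away from the target.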
no code implementations • 24 May 2016 • Gedas Bertasius, Qiang Liu, Lorenzo Torresani, Jianbo Shi
In this work, we present a new Local Perturb-and-MAP (locPMAP) framework that replaces the global optimization with a local optimization by exploiting our observed connection between locPMAP and the pseudolikelihood of the original CRF model.
no code implementations • NeurIPS 2016 • Jun Han, Qiang Liu
In distributed, or privacy-preserving learning, we are often given a set of probabilistic models estimated from different local repositories, and asked to combine them into a single model that gives efficient statistical estimation.
13 code implementations • NeurIPS 2016 • Qiang Liu, Dilin Wang
We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization.
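The particle update at the heart of this algorithm (Stein variational gradient descent) can be sketched in a few lines of numpy for a 1-D target. This is a simplified illustration under assumed defaults: a fixed RBF bandwidth `h` rather than the median heuristic typically used in practice.

```python
import numpy as np

def svgd_update(x, score, h=1.0):
    """SVGD direction: phi(x_i) = mean_j [k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i)]."""
    d = x[:, None] - x[None, :]            # d[i, j] = x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))         # RBF kernel (symmetric)
    # grad_{x_j} k(x_j, x_i) = (x_i - x_j) / h^2 * k  -- the repulsive term
    return (k * score(x)[None, :] + d / h**2 * k).mean(axis=1)

# Transport 50 particles toward a standard normal target (score = -x).
x = np.linspace(-5.0, 5.0, 50)
for _ in range(2000):
    x = x + 0.05 * svgd_update(x, lambda t: -t)
```

The first term pulls particles toward high-density regions of the target; the kernel-gradient term pushes them apart, preventing collapse to the mode.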
no code implementations • 19 Sep 2016 • Qiang Liu, Shu Wu, Diyi Wang, Zhaokang Li, Liang Wang
Recently, Recurrent Neural Networks (RNN) based methods have been successfully applied in several sequential modeling tasks.
no code implementations • 29 Sep 2016 • Qiang Liu, Shu Wu, Feng Yu, Liang Wang, Tieniu Tan
In this paper, we propose a novel representation learning method, Information Credibility Evaluation (ICE), to learn representations of information credibility on social media.
no code implementations • 17 Oct 2016 • Qiang Liu, Jason D. Lee
Importance sampling is widely used in machine learning and statistics, but its power is limited by the restriction of using simple proposals for which the importance weights can be tractably calculated.
1 code implementation • 6 Nov 2016 • Dilin Wang, Qiang Liu
We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference.
Ranked #19 on Conditional Image Generation on CIFAR-10 (Inception score metric)
no code implementations • 30 Nov 2016 • Qiang Liu, Yihao Feng
Variational inference provides a powerful tool for approximate probabilistic inference on complex, structured models.
no code implementations • 27 Feb 2017 • Yingzhen Li, Richard E. Turner, Qiang Liu
We propose a novel approximate inference algorithm that approximates a target distribution by amortising the dynamics of a user-selected MCMC sampler.
no code implementations • 7 Apr 2017 • Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng
Policy gradient methods have been successfully applied to many complex reinforcement learning problems.
no code implementations • 18 Apr 2017 • Jun Han, Qiang Liu
We propose a novel adaptive importance sampling algorithm which incorporates the Stein variational gradient descent algorithm (SVGD) with importance sampling (IS).
no code implementations • NeurIPS 2017 • Qiang Liu
Stein variational gradient descent (SVGD) is a deterministic sampling algorithm that iteratively transports a set of particles to approximate given distributions, based on an efficient gradient-based update that guarantees to optimally decrease the KL divergence within a function space.
no code implementations • 4 Jul 2017 • Qiang Liu, Dilin Wang
We propose a number of new algorithms for learning deep energy models and demonstrate their properties.
no code implementations • 20 Jul 2017 • Yihao Feng, Dilin Wang, Qiang Liu
We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference.
no code implementations • 10 Oct 2017 • Jiaqi Guan, Yang Liu, Qiang Liu, Jian Peng
Deep neural networks have been remarkably successful in various AI tasks but often incur high computation and energy costs, which is prohibitive for energy-constrained applications such as mobile sensing.
no code implementations • NeurIPS 2016 • Wei Ping, Qiang Liu, Alexander Ihler
In this work, we propose an infinite restricted Boltzmann machine (RBM), whose maximum likelihood estimation (MLE) corresponds to a constrained convex optimization.
no code implementations • 17 Oct 2017 • Tianbing Xu, Qiang Liu, Jian Peng
Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems.
no code implementations • 28 Oct 2017 • Jinglin Chen, Jian Peng, Qiang Liu
We propose a new localized inference algorithm for answering marginalization queries in large graphical models with the correlation decay property.
2 code implementations • 30 Oct 2017 • Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
no code implementations • ICLR 2018 • Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He
When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set.
no code implementations • ICML 2018 • Dilin Wang, Zhe Zeng, Qiang Liu
We propose a novel distributed inference algorithm for continuous graphical models, by extending Stein variational gradient descent (SVGD) to leverage the Markov dependency structure of the distribution of interest.
no code implementations • ICLR 2018 • Hao Liu*, Yihao Feng*, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
no code implementations • 27 Jan 2018 • Vadim Smolyakov, Qiang Liu, John W. Fisher III
For large-scale online inference problems, the update strategy is critical for performance.
no code implementations • 11 Mar 2018 • Pan Li, Qiang Liu, Wentao Zhao, Dongxu Wang, Siqi Wang
In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method that attacks several machine learning algorithms used in IDSs.
no code implementations • 13 Mar 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
no code implementations • ICLR 2019 • Tanmay Gangwani, Qiang Liu, Jian Peng
Improving the efficiency of RL algorithms in real-world problems with sparse or episodic rewards is therefore a pressing need.
no code implementations • ICML 2018 • Jun Han, Qiang Liu
Stein variational gradient descent (SVGD) has been shown to be a powerful approximate inference algorithm for complex distributions.
no code implementations • ICML 2018 • Jiasen Yang, Qiang Liu, Vinayak Rao, Jennifer Neville
Recent work has combined Stein’s method with reproducing kernel Hilbert space theory to develop nonparametric goodness-of-fit tests for un-normalized probability distributions.
no code implementations • ICML 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
no code implementations • ICLR 2019 • Yuan Xie, Boyi Liu, Qiang Liu, Zhaoran Wang, Yuan Zhou, Jian Peng
Such an error reduction phenomenon is somewhat surprising as the estimated surrogate policy is less accurate than the given historical policy.
no code implementations • 27 Sep 2018 • Yihao Feng, Hao Liu, Jian Peng, Qiang Liu
Deep reinforcement learning has achieved remarkable successes in solving various challenging artificial intelligence tasks.
1 code implementation • 11 Oct 2018 • Qiang Liu, Piet Van Mieghem
To shed light on the disease localization phenomenon, we study a bursty susceptible-infected-susceptible (SIS) model and analyze the model under the mean-field approximation.
no code implementations • NeurIPS 2019 • Colin Wei, Jason D. Lee, Qiang Liu, Tengyu Ma
We prove that for infinite-width two-layer nets, noisy gradient descent optimizes the regularized neural net loss to a global minimum in polynomial iterations.
no code implementations • NeurIPS 2018 • Qiang Liu, Dilin Wang
Stein variational gradient descent (SVGD) is a non-parametric inference algorithm that evolves a set of particles to fit a given distribution of interest.
2 code implementations • NeurIPS 2018 • Qiang Liu, Lihong Li, Ziyang Tang, Dengyong Zhou
We consider the off-policy estimation problem of estimating the expected reward of a target policy using samples collected by a different behavior policy.
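The baseline this line of work improves on is plain importance weighting: reweight each observed reward by the ratio of target to behavior policy probabilities. The paper's contribution is a marginalized density-ratio estimator that avoids the exploding variance of such weights over long horizons; the sketch below only shows the one-step (bandit) baseline, with a made-up two-action setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

a = rng.integers(0, 2, n)             # actions logged by the behavior policy (uniform)
r = a.astype(float)                   # observed reward: action 1 pays 1, action 0 pays 0
mu = 0.5                              # behavior probability of each action
pi = np.where(a == 1, 0.8, 0.2)       # target policy: picks action 1 with prob 0.8

v_hat = float(np.mean(pi / mu * r))   # importance-weighted estimate of E_pi[reward]
# True value under the target policy is 0.8.
```

In sequential problems the per-step ratios multiply across the horizon, which is exactly the "curse of horizon" that density-ratio methods sidestep.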
1 code implementation • NeurIPS 2018 • Dilin Wang, Hao Liu, Qiang Liu
Variational inference with α-divergences has been widely used in modern probabilistic machine learning.
no code implementations • ICLR 2019 • Colin Wei, Jason Lee, Qiang Liu, Tengyu Ma
We establish: 1) for multi-layer feedforward ReLU networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks; 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks.
1 code implementation • NeurIPS 2019 • Yihao Feng, Lihong Li, Qiang Liu
Value function learning plays a central role in many state-of-the-art reinforcement-learning algorithms.
1 code implementation • 31 May 2019 • Yang Liu, Yunan Luo, Yuanyi Zhong, Xi Chen, Qiang Liu, Jian Peng
Recent advances in deep reinforcement learning algorithms have shown great potential and success for solving many challenging real-world problems, including the game of Go and robotic applications.
1 code implementation • NeurIPS 2019 • Zhizhou Ren, Kefan Dong, Yuan Zhou, Qiang Liu, Jian Peng
Goal-oriented reinforcement learning has recently been a practical framework for robotic manipulation tasks, in which an agent is required to reach a certain goal defined by a function on the state space.
1 code implementation • 10 Jun 2019 • Dilin Wang, Chengyue Gong, Qiang Liu
Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models.
Ranked #5 on Language Modelling on Penn Treebank (Word Level)
1 code implementation • 22 Jun 2019 • Tanmay Gangwani, Joel Lehman, Qiang Liu, Jian Peng
We consider the problem of imitation learning from expert demonstrations in partially observable Markov decision processes (POMDPs).
no code implementations • 19 Sep 2019 • Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, DaCheng Tao
Various adversarial defense methods have accordingly been developed to improve adversarial robustness for deep models.
no code implementations • 25 Sep 2019 • Dinghuai Zhang*, Mao Ye*, Chengyue Gong*, Zhanxing Zhu, Qiang Liu
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
no code implementations • 25 Sep 2019 • Pengchuan Zhang, Hunter Lang, Qiang Liu, Lin Xiao
We investigate statistical methods for automatically scheduling the learning rate (step size) in stochastic optimization.
no code implementations • 25 Sep 2019 • Zhaocheng Liu, Qiang Liu, Haoli Zhang
Automatic feature generation is a major topic in automated machine learning.
1 code implementation • NeurIPS 2019 • Qiang Liu, Lemeng Wu, Dilin Wang
We develop a progressive training approach for neural networks which adaptively grows the network structure by splitting existing neurons to multiple off-springs.
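The key property of such a split is that it is function-preserving: each offspring copies the parent neuron's incoming weights and takes half of its outgoing weights, so at zero perturbation the grown network computes exactly the same function. The sketch below illustrates this invariant on a toy one-hidden-layer ReLU net; it is a generic illustration, not the paper's splitting-steepest-descent criterion for choosing which neurons to split or in which directions to perturb them.

```python
import numpy as np

def forward(x, w1, w2):
    """Tiny one-hidden-layer ReLU network."""
    return np.maximum(x @ w1, 0.0) @ w2

def split_neuron(w1, w2, j, eps=0.0):
    """Split hidden neuron j into two offspring.

    Each offspring copies neuron j's incoming weights (nudged by +/- eps) and
    takes half of its outgoing weights, so at eps = 0 the network computes
    exactly the same function with one extra neuron.
    """
    w1_new = np.hstack([w1, (w1[:, j] + eps)[:, None]])
    w1_new[:, j] -= eps
    w2_new = np.vstack([w2, w2[j:j + 1] / 2.0])
    w2_new[j] /= 2.0
    return w1_new, w2_new

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))
x = rng.normal(size=(16, 4))
w1s, w2s = split_neuron(w1, w2, j=3)   # outputs are unchanged at eps = 0
```

With a small nonzero `eps`, the offspring start at symmetric perturbations of the parent and can then be trained apart by gradient descent.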
1 code implementation • ICLR 2020 • Dilin Wang, Meng Li, Lemeng Wu, Vikas Chandra, Qiang Liu
Designing energy-efficient networks is of critical importance for enabling state-of-the-art deep learning in mobile and edge settings where the computation and energy budgets are highly limited.
no code implementations • ICLR 2020 • Ziyang Tang, Yihao Feng, Lihong Li, Dengyong Zhou, Qiang Liu
Our method is doubly robust in that the bias vanishes when either the density ratio or the value function estimation is perfect.
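The double-robustness property can be seen already in a one-step (bandit) toy version of the estimator: the value estimate combines a direct model term with an importance-weighted residual correction, and stays unbiased if either component is right. This sketch is a simplified illustration with a made-up two-action setup, not the paper's infinite-horizon estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
a = rng.integers(0, 2, n)                  # behavior policy: uniform over two actions
r = a + rng.normal(0.0, 0.1, n)            # noisy reward with E[r | a] = a
pi = np.array([0.2, 0.8])                  # target policy probabilities over actions
w = pi[a] / 0.5                            # density (importance) ratio pi / mu

def dr_estimate(q, weights):
    """Doubly robust estimate: direct model term plus weighted residual correction."""
    direct = float(pi @ q)                 # E_pi[q(a)]
    return direct + float(np.mean(weights * (r - q[a])))

v_bad_model = dr_estimate(np.array([0.3, 0.3]), w)            # wrong q, correct ratio
v_bad_ratio = dr_estimate(np.array([0.0, 1.0]), np.zeros(n))  # correct q, broken ratio
# True value of the target policy is 0.8; both estimates recover it.
```

With a wrong value model, the weighted residual term corrects the bias; with broken weights, the direct term alone is already exact.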
1 code implementation • NeurIPS 2019 • Dilin Wang, Ziyang Tang, Chandrajit Bajaj, Qiang Liu
Stein variational gradient descent (SVGD) is a particle-based inference algorithm that leverages gradient information for efficient approximate inference.
no code implementations • 11 Nov 2019 • Qiang Liu, Shu Wu, Liang Wang
For modeling users' demands on different categories of items, the problem can be formulated as recommendation with contextual and sequential information.
no code implementations • NeurIPS 2020 • Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu
For certain stepsizes of g and w, we show that they can converge close to the minimum-norm solution.
no code implementations • 22 Dec 2019 • Shuxin Guo, Qiang Liu
We derive the Black-Scholes-Merton dual equation, which has exactly the same form as the Black-Scholes-Merton equation.
no code implementations • ICLR 2020 • Zhaocheng Liu, Qiang Liu, Haoli Zhang, Jun Zhu
In recent years, substantial progress has been made on graph convolutional networks (GCN).
1 code implementation • 1 Jan 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity linearly grows with the order of interactions and the number of feature fields.
no code implementations • CIKM 2020 • Feng Yu, Zhaocheng Liu, Qiang Liu, Haoli Zhang, Shu Wu, Liang Wang
IM is an efficient and exact implementation of high-order FM, whose time complexity linearly grows with the order of interactions and the number of feature fields.
1 code implementation • 7 Jan 2020 • Di You, Nguyen Vo, Kyumin Lee, Qiang Liu
To combat fake news, researchers have mostly focused on detecting fake news, while journalists have built and maintained fact-checking sites (e.g., Snopes.com and Politifact.com).
no code implementations • 20 Feb 2020 • Xingchao Liu, Mao Ye, Dengyong Zhou, Qiang Liu
We propose multipoint quantization, a quantization method that approximates a full-precision weight vector using a linear combination of multiple vectors of low-bit numbers; this is in contrast to typical quantization methods that approximate each weight using a single low precision number.
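The idea of approximating one full-precision vector by a linear combination of low-bit vectors can be sketched with a generic greedy residual fit, where each step adds a scaled sign (1-bit) vector. Note this greedy sign decomposition is a stand-in for illustration; the paper's actual selection of low-bit vectors and scales differs.

```python
import numpy as np

def greedy_multipoint(w, n_points):
    """Approximate w by a sum of scaled sign (1-bit) vectors, fit greedily on residuals."""
    approx = np.zeros_like(w)
    for _ in range(n_points):
        r = w - approx
        q = np.sign(r)               # low-bit (binary) direction
        alpha = np.abs(r).mean()     # least-squares scale for a sign vector
        approx += alpha * q
    return approx

rng = np.random.default_rng(0)
w = rng.normal(size=256)
err1 = float(np.linalg.norm(w - greedy_multipoint(w, 1)))
err3 = float(np.linalg.norm(w - greedy_multipoint(w, 3)))
```

Each added low-bit vector strictly reduces the residual norm, trading a little extra storage and compute for accuracy.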
1 code implementation • 20 Feb 2020 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms and minimize the maximum, or worst case loss over the augmented data.
Ranked #187 on Image Classification on ImageNet
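The worst-case-over-augmentations objective described above is simple to write down: draw several perturbed copies of the batch and train on the maximum of their losses instead of the average. The sketch below uses a toy linear model and Gaussian perturbations as stand-ins for the network and the augmentation family.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    """Squared error of a linear model (a toy stand-in for the network loss)."""
    return float(np.mean((x @ w - y) ** 2))

w = rng.normal(size=3)
x = rng.normal(size=(32, 3))
y = rng.normal(size=32)

# m augmented copies of the batch; train on the worst-case (max) loss.
aug_losses = [loss(w, x + 0.1 * rng.normal(size=x.shape), y) for _ in range(4)]
maxup_loss = max(aug_losses)
```

Because `max` upper-bounds the average, minimizing it implicitly penalizes sensitivity to the perturbations, acting like a smoothness regularizer.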
no code implementations • NeurIPS 2020 • Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning.
no code implementations • 21 Feb 2020 • Peng Jia, Qiang Liu, Yongyang Sun
To increase the generalization ability of our framework, we use both simulated and real observation images to train the neural network.
1 code implementation • NeurIPS 2020 • Mao Ye, Tongzheng Ren, Qiang Liu
Our idea is to introduce Stein variational gradient as a repulsive force to push the samples of Langevin dynamics away from the past trajectories.
1 code implementation • 25 Feb 2020 • Pengchuan Zhang, Hunter Lang, Qiang Liu, Lin Xiao
We propose a statistical adaptive procedure called SALSA for automatically scheduling the learning rate (step size) in stochastic gradient methods.
no code implementations • 1 Mar 2020 • Jun Han, Fan Ding, Xianglong Liu, Lorenzo Torresani, Jian Peng, Qiang Liu
In addition, such a transform can be straightforwardly employed in gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) tests on discrete distributions.
1 code implementation • 3 Mar 2020 • Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu
This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.
2 code implementations • 3 Mar 2020 • Andrey Kormilitzin, Nemanja Vaci, Qiang Liu, Alejo Nevado-Holgado
In this work we introduced a named-entity recognition model for clinical natural language processing.
no code implementations • 23 Mar 2020 • Lemeng Wu, Mao Ye, Qi Lei, Jason D. Lee, Qiang Liu
Recently, Liu et al. [19] proposed a splitting steepest descent (S2D) method that jointly optimizes the neural parameters and architectures based on progressively growing network structures by splitting neurons into multiple copies in a steepest descent fashion.
no code implementations • ICLR 2020 • Ali Mousavi, Lihong Li, Qiang Liu, Denny Zhou
Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible.
no code implementations • 25 Mar 2020 • Xi Chen, Qiang Liu, Xin T. Tong
One classical canon of statistics is that large models are prone to overfitting, and model selection procedures are necessary for high dimensional data.
no code implementations • 27 Apr 2020 • Qiang Liu, Zhaocheng Liu, Haoli Zhang
When dealing with continuous numeric features, we usually adopt feature discretization.
1 code implementation • 6 May 2020 • Feng Yu, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
However, these methods compress a session into one fixed representation vector without considering the target items to be predicted.
Ranked #3 on Session-Based Recommendations on yoochoose1
1 code implementation • ACL 2020 • Mao Ye, Chengyue Gong, Qiang Liu
For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution.
3 code implementations • 7 Jun 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
Moreover, our unsupervised method even surpasses its supervised counterparts on transductive tasks, demonstrating its great potential in real-world applications.
Ranked #1 on Node Classification on DBLP
no code implementations • 29 Jun 2020 • Shu Wu, Feng Yu, Xueli Yu, Qiang Liu, Liang Wang, Tieniu Tan, Jie Shao, Fan Huang
The CTR (Click-Through Rate) prediction plays a central role in the domain of computational advertising and recommender systems.
Ranked #31 on Click-Through Rate Prediction on Criteo
no code implementations • ICML 2020 • Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans
This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.
no code implementations • 17 Jul 2020 • Qiang Liu, Haoli Zhang, Zhaocheng Liu
Moreover, we have also conducted experiments on a typical graph embedding task, i.e., community detection, and the proposed UCMF model outperforms several representative graph embedding models.
no code implementations • 23 Jul 2020 • Qiang Liu, Zhaocheng Liu, Xiaofang Zhu, Yeliang Xiu
In this paper, inspired by piece-wise linear interpretability in DNN, we introduce the linearly separable regions of samples to the problem of active learning, and propose a novel Deep Active learning approach by Model Interpretability (DAMI).
no code implementations • 15 Aug 2020 • Yihao Feng, Tongzheng Ren, Ziyang Tang, Qiang Liu
We consider off-policy evaluation (OPE), which evaluates the performance of a new policy from observed data collected from previous experiments, without requiring the execution of the new policy.
no code implementations • 17 Aug 2020 • Qiang Liu, Tao Han, Ning Zhang, Ye Wang
Network slicing enables multiple virtual networks to run on the same physical infrastructure to support various use cases in 5G and beyond.
no code implementations • 17 Aug 2020 • Zeyu Cui, Feng Yu, Shu Wu, Qiang Liu, Liang Wang
In this way, the items are represented at the attribute level, which can provide fine-grained information of items in recommendation.
no code implementations • 22 Aug 2020 • Zhaocheng Liu, Qiang Liu, Haoli Zhang, Yuntian Chen
Simple classifiers, e.g., Logistic Regression (LR), are globally interpretable, but not powerful enough to model complex nonlinear interactions among features in tabular data.
no code implementations • 26 Sep 2020 • Qiang Liu
We adjust the formulation of each layer of a conventional GRU with sequence to sequence learning and personal information of both sides of the conversation.
no code implementations • EMNLP (Louhi) 2020 • Andrey Kormilitzin, Nemanja Vaci, Qiang Liu, Hao Ni, Goran Nenadic, Alejo Nevado-Holgado
In this work we addressed the problem of capturing sequential information contained in longitudinal electronic health records (EHRs).
no code implementations • 16 Oct 2020 • Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, Arun Kejariwal
To the best of our knowledge, this is the first work to provide in-depth analysis and discussion of applying pruning to online recommendation systems with non-stationary data distribution.
no code implementations • 23 Oct 2020 • Yurika Sakai, Andrey Kormilitzin, Qiang Liu, Alejo Nevado-Holgado
The most successful methods, such as ReLU transfer functions, batch normalization, Xavier initialization, dropout, learning rate decay, and dynamic optimizers, have become standards in the field, particularly due to their ability to significantly increase the performance of Neural Networks (NNs) in almost all situations.
1 code implementation • 27 Oct 2020 • Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang
On the node attribute level, we corrupt node features by adding more noise to unimportant node features, to enforce the model to recognize underlying semantic information.
no code implementations • NeurIPS 2020 • Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, Qiang Liu
Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data.
1 code implementation • NeurIPS 2020 • Mao Ye, Lemeng Wu, Qiang Liu
Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size.
no code implementations • 30 Oct 2020 • Yanqiao Zhu, Weizhi Xu, Qiang Liu, Shu Wu
To this end, we present a minimax selection scheme that explicitly harnesses neighborhood information and discovers homophilous subgraphs to facilitate active selection.
1 code implementation • 15 Nov 2020 • Kurtis Evan David, Qiang Liu, Ruth Fong
While deep learning models often achieve strong task performance, their successes are hampered by their inability to disentangle spurious correlations from causative factors, such as when they use protected attributes (e.g., race, gender, etc.)
no code implementations • 20 Nov 2020 • Peng Jia, Xuebo Wu, Zhengyang Li, Bo Li, Weihua Wang, Qiang Liu, Adam Popowicz
Then we use these data to train a DNN (Tel-Net).
no code implementations • 20 Nov 2020 • Peng Jia, Qiang Liu, Yongyang Sun, Yitian Zheng, Wenbo Liu, Yifei Zhao
ARGUS uses a deep-learning-based astronomical detection algorithm implemented on embedded devices in each WFSAT to detect astronomical targets.
1 code implementation • NeurIPS 2020 • Xingchao Liu, Xing Han, Na Zhang, Qiang Liu
In this work, we propose to certify the monotonicity of the general piece-wise linear neural networks by solving a mixed integer linear programming problem. This provides a new general approach for learning monotonic neural networks with arbitrary model structures.
no code implementations • CVPR 2021 • Chengyue Gong, Dilin Wang, Qiang Liu
Semi-supervised learning (SSL) is a key approach toward more data-efficient machine learning by jointly leveraging both labeled and unlabeled data.
1 code implementation • CVPR 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
Data augmentation (DA) is an essential technique for training state-of-the-art deep learning systems.
no code implementations • 2 Dec 2020 • Yiming Gan, Yu Bo, Boyuan Tian, Leimeng Xu, Wei Hu, Shaoshan Liu, Qiang Liu, Yanjun Zhang, Jie Tang, Yuhao Zhu
We develop and commercialize autonomous machines, such as logistic robots and self-driving cars, around the globe.
no code implementations • 16 Dec 2020 • Qiang Liu, Tao Han, Jiang Xie, BaekGyu Kim
In this paper, we propose LiveMap, a real-time dynamic map that detects, matches, and tracks objects on the road with sub-second latency, using crowdsourced data from connected vehicles.
no code implementations • 1 Jan 2021 • Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou
It has been widely observed that increasing deep learning model sizes often leads to significant performance improvements on a variety of natural language processing and computer vision tasks.
no code implementations • 1 Jan 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Anima Anandkumar
The performance of our method is comparable to, or even better than, the setting where all players have a full view of the environment but no coach.
no code implementations • ICLR 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
With the rising abundance of observational data with continuous treatments, we investigate the problem of estimating average dose-response curve (ADRF).
no code implementations • 1 Jan 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
We apply our method to the recently proposed MoCo, SimCLR, and SwAV, and show that we can reduce the computational cost with little loss in performance on ImageNet linear classification and other downstream tasks.
2 code implementations • 11 Jan 2021 • Yichen Xu, Yanqiao Zhu, Feng Yu, Qiang Liu, Shu Wu
To better model complex feature interaction, in this paper we propose a novel DisentanglEd Self-atTentIve NEtwork (DESTINE) framework for CTR prediction that explicitly decouples the computation of unary feature importance from pairwise interaction.
no code implementations • 4 Feb 2021 • Zhaoyang Wang, Yijie Shen, Qiang Liu, Xing Fu
The topological evolution of classic eigenmodes, including Hermite-Laguerre-Gaussian and (helical) Ince-Gaussian modes, is exploited to construct coherent state modes. This unifies the representations of traveling-wave (TW) and standing-wave (SW) ray-wave structured light for the first time and realizes a TW-SW unified ray-wave geometric beam whose topology exhibits a ray-trajectory splitting effect, breaking the boundary between TW and SW structured light.
2 code implementations • 8 Feb 2021 • Yingtao Luo, Qiang Liu, Zhaocheng Liu
The next location recommendation is at the core of various location-based applications.
Ranked #1 on point of interests on Gowalla
2 code implementations • 16 Feb 2021 • Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
Weight-sharing NAS builds a supernet that assembles all the architectures as its sub-networks and jointly trains the supernet with the sub-networks.
Ranked #12 on Neural Architecture Search on ImageNet
1 code implementation • NeurIPS 2020 • Lemeng Wu, Bo Liu, Peter Stone, Qiang Liu
We propose firefly neural architecture descent, a general framework for progressively and dynamically growing neural networks to jointly optimize the networks' parameters and architectures.
no code implementations • 17 Feb 2021 • Lemeng Wu, Xingchao Liu, Qiang Liu
Self-attention, as the key block of transformers, is a powerful mechanism for extracting features from the inputs.
Ranked #618 on Image Classification on ImageNet
no code implementations • 24 Feb 2021 • Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, Jun Zhu
Accordingly, we can design an automatic feature crossing method to find feature interactions in DNN, and use them as cross features in LR.
no code implementations • 4 Mar 2021 • Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, Shu Wu
Specifically, we first formulate a general pipeline of GSL and review state-of-the-art methods classified by the way of modeling graph structures, followed by applications of GSL across domains.
no code implementations • ICLR 2021 • Yihao Feng, Ziyang Tang, Na Zhang, Qiang Liu
Off-policy evaluation (OPE) is the task of estimating the expected reward of a given policy based on offline data previously collected under different policies.
1 code implementation • 14 Mar 2021 • Lizhen Nie, Mao Ye, Qiang Liu, Dan Nicolae
Motivated by the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF).
1 code implementation • journal 2021 • Fenyu Hu, Liping Wang, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
Graph classification is a challenging research problem in many applications across a broad range of domains.
no code implementations • 7 Apr 2021 • Zeyu Cui, Zekun Li, Shu Wu, XiaoYu Zhang, Qiang Liu, Liang Wang, Mengmeng Ai
We naturally generalize the embedding propagation scheme of GCN to the dynamic setting in an efficient manner, propagating changes along the graph to update node embeddings.
1 code implementation • 15 Apr 2021 • Mengqi Zhang, Shu Wu, Xueli Yu, Qiang Liu, Liang Wang
We propose a new method named Dynamic Graph Neural Network for Sequential Recommendation (DGSR), which connects different user sequences through a dynamic graph structure, exploring the interactive behavior of users and items with time and order information.
1 code implementation • 19 Apr 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Shuhui Wang, Liang Wang
To be specific, in the proposed LATTICE model, we devise a novel modality-aware structure learning layer, which learns item-item structures for each modality and aggregates multiple modalities to obtain latent item graphs.
1 code implementation • 26 Apr 2021 • Chengyue Gong, Dilin Wang, Meng Li, Vikas Chandra, Qiang Liu
To alleviate this problem, in this work, we introduce novel loss functions in vision transformer training to explicitly encourage diversity across patch representations for more discriminative feature extraction.
Ranked #19 on Semantic Segmentation on Cityscapes val
1 code implementation • 18 May 2021 • Bo Liu, Qiang Liu, Peter Stone, Animesh Garg, Yuke Zhu, Animashree Anandkumar
Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players.
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
In this work, we propose a family of constrained sampling algorithms which generalize Langevin Dynamics (LD) and Stein Variational Gradient Descent (SVGD) to incorporate a moment constraint specified by a general nonlinear function.
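The unconstrained SVGD update that this family of constrained samplers generalizes can be sketched in a few lines of NumPy. This is a generic illustration of the standard update with an RBF kernel, not the paper's constrained algorithm; all names and parameter values are illustrative.

```python
import numpy as np

def svgd_step(x, grad_logp, stepsize=0.1, bandwidth=1.0):
    """One SVGD update for particles x of shape (n, d), given the target's score."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]             # pairwise differences
    sq = (diff ** 2).sum(-1)                         # squared distances
    k = np.exp(-sq / (2 * bandwidth ** 2))           # RBF kernel matrix
    grad_k = -diff / bandwidth ** 2 * k[:, :, None]  # kernel gradient wrt first argument
    # SVGD direction: kernel-weighted scores (drive) plus a repulsive term
    phi = (k @ grad_logp(x) + grad_k.sum(0)) / n
    return x + stepsize * phi

# Example: move particles initialized far away toward a standard Gaussian target
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(100, 2))
for _ in range(500):
    x = svgd_step(x, lambda z: -z)  # score of N(0, I) is -z
# the particle cloud drifts toward the origin while the repulsion keeps it spread out
```

The repulsive term is what distinguishes SVGD from plain gradient ascent on log-density: it prevents the particles from collapsing onto the mode.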
no code implementations • NeurIPS 2021 • Chengyue Gong, Xingchao Liu, Qiang Liu
In this work, we consider constrained optimization as a more principled approach for trading off two losses, with a special emphasis on lexicographic optimization, a degenerated limit of constrained optimization which optimizes a secondary loss inside the optimal set of the main loss.
1 code implementation • NeurIPS 2021 • Xingchao Liu, Xin Tong, Qiang Liu
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO).
no code implementations • 2 Jun 2021 • Yingtao Luo, Qiang Liu, Yuntian Chen, WenBo Hu, Tian Tian, Jun Zhu
Especially, the discovery of PDEs with highly nonlinear coefficients from low-quality data remains largely under-addressed.
2 code implementations • 9 Jun 2021 • Yuntian Chen, Yingtao Luo, Qiang Liu, Hao Xu, Dongxiao Zhang
Partial differential equations (PDEs) are concise and understandable representations of domain knowledge, which are essential for deepening our understanding of physical processes and predicting future responses.
no code implementations • CVPR 2021 • Chengyue Gong, Tongzheng Ren, Mao Ye, Qiang Liu
The idea is to generate a set of augmented data with some random perturbations or transforms, and minimize the maximum, or worst case loss over the augmented data.
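The min-max recipe sketched in this abstract (minimize the maximum loss over a set of random augmentations) can be written generically. The toy model and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def worst_case_loss(x, y, predict, loss, transforms):
    """Max (worst-case) loss over a set of candidate augmentations of x."""
    return max(loss(predict(t(x)), y) for t in transforms)

# Toy example: a fixed linear "model" scored against random input perturbations
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0])
predict = lambda x: float(x @ w)
sq_loss = lambda pred, y: (pred - y) ** 2

x = np.array([0.5, 1.0])
y = predict(x)  # clean prediction taken as ground truth, so the clean loss is 0
transforms = [lambda x, e=e: x + e for e in rng.normal(scale=0.1, size=(8, 2))]
wc = worst_case_loss(x, y, predict, sq_loss, transforms)
# wc is the adversarial loss: at least as large as any single augmented loss
```

Training would then backpropagate through `wc` rather than through the average augmented loss, which is what makes the objective a worst-case one.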
no code implementations • 28 Jun 2021 • Rui Sun, Peng Jia, Yongyang Sun, Zhimin Yang, Qiang Liu, Hongyan Wei
Time domain astronomy has emerged as a vibrant research field in recent years, focusing on celestial objects that exhibit variable magnitudes or positions.
no code implementations • 29 Jul 2021 • Runzhou Ge, Zhuangzhuang Ding, Yihan Hu, Wenxin Shao, Li Huang, Kun Li, Qiang Liu
Extending last year's award-winning model AFDet, we have made a handful of modifications to the base model to improve accuracy while greatly reducing latency.
no code implementations • 15 Aug 2021 • Qiang Liu, Yanqiao Zhu, Zhaocheng Liu, Yufeng Zhang, Shu Wu
To train high-performing models at minimal annotation cost, active learning selects and labels the most informative samples, yet measuring the informativeness of the samples used to train DNNs remains challenging.
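One common baseline notion of "informativeness" is predictive uncertainty, e.g. the entropy of the model's class distribution; the sketch below illustrates that baseline only, not this paper's specific criterion, and the names are illustrative.

```python
import numpy as np

def select_most_informative(probs, k):
    """Pick the k samples whose predicted class distributions have highest entropy."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:k]

# Example: the near-uniform prediction is the most uncertain, hence most informative
probs = np.array([
    [0.98, 0.01, 0.01],   # confident  -> low entropy
    [0.40, 0.35, 0.25],   # uncertain  -> high entropy
    [0.70, 0.20, 0.10],
])
chosen = select_most_informative(probs, 1)
print(chosen)  # [1]
```

An active-learning loop would label the chosen samples, retrain, and repeat.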
no code implementations • 16 Aug 2021 • Mengqi Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Liang Wang
In our work, different views can be obtained based on the various relations among nodes.
no code implementations • 31 Aug 2021 • Yanqiao Zhu, Yichen Xu, Hejie Cui, Carl Yang, Qiang Liu, Shu Wu
Recently, heterogeneous Graph Neural Networks (GNNs) have become the de facto model for analyzing HGs, though most of them rely on a relatively large amount of labeled data.
2 code implementations • 2 Sep 2021 • Yanqiao Zhu, Yichen Xu, Qiang Liu, Shu Wu
We envision this work to provide useful empirical evidence of effective GCL algorithms and offer several insights for future research.
no code implementations • 29 Sep 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the (often infinite) set of models that cannot be strictly improved.
1 code implementation • ICLR 2022 • Chengyue Gong, Dilin Wang, Meng Li, Xinlei Chen, Zhicheng Yan, Yuandong Tian, Qiang Liu, Vikas Chandra
In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than CNNs, which leads to early saturation in training and inferior convergence.
Ranked #7 on Neural Architecture Search on ImageNet
no code implementations • ICLR 2022 • Jiaqi Guan, Wesley Wei Qian, Qiang Liu, Wei-Ying Ma, Jianzhu Ma, Jian Peng
Assuming different forms of the underlying potential energy function, we can not only reinterpret and unify many of the existing models but also derive new variants of SE(3)-equivariant neural networks in a principled manner.
no code implementations • 8 Oct 2021 • Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou
Our approach exploits the special structure of BERT that contains a stack of repeated modules (i.e., transformer encoders).
1 code implementation • 14 Oct 2021 • Qilong Yan, Yufeng Zhang, Qiang Liu, Shu Wu, Liang Wang
User profiling has long been an important problem that investigates user interests in many real applications.
no code implementations • IEEE Internet of Things Journal 2021 • Meixia Fu, Songlin Sun, Qilian Liang, Xiaoyun Tong, Qiang Liu
Index terms: channel-spatial attention block (CSAB), exciting-inhibition network (EINet), Internet of Things (IoT), person re-identification (re-ID), soft batch dropblock.
Ranked #55 on Person Re-Identification on Market-1501
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
The notion of the Pareto set allows us to focus on the (often infinite) set of models that cannot be strictly improved.
no code implementations • 17 Oct 2021 • Mao Ye, Qiang Liu
In this work, we propose an efficient method to explicitly \emph{optimize} a small set of high quality ``centroid'' points to better approximate the ideal bootstrap distribution.
3 code implementations • NeurIPS 2021 • Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu
The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks.
1 code implementation • 1 Nov 2021 • Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Mengqi Zhang, Shu Wu, Liang Wang
Although having access to multiple modalities might allow us to capture rich information, we argue that the simple coarse-grained fusion by linear combination or concatenation in previous work is insufficient to fully understand content information and item relationships. To this end, we propose a latent structure MIning with ContRastive mOdality fusion method (MICRO for brevity).
no code implementations • 2 Nov 2021 • Qiang Liu, Nakjung Choi, Tao Han
Once online learning has converged, OnSlicing reduces resource usage by 12.5% without any violations, compared with the state-of-the-art online DRL solution.
2 code implementations • 3 Nov 2021 • Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Furu Wei
We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network.
Ranked #2 on Image Retrieval on PhotoChat
no code implementations • NeurIPS 2021 • Chengyue Gong, Mao Ye, Qiang Liu
We propose a general method to construct a centroid approximation for the distribution of the maximum points of a random function.
1 code implementation • 2 Dec 2021 • Xingchao Liu, Chengyue Gong, Lemeng Wu, Shujian Zhang, Hao Su, Qiang Liu
We approach text-to-image generation by combining the power of the pretrained CLIP representation with off-the-shelf image generators (GANs), optimizing in the latent space of the GAN to find images that achieve the maximum CLIP score with the given input text.
Ranked #48 on Text-to-Image Generation on MS COCO
1 code implementation • 10 Dec 2021 • Yuanzhi Duan, Xiaofang Hu, Yue Zhou, Qiang Liu, Shukai Duan
In this paper, by exploring the similarities between feature maps, we propose a novel filter pruning method, Central Filter (CF), which suggests that a filter is approximately equal to a set of other filters after appropriate adjustments.
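The abstract's premise, that a filter approximately equal to a combination of others is redundant, can be illustrated with a generic similarity-based pruning score. This is a simplified stand-in (mean absolute cosine similarity), not the paper's exact Central Filter criterion; all names are illustrative.

```python
import numpy as np

def prune_by_similarity(filters, keep):
    """Keep the `keep` least redundant filters, scoring redundancy as the
    mean absolute cosine similarity of each filter to all the others."""
    f = filters.reshape(len(filters), -1)
    f = f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)
    sim = np.abs(f @ f.T)
    redundancy = (sim.sum(axis=1) - 1.0) / (len(f) - 1)  # exclude self-similarity
    return np.sort(np.argsort(redundancy)[:keep])

rng = np.random.default_rng(0)
filters = rng.normal(size=(8, 16, 3, 3))                      # 8 conv filters
filters[5] = filters[2] + 0.01 * rng.normal(size=(16, 3, 3))  # near-duplicate pair
kept = prune_by_similarity(filters, keep=6)  # the redundant pair is dropped
print(len(kept))  # 6
```

A real pipeline would prune the corresponding channels and fine-tune the network afterwards.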
no code implementations • 16 Dec 2021 • Yihan Hu, Zhuangzhuang Ding, Runzhou Ge, Wenxin Shao, Li Huang, Kun Li, Qiang Liu
From this observation, we have devised a single-stage anchor-free network that can fulfill these requirements.
1 code implementation • 30 Dec 2021 • Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, Jie Tang
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advancements.
no code implementations • 1 Jan 2022 • Ziyang Tang, Yihao Feng, Qiang Liu
The benefit of learning the operator is that we can incorporate any new reward function as input and attain its corresponding value function in a zero-shot manner.
1 code implementation • 18 Jan 2022 • Weizhi Xu, Junfei Wu, Qiang Liu, Shu Wu, Liang Wang
In this paper, we focus on evidence-based fake news detection, where multiple pieces of evidence are utilized to probe the veracity of news (i.e., a claim).
no code implementations • 20 Jan 2022 • Qiang Liu, Yuru Zhang, Haoxin Wang
High-definition (HD) maps need to be updated frequently to capture road changes, but this is constrained by the limited number of specialized collection vehicles.
no code implementations • 16 Feb 2022 • Chengyue Gong, Lemeng Wu, Qiang Liu
Although traditional optimization methods focus on finding a single optimal solution, most objective functions in modern machine learning problems, especially those in deep learning, often have multiple or infinite numbers of optima.
no code implementations • 27 Feb 2022 • Junzheng Wu, Ruigang Fu, Qiang Liu, Weiping Ni, Kenan Cheng, Biao Li, Yuli Sun
To address this limitation, a dual neighborhood hypergraph neural network is proposed in this article, which combines the multiscale superpixel segmentation and hypergraph convolution to model and exploit the complex relationships.
no code implementations • 13 Mar 2022 • Yanqiao Zhu, Yuanqi Du, Yinkai Wang, Yichen Xu, Jieyu Zhang, Qiang Liu, Shu Wu
In this paper, we conduct a comprehensive review on the existing literature of deep graph generation from a variety of emerging methods to its wide application areas.
no code implementations • 14 Mar 2022 • Renjie Zhou, Qiang Hu, Jian Wan, Jilin Zhang, Qiang Liu, Tianxiang Hu, Jianjun Li
The model first trains on sentence pairs in the text, calculates the similarity between them, and fine-tunes the BERT model used for the named entity recognition task according to this similarity, so as to alleviate word ambiguity.
1 code implementation • 24 Mar 2022 • Bo Liu, Qiang Liu, Peter Stone
As intelligent agents become autonomous over longer periods of time, they may eventually become lifelong counterparts to specific people.
no code implementations • 31 May 2022 • Qiang Liu, Zhi Liu
Jumps and market microstructure noise are stylized features of high-frequency financial data.
no code implementations • 1 Jun 2022 • Qiang Liu, Yingtao Luo, Shu Wu, Zhen Zhang, Xiangnan Yue, Hong Jin, Liang Wang
Accordingly, we are the first to propose modeling biased credit scoring data with Multi-Task Learning (MTL).
no code implementations • 4 Jun 2022 • Ruiqing Yan, Fan Zhang, Mengyuan Huang, Wu Liu, Dongyu Hu, Jinfeng Li, Qiang Liu, Jinrong Jiang, Qianjin Guo, Linghan Zheng
Detecting object anomalies is crucial in industrial processes; unsupervised anomaly detection and localization is particularly important because large numbers of defective samples are difficult to obtain and the types of anomalies encountered in real life are unpredictable.
1 code implementation • 20 Jun 2022 • Ruqi Zhang, Xingchao Liu, Qiang Liu
We propose discrete Langevin proposal (DLP), a simple and scalable gradient-based proposal for sampling complex high-dimensional discrete distributions.
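One simplified reading of such a gradient-informed proposal for binary variables is sketched below: each bit flips independently with a probability shaped by the gradient of the log-density, mimicking a Langevin step. This omits the Metropolis-Hastings correction a full sampler would use, and all names and constants are illustrative.

```python
import numpy as np

def dlp_propose(x, grad_f, alpha=0.5, rng=None):
    """One gradient-based proposal step for binary x in {0, 1}^d."""
    rng = rng or np.random.default_rng()
    delta = 1.0 - 2.0 * x  # the change y_i - x_i if bit i flips
    # log-probability of flipping relative to staying, per coordinate
    logit = 0.5 * grad_f(x) * delta - delta ** 2 / (2 * alpha)
    flips = rng.random(x.shape) < 1.0 / (1.0 + np.exp(-logit))
    return np.where(flips, x + delta, x)

# Example: independent-bit target f(x) = theta . x; bits with positive theta
# should spend most of their time at 1 under the (uncorrected) chain
theta = np.array([2.0, 2.0, -2.0])
rng = np.random.default_rng(0)
x = np.zeros(3)
visits = np.zeros(3)
for _ in range(2000):
    x = dlp_propose(x, lambda z: theta, rng=rng)
    visits += x
freq = visits / 2000  # empirical marginals: first two high, last one low
```

Because every coordinate is updated in parallel, one step costs a single gradient evaluation regardless of dimension, which is what makes this style of proposal scalable.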
no code implementations • 21 Jun 2022 • Yihan Hu, Wenxin Shao, Bo Jiang, Jiajie Chen, Siqi Chai, Zhening Yang, Jingyu Qian, Helong Zhou, Qiang Liu
In this report, we introduce our solution to the Occupancy and Flow Prediction challenge in the Waymo Open Dataset Challenges at CVPR 2022, which ranks 1st on the leaderboard.
1 code implementation • 27 Jun 2022 • Xing Han, Ziyang Tang, Joydeep Ghosh, Qiang Liu
The modified score inherits the spirit of split conformal methods, which is simple and efficient and can scale to high dimensional settings.
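The split conformal baseline that the modified score builds on is short enough to sketch in full: compute nonconformity scores on a held-out calibration set, take a finite-sample-corrected quantile, and widen every test prediction by that amount. The absolute-residual score and the names below are illustrative, not the paper's modified score.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Split conformal prediction intervals from absolute-residual scores."""
    scores = np.abs(cal_y - cal_pred)  # nonconformity scores on calibration data
    n = len(scores)
    # finite-sample-corrected quantile guaranteeing >= 1 - alpha coverage
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return test_pred - q, test_pred + q

# Example: calibrate on noisy predictions, then check coverage on fresh points
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
pred = y + rng.normal(scale=0.5, size=1000)  # an imperfect predictor
lo, hi = split_conformal_interval(pred[:500], y[:500], pred[500:])
coverage = np.mean((y[500:] >= lo) & (y[500:] <= hi))
print(round(coverage, 2))  # close to the nominal 0.90
```

The appeal noted in the abstract is visible here: calibration is a single quantile computation, so the method scales to high-dimensional inputs for free.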
1 code implementation • 6 Jul 2022 • Yuanzhi Duan, Yue Zhou, Peng He, Qiang Liu, Shukai Duan, Xiaofang Hu
In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by converging the information of both features and filters.
no code implementations • 14 Jul 2022 • Zhaocheng Liu, Yingtao Luo, Di Zeng, Qiang Liu, Daqing Chang, Dongying Kong, Zhi Chen
Modeling users' dynamic preferences from historical behaviors lies at the core of modern recommender systems.
2 code implementations • 17 Aug 2022 • Bo Liu, Yihao Feng, Qiang Liu, Peter Stone
Furthermore, we introduce the metric residual network (MRN) that deliberately decomposes the action-value function Q(s, a, g) into the negated summation of a metric plus a residual asymmetric component.
2 code implementations • 22 Aug 2022 • Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei
A big convergence of language, vision, and multimodal pretraining is emerging.
Ranked #1 on Visual Reasoning on NLVR2 Test
1 code implementation • Conference 2022 • Fenyu Hu, Zeyu Cui, Shu Wu, Qiang Liu, Jinlin Wu, Liang Wang & Tieniu Tan
Graph Neural Networks (GNNs) are powerful for learning representations of graph-structured data, fusing both attributive and topological information.
no code implementations • 31 Aug 2022 • Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains.
no code implementations • 2 Sep 2022 • Lemeng Wu, Chengyue Gong, Xingchao Liu, Mao Ye, Qiang Liu
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development.
no code implementations • 2 Sep 2022 • Mao Ye, Lemeng Wu, Qiang Liu
We propose a family of First Hitting Diffusion Models (FHDM), deep generative models that generate data with a diffusion process that terminates at a random first hitting time.
no code implementations • 2 Sep 2022 • Mao Ye, Ruichen Jiang, Haoxiang Wang, Dhruv Choudhary, Xiaocong Du, Bhargav Bhushanam, Aryan Mokhtari, Arun Kejariwal, Qiang Liu
One of the key challenges of learning an online recommendation model is the temporal domain shift, which causes the mismatch between the training and testing data distribution and hence domain generalization error.
1 code implementation • 3 Sep 2022 • Yingtao Luo, Zhaocheng Liu, Qiang Liu
Unstable correlations between procedures and diagnoses in the training distribution can cause spurious correlations between historical EHRs and future diagnoses.
3 code implementations • 7 Sep 2022 • Xingchao Liu, Chengyue Gong, Qiang Liu
The idea of rectified flow is to learn the ODE that follows the straight paths connecting points drawn from $\pi_0$ and $\pi_1$ as closely as possible.
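The training recipe this abstract describes admits a deliberately tiny 1-D illustration: regress a velocity field toward X1 - X0 on the linear interpolations Xt = t*X1 + (1-t)*X0, then integrate the learned ODE. The least-squares "model" below is a hypothetical stand-in for a neural network, and the distributions are toy Gaussians.

```python
import numpy as np

# Rectified flow in 1-D: learn v(x, t) ~ E[X1 - X0 | Xt = x] on the linear
# interpolation Xt = t*X1 + (1-t)*X0, then integrate dx/dt = v(x, t).
rng = np.random.default_rng(0)
x0 = rng.normal(loc=-2.0, size=5000)  # source samples from pi_0
x1 = rng.normal(loc=3.0, size=5000)   # target samples from pi_1
t = rng.random(5000)
xt = t * x1 + (1 - t) * x0            # points on the straight paths
target = x1 - x0                      # the velocity the ODE should follow

# Fit v(x, t) = a*x + b*t + c by ordinary least squares (a stand-in for a net)
A = np.stack([xt, t, np.ones_like(t)], axis=1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)

# Transport fresh source samples by Euler-integrating the learned ODE
x = rng.normal(loc=-2.0, size=2000)
for step in range(100):
    s = step / 100
    x = x + 0.01 * (coef[0] * x + coef[1] * s + coef[2])
# the sample mean is pushed from around -2 toward the target mean 3
```

Because the regression targets are the slopes of straight couplings, a well-fit velocity field yields nearly straight ODE trajectories, which is what allows few-step simulation.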
no code implementations • 19 Sep 2022 • Mao Ye, Bo Liu, Stephen Wright, Peter Stone, Qiang Liu
Bilevel optimization (BO) is useful for solving a variety of important machine learning problems including but not limited to hyperparameter optimization, meta-learning, continual learning, and reinforcement learning.
1 code implementation • 29 Sep 2022 • Qiang Liu
We present a flow-based approach to the optimal transport (OT) problem between two continuous distributions $\pi_0,\pi_1$ on $\mathbb{R}^d$: minimize a transport cost $\mathbb{E}[c(X_1-X_0)]$ over the set of couplings $(X_0, X_1)$ whose marginal distributions on $X_0$ and $X_1$ equal $\pi_0$ and $\pi_1$, respectively, where $c$ is a cost function.
1 code implementation • 29 Sep 2022 • Yanqiao Zhu, Dingshuo Chen, Yuanqi Du, Yingze Wang, Qiang Liu, Shu Wu
Molecular pretraining, which learns molecular representations over massive unlabeled data, has become a prominent paradigm to solve a variety of tasks in computational chemistry and drug discovery.
no code implementations • 6 Oct 2022 • Yan Zheng, Lemeng Wu, Xingchao Liu, Zhen Chen, Qiang Liu, QiXing Huang
We first propose a diffusion-based generative model to tackle this problem by generating voxelized shapes with close-to-reality outlines and structures.
1 code implementation • 11 Oct 2022 • Junfei Wu, Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang
Comprehensive experiments have demonstrated the superiority of GETRAL over the state-of-the-arts and validated the efficacy of semantic mining with graph structure and contrastive learning.
1 code implementation • 12 Oct 2022 • Ruqi Zhang, Qiang Liu, Xin T. Tong
Sampling methods, as important inference and learning techniques, are typically designed for unconstrained domains.
no code implementations • 22 Oct 2022 • Zhixun Li, Dingshuo Chen, Qiang Liu, Shu Wu
In this paper, we argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
2 code implementations • 24 Oct 2022 • Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon
These energies rely on mollifier functions, smooth approximations of the Dirac delta that originate from PDE theory.
1 code implementation • 30 Oct 2022 • Qiang Liu, Nakjung Choi, Tao Han
First, we design a learning-based simulator to reduce the sim-to-real discrepancy, which is accomplished by a new parameter searching method based on Bayesian optimization.
2 code implementations • 7 Nov 2022 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Ganzorig Gankhuyag, Jingang Huh, Myeong Kyun Kim, Kihwan Yoon, Hyeon-Cheol Moon, Seungho Lee, Yoonsik Choe, Jinwoo Jeong, Sungjei Kim, Maciej Smyl, Tomasz Latkowski, Pawel Kubik, Michal Sokolski, Yujie Ma, Jiahao Chao, Zhou Zhou, Hongfan Gao, Zhengfeng Yang, Zhenbing Zeng, Zhengyang Zhuge, Chenghua Li, Dan Zhu, Mengdi Sun, Ran Duan, Yan Gao, Lingshun Kong, Long Sun, Xiang Li, Xingdong Zhang, Jiawei Zhang, Yaqi Wu, Jinshan Pan, Gaocheng Yu, Jin Zhang, Feng Zhang, Zhe Ma, Hongbin Wang, Hojin Cho, Steve Kim, Huaen Li, Yanbo Ma, Ziwei Luo, Youwei Li, Lei Yu, Zhihong Wen, Qi Wu, Haoqiang Fan, Shuaicheng Liu, Lize Zhang, Zhikai Zong, Jeremy Kwon, Junxi Zhang, Mengyuan Li, Nianxiang Fu, Guanchen Ding, Han Zhu, Zhenzhong Chen, Gen Li, Yuanfan Zhang, Lei Sun, Dafeng Zhang, Neo Yang, Fitz Liu, Jerry Zhao, Mustafa Ayazoglu, Bahri Batuhan Bilecen, Shota Hirose, Kasidis Arunruangsirilert, Luo Ao, Ho Chun Leung, Andrew Wei, Jie Liu, Qiang Liu, Dahai Yu, Ao Li, Lei Luo, Ce Zhu, Seongmin Hong, Dongwon Park, Joonhee Lee, Byeong Hyun Lee, Seunggyu Lee, Se Young Chun, Ruiyuan He, Xuhao Jiang, Haihang Ruan, Xinjian Zhang, Jing Liu, Garas Gendy, Nabil Sabor, Jingchao Hou, Guanghui He
While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints.
no code implementations • 16 Nov 2022 • Qiang Liu
Combinatorial optimization is usually complex and inefficient, which limits its application in large-scale networks with billions of links.