no code implementations • ACL 2022 • Kashif Khan, Ruizhe Wang, Pascal Poupart
We contribute a new dataset for the task of automated fact checking and an evaluation of state of the art algorithms.
no code implementations • ICML 2020 • Haonan Duan, Saeed Nejati, George Trimponias, Pascal Poupart, Vijay Ganesh
Our solvers outperform the baselines by solving 12 more instances from the SAT Competition 2018 application benchmark and are 40% faster on average in solving hard cryptographic instances.
no code implementations • 12 Dec 2022 • Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi
Knowledge Distillation (KD) has been used extensively for natural language understanding (NLU) tasks to improve the generalization of a small model (the student) by transferring knowledge from a larger model (the teacher).
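As a rough illustration of the technique (not the specific recipe of this paper), a standard KD objective mixes a temperature-scaled KL term against the teacher's soft targets with ordinary cross-entropy on the hard labels; the temperature `T` and weight `alpha` below are illustrative hyperparameters.

```python
# Minimal knowledge-distillation loss sketch (PyTorch); hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL between temperature-scaled teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits.
s = torch.randn(4, 10)
t = torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
print(kd_loss(s, t, y))
```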
no code implementations • 27 Nov 2022 • Ehsan Imani, Guojun Zhang, Jun Luo, Pascal Poupart, Yangchen Pan
Recent work reported the label alignment property in a supervised learning setting: the vector of all labels in the dataset is mostly in the span of the top few singular vectors of the data matrix.
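A quick way to probe this property (a hedged sketch, not the authors' code) is to project the label vector onto the top-k left singular vectors of the data matrix and measure how much of its norm is retained.

```python
# Sketch: measure how much of the label vector lies in the span of the top-k
# left singular vectors of the data matrix X (n samples x d features).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = rng.normal(size=50)
y = X @ w_true                      # labels generated from the data, so alignment is high

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
proj = U[:, :k] @ (U[:, :k].T @ y)  # projection of y onto the top-k singular directions
alignment = np.linalg.norm(proj) / np.linalg.norm(y)
print(f"fraction of label norm captured by top-{k} singular vectors: {alignment:.3f}")
```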
1 code implementation • 30 Jun 2022 • Kira Selby, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart
We propose a general deep architecture for learning functions on multiple permutation-invariant sets.
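For intuition, a Deep-Sets-style construction applies a shared encoder to each element, pools each set with a symmetric operation (sum or mean), and combines the pooled codes; the sketch below is a generic baseline under those assumptions, not the paper's specific architecture.

```python
# Sketch of a function on multiple permutation-invariant sets:
# encode each element, sum-pool within each set, then combine the pooled codes.
import torch
import torch.nn as nn

class MultiSetModel(nn.Module):
    def __init__(self, in_dim, hidden=32, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())   # per-element encoder
        self.rho = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))              # combiner over sets

    def forward(self, set_a, set_b):
        # Sum pooling makes the output invariant to the order of elements in each set.
        pooled_a = self.phi(set_a).sum(dim=0)
        pooled_b = self.phi(set_b).sum(dim=0)
        return self.rho(torch.cat([pooled_a, pooled_b]))

model = MultiSetModel(in_dim=4)
print(model(torch.randn(7, 4), torch.randn(3, 4)))
```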
2 code implementations • 20 Jun 2022 • Guiliang Liu, Yudong Luo, Ashish Gaurav, Kasra Rezaee, Pascal Poupart
When deploying Reinforcement Learning (RL) agents into a physical system, we must ensure that these agents are well aware of the underlying constraints.
1 code implementation • 20 Jun 2022 • Mohsin Hasan, Zehao Zhang, Kaiyang Guo, Mahdi Karami, Guojun Zhang, Xi Chen, Pascal Poupart
In contrast, our method performs the aggregation on the predictive posteriors, which are typically easier to approximate owing to the low-dimensionality of the output space.
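To illustrate the idea of aggregating in prediction space rather than parameter space (a simplified sketch, not the paper's exact aggregation rule), one can mix the clients' predictive class distributions for a query point:

```python
# Sketch: federated aggregation on predictive posteriors instead of model parameters.
import numpy as np

# Each client reports a predictive distribution over classes for the same query point.
client_predictives = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
])

# A simple aggregation: a weighted mixture of the clients' predictive posteriors.
weights = np.array([0.5, 0.3, 0.2])          # e.g. proportional to client data sizes
aggregated = weights @ client_predictives
aggregated /= aggregated.sum()               # renormalize to a valid distribution
print(aggregated)
```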
no code implementations • 13 Jun 2022 • Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal Poupart
We present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable stand-alone global federated GP that respects clients' privacy.
no code implementations • 2 Jun 2022 • Ashish Gaurav, Kasra Rezaee, Guiliang Liu, Pascal Poupart
We consider the setting where the reward function is given but the constraints are unknown, and we propose a method that can recover these constraints satisfactorily from the expert data.
1 code implementation • 27 May 2022 • Liam Hebert, Lukasz Golab, Pascal Poupart, Robin Cohen
We evaluate our methods on the Meta-World environment and find that our approach yields significant improvements over FedAvg and non-federated Soft Actor-Critic single-agent methods.
no code implementations • 25 May 2022 • Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Pascal Poupart, Ali Ghodsi
Knowledge Distillation (KD) is a prominent neural model compression technique that heavily relies on teacher network predictions to guide the training of a student model.
no code implementations • COLING 2022 • Md Akmal Haidar, Mehdi Rezagholizadeh, Abbas Ghaddar, Khalil Bibi, Philippe Langlais, Pascal Poupart
Knowledge distillation (KD) is an efficient framework for compressing large-scale pre-trained language models.
no code implementations • 23 Dec 2021 • Xiangle Cheng, James He, Shihan Xiao, Yingxue Zhang, Zhitang Chen, Pascal Poupart, FengLin Li
Machine learning is gaining momentum in various recent models for the dynamic analysis of information flows in data communication networks.
no code implementations • NeurIPS 2021 • Guiliang Liu, Xiangyu Sun, Oliver Schulte, Pascal Poupart
We propose a Represent And Mimic (RAMi) framework for training 1) an identifiable latent representation to capture the independent factors of variation for the objects and 2) a mimic tree that extracts the causal impact of the latent features on DRL action values.
no code implementations • ICLR 2022 • Yudong Luo, Guiliang Liu, Haonan Duan, Oliver Schulte, Pascal Poupart
Distributional Reinforcement Learning (RL) differs from traditional RL by estimating the distribution over returns to capture the intrinsic uncertainty of MDPs.
Distributional Reinforcement Learning • Reinforcement Learning
no code implementations • ICLR 2022 • Guiliang Liu, Ashutosh Adhikari, Amir-Massoud Farahmand, Pascal Poupart
The advancement of dynamics models enables model-based planning in complex environments.
no code implementations • Findings (NAACL) 2022 • Md Akmal Haidar, Nithin Anchuri, Mehdi Rezagholizadeh, Abbas Ghaddar, Philippe Langlais, Pascal Poupart
To address these problems, we propose a RAndom Intermediate Layer Knowledge Distillation (RAIL-KD) approach in which intermediate layers from the teacher model are selected randomly and distilled into the intermediate layers of the student model.
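A hedged sketch of random intermediate-layer distillation: at each step, sample a subset of teacher layers, pair them with student layers, and penalize the distance between their hidden states. The layer counts and the even-spacing mapping below are illustrative assumptions, not the exact RAIL-KD recipe.

```python
# Sketch of random intermediate-layer distillation (illustrative, not the exact RAIL-KD recipe).
import random
import torch
import torch.nn.functional as F

def intermediate_kd_loss(teacher_hiddens, student_hiddens, n_pairs=3):
    # teacher_hiddens / student_hiddens: lists of [batch, dim] hidden states per layer.
    chosen = sorted(random.sample(range(len(teacher_hiddens)), n_pairs))
    # Map the sampled teacher layers onto evenly spaced student layers.
    student_ids = [round(i * (len(student_hiddens) - 1) / (n_pairs - 1)) for i in range(n_pairs)]
    loss = 0.0
    for t_id, s_id in zip(chosen, student_ids):
        loss = loss + F.mse_loss(student_hiddens[s_id], teacher_hiddens[t_id])
    return loss / n_pairs

teacher = [torch.randn(8, 16) for _ in range(12)]   # e.g. 12 teacher layers
student = [torch.randn(8, 16) for _ in range(4)]    # e.g. 4 student layers
print(intermediate_kd_loss(teacher, student))
```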
1 code implementation • 9 Sep 2021 • Xiangyu Sun, Oliver Schulte, Guiliang Liu, Pascal Poupart
We describe NTS-NOTEARS, a score-based structure learning method for time-series data that learns dynamic Bayesian networks (DBNs) capturing nonlinear, lagged (inter-slice) and instantaneous (intra-slice) relations among variables.
2 code implementations • NeurIPS 2021 • Guojun Zhang, Han Zhao, YaoLiang Yu, Pascal Poupart
We then prove that our transferability can be estimated with enough samples and give a new upper bound for the target error based on our transferability.
no code implementations • 17 Apr 2021 • Kira A. Selby, Yinong Wang, Ruizhe Wang, Peyman Passban, Ahmad Rashid, Mehdi Rezagholizadeh, Pascal Poupart
Despite recent monumental advances in the field, many Natural Language Processing (NLP) models still struggle to perform adequately on noisy domains.
no code implementations • CVPR 2021 • Elmira Amirloo, Mohsen Rohani, Ershad Banijamali, Jun Luo, Pascal Poupart
While supervised learning is widely used for perception modules in conventional autonomous driving solutions, scalability is hindered by the huge amount of data labeling needed.
1 code implementation • 31 Dec 2020 • Sriram Ganapathi Subramanian, Matthew E. Taylor, Mark Crowley, Pascal Poupart
Traditional multi-agent reinforcement learning algorithms are not scalable to environments with more than a few agents, since these algorithms are exponential in the number of agents.
Multi-agent Reinforcement Learning • Q-Learning • Multiagent Systems
no code implementations • ICCV 2021 • Ershad Banijamali, Mohsen Rohani, Elmira Amirloo, Jun Luo, Pascal Poupart
In autonomous driving (AD), accurately predicting changes in the environment can effectively improve safety and comfort.
no code implementations • NeurIPS 2020 • Guiliang Liu, Oliver Schulte, Pascal Poupart, Mike Rudd, Mehrsan Javan
This paper develops a new approach for agent representations, based on a Markov game model, that is tailored towards applications in professional ice hockey.
1 code implementation • 25 Jun 2020 • Guojun Zhang, Kaiwen Wu, Pascal Poupart, Yao-Liang Yu
We prove their local convergence at strict local minimax points, which are surrogates of global solutions.
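For context (a toy sketch, not the specific algorithms analyzed in the paper), gradient descent-ascent updates the min player by descent and the max player by ascent; on f(x, y) = x² - y², whose strict local minimax point is the origin, the iterates converge there.

```python
# Toy gradient descent-ascent (GDA) on f(x, y) = x^2 - y^2,
# whose strict local minimax point is (0, 0). Illustrative only.
def grad_x(x, y): return 2.0 * x     # df/dx
def grad_y(x, y): return -2.0 * y    # df/dy

x, y, lr = 1.0, 1.0, 0.1
for step in range(200):
    x, y = x - lr * grad_x(x, y), y + lr * grad_y(x, y)   # descent in x, ascent in y
print(x, y)   # both approach 0
```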
no code implementations • 8 Mar 2020 • Allen Houze Wang, Priyank Jaini, Yao-Liang Yu, Pascal Poupart
Recently, the conditional SAGE certificate has been proposed as a sufficient condition for signomial positivity over a convex set.
no code implementations • 7 Mar 2020 • Nabiha Asghar, Ivan Kobyzev, Jesse Hoey, Pascal Poupart, Muhammad Bilal Sheikh
State-of-the-art neural dialogue systems excel at syntactic and semantic modelling of language, but often have a hard time establishing emotional alignment with the human interactant during a conversation.
no code implementations • 27 Feb 2020 • Guojun Zhang, Pascal Poupart, Yao-Liang Yu
Convergence to a saddle point for convex-concave functions has been studied for decades, while recent years have seen a surge of interest in non-convex (zero-sum) smooth games, motivated by their wide recent applications.
no code implementations • 25 Feb 2020 • Amur Ghose, Abdullah Rashwan, Pascal Poupart
The variational autoencoder is a well-defined deep generative model that utilizes an encoder-decoder framework, where an encoding neural network outputs a non-deterministic code for reconstructing an input.
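For reference, the non-deterministic code is typically produced with the reparameterization trick: the encoder outputs a mean and log-variance, and the code is sampled as mean plus scaled noise. The sketch below is generic, not this paper's model.

```python
# Minimal VAE-style encoder sketch: the code z is sampled, not deterministic.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, 2 * z_dim)   # outputs mean and log-variance

    def forward(self, x):
        mu, log_var = self.net(x).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps    # reparameterized sample of the code
        # KL divergence of q(z|x) from a standard normal prior, used in the ELBO.
        kl = 0.5 * (torch.exp(log_var) + mu**2 - 1 - log_var).sum(dim=-1)
        return z, kl

enc = Encoder(in_dim=16, z_dim=4)
z, kl = enc(torch.randn(2, 16))
print(z.shape, kl.shape)
```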
1 code implementation • NeurIPS 2020 • Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, William L. Hamilton
Playing text-based games requires skills in processing natural language and sequential decision making.
1 code implementation • 6 Feb 2020 • Sriram Ganapathi Subramanian, Pascal Poupart, Matthew E. Taylor, Nidhi Hegde
We consider two different kinds of mean field environments: a) Games where agents belong to predefined types that are known a priori and b) Games where the type of each agent is unknown and therefore must be learned based on observations.
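As a rough illustration of the mean-field idea with types (not the paper's exact update), each agent can summarize its neighbours by the empirical distribution of their actions within each type:

```python
# Sketch: per-type mean-field action summary for one agent's neighbourhood.
import numpy as np

n_actions = 3
neighbour_types = np.array([0, 0, 1, 1, 1])       # known (case a) or inferred (case b) types
neighbour_actions = np.array([2, 1, 0, 0, 2])     # discrete actions taken by neighbours

mean_fields = {}
for t in np.unique(neighbour_types):
    acts = neighbour_actions[neighbour_types == t]
    # Empirical action distribution of neighbours of type t.
    mean_fields[t] = np.bincount(acts, minlength=n_actions) / len(acts)
print(mean_fields)   # the agent's Q-function would condition on these summaries
```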
1 code implementation • 28 Jan 2020 • Xin Lian, Kshitij Jain, Jakub Truszkowski, Pascal Poupart, Yao-Liang Yu
We study unsupervised multilingual alignment, the problem of finding word-to-word translations between multiple languages without using any parallel data.
Ranked #1 on Word Alignment on en-es
1 code implementation • 9 Jan 2020 • Abdullah Rashwan, Rishav Agarwal, Agastya Kalra, Pascal Poupart
We present MatrixNets (xNets), a new deep architecture for object detection.
2 code implementations • 13 Aug 2019 • Abdullah Rashwan, Agastya Kalra, Pascal Poupart
We present Matrix Nets (xNets), a new deep architecture for object detection.
Ranked #110 on Object Detection on COCO test-dev
6 code implementations • 11 Jul 2019 • Seyed Mehran Kazemi, Rishab Goel, Sepehr Eghbali, Janahan Ramanan, Jaspreet Sahota, Sanjay Thakur, Stella Wu, Cathal Smyth, Pascal Poupart, Marcus Brubaker
Time is an important feature in many applications involving events that occur synchronously and/or asynchronously.
1 code implementation • 8 Jul 2019 • Guojun Zhang, Pascal Poupart, George Trimponias
In the case of mixtures of Bernoullis, we find that there exist one-cluster regions that are stable for GD and therefore trap GD, but those regions are unstable for EM, allowing EM to escape.
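For reference, a minimal EM loop for a two-component Bernoulli mixture looks like the sketch below (illustrative data and initialization; the paper compares such EM updates with plain gradient descent on the likelihood).

```python
# Minimal EM for a mixture of two Bernoulli components (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic binary data from two clusters with different activation probabilities.
X = np.vstack([rng.binomial(1, 0.8, size=(100, 5)),
               rng.binomial(1, 0.2, size=(100, 5))]).astype(float)

pi = np.array([0.5, 0.5])                     # mixing weights
mu = rng.uniform(0.3, 0.7, size=(2, 5))       # component Bernoulli parameters

for _ in range(50):
    # E-step: responsibilities of each component for each point.
    log_lik = (X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T) + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and Bernoulli parameters.
    nk = resp.sum(axis=0)
    pi = nk / len(X)
    mu = np.clip((resp.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)

print(pi, mu.round(2))
```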
1 code implementation • 6 Jul 2019 • Rishab Goel, Seyed Mehran Kazemi, Marcus Brubaker, Pascal Poupart
In this paper, we build novel models for temporal KG completion through equipping static models with a diachronic entity embedding function which provides the characteristics of entities at any point in time.
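A hedged sketch of a diachronic embedding function: part of each entity vector is static and part is modulated by the timestamp (a sinusoidal temporal activation is one common choice); the dimensions and activation below are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
# Sketch of a diachronic entity embedding: static features plus time-modulated features.
import numpy as np

dim, temporal_frac = 8, 0.5
n_temporal = int(dim * temporal_frac)

rng = np.random.default_rng(0)
a = rng.normal(size=dim)             # amplitudes (learned in practice)
w = rng.normal(size=n_temporal)      # frequencies for the temporal part
b = rng.normal(size=n_temporal)      # phase shifts

def entity_embedding(t):
    emb = a.copy()
    # The first n_temporal dimensions vary with time via a sinusoidal activation;
    # the remaining dimensions stay static across timestamps.
    emb[:n_temporal] = a[:n_temporal] * np.sin(w * t + b)
    return emb

print(entity_embedding(2015.0)[:4], entity_embedding(2020.0)[:4])
```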
no code implementations • 27 May 2019 • Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, Pascal Poupart
Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance.
1 code implementation • 11 Jan 2019 • Alejandro Molina, Antonio Vergari, Karl Stelzner, Robert Peharz, Pranav Subramani, Nicola Di Mauro, Pascal Poupart, Kristian Kersting
We introduce SPFlow, an open-source Python library providing a simple interface to inference, learning and manipulation routines for deep and tractable probabilistic models called Sum-Product Networks (SPNs).
no code implementations • NeurIPS 2018 • Priyank Jaini, Pascal Poupart, Yao-Liang Yu
At their core, many unsupervised learning models provide a compact representation of homogeneous density mixtures, but their similarities and differences are not always clearly understood.
no code implementations • NeurIPS 2018 • Agastya Kalra, Abdullah Rashwan, Wei-Shou Hsu, Pascal Poupart, Prashant Doshi, Georgios Trimponias
Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable.
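To make the dual view concrete, a sum-product network can be evaluated bottom-up like a feed-forward network: leaves return probabilities, product nodes multiply their children, and sum nodes take weighted mixtures. The tiny sketch below is a generic SPN, not the construction studied in the paper.

```python
# Tiny sum-product network over two binary variables, evaluated bottom-up (generic sketch).
import math

def leaf(var, p_true):
    # Leaf node: probability of the observed value of variable `var`.
    return lambda x: p_true if x[var] == 1 else 1.0 - p_true

def product(*children):
    # Product node: children must have disjoint scopes (decomposability).
    return lambda x: math.prod(c(x) for c in children)

def weighted_sum(weights, *children):
    # Sum node: children share the same scope (completeness); weights sum to 1.
    return lambda x: sum(w * c(x) for w, c in zip(weights, children))

# P(X0, X1) as a mixture of two product distributions.
spn = weighted_sum([0.3, 0.7],
                   product(leaf(0, 0.9), leaf(1, 0.2)),
                   product(leaf(0, 0.1), leaf(1, 0.8)))

print(spn({0: 1, 1: 0}))   # exact likelihood in one bottom-up pass
```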
no code implementations • NeurIPS 2018 • Jongmin Lee, Geon-Hyeong Kim, Pascal Poupart, Kee-Eung Kim
In this paper, we present CC-POMCP (Cost-Constrained POMCP), an online MCTS algorithm for large CPOMDPs that leverages the optimization of LP-induced parameters and only requires a black-box simulator of the environment.
1 code implementation • ICLR 2020 • Nabiha Asghar, Lili Mou, Kira A. Selby, Kevin D. Pantasdo, Pascal Poupart, Xin Jiang
The memory bank provides a natural way of IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters, and thus the model capacity.
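A hedged sketch of the incremental-capacity idea: keep the memory bank as blocks of slot vectors and, when adapting to a new domain, append a freshly initialized block while leaving earlier blocks in place. The slot sizes and attention-style read below are illustrative, not the paper's exact module.

```python
# Sketch: growing a memory bank with new slots when adapting to a new domain.
import torch
import torch.nn as nn

class MemoryBank(nn.Module):
    def __init__(self, n_slots, dim):
        super().__init__()
        self.dim = dim
        # Each domain contributes its own block of trainable slots.
        self.blocks = nn.ParameterList([nn.Parameter(torch.randn(n_slots, dim) * 0.02)])

    def add_slots(self, n_new):
        # Adapting to a new domain: append a fresh block of slots (earlier blocks can be
        # frozen by setting requires_grad=False on them).
        self.blocks.append(nn.Parameter(torch.randn(n_new, self.dim) * 0.02))

    def read(self, query):
        slots = torch.cat(list(self.blocks), dim=0)
        attn = torch.softmax(query @ slots.T, dim=-1)   # attention over all slots
        return attn @ slots

bank = MemoryBank(n_slots=4, dim=8)
bank.add_slots(2)                     # new domain gets extra capacity
print(bank.read(torch.randn(1, 8)).shape)
```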
1 code implementation • NeurIPS 2018 • Vik Goel, Jameson Weng, Pascal Poupart
The detection of moving objects is done in an unsupervised way by exploiting structure from motion.
no code implementations • 17 Apr 2018 • Pengfei Zhu, Xin Li, Pascal Poupart, Guanghui Miao
Deep Reinforcement Learning (RL) recently emerged as one of the most competitive approaches for learning in sequential decision-making problems with fully observable environments, e.g., computer Go.
2 code implementations • COLING 2018 • Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network.
no code implementations • 6 Dec 2017 • Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
no code implementations • 12 Sep 2017 • Nabiha Asghar, Pascal Poupart, Jesse Hoey, Xin Jiang, Lili Mou
Existing neural conversational models process natural language primarily on a lexico-syntactic level, thereby ignoring one of the most crucial components of human-to-human dialogue: its affective content.
1 code implementation • 1 Sep 2017 • Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, Zhifang Sui
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
1 code implementation • 26 Apr 2017 • Pengfei Zhu, Xin Li, Pascal Poupart, Guanghui Miao
Deep Reinforcement Learning (RL) recently emerged as one of the most competitive approaches for learning in sequential decision-making problems with fully observable environments, e.g., computer Go.
no code implementations • 10 Feb 2017 • Ershad Banijamali, Ali Ghodsi, Pascal Poupart
The model consists of K networks that are trained together to learn the underlying distribution of a given data set.
1 code implementation • 19 Jan 2017 • Wilson Hsu, Agastya Kalra, Pascal Poupart
Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable.
no code implementations • SEMEVAL 2017 • Nabiha Asghar, Pascal Poupart, Xin Jiang, Hang Li
We propose an online, end-to-end, neural generative conversational model for open-domain dialogue.
no code implementations • 8 Dec 2016 • Wenchao Du, Pascal Poupart, Wei Xu
We investigate the task of inferring conversational dependencies between messages in one-on-one online chat, which has become one of the most popular forms of customer service.
no code implementations • NeurIPS 2016 • Wei-Shou Hsu, Pascal Poupart
When the number of topics (or latent groups) is unknown, the Hierarchical Dirichlet Process (HDP) provides an elegant non-parametric extension; however, it is a complex model and it is difficult to incorporate prior knowledge since the distribution over topics is implicit.
no code implementations • 19 Sep 2016 • Priyank Jaini, Pascal Poupart
The Gaussian mixture model is a classic technique for clustering and data modeling that is used in numerous applications.
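As a concrete reminder of the technique, fitting a Gaussian mixture for clustering takes a few lines with scikit-learn (standard library usage, unrelated to this paper's specific contribution):

```python
# Fitting a two-component Gaussian mixture for clustering (standard scikit-learn usage).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=+2.0, size=(100, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.round(2))       # estimated cluster means
print(gmm.predict(X[:5]))        # hard cluster assignments
```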
no code implementations • NeurIPS 2016 • Han Zhao, Pascal Poupart, Geoff Gordon
We present a unified approach for learning the parameters of Sum-Product networks (SPNs).
no code implementations • 13 Nov 2015 • Mazen Melibari, Pascal Poupart, Prashant Doshi, George Trimponias
Since SPNs represent distributions over a fixed set of variables only, we propose dynamic sum-product networks (DSPNs) as a generalization of SPNs for sequence data of varying length.
1 code implementation • 20 Apr 2015 • Han Zhao, Zhengdong Lu, Pascal Poupart
The ability to accurately model a sentence at varying stages (e.g., word-phrase-sentence) plays a central role in natural language processing.
Ranked #5 on Subjectivity Analysis on SUBJ
no code implementations • 6 Jan 2015 • Han Zhao, Mazen Melibari, Pascal Poupart
We conclude the paper with some discussion of the implications of the proof and establish a connection between the depth of an SPN and a lower bound of the tree-width of its corresponding BN.
no code implementations • 18 Jun 2014 • Han Zhao, Pascal Poupart
In contrast, maximum likelihood estimates may get trapped in local optima due to the non-convex nature of the likelihood function of latent variable models.
no code implementations • 16 Jan 2014 • Wei Li, Pascal Poupart, Peter van Beek
Previous studies have demonstrated that encoding a Bayesian network into a SAT formula and then performing weighted model counting using a backtracking search algorithm can be an effective method for exact inference.
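For intuition, weighted model counting sums the weights of all satisfying assignments of a CNF formula; the brute-force sketch below only shows the quantity being computed (the point of this line of work is that backtracking search computes it far more efficiently).

```python
# Brute-force weighted model counting over a tiny CNF formula (illustrative only;
# real solvers use backtracking search with caching instead of enumeration).
from itertools import product

# CNF clauses over variables 0..2: each literal is (var, is_positive).
clauses = [[(0, True), (1, False)],                        # x0 or not x1
           [(1, True), (2, True)]]                         # x1 or x2
weights = {0: (0.7, 0.3), 1: (0.6, 0.4), 2: (0.5, 0.5)}    # (w_true, w_false) per variable

total = 0.0
for assignment in product([True, False], repeat=3):
    if all(any(assignment[v] == pos for v, pos in clause) for clause in clauses):
        w = 1.0
        for v, val in enumerate(assignment):
            w *= weights[v][0] if val else weights[v][1]
        total += w
print(total)   # the weighted model count
```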
no code implementations • NeurIPS 2012 • Zahra Zamani, Scott Sanner, Pascal Poupart, Kristian Kersting
In recent years, point-based value iteration methods have proven to be extremely effective techniques for finding (approximately) optimal dynamic programming solutions to POMDPs when an initial set of belief states is known.
no code implementations • NeurIPS 2012 • Dongho Kim, Kee-Eung Kim, Pascal Poupart
In this paper, we consider Bayesian reinforcement learning (BRL) where actions incur costs in addition to rewards, and thus exploration has to be constrained in terms of the expected total cost while learning to maximize the expected long-term total reward.
no code implementations • NeurIPS 2011 • Omar Z. Khan, Pascal Poupart, John-Mark M. Agosta
We demonstrate that consistency with an expert's test selection leads to non-convex constraints on the model parameters.