no code implementations • 7 Aug 2023 • Jogi Suda Neto, Li Deng, Thejaswi Raya, Reza Shahbazi, Nick Liu, Adhitya Venkatesh, Miral Shah, Neeru Khosla, Rodrigo Capobianco Guido
Language models are being widely used in education.
1 code implementation • 12 Jul 2022 • Raymond Li, Ilya Valmianski, Li Deng, Xavier Amatriain, Anitha Kannan
In this paper, we propose a method for linking an open set of entities that does not require any span annotations.
1 code implementation • 17 Nov 2021 • Rhys Compton, Ilya Valmianski, Li Deng, Costa Huang, Namit Katariya, Xavier Amatriain, Anitha Kannan
We present MEDCOD, a Medically-Accurate, Emotive, Diverse, and Controllable Dialog system with a unique approach to the natural language generator module.
no code implementations • 10 Nov 2019 • Chao Zhang, Zichao Yang, Xiaodong He, Li Deng
This review provides a comprehensive analysis of recent works on multimodal deep learning from three perspectives: learning multimodal representations, fusing multimodal signals at various levels, and multimodal applications.
no code implementations • 6 Jun 2019 • Yu Liu, Li Deng, Jianshu Chen, Chang Wen Chen
Removing the need for parallel training corpora has practical significance for real-world applications, and it is one of the main goals of unsupervised learning.
1 code implementation • 31 May 2019 • Li Deng, Shuo Zhang, Krisztian Balog
Tables contain valuable knowledge in a structured form.
no code implementations • 20 Feb 2018 • Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, Xiaodong He
This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models.
no code implementations • ICLR 2018 • Ricky Loynd, Matthew Hausknecht, Lihong Li, Li Deng
Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car.
no code implementations • ICLR 2018 • Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng
Many practical reinforcement learning problems contain catastrophic states that the optimal policy visits infrequently or never.
no code implementations • NeurIPS 2017 • Jianshu Chen, Chong Wang, Lin Xiao, Ji He, Lihong Li, Li Deng
In sequential decision making, it is often important and useful for end users to understand the underlying patterns or causes that lead to the corresponding decisions.
no code implementations • 15 Nov 2017 • Zachary Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, Li Deng
We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems.
no code implementations • 29 Oct 2017 • Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu
To address this, this paper promotes image/visual-captioning-based CAPTCHAs, which are robust against machine-learning-based attacks.
2 code implementations • NAACL 2018 • Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, Dapeng Wu
We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks.
no code implementations • CVPR 2017 • Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, Li Deng
We propose a novel framework named StyleNet to address the task of generating attractive captions for images and videos with different styles.
2 code implementations • EMNLP 2017 • David Golub, Po-Sen Huang, Xiaodong He, Li Deng
We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet).
4 code implementations • ICLR 2018 • Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, Li Deng
In this paper, we present Neural Phrase-based Machine Translation (NPMT).
Ranked #7 on Machine Translation on IWSLT2015 English-German
no code implementations • 23 May 2017 • Hamid Palangi, Paul Smolensky, Xiaodong He, Li Deng
In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory.
no code implementations • 28 Feb 2017 • Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang
We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm.
no code implementations • NeurIPS 2017 • Yu Liu, Jianshu Chen, Li Deng
Although the objective is harder to optimize in its functional form, a stochastic primal-dual gradient method is developed to solve the problem effectively.
2 code implementations • ICML 2017 • Chong Wang, Yining Wang, Po-Sen Huang, Abdel-rahman Mohamed, Dengyong Zhou, Li Deng
The probability of a segmented sequence is calculated as the product of the probabilities of all its segments, where each segment is modeled using existing tools such as recurrent neural networks.
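The factorization described above can be sketched directly: marginalize over all segmentations, scoring each as the product of its segment probabilities. This is a minimal enumeration sketch (the paper uses dynamic programming and neural segment models; `segment_prob` here is a hypothetical stand-in for any per-segment scorer):

```python
from itertools import combinations

def segmentations(seq):
    """Yield every way to split seq into contiguous, non-empty segments."""
    n = len(seq)
    for k in range(n):  # choose k cut points among the n-1 gaps
        for cuts in combinations(range(1, n), k):
            bounds = [0, *cuts, n]
            yield [tuple(seq[a:b]) for a, b in zip(bounds, bounds[1:])]

def sequence_probability(seq, segment_prob):
    """Marginal probability of seq: sum over segmentations of the
    product of per-segment probabilities, as in the factorization
    the snippet describes."""
    total = 0.0
    for seg in segmentations(seq):
        p = 1.0
        for segment in seg:
            p *= segment_prob(segment)
        total += p
    return total
```

In practice the exponential sum over segmentations is computed with a forward-style dynamic program rather than explicit enumeration.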
2 code implementations • 8 Feb 2017 • Zhe Gan, P. D. Singh, Ameet Joshi, Xiaodong He, Jianshu Chen, Jianfeng Gao, Li Deng
Connecting different text attributes associated with the same entity (conflation) is important in business data analytics since it could help merge two different tables in a database to provide a more comprehensive profile of an entity.
1 code implementation • 3 Dec 2016 • Xuesong Yang, Yun-Nung Chen, Dilek Hakkani-Tur, Paul Crook, Xiujun Li, Jianfeng Gao, Li Deng
Natural language understanding and dialogue policy learning are both essential in conversational systems that predict the next system actions in response to a current user utterance.
12 code implementations • 28 Nov 2016 • Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering.
1 code implementation • CVPR 2017 • Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, Li Deng
The degree to which each member of the ensemble is used to generate an image caption is tied to the image-dependent probability of the corresponding tag.
no code implementations • 3 Nov 2016 • Zachary C. Lipton, Kamyar Azizzadenesheli, Abhishek Kumar, Lihong Li, Jianfeng Gao, Li Deng
We introduce intrinsic fear (IF), a learned reward shaping that guards DRL agents against periodic catastrophes.
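The shaping idea can be sketched in a few lines: a learned "fear model" estimates how close a state is to catastrophe, and that estimate is subtracted from the environment reward. This is a minimal sketch, assuming a caller-supplied `fear_model`; the paper trains a neural classifier on states drawn from separate danger and safe replay buffers:

```python
class IntrinsicFear:
    """Intrinsic-fear reward shaping: penalize states the fear model
    judges likely to precede a catastrophe, scaled by a coefficient."""

    def __init__(self, fear_model, fear_coefficient=1.0):
        self.fear_model = fear_model          # state -> P(catastrophe soon)
        self.fear_coefficient = fear_coefficient

    def shape(self, state, reward):
        """Return the shaped reward used for Q-learning updates."""
        return reward - self.fear_coefficient * self.fear_model(state)
```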
no code implementations • 12 Sep 2016 • Yun-Nung Chen, Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Jianfeng Gao, Li Deng
Natural language understanding (NLU) is a core component of a spoken dialogue system.
1 code implementation • ACL 2017 • Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, Li Deng
In this paper, we address this limitation by replacing symbolic queries with an induced "soft" posterior distribution over the KB that indicates which entities the user is interested in.
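The "soft" lookup above can be sketched as a posterior over KB rows: each entity is weighted by how compatible its attributes are with the dialogue system's slot-value beliefs. This is a minimal sketch with hypothetical input shapes (`kb_rows` as dicts of slot to value, `slot_beliefs` as per-slot value distributions); the paper's version is differentiable end-to-end:

```python
def soft_kb_posterior(kb_rows, slot_beliefs):
    """Replace a hard symbolic query with a posterior over entities:
    each row's weight is the product, over slots, of the belief
    probability assigned to that row's attribute value."""
    weights = []
    for row in kb_rows:
        w = 1.0
        for slot, beliefs in slot_beliefs.items():
            w *= beliefs.get(row[slot], 0.0)
        weights.append(w)
    total = sum(weights)
    if total == 0.0:
        return [1.0 / len(kb_rows)] * len(kb_rows)  # uninformative fallback
    return [w / total for w in weights]
```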
no code implementations • 17 Aug 2016 • Zachary C. Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, Li Deng
We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems.
1 code implementation • EMNLP 2016 • Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, Li Deng
We develop a novel bi-directional attention model for dependency parsing, which learns to agree on headword predictions from the forward and backward parsing directions.
Ranked #4 on Chinese Dependency Parsing on Chinese Pennbank
no code implementations • 15 Jun 2016 • Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng
In particular, we show that with regularization via a generative model, learning with the proposed unsupervised objective function converges to an optimal solution.
1 code implementation • EMNLP 2016 • Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong Li, Li Deng
We introduce an online popularity prediction and tracking task as a benchmark task for reinforcement learning with a combinatorial, natural language action space.
no code implementations • 12 Jan 2016 • Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng
In this paper we present the initial development of a general theory for mapping inference in predicate logic to computation over Tensor Product Representations (TPRs; Smolensky (1990), Smolensky & Legendre (2006)).
no code implementations • 19 Nov 2015 • Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, Li Deng, Paul Smolensky
Question answering tasks have shown remarkable progress with distributed vector representation.
3 code implementations • ACL 2016 • Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf
This paper introduces a novel architecture for reinforcement learning with deep neural networks designed to handle state and action spaces characterized by natural language, as found in text-based games.
16 code implementations • CVPR 2016 • Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola
Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively.
Ranked #5 on Visual Question Answering (VQA) on VQA v1 test-std
no code implementations • 10 Sep 2015 • Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, Ji He
Successful applications of reinforcement learning in real-world problems often require dealing with partially observable states.
no code implementations • 20 Aug 2015 • Hamid Palangi, Rabab Ward, Li Deng
As the proposed method is a data-driven method, it is only applicable when training data is available.
1 code implementation • NeurIPS 2015 • Jianshu Chen, Ji He, Yelong Shen, Lin Xiao, Xiaodong He, Jianfeng Gao, Xinying Song, Li Deng
We develop a fully discriminative learning approach for the supervised Latent Dirichlet Allocation (LDA) model using back-propagation (i.e., BP-sLDA), which maximizes the posterior probability of the prediction variable given the input document.
no code implementations • IJCNLP 2015 • Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell
Two recent approaches have achieved state-of-the-art results in image captioning.
no code implementations • 13 Apr 2015 • Xiaodong He, Rupesh Srivastava, Jianfeng Gao, Li Deng
The learned representations attempt to capture the combination of various visual concepts and cues.
no code implementations • 11 Apr 2015 • Yelong Shen, Ruoming Jin, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li Deng
Co-occurrence data is a common and important information source in many areas, such as word co-occurrence in sentences, friend co-occurrence in social networks, and product co-occurrence in commercial transaction data; it contains rich correlation and clustering information about the items.
no code implementations • 24 Feb 2015 • Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, Rabab Ward
The results show that the method proposed in this paper significantly outperforms it on the web document retrieval task.
9 code implementations • 20 Dec 2014 • Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, Li Deng
We consider learning representations of entities and relations in KBs using the neural-embedding approach.
Ranked #9 on Link Property Prediction on ogbl-biokg
1 code implementation • CVPR 2015 • Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig
The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage.
Ranked #1 on Image Captioning on COCO Captions test
no code implementations • 14 Nov 2014 • Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, Li Deng
In this paper we present a unified framework for modeling multi-relational representations, scoring, and learning, and conduct an empirical study of several recent multi-relational embedding models under the framework.
no code implementations • 28 Nov 2013 • Jianfeng Gao, Xiaodong He, Wen-tau Yih, Li Deng
The results show that the new semantic-based phrase translation model significantly improves the performance of a state-of-the-art phrase-based statistical machine translation system, leading to a gain of 0.7-1.0 BLEU points.
no code implementations • 24 Nov 2013 • Jianshu Chen, Li Deng
We present an architecture of a recurrent neural network (RNN) with a fully-connected deep neural network (DNN) as its feature extractor.
no code implementations • 13 Nov 2013 • Hamid Palangi, Li Deng, Rabab K. Ward
In this paper, we devise a special technique that takes advantage of this linearity in the output units of an ESN to learn the input and recurrent matrices.
4 code implementations • CIKM 2013 • Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, Larry Heck
The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data.
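The training objective described above can be sketched concretely: the conditional likelihood of a clicked document is a softmax over smoothed cosine similarities between the query embedding and each candidate document embedding. This is a minimal sketch of the loss for one query with precomputed embedding vectors; the paper's `gamma` smoothing factor is kept, and the log-sum-exp is stabilized:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dssm_loss(query_vec, doc_vecs, clicked_index, gamma=1.0):
    """Negative log conditional likelihood of the clicked document:
    a softmax over gamma-smoothed cosine similarities between the
    query and each candidate document."""
    sims = [gamma * cosine(query_vec, d) for d in doc_vecs]
    m = max(sims)
    log_z = m + math.log(sum(math.exp(s - m) for s in sims))
    return -(sims[clicked_index] - log_z)
```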
no code implementations • NeurIPS 2012 • Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell
The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous, often more complicated, methods on several vision and speech benchmarks.
Ranked #219 on Image Classification on CIFAR-10
no code implementations • Signal Processing Magazine 2012 • Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, Brian Kingsbury
Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input.
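The GMM state-likelihood computation described above can be sketched directly: the log-likelihood of one acoustic frame under a diagonal-covariance Gaussian mixture, using log-sum-exp for numerical stability. A minimal sketch, assuming equal-length lists of floats for the frame, per-component means, and per-component variances:

```python
import math

def gmm_log_likelihood(frame, weights, means, variances):
    """Log-likelihood of an acoustic frame under a diagonal-covariance
    GMM: log sum_k w_k * prod_d N(x_d; mu_kd, var_kd), computed
    stably with log-sum-exp."""
    log_components = []
    for w, mu, var in zip(weights, means, variances):
        log_p = math.log(w)
        for x, m, v in zip(frame, mu, var):
            log_p += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        log_components.append(log_p)
    mx = max(log_components)
    return mx + math.log(sum(math.exp(l - mx) for l in log_components))
```

In an HMM-GMM recognizer, this quantity is evaluated for every HMM state at every frame to score how well each state fits the acoustic input.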