1 code implementation • ACL (dialdoc) 2021 • Yan Xu, Etsuko Ishii, Genta Indra Winata, Zhaojiang Lin, Andrea Madotto, Zihan Liu, Peng Xu, Pascale Fung
Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs.
1 code implementation • LREC 2022 • Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Peng Xu, Cheuk Tung Yiu, Rita Frieske, Holy Lovenia, Genta Winata, Qifeng Chen, Xiaojuan Ma, Bertram Shi, Pascale Fung
With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component to facilitate driving and provide extra functionalities.
no code implementations • 18 Sep 2023 • Yihan Wu, Tao Chang, Peng Xu, Yangsong Zhang
Graph Neural Networks (GNNs) have received considerable attention since their introduction.
1 code implementation • 7 Sep 2023 • Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, Xudong Lu, Shuai Ren, Yafei Wen, Xiaoxin Chen, Xiangyu Yue, Hongsheng Li, Yu Qiao
During training, we adopt a learnable bind network to align the embedding space between LLaMA and ImageBind's image encoder.
no code implementations • 6 Sep 2023 • David B. D'Ambrosio, Jonathan Abelian, Saminda Abeyruwan, Michael Ahn, Alex Bewley, Justin Boyd, Krzysztof Choromanski, Omar Cortes, Erwin Coumans, Tianli Ding, Wenbo Gao, Laura Graesser, Atil Iscen, Navdeep Jaitly, Deepali Jain, Juhana Kangaspunta, Satoshi Kataoka, Gus Kouretas, Yuheng Kuang, Nevena Lazic, Corey Lynch, Reza Mahjourian, Sherry Q. Moore, Thinh Nguyen, Ken Oslund, Barney J Reed, Krista Reymann, Pannag R. Sanketi, Anish Shankar, Pierre Sermanet, Vikas Sindhwani, Avi Singh, Vincent Vanhoucke, Grace Vesom, Peng Xu
We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets.
1 code implementation • 25 Aug 2023 • Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo
To tackle this issue, we introduce an Omnidirectionally calibrated Quantization (OmniQuant) technique for LLMs, which achieves good performance in diverse quantization settings while maintaining the computational efficiency of PTQ by efficiently optimizing various quantization parameters.
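The learnable clipping at the heart of this kind of quantization can be illustrated with a minimal sketch: uniform weight quantization with a clipping ratio, where shrinking the clip range trades clipping error for finer resolution. The grid search below stands in for gradient-based optimization of the clipping parameter; the function and values are illustrative assumptions, not the OmniQuant code.

```python
import numpy as np

def quantize_weights(w, n_bits=4, clip_ratio=1.0):
    """Uniformly quantize a weight tensor with a clipping ratio.

    `clip_ratio` plays the role of a learnable clipping parameter:
    shrinking the range clips outliers but gives the remaining
    weights finer resolution. (Illustrative sketch only.)
    """
    w_max = clip_ratio * np.abs(w).max()
    scale = w_max / (2 ** (n_bits - 1) - 1)
    q = np.clip(np.round(w / scale), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q * scale  # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))

# Pick the clipping ratio that minimizes reconstruction error --
# a grid-search stand-in for learning the parameter by gradient descent.
best = min((np.mean((w - quantize_weights(w, 4, r)) ** 2), r)
           for r in np.linspace(0.5, 1.0, 11))
print("best clip ratio:", best[1])
```

Because the grid includes `clip_ratio = 1.0`, the searched error is never worse than plain min-max quantization; learned clipping generalizes this idea to per-channel parameters trained end-to-end.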
1 code implementation • 24 Aug 2023 • Santiago Estrada, David Kügler, Emad Bahrami, Peng Xu, Dilshad Mousa, Monique M. B. Breteler, N. Ahmad Aziz, Martin Reuter
The hypothalamus plays a crucial role in the regulation of a broad range of physiological, behavioural, and cognitive functions.
no code implementations • 15 Aug 2023 • Jie Huang, Wei Ping, Peng Xu, Mohammad Shoeybi, Kevin Chen-Chuan Chang, Bryan Catanzaro
In this paper, we investigate the in-context learning ability of retrieval-augmented encoder-decoder language models.
no code implementations • 8 Aug 2023 • Juan Wen, Shupeng Cheng, Peng Xu, BoWen Zhou, Radu Timofte, Weiyan Hou, Luc van Gool
Super Resolution (SR) and Camouflaged Object Detection (COD) are two hot topics in computer vision with various joint applications.
1 code implementation • 7 Aug 2023 • Wenqi Shao, Yutao Hu, Peng Gao, Meng Lei, Kaipeng Zhang, Fanqing Meng, Peng Xu, Siyuan Huang, Hongsheng Li, Yu Qiao, Ping Luo
Secondly, it conducts an in-depth analysis of LVLMs' predictions using the ChatGPT Ensemble Evaluation (CEE), which leads to a robust and accurate evaluation and exhibits improved alignment with human evaluation compared to the word matching approach.
1 code implementation • 28 Jul 2023 • Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich
Our goal is to enable a single end-to-end trained model to both learn to map robot observations to actions and enjoy the benefits of large-scale pretraining on language and vision-language data from the web.
no code implementations • 4 Jul 2023 • Allen Z. Ren, Anushri Dixit, Alexandra Bodrova, Sumeet Singh, Stephen Tu, Noah Brown, Peng Xu, Leila Takayama, Fei Xia, Jake Varley, Zhenjia Xu, Dorsa Sadigh, Andy Zeng, Anirudha Majumdar
Large language models (LLMs) exhibit a wide range of promising capabilities -- from step-by-step planning to commonsense reasoning -- that may provide utility for robots, but remain prone to confidently hallucinated predictions.
no code implementations • 29 Jun 2023 • Anthony Francis, Claudia Pérez-D'Arpino, Chengshu Li, Fei Xia, Alexandre Alahi, Rachid Alami, Aniket Bera, Abhijat Biswas, Joydeep Biswas, Rohan Chandra, Hao-Tien Lewis Chiang, Michael Everett, Sehoon Ha, Justin Hart, Jonathan P. How, Haresh Karnan, Tsang-Wei Edward Lee, Luis J. Manso, Reuth Mirksy, Sören Pirk, Phani Teja Singamaneni, Peter Stone, Ada V. Taylor, Peter Trautman, Nathan Tsoi, Marynel Vázquez, Xuesu Xiao, Peng Xu, Naoki Yokoyama, Alexander Toshev, Roberto Martín-Martín
A major challenge to deploying robots widely is navigation in human-populated environments, commonly referred to as social robot navigation.
no code implementations • 27 Jun 2023 • Peng Xu, Zhiyu Xiang, Chenyu Qiao, Jingyun Fu, Xijun Zhao
Then a novel adaptive multi-modal cross-entropy loss is proposed that encourages the network to generate different distribution patterns for edge and non-edge pixels.
1 code implementation • 15 Jun 2023 • Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, Ping Luo
Large Vision-Language Models (LVLMs) have recently played a dominant role in multimodal vision-language learning.
no code implementations • 14 Jun 2023 • Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia
However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot.
1 code implementation • ICCV 2023 • Mengzhao Chen, Wenqi Shao, Peng Xu, Mingbao Lin, Kaipeng Zhang, Fei Chao, Rongrong Ji, Yu Qiao, Ping Luo
Token compression aims to speed up large-scale vision transformers (e.g., ViTs) by pruning (dropping) or merging tokens.
1 code implementation • 23 May 2023 • Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu
Neural architecture search (NAS) for graph neural networks (GNNs), called NAS-GNNs, has achieved significant performance gains over manually designed GNN architectures.
1 code implementation • 23 May 2023 • Mingkun Li, Peng Xu, Chun-Guang Li, Jun Guo
This paper considers a novel and challenging problem: unsupervised long-term person re-identification with clothes change.
Ranked #1 on Unsupervised Person Re-Identification on PRCC
1 code implementation • 10 May 2023 • Jiaqi Sun, Lin Zhang, Guangyi Chen, Kun Zhang, Peng Xu, Yujiu Yang
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification.
1 code implementation • 21 Apr 2023 • Deng-Ping Fan, Ge-Peng Ji, Peng Xu, Ming-Ming Cheng, Christos Sakaridis, Luc van Gool
Concealed scene understanding (CSU) is a hot computer vision topic aiming to perceive objects exhibiting camouflage.
1 code implementation • 13 Apr 2023 • Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro
To answer it, we perform a comprehensive study on a scalable pre-trained retrieval-augmented LM (i.e., RETRO) compared with standard GPT and retrieval-augmented GPT incorporated at fine-tuning or inference stages.
no code implementations • 12 Apr 2023 • Ge-Peng Ji, Deng-Ping Fan, Peng Xu, Ming-Ming Cheng, BoWen Zhou, Luc van Gool
Segmenting anything is a ground-breaking step toward artificial general intelligence, and the Segment Anything Model (SAM) greatly fosters the foundation models for computer vision.
no code implementations • 27 Feb 2023 • Yao Zhang, Hua-Ying Liu, Xiaoyi Liu, Peng Xu, Xiang Dong, Pengfei Fan, Xiaohui Tian, Hua Yu, Dong Pan, Zhijun Yin, Guilu Long, Shi-Ning Zhu, Zhenda Xie
Free-space optical communication (FSO) can achieve fast, secure and license-free communication without the need for physical cables, making it a cost-effective, energy-efficient and flexible solution when a fiber connection is unavailable.
1 code implementation • 13 Jan 2023 • Yudong Pan, Ning li, Yangsong Zhang, Peng Xu, Dezhong Yao
This study substantiates the feasibility of the proposed method to extend the data length for short-time SSVEP signals for developing a high-performance BCI system.
1 code implementation • ICCV 2023 • Peng Xu, Xiatian Zhu
Long-term re-id with clothes change is a challenging problem in surveillance AI.
1 code implementation • 13 Dec 2022 • Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich
By transferring knowledge from large, diverse, task-agnostic datasets, modern machine learning models can solve specific downstream tasks either zero-shot or with small task-specific datasets to a high level of performance.
no code implementations • 11 Nov 2022 • Vashist Avadhanula, Omar Abdul Baki, Hamsa Bastani, Osbert Bastani, Caner Gocmen, Daniel Haimovich, Darren Hwang, Dima Karamshuk, Thomas Leeper, Jiayuan Ma, Gregory Macnamara, Jake Mullett, Christopher Palow, Sung Park, Varun S Rajagopal, Kevin Schaeffer, Parikshit Shah, Deeksha Sinha, Nicolas Stier-Moses, Peng Xu
We describe the current content moderation strategy employed by Meta to remove policy-violating content from its platforms.
no code implementations • 25 Oct 2022 • Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan J. Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro
For cross-domain and cross-dataset cases, we show that (a) Adapter (Houlsby et al., 2019) performs the best amongst all the PERMs studied here, and (b) it outperforms finetuning if the task dataset is below a certain size.
no code implementations • 19 Oct 2022 • Thomas Lew, Sumeet Singh, Mario Prats, Jeffrey Bingham, Jonathan Weisz, Benjie Holson, Xiaohan Zhang, Vikas Sindhwani, Yao Lu, Fei Xia, Peng Xu, Tingnan Zhang, Jie Tan, Montserrat Gonzalez
This problem is challenging, as it requires planning wiping actions while reasoning over uncertain latent dynamics of crumbs and spills captured via high-dimensional visual observations.
no code implementations • 12 Oct 2022 • Dan Su, Mostofa Patwary, Shrimai Prabhumoye, Peng Xu, Ryan Prenger, Mohammad Shoeybi, Pascale Fung, Anima Anandkumar, Bryan Catanzaro
Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge.
1 code implementation • 9 Oct 2022 • Jianbo Chen, Yangsong Zhang, Yudong Pan, Peng Xu, Cuntai Guan
The proposed model validates the feasibility of deep learning models based on the Transformer structure for the SSVEP classification task, and could serve as a potential model to alleviate the calibration procedure in practical applications of SSVEP-based BCI systems.
no code implementations • 22 Sep 2022 • Xuesu Xiao, Tingnan Zhang, Krzysztof Choromanski, Edward Lee, Anthony Francis, Jake Varley, Stephen Tu, Sumeet Singh, Peng Xu, Fei Xia, Sven Mikael Persson, Dmitry Kalashnikov, Leila Takayama, Roy Frostig, Jie Tan, Carolina Parada, Vikas Sindhwani
Despite decades of research, existing navigation systems still face real-world challenges when deployed in the wild, e.g., in cluttered home environments or in human-occupied public spaces.
no code implementations • 13 Jun 2022 • Peng Xu, Xiatian Zhu, David A. Clifton
The Transformer is a promising neural network learner and has achieved great success in various machine learning tasks.
3 code implementations • 9 Jun 2022 • Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, Bryan Catanzaro
In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation.
1 code implementation • Findings (NAACL) 2022 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
no code implementations • 18 Apr 2022 • Shuojia Zou, Chen Li, Hongzan Sun, Peng Xu, Jiawei Zhang, Pingli Ma, YuDong Yao, Xinyu Huang, Marcin Grzegorzek
The detection of tiny objects in microscopic videos is a problematic point, especially in large-scale experiments.
no code implementations • 17 Apr 2022 • Xuejiao Tang, Tai Le Quy, Eirini Ntoutsi, Kea Turner, Vasile Palade, Israat Haque, Peng Xu, Chris Brown, Wenbin Zhang
Given a question-image input, the Visual Commonsense Reasoning (VCR) model can predict an answer with the corresponding rationale, which requires inference ability from the real world.
3 code implementations • 4 Apr 2022 • Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions, while value functions associated with these skills provide the grounding necessary to connect this knowledge to a particular physical environment.
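The combination described above, an LLM proposing which skill would be a useful next step and value functions grounding that proposal in what is currently feasible, can be sketched with toy numbers. All scores below are made up for illustration; the real system scores skills with language-model likelihoods and learned affordance value functions.

```python
# Toy sketch of skill selection: the LLM scores usefulness of each
# skill for the instruction, a value function scores feasibility in
# the current scene, and the robot executes the highest product.
llm_score = {          # p(skill is a useful next step | instruction)
    "pick up sponge": 0.60,
    "go to table":    0.25,
    "pick up apple":  0.15,
}
value_fn = {           # learned affordance: p(skill succeeds | state)
    "pick up sponge": 0.10,   # sponge not visible from here
    "go to table":    0.90,
    "pick up apple":  0.05,
}
combined = {s: llm_score[s] * value_fn[s] for s in llm_score}
best_skill = max(combined, key=combined.get)
print(best_skill)  # → "go to table": feasibility grounds the plan
```

Even though the LLM prefers "pick up sponge", the low affordance value vetoes it, which is exactly the grounding role the value functions play.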
no code implementations • 14 Feb 2022 • Dan Su, Peng Xu, Pascale Fung
Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage.
1 code implementation • 8 Feb 2022 • Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro
In this work, we systematically explore domain-adaptive training to reduce the toxicity of language models.
no code implementations • 8 Feb 2022 • Shuhao Cao, Peng Xu, David A. Clifton
"Masked Autoencoders (MAE) Are Scalable Vision Learners" revolutionizes self-supervised learning in that it not only achieves the state of the art for image pre-training, but is also a milestone bridging the gap between visual and linguistic masked autoencoding (BERT-style) pre-training.
no code implementations • 7 Feb 2022 • Mingkun Li, Peng Xu, Xiatian Zhu, Jun Guo
We investigate unsupervised person re-identification (Re-ID) with clothes change, a new challenging problem with more practical usability and scalability to real-world deployment.
1 code implementation • 11 Jan 2022 • Wenliang Dai, Samuel Cahyawijaya, Tiezheng Yu, Elham J. Barezi, Peng Xu, Cheuk Tung Shadow Yiu, Rita Frieske, Holy Lovenia, Genta Indra Winata, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung
With the rise of deep learning and intelligent vehicles, the smart assistant has become an essential in-car component to facilitate driving and provide extra functionalities.
1 code implementation • LREC 2022 • Tiezheng Yu, Rita Frieske, Peng Xu, Samuel Cahyawijaya, Cheuk Tung Shadow Yiu, Holy Lovenia, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung
We further conduct experiments with Fairseq S2T Transformer, a state-of-the-art ASR model, on the biggest existing dataset, Common Voice zh-HK, and our proposed MDCC, and the results show the effectiveness of our dataset.
2 code implementations • LREC 2022 • Holy Lovenia, Samuel Cahyawijaya, Genta Indra Winata, Peng Xu, Xu Yan, Zihan Liu, Rita Frieske, Tiezheng Yu, Wenliang Dai, Elham J. Barezi, Qifeng Chen, Xiaojuan Ma, Bertram E. Shi, Pascale Fung
ASCEND (A Spontaneous Chinese-English Dataset) is a high-quality Mandarin Chinese-English code-switching corpus built on spontaneous multi-turn conversational dialogue sources collected in Hong Kong.
no code implementations • 4 Dec 2021 • Wei Yang, Peng Xu, Yanshuai Cao
Moreover, even the questions pertinent to a given domain, which are the input of a semantic parsing system, might not be readily available, especially in cross-domain semantic parsing.
no code implementations • ICLR 2022 • Dhruv Shah, Peng Xu, Yao Lu, Ted Xiao, Alexander Toshev, Sergey Levine, Brian Ichter
Hierarchical reinforcement learning aims to enable this by providing a bank of low-level skills as action abstractions.
no code implementations • Findings (EMNLP) 2021 • Peng Xu, Xinchi Chen, Xiaofei Ma, Zhiheng Huang, Bing Xiang
In this work, we propose to use a graph attention network on top of the available pretrained Transformers model to learn document embeddings.
no code implementations • 12 Oct 2021 • Peng Xu, Davis Liang, Zhiheng Huang, Bing Xiang
We propose a simple strategy to obtain an extractive answer span from the generative model by leveraging the decoder cross-attention patterns.
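One simple way to realize the strategy this entry describes, turning decoder cross-attention into an extractive answer span, is to aggregate attention mass onto the input tokens and pick the highest-scoring contiguous span. The aggregation rule and span scoring below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def extract_span(cross_attn, max_len=5):
    """Pick an answer span from decoder cross-attention.

    cross_attn: (decoder_steps, input_tokens) attention weights.
    Sketch: sum attention mass over decoder steps, then choose the
    contiguous span of at most `max_len` tokens with the most mass.
    """
    token_scores = cross_attn.sum(axis=0)
    n = len(token_scores)
    best, best_span = -1.0, (0, 0)
    for i in range(n):
        for j in range(i, min(i + max_len, n)):
            s = token_scores[i:j + 1].sum()
            if s > best:
                best, best_span = s, (i, j)
    return best_span  # inclusive (start, end) token indices

# Two decoder steps attending over five input tokens.
attn = np.array([[0.05, 0.05, 0.4, 0.4, 0.1],
                 [0.10, 0.10, 0.3, 0.4, 0.1]])
print(extract_span(attn, max_len=2))  # → (2, 3)
```

In practice the attention would come from a trained generative QA model (averaged over heads and layers); the point of the sketch is that no extractive head is needed to recover a span.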
no code implementations • 27 Sep 2021 • Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
Transformer models, which leverage architectural improvements like self-attention, perform remarkably well on Natural Language Processing (NLP) tasks.
1 code implementation • ACL 2021 • Alexander Hanbo Li, Patrick Ng, Peng Xu, Henghui Zhu, Zhiguo Wang, Bing Xiang
However, a large amount of the world's knowledge is stored in structured databases and needs to be accessed using query languages such as SQL.
no code implementations • ACL 2021 • Peng Xu, Wenjie Zi, Hamidreza Shahidi, Ákos Kádár, Keyi Tang, Wei Yang, Jawad Ateeq, Harsh Barot, Meidan Alon, Yanshuai Cao
A natural language database interface (NLDB) can democratize data-driven insights for non-technical users.
1 code implementation • 7 Jun 2021 • Etsuko Ishii, Yan Xu, Genta Indra Winata, Zhaojiang Lin, Andrea Madotto, Zihan Liu, Peng Xu, Pascale Fung
Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs.
1 code implementation • ACL (RepL4NLP) 2021 • Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung
Experimental results illustrate that our model can significantly outperform existing strong baselines in cross-lingual and cross-domain settings, and our model can also achieve a good generalization ability on target languages of target domains.
1 code implementation • 5 Jun 2021 • Zhaojiang Lin, Andrea Madotto, Genta Indra Winata, Peng Xu, Feijun Jiang, Yuxiang Hu, Chen Shi, Pascale Fung
However, existing datasets for end-to-end ToD modeling are limited to a single language, hindering the development of robust end-to-end ToD systems for multilingual countries and regions.
1 code implementation • 31 May 2021 • Peng Xu, Xiatian Zhu
Currently, one of the most significant limitations in this field is the lack of a large realistic benchmark.
no code implementations • 11 Mar 2021 • Qingli Jing, Hong Qian, Peng Xu
In this work, we apply the Monte Carlo wave packet method to study the ultrafast nuclear dynamics following inner-shell photoionization of N2 exposed to an ultrashort intense X-ray pulse.
Atomic Physics • Atomic and Molecular Clusters
1 code implementation • ACL 2021 • Peng Xu, Dhruv Kumar, Wei Yang, Wenjie Zi, Keyi Tang, Chenyang Huang, Jackie Chi Kit Cheung, Simon J. D. Prince, Yanshuai Cao
This work shows that this does not always need to be the case: with proper initialization and optimization, the benefits of very deep transformers can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension.
no code implementations • CUHK Course IERG5350 2020 • Han Ma, Peng Xu
In this paper, we try to solve the mobile robot exploration problem in a 2D indoor office environment using deep reinforcement learning.
no code implementations • 6 Dec 2020 • Zhonghua Zheng, Joseph Ching, Jeffrey H. Curtis, Yu Yao, Peng Xu, Matthew West, Nicole Riemer
Here we developed a simple but effective unsupervised learning approach to regionalize predictions of global aerosol mixing state indices.
no code implementations • 8 Nov 2020 • Taha Ameen ur Rahman, Alton S. Barbehenn, Xinan Chen, Hassan Dbouk, James A. Douglas, Yuncong Geng, Ian George, John B. Harvill, Sung Woo Jeon, Kartik K. Kansal, Kiwook Lee, Kelly A. Levick, Bochao Li, Ziyue Li, Yashaswini Murthy, Adarsh Muthuveeru-Subramaniam, S. Yagiz Olmez, Matthew J. Tomei, Tanya Veeravalli, Xuechao Wang, Eric A. Wayman, Fan Wu, Peng Xu, Shen Yan, Heling Zhang, Yibo Zhang, Yifan Zhang, Yibo Zhao, Sourya Basu, Lav R. Varshney
Many information sources are not just sequences of distinguishable symbols but rather have invariances governed by alternative counting paradigms such as permutations, combinations, and partitions.
Information Theory
no code implementations • EMNLP 2020 • Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Raul Puri, Pascale Fung, Anima Anandkumar, Bryan Catanzaro
We showcase the controllability of our model by replacing the keywords used to generate stories and re-running the generation process.
1 code implementation • EMNLP 2020 • Zihan Liu, Genta Indra Winata, Peng Xu, Zhaojiang Lin, Pascale Fung
Despite the promising results of current cross-lingual models for spoken language understanding systems, they still suffer from imperfect cross-lingual representation alignments between the source and target languages, which makes the performance sub-optimal.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
In this paper, we first review absolute position embeddings and existing methods for relative position embeddings.
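Relative position embeddings of the kind this entry reviews typically add a learned bias, indexed by the clipped offset j − i, to the attention logits. A generic NumPy sketch of one common form follows; the clipping scheme and bias values are assumptions for illustration, not necessarily the exact variants studied in the paper.

```python
import numpy as np

def relative_attention_logits(q, k, rel_bias):
    """Scaled dot-product attention logits plus a relative-position bias.

    rel_bias: 1-D array of length 2 * max_dist + 1; entry (d + max_dist)
    is the learned scalar added when key j sits at clipped offset
    d = j - i from query i. (Generic sketch of relative embeddings.)
    """
    n, d_model = q.shape
    logits = q @ k.T / np.sqrt(d_model)
    max_dist = (len(rel_bias) - 1) // 2
    for i in range(n):
        for j in range(n):
            dist = np.clip(j - i, -max_dist, max_dist)
            logits[i, j] += rel_bias[dist + max_dist]
    return logits

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
# Bias favoring attention to the token itself (offset 0).
rel_bias = np.array([-1.0, 0.0, 1.0, 0.0, -1.0])
logits = relative_attention_logits(q, k, rel_bias)
print(logits.shape)  # → (4, 4)
```

Because the bias depends only on the offset j − i (clipped beyond max_dist), the same parameters are shared across all absolute positions, which is what lets relative schemes generalize to longer sequences than those seen in training.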
1 code implementation • 22 Sep 2020 • Davis Liang, Peng Xu, Siamak Shakeri, Cicero Nogueira dos Santos, Ramesh Nallapati, Zhiheng Huang, Bing Xiang
In some cases, our model trained on synthetic data can even outperform the same model trained on real data.
no code implementations • 21 Aug 2020 • Peng Xu, Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Pascale Fung
Most emotion recognition methods tackle the emotion understanding task by considering each emotion independently, ignoring their fuzzy nature and the interconnections among them.
Ranked #3 on Emotion Classification on SemEval 2018 Task 1E-c
no code implementations • 17 Aug 2020 • Yi Zhang, Qin Yang, Lifu Zhang, Branko Celler, Steven Su, Peng Xu, Dezhong Yao
Causal decomposition depicts a cause-effect relationship that is not based on the concept of prediction, but based on the phase dependence of time series.
1 code implementation • 7 Jul 2020 • Peng Xu, Yongye Huang, Tongtong Yuan, Tao Xiang, Timothy M. Hospedales, Yi-Zhe Song, Liang Wang
Specifically, we use our dual-branch architecture as a universal representation framework to design two sketch-specific deep models: (i) We propose a deep hashing model for sketch retrieval, where a novel hashing loss is specifically designed to accommodate both the abstract and messy traits of sketches.
1 code implementation • ACL 2020 • Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Peng Xu, Pascale Fung
An increasing number of people in the world today speak a mixed-language as a result of being multilingual.
1 code implementation • ACL 2020 • Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung
In this paper, we propose a Coarse-to-fine approach (Coach) for cross-domain slot filling.
2 code implementations • 28 Mar 2020 • Zhaojiang Lin, Genta Indra Winata, Peng Xu, Zihan Liu, Pascale Fung
Despite the great promise of Transformers in many sequence modeling tasks (e.g., machine translation), their deterministic nature hinders them from generalizing to high entropy tasks such as dialogue response generation.
no code implementations • 16 Mar 2020 • Zhiheng Huang, Peng Xu, Davis Liang, Ajay Mishra, Bing Xiang
Prior to the transformer era, bidirectional Long Short-Term Memory (BLSTM) has been the dominant modeling architecture for neural machine translation and question answering.
Ranked #1 on Text Classification on GLUE RTE
1 code implementation • 4 Mar 2020 • Genta Indra Winata, Samuel Cahyawijaya, Zihan Liu, Zhaojiang Lin, Andrea Madotto, Peng Xu, Pascale Fung
The great variability and complex characteristics of accents create a major challenge for training a robust and accent-agnostic automatic speech recognition (ASR) system.
Audio and Speech Processing • Sound
no code implementations • 21 Feb 2020 • Peng Xu, Kun Liu, Tao Xiang, Timothy M. Hospedales, Zhanyu Ma, Jun Guo, Yi-Zhe Song
Existing sketch-analysis work studies sketches depicting static objects or scenes.
1 code implementation • 20 Feb 2020 • Sehoon Ha, Peng Xu, Zhenyu Tan, Sergey Levine, Jie Tan
In this paper, we develop a system for learning legged locomotion policies with deep RL in the real world with minimal human effort.
1 code implementation • 3 Feb 2020 • Peng Xu, Zeyu Song, Qiyue Yin, Yi-Zhe Song, Liang Wang
In this paper, we tackle for the first time, the problem of self-supervised representation learning for free-hand sketches.
no code implementations • 28 Jan 2020 • Xiaotong Gu, Zehong Cao, Alireza Jolfaei, Peng Xu, Dongrui Wu, Tzyy-Ping Jung, Chin-Teng Lin
Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications.
2 code implementations • 8 Jan 2020 • Peng Xu, Timothy M. Hospedales, Qiyue Yin, Yi-Zhe Song, Tao Xiang, Liang Wang
Free-hand sketches are highly illustrative, and have been widely used by humans to depict objects or stories from ancient times to the present.
1 code implementation • 24 Dec 2019 • Peng Xu, Chaitanya K. Joshi, Xavier Bresson
In this work, we propose a new representation of sketches as multiple sparsely connected graphs.
no code implementations • 13 Dec 2019 • Steven Schwarcz, Peng Xu, David D'Ambrosio, Juhana Kangaspunta, Anelia Angelova, Huong Phan, Navdeep Jaitly
The corpus consists of ping pong play with three main annotation streams that can be used to learn tracking and action recognition models: tracking of the ping pong ball, poses of the humans in the videos, and the spin of the ball being hit by humans.
1 code implementation • 21 Nov 2019 • Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, Pascale Fung
Recently, data-driven task-oriented dialogue systems have achieved promising performance in English.
no code implementations • IJCNLP 2019 • Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, Pascale Fung
Despite the surging demands for multilingual task-oriented dialog systems (e.g., Alexa, Google Home), there has been less research done in multilingual or cross-lingual scenarios.
no code implementations • 11 Nov 2019 • Weike Sun, Antonio R. C. Paiva, Peng Xu, Anantha Sundaram, Richard D. Braatz
In processing and manufacturing industries, there has been a large push to produce higher quality products and ensure maximum efficiency of processes.
no code implementations • WS 2019 • Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, Pascale Fung
With a large number of datasets being released and new techniques being proposed, question answering (QA) systems have witnessed great breakthroughs in reading comprehension (RC) tasks.
no code implementations • WS 2019 • Xiaofei Ma, Peng Xu, Zhiguo Wang, Ramesh Nallapati, Bing Xiang
The performance of deep neural models can deteriorate substantially when there is a domain shift between training and test data.
no code implementations • 2 Oct 2019 • Xuan Li, Yuchen Lu, Peng Xu, Jizong Peng, Christian Desrosiers, Xue Liu
In this paper, we study the problem of image recognition with non-differentiable constraints.
1 code implementation • IJCNLP 2019 • Peng Xu, Chien-Sheng Wu, Andrea Madotto, Pascale Fung
Sensational headlines are headlines that capture people's attention and generate reader interest.
4 code implementations • IJCNLP 2019 • Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, Pascale Fung
Previous research on empathetic dialogue systems has mostly focused on generating responses given certain emotions.
1 code implementation • LREC 2020 • Chien-Sheng Wu, Andrea Madotto, Zhaojiang Lin, Peng Xu, Pascale Fung
User attributes provide rich and useful information for user understanding, yet structured and easy-to-use attributes are often sparsely populated.
no code implementations • 12 Aug 2019 • Zhaohong Deng, Ruixiu Liu, Te Zhang, Peng Xu, Kup-Sze Choi, Bin Qin, Shitong Wang
The existing algorithms usually focus on the cooperation of different views in the original space but neglect the influence of the hidden information among these different visible views, or consider only the hidden information between the views.
no code implementations • 12 Aug 2019 • Zhaohong Deng, Chen Cui, Peng Xu, Ling Liang, Haoran Chen, Te Zhang, Shitong Wang
How to exploit the relationship between different views effectively using the characteristics of multi-view data has become a crucial challenge.
2 code implementations • 28 Jul 2019 • Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, Pascale Fung
In this paper, we present an end-to-end empathetic conversation agent CAiRE.
1 code implementation • 20 Jun 2019 • Jamin Shin, Peng Xu, Andrea Madotto, Pascale Fung
Hence, in this paper, we propose Sentiment Look-ahead, which is a novel perspective for empathy that models the future user emotional state.
no code implementations • 10 Jun 2019 • Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Jamin Shin, Yan Xu, Peng Xu, Pascale Fung
Detecting emotion from dialogue is a challenge that has not yet been extensively surveyed.
no code implementations • SEMEVAL 2019 • Genta Indra Winata, Andrea Madotto, Zhaojiang Lin, Jamin Shin, Yan Xu, Peng Xu, Pascale Fung
Detecting emotion from dialogue is a challenge that has not yet been extensively surveyed.
1 code implementation • 28 May 2019 • Yanshuai Cao, Peng Xu
In this work, we develop a novel regularizer to improve the learning of long-range dependency of sequence data.
1 code implementation • ACL 2019 • Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung
Coherence is an important aspect of text quality and is crucial for ensuring its readability.
1 code implementation • ICML 2020 • Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao
The variational autoencoder (VAE) can learn the manifold of natural images on certain datasets, as evidenced by meaningful interpolating or extrapolating in the continuous latent space.
no code implementations • 25 May 2019 • Xiang Ma, Liangzhe Chen, Zhaohong Deng, Peng Xu, Qisheng Yan, Kup-Sze Choi, Shitong Wang
The method progressively learns image features in a layer-by-layer manner based on fuzzy rules, so the feature learning process can be better explained by the generated rules.
1 code implementation • 25 May 2019 • Peng Xu, Zhaohong Deng, Kup-Sze Choi, Longbing Cao, Shitong Wang
More specifically, it exploits the agreement and disagreement among views by sharing a common clustering result along the sample dimension while keeping the clustering result of each view specific along the feature dimension.
no code implementations • 22 May 2019 • Peng Xu, Zhaohong Deng, Kup-Sze Choi, Jun Wang, Shitong Wang
The two domains often lie in different feature spaces due to diverse data collection methods, which leads to the more challenging task of heterogeneous domain adaptation (HDA).
no code implementations • ICLR Workshop LLD 2019 • Peng Xu, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
In this paper, we propose a weak supervision framework for neural ranking tasks based on the data programming paradigm (Ratner et al., 2016), which enables us to leverage multiple weak supervision signals from different sources.
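To give a feel for combining weak supervision signals, here is a minimal majority-vote aggregator. This is the simplest baseline, not the data programming approach itself (which instead fits a generative model over the unknown accuracies of the labeling sources); the label encoding with 0 for abstention is an illustrative assumption.

```python
import numpy as np

def majority_vote(weak_labels):
    """Aggregate weak labels from several sources by majority vote.

    weak_labels: (num_sources, num_items) array with entries in
    {-1, +1} for votes and 0 for abstentions. Returns the sign of
    the per-item vote sum (0 when the sources tie or all abstain).
    """
    return np.sign(weak_labels.sum(axis=0))

# Three sources labeling three items; sources may conflict or abstain
L = np.array([
    [ 1, -1,  0],
    [ 1,  1, -1],
    [ 0, -1, -1],
])
labels = majority_vote(L)
print(labels)  # [ 1 -1 -1]
```

Data programming improves on this baseline by estimating, without ground truth, how accurate and correlated each source is, and weighting its votes accordingly.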
no code implementations • 11 May 2019 • Yao Xie, Peng Xu, Zhanyu Ma
We introduce the novel problem of scene sketch zero-shot learning (SSZSL), a challenging task since (i) unlike photos, the gap between the common semantic domain (e.g., word vectors) and sketches is too large to exploit common semantic knowledge as a bridge for knowledge transfer, and (ii) compared with single-object sketches, a more expressive feature representation is required for scene sketches to accommodate their high level of abstraction and complexity.
no code implementations • 24 Apr 2019 • Peng Xu, Zhaohong Deng, Chen Cui, Te Zhang, Kup-Sze Choi, Gu Suhang, Jun Wang, Shitong Wang
Furthermore, for highly nonlinear modeling tasks, it is usually necessary to use a large number of rules, which further weakens the clarity and interpretability of TSK FS.
1 code implementation • NAACL 2019 • Peng Xu, Denilson Barbosa
Knowledge Bases (KBs) require constant updating to reflect changes to the world they represent.
Ranked #3 on Relation Extraction on NYT Corpus
no code implementations • 19 Feb 2019 • Peng Xu, Pascale Fung
While reinforcement learning can effectively improve language generation models, it often suffers from generating incoherent and repetitive phrases (Paulus et al., 2017).
no code implementations • 9 Jan 2019 • Peng Xu, Zhaohong Deng, Jun Wang, Qun Zhang, Shitong Wang
A core issue in transfer learning is to learn a shared feature space in which the distributions of the data from the two domains are matched.
2 code implementations • CVPR 2019 • Zhewei Yao, Amir Gholami, Peng Xu, Kurt Keutzer, Michael Mahoney
To address this problem, we present a new family of trust region based adversarial attacks, with the goal of computing adversarial perturbations efficiently.
no code implementations • 30 Sep 2018 • Fred Roosta, Yang Liu, Peng Xu, Michael W. Mahoney
We consider a variant of the inexact Newton method, called Newton-MR, in which the least-squares sub-problems are solved approximately using the Minimum Residual (MINRES) method.
1 code implementation • WS 2018 • Peng Xu, Andrea Madotto, Chien-Sheng Wu, Ji Ho Park, Pascale Fung
In this paper, we propose Emo2Vec which encodes emotional semantics into vectors.
Ranked #28 on Sentiment Analysis on SST-5 Fine-grained classification
no code implementations • 3 Jul 2018 • Yuchen Lu, Peng Xu
If we focus on specific diseases, the model is able to detect melanoma with 0.864 AUC-ROC and actinic keratosis with 0.872 AUC-ROC, even though it has only seen images of nevus.
no code implementations • SEMEVAL 2018 • Ji Ho Park, Peng Xu, Pascale Fung
This paper describes our system that has been submitted to SemEval-2018 Task 1: Affect in Tweets (AIT) to solve five subtasks.
1 code implementation • CVPR 2018 • Peng Xu, Yongye Huang, Tongtong Yuan, Kaiyue Pang, Yi-Zhe Song, Tao Xiang, Timothy M. Hospedales, Zhanyu Ma, Jun Guo
Key to our network design is the embedding of unique characteristics of human sketch, where (i) a two-branch CNN-RNN architecture is adapted to explore the temporal ordering of strokes, and (ii) a novel hashing loss is specifically designed to accommodate both the temporal and abstract traits of sketches.
3 code implementations • NAACL 2018 • Peng Xu, Denilson Barbosa
The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text.
no code implementations • 6 Feb 2018 • Peng Xu, Denilson Barbosa
We report an evaluation of the effectiveness of the existing knowledge base embedding models for relation prediction and for relation extraction on a wide range of benchmarks.
no code implementations • NeurIPS 2018 • Shusen Wang, Farbod Roosta-Khorasani, Peng Xu, Michael W. Mahoney
For distributed computing environments, we consider the empirical risk minimization problem and propose a distributed and communication-efficient Newton-type optimization method.
no code implementations • 25 Aug 2017 • Peng Xu, Farbod Roosta-Khorasani, Michael W. Mahoney
While first-order optimization methods such as stochastic gradient descent (SGD) are popular in machine learning (ML), they come with well-known deficiencies, including relatively slow convergence, sensitivity to the settings of hyper-parameters such as the learning rate, stagnation at high training errors, and difficulty in escaping flat regions and saddle points.
no code implementations • 23 Aug 2017 • Peng Xu, Fred Roosta, Michael W. Mahoney
In this light, we consider the canonical problem of finite-sum minimization, provide appropriate uniform and non-uniform sub-sampling strategies to construct such Hessian approximations, and obtain optimal iteration complexity for the corresponding sub-sampled trust-region and cubic regularization methods.
2 code implementations • 10 Jul 2017 • Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, Peng Xu
We propose a simple variant of the power iteration with an added momentum term that achieves both the optimal sample and iteration complexity.
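The momentum variant of power iteration can be sketched as follows. This is an illustrative implementation, not the paper's exact algorithm: the recurrence x_{t+1} = A x_t − β x_{t−1} with joint rescaling of the iterate pair, and the choice β = (λ₂/2)², are assumptions based on the standard momentum power method.

```python
import numpy as np

def power_iteration_momentum(A, beta, num_iters=100, seed=0):
    """Power iteration with a momentum term:
        x_{t+1} = A x_t - beta * x_{t-1},
    with both iterates rescaled by the same factor each step so the
    recurrence is preserved while avoiding overflow. For a symmetric A,
    the iterate converges to the top eigenvector faster than plain
    power iteration when beta is tuned near (lambda_2 / 2)^2.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x_prev = np.zeros(n)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(num_iters):
        x_next = A @ x - beta * x_prev
        norm = np.linalg.norm(x_next)
        x_prev = x / norm       # rescale the pair consistently
        x = x_next / norm
    return x

# Symmetric test matrix with eigenvalues 3 > 2 > 1; top eigenvector is e_1
A = np.diag([3.0, 2.0, 1.0])
v = power_iteration_momentum(A, beta=(2.0 / 2.0) ** 2)
```

With β tuned to the second eigenvalue, the subdominant components are damped at an accelerated rate, which is the source of the improved iteration complexity.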
no code implementations • 28 May 2017 • Peng Xu, Qiyue Yin, Yongye Huang, Yi-Zhe Song, Zhanyu Ma, Liang Wang, Tao Xiang, W. Bastiaan Kleijn, Jun Guo
Sketch-based image retrieval (SBIR) is challenging due to the inherent domain-gap between sketch and photo.
Ranked #5 on Sketch-Based Image Retrieval on Chairs
no code implementations • 25 Oct 2016 • Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, Christopher Ré
Prior work has explored learning accuracies for these sources even without ground truth labels, but they assume that a single accuracy parameter is sufficient to model the behavior of these sources over the entire training set.
no code implementations • NeurIPS 2016 • Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, Michael W. Mahoney
As second-order methods prove to be effective in finding the minimizer to a high precision, in this work, we propose randomized Newton-type algorithms that exploit non-uniform sub-sampling of $\{\nabla^2 f_i(w)\}_{i=1}^{n}$, as well as inexact updates, as means to reduce the computational complexity.
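The non-uniform sub-sampling idea can be illustrated on a toy ridge-regularized least-squares problem. This is a sketch under assumptions, not the paper's scheme: the squared-row-norm sampling probabilities, the importance weights, and the sample size are illustrative choices (the paper derives specific sampling distributions and complexity guarantees).

```python
import numpy as np

def subsampled_newton_step(w, X, y, lam, sample_size, rng):
    """One Newton-type step for ridge-regularized least squares where
    the Hessian (1/n) X^T X + lam*I is approximated by non-uniformly
    sub-sampling rows of X. Rows are drawn with probability
    proportional to their squared norm and reweighted so the
    sub-sampled Hessian is unbiased. The gradient stays exact; only
    the Hessian, the expensive O(n d^2) term, is sub-sampled.
    """
    n, d = X.shape
    grad = X.T @ (X @ w - y) / n + lam * w
    norms = np.linalg.norm(X, axis=1) ** 2
    p = norms / norms.sum()
    idx = rng.choice(n, size=sample_size, p=p)
    # Unbiased estimate: (1/s) * sum_j x_j x_j^T / (n * p_j)
    weights = 1.0 / (sample_size * n * p[idx])
    H = (X[idx].T * weights) @ X[idx] + lam * np.eye(d)
    return w - np.linalg.solve(H, grad)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true
lam = 0.1

w = np.zeros(5)
for _ in range(25):
    w = subsampled_newton_step(w, X, y, lam, sample_size=100, rng=rng)
```

Because the fixed point of the iteration is the exact ridge solution regardless of which rows were sampled, the randomness only affects the per-step contraction, not the limit.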
no code implementations • 12 Jan 2016 • Xinchao Li, Peng Xu, Yue Shi, Martha Larson, Alan Hanjalic
The novelty of the approach is that subclass representations make use of not only the content of the photos themselves, but also information on the co-occurrence of their tags, which determines membership in both subclasses and top-level classes.
no code implementations • NeurIPS 2008 • Peng Xu, Timothy K. Horiuchi, Pamela A. Abshire
We report a compact realization of short-term depression (STD) in a VLSI stochastic synapse.