2 code implementations • ICML 2020 • Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.
Ranked #1 on Zero-Shot Cross-Lingual Transfer on XTREME (AVG metric)
1 code implementation • 28 Mar 2023 • Mingjian Liang, Junjie Hu, Chenyu Bao, Hua Feng, Fuqin Deng, Tin Lun Lam
Specifically, we consider the following cases: i) both RGB and thermal data generate discriminative features, ii) only one of the two modalities does, and iii) neither does.
no code implementations • 9 Mar 2023 • Junjie Hu, Chenyou Fan, Liguang Zhou, Qing Gao, Honghai Liu, Tin Lun Lam
In this paper, we seek to enable lifelong learning for MDE, which performs cross-domain depth learning sequentially, to achieve high plasticity on a new domain and maintain good stability on original domains.
1 code implementation • 2 Mar 2023 • Zachary Huemann, Junjie Hu, Tyler Bradshaw
In this work, we develop a vision-language model for the task of pneumothorax segmentation.
no code implementations • 1 Mar 2023 • Zachary Huemann, Changhee Lee, Junjie Hu, Steve Y. Cho, Tyler Bradshaw
Domain adaptation improved the performance of large language models in interpreting nuclear medicine text reports.
no code implementations • 31 Dec 2022 • Liguang Zhou, Junjie Hu, Yuhongze Zhou, Tin Lun Lam, Yangsheng Xu
Unbiased scene graph generation (USGG) is a challenging task that requires predicting diverse and heavily imbalanced predicates between objects in an image.
no code implementations • 31 Dec 2022 • Liguang Zhou, Yuhongze Zhou, Xiaonan Qi, Junjie Hu, Tin Lun Lam, Yangsheng Xu
Then, to build multi-scale hierarchical information of input features, we utilize an attention fusion mechanism to aggregate features from multiple layers of the backbone network.
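A minimal sketch of such an attention fusion step, assuming same-resolution feature maps and a pooled-score gate (the layer count, shapes, and gating form are illustrative assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Aggregate feature maps from several backbone layers using
    learned per-layer attention weights (softmax over layers)."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)  # scalar attention score per layer

    def forward(self, feats):  # feats: list of (B, C, H, W) tensors
        pooled = torch.stack([f.mean(dim=(2, 3)) for f in feats], dim=1)  # (B, L, C)
        weights = torch.softmax(self.score(pooled), dim=1)                # (B, L, 1)
        stacked = torch.stack(feats, dim=1)                               # (B, L, C, H, W)
        return (weights[..., None, None] * stacked).sum(dim=1)            # (B, C, H, W)

fusion = AttentionFusion(channels=64)
feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
print(fusion(feats).shape)  # torch.Size([2, 64, 32, 32])
```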
no code implementations • 28 Nov 2022 • Xinyan Velocity Yu, Akari Asai, Trina Chatterjee, Junjie Hu, Eunsol Choi
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity.
1 code implementation • 29 Aug 2022 • Junjie Hu, Chenyou Fan, Mete Ozay, Hua Feng, Yuan Gao, Tin Lun Lam
In this paper, we introduce ground-to-aerial perception knowledge transfer and propose a progressive semi-supervised learning framework that enables drone perception using only labeled data from the ground viewpoint and unlabeled data from flying viewpoints.
no code implementations • 26 Aug 2022 • Junjie Hu, Chenyou Fan, Mete Ozay, Hualie Jiang, Tin Lun Lam
We study data-free knowledge distillation (KD) for monocular depth estimation (MDE), which learns a lightweight model for real-world depth perception by compressing a trained teacher model without access to training data in the target domain.
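In outline, the student is trained to match the teacher's depth predictions on surrogate inputs, since no target-domain images are available. A minimal sketch with stand-in networks and random surrogate images (both are assumptions; the paper's surrogate-data strategy is not reproduced here):

```python
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1)).eval()  # stand-in trained teacher
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))          # lightweight student
for p in teacher.parameters():
    p.requires_grad_(False)                                      # teacher stays frozen
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(4, 3, 64, 64)               # surrogate inputs, not target-domain data
    with torch.no_grad():
        t_depth = teacher(x)                    # teacher's depth prediction
    loss = (student(x) - t_depth).abs().mean()  # distill by matching predictions
    opt.zero_grad(); loss.backward(); opt.step()
```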
no code implementations • NAACL (MIA) 2022 • Akari Asai, Shayne Longpre, Jungo Kasai, Chia-Hsuan Lee, Rui Zhang, Junjie Hu, Ikuya Yamada, Jonathan H. Clark, Eunsol Choi
We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages.
1 code implementation • 23 May 2022 • Tuan Dinh, Jy-yong Sohn, Shashank Rajput, Timothy Ossowski, Yifei Ming, Junjie Hu, Dimitris Papailiopoulos, Kangwook Lee
Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods.
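A common recipe in this line of work (not necessarily this paper's method) aligns two monolingual embedding spaces with an orthogonal map; given a seed dictionary, the optimal map has a closed-form SVD (Procrustes) solution:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal W minimizing ||X @ W.T - Y||_F, mapping source word
    vectors (rows of X) onto their translations (rows of Y)."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

rng = np.random.default_rng(0)
Y = rng.standard_normal((1000, 300))                       # target-language embeddings
W_true, _ = np.linalg.qr(rng.standard_normal((300, 300)))  # hidden rotation
X = Y @ W_true                                             # source space = rotated target space
W = procrustes_align(X, Y)
print(np.allclose(X @ W.T, Y, atol=1e-6))                  # True: the rotation is recovered
```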
no code implementations • 23 May 2022 • Makesh Narsimhan Sreedhar, Xiangpeng Wan, Yu Cheng, Junjie Hu
Subword tokenization schemes are the dominant technique used in current NLP models.
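For illustration, a WordPiece tokenizer splits rare words into frequent subword units (the exact splits below are vocabulary-dependent and shown as an assumption):

```python
from transformers import AutoTokenizer  # pip install transformers

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tok.tokenize("the quick unaffable fox"))
# Common words stay whole; a rare word like "unaffable" breaks into
# subword pieces marked with "##", e.g. ['una', '##ffa', '##ble'].
```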
no code implementations • 11 May 2022 • Junjie Hu, Chenyu Bao, Mete Ozay, Chenyou Fan, Qing Gao, Honghai Liu, Tin Lun Lam
Depth completion aims at predicting dense pixel-wise depth from an extremely sparse map captured by a depth sensor, e.g., LiDAR.
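To make the input concrete, here is a hedged sketch of simulating such a sparse map from dense ground truth (the 0.5% sampling rate is an illustrative assumption):

```python
import numpy as np

def sparsify_depth(dense_depth: np.ndarray, keep_ratio: float = 0.005) -> np.ndarray:
    """Simulate a LiDAR-like input: keep a small random fraction of
    pixels and zero out the rest (zeros mark missing measurements)."""
    mask = np.random.rand(*dense_depth.shape) < keep_ratio
    return np.where(mask, dense_depth, 0.0)

dense = np.random.uniform(0.5, 10.0, size=(480, 640)).astype(np.float32)
sparse = sparsify_depth(dense)
print(f"{(sparse > 0).mean():.2%} of pixels carry depth")  # roughly 0.50%
```

A depth-completion network then takes the (sparse map, RGB image) pair as input and regresses the dense map.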
no code implementations • ACL 2022 • Junjie Hu, Hiroaki Hayashi, Kyunghyun Cho, Graham Neubig
It has been shown that machine translation models usually generate poor translations for named entities that are infrequent in the training corpus.
no code implementations • 19 Oct 2021 • Junjie Hu, Brenda López Cabrera, Awdesch Melzer
The predictive information is fundamental for the risk and production management of electricity consumers.
1 code implementation • 18 Oct 2021 • Fuqin Deng, Hua Feng, Mingjian Liang, Hongmin Wang, Yong Yang, Yuan Gao, Junfeng Chen, Junjie Hu, Xiyue Guo, Tin Lun Lam
To better extract detailed spatial information, we propose a two-stage Feature-Enhanced Attention Network (FEANet) for the RGB-T semantic segmentation task.
Ranked #6 on Thermal Image Segmentation on MFN Dataset
1 code implementation • ACL 2022 • Bosheng Ding, Junjie Hu, Lidong Bing, Sharifah Mahani Aljunied, Shafiq Joty, Luo Si, Chunyan Miao
Much recent progress in task-oriented dialogue (ToD) systems has been driven by available annotation data across multiple domains for training.
1 code implementation • 12 Oct 2021 • Hualie Jiang, Laiyan Ding, Junjie Hu, Rui Huang
Unsupervised learning of depth from indoor monocular videos is challenging as the artificial environment contains many textureless regions.
1 code implementation • EMNLP 2021 • Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo
Reproducible benchmarks are crucial in driving progress of machine translation research.
no code implementations • 8 Sep 2021 • Chongyang Wang, Yuan Gao, Chenyou Fan, Junjie Hu, Tin Lun Lam, Nicholas D. Lane, Nadia Bianchi-Berthouze
To address these issues, we propose a novel Learning-to-Agree (Learn2Agree) framework to tackle the challenge of learning from multiple annotators without objective ground truth.
no code implementations • 12 Aug 2021 • Junjie Hu, Wolfgang Karl Härdle
We uncover networks from news articles to study cross-sectional stock returns.
no code implementations • WMT (EMNLP) 2021 • Junjie Hu, Graham Neubig
Neural machine translation (NMT) is sensitive to domain shift.
2 code implementations • 13 May 2021 • Junjie Hu, Chenyou Fan, Hualie Jiang, Xiyue Guo, Yuan Gao, Xiangyong Lu, Tin Lun Lam
In this paper, we aim to achieve accurate depth estimation with a light-weight network.
1 code implementation • EMNLP 2021 • Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, PengFei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson
While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others.
1 code implementation • NAACL 2021 • Po-Yao Huang, Mandela Patrick, Junjie Hu, Graham Neubig, Florian Metze, Alexander Hauptmann
Specifically, we focus on multilingual text-to-video search and propose a Transformer-based model that learns contextualized multilingual multimodal embeddings.
no code implementations • 19 Oct 2020 • Junjie Hu, Xiyue Guo, Junfeng Chen, Guanqi Liang, Fuqin Deng, Tin Lun Lam
However, most of them suffer from the following problems: 1) the need for pairs of low-light and normal-light images for training, 2) poor performance on dark images, and 3) amplification of noise.
Low-Light Image Enhancement • Simultaneous Localization and Mapping
3 code implementations • 19 Oct 2020 • Xiyue Guo, Junjie Hu, Junfeng Chen, Fuqin Deng, Tin Lun Lam
The core problem of visual multi-robot simultaneous localization and mapping (MR-SLAM) is how to efficiently and accurately perform multi-robot global localization (MR-GL).
no code implementations • NAACL 2021 • Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig
Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLMR (Conneau et al., 2020) have proven to be impressively effective at enabling transfer-learning of NLP systems from high-resource languages to low-resource languages.
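The recipe these encoders enable is: fine-tune on labeled English data, then run the same model on other languages with no target-language training. A minimal sketch (the model name is real; the NLI-style task, label count, and German example are illustrative assumptions):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"  # mBERT
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)

# Step 1: fine-tune `model` on English premise/hypothesis pairs (loop omitted).
# Step 2: zero-shot transfer -- classify a target-language pair directly.
inputs = tok("Das Wetter ist schön.", "Es regnet stark.",
             return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1)
print(pred)  # predicted label, with no German training data used
```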
no code implementations • ICML 2020 • Han Zhao, Junjie Hu, Andrej Risteski
The goal of universal machine translation is to learn to translate between any pair of languages, given a corpus of paired translated documents for a small subset of all pairs of languages.
no code implementations • EMNLP (NLP-COVID19) 2020 • Antonios Anastasopoulos, Alessandro Cattelan, Zi-Yi Dou, Marcello Federico, Christian Federmann, Dmitriy Genzel, Francisco Guzmán, Junjie Hu, Macduff Hughes, Philipp Koehn, Rosie Lazar, Will Lewis, Graham Neubig, Mengmeng Niu, Alp Öktem, Eric Paquin, Grace Tang, Sylwia Tur
Further, the team is converting the test and development data into translation memories (TMXs) that can be used by localizers from and to any of the languages.
no code implementations • ACL 2020 • Po-Yao Huang, Junjie Hu, Xiaojun Chang, Alexander Hauptmann
In this paper, we investigate how to utilize visual content for disambiguation and promoting latent space alignment in unsupervised MMT.
3 code implementations • 24 Mar 2020 • Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.
no code implementations • 11 Dec 2019 • Junjie Hu, Wolfgang Karl Härdle, Weiyu Kuo
Cryptocurrency, the most controversial and simultaneously the most interesting asset, has attracted many investors and speculators in recent years.
no code implementations • 20 Nov 2019 • Junjie Hu, Takayuki Okatani
However, the prediction of saliency maps is itself vulnerable to adversarial attacks, even when it is not their direct target.
1 code implementation • WS 2019 • Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig
We then use these learned domain differentials to adapt models for the target task accordingly.
1 code implementation • 11 Sep 2019 • Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, Graham Neubig
Previous storytelling approaches mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr.
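For reference, the corpus-level BLEU that such approaches optimize can be computed with sacrebleu (the strings here are illustrative):

```python
import sacrebleu  # pip install sacrebleu

hyps = ["the dog ran across the yard"]
refs = [["a dog sprinted across the yard"]]  # one list per reference set
print(sacrebleu.corpus_bleu(hyps, refs).score)  # n-gram overlap score in [0, 100]
```

N-gram overlap of this kind rewards surface similarity to references, which is one reason it can be a poor target for open-ended generation like storytelling.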
1 code implementation • IJCNLP 2019 • Ming Jiang, Junjie Hu, Qiuyuan Huang, Lei Zhang, Jana Diesner, Jianfeng Gao
In this study, we present a fine-grained evaluation method REO for automatically measuring the performance of image captioning systems.
1 code implementation • IJCNLP 2019 • Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs.
1 code implementation • IJCNLP 2019 • Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig
The recent success of neural machine translation models relies on the availability of high quality, in-domain data.
2 code implementations • ACL 2019 • Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell
It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift.
1 code implementation • 19 Apr 2019 • Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W. Bruce Croft, Xiaodong Liu, Yelong Shen, Jingjing Liu
In this paper, we propose a hybrid neural conversation model that combines the merits of both response retrieval and generation methods.
1 code implementation • ICCV 2019 • Junjie Hu, Yan Zhang, Takayuki Okatani
We formulate it as an optimization problem: identify the smallest set of image pixels from which the CNN can estimate a depth map with minimal difference from the estimate obtained from the entire image.
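A hedged sketch of that optimization under a continuous relaxation (the tiny stand-in CNN, sigmoid mask, and penalty weight are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))  # stand-in depth estimator
for p in cnn.parameters():
    p.requires_grad_(False)                          # only the mask is optimized

image = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    full_depth = cnn(image)                          # estimate from the entire image

logits = torch.zeros(1, 1, 64, 64, requires_grad=True)  # relaxed per-pixel mask
opt = torch.optim.Adam([logits], lr=0.1)
lam = 0.01                                               # sparsity penalty weight
for _ in range(200):
    mask = torch.sigmoid(logits)
    loss = (cnn(image * mask) - full_depth).abs().mean() + lam * mask.mean()
    opt.zero_grad(); loss.backward(); opt.step()

kept = (torch.sigmoid(logits) > 0.5).float().mean().item()
print(f"kept {kept:.1%} of pixels")
```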
2 code implementations • NAACL 2019 • Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, Xinyi Wang, John Wieting
In this paper, we describe compare-mt, a tool for holistic analysis and comparison of the results of systems for language generation tasks such as machine translation.
no code implementations • 24 Feb 2019 • Aditi Chaudhary, Siddharth Dalmia, Junjie Hu, Xinjian Li, Austin Matthews, Aldrian Obaja Muis, Naoki Otani, Shruti Rijhwani, Zaid Sheikh, Nidhi Vyas, Xinyi Wang, Jiateng Xie, Ruochen Xu, Chunting Zhou, Peter J. Jansen, Yiming Yang, Lori Levin, Florian Metze, Teruko Mitamura, David R. Mortensen, Graham Neubig, Eduard Hovy, Alan W. Black, Jaime Carbonell, Graham V. Horwood, Shabnam Tafreshi, Mona Diab, Efsun S. Kayi, Noura Farra, Kathleen McKeown
This paper describes the ARIEL-CMU submissions to the Low Resource Human Language Technologies (LoReHLT) 2018 evaluations for the tasks Machine Translation (MT), Entity Discovery and Linking (EDL), and detection of Situation Frames in Text and Speech (SF Text and Speech).
1 code implementation • WS 2018 • Junjie Hu, Wei-Cheng Chang, Yuexin Wu, Graham Neubig
In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach.
1 code implementation • EMNLP 2018 • Graham Neubig, Junjie Hu
This paper examines the problem of adapting neural machine translation systems to new, low-resourced languages (LRLs) as effectively and rapidly as possible.
1 code implementation • ACL 2018 • Craig Stewart, Nikolai Vogler, Junjie Hu, Jordan Boyd-Graber, Graham Neubig
Simultaneous interpretation, translation of the spoken word in real-time, is both highly challenging and physically demanding.
4 code implementations • 23 Mar 2018 • Junjie Hu, Mete Ozay, Yan Zhang, Takayuki Okatani
Experimental results show that these two improvements enable the model to attain higher accuracy than the current state of the art, yielding finer-resolution reconstructions, for example around small objects and object boundaries.
Ranked #39 on Monocular Depth Estimation on NYU-Depth V2
no code implementations • ICLR 2018 • Han Zhao, Zhenyao Zhu, Junjie Hu, Adam Coates, Geoff Gordon
This provides us a very general way to interpolate between generative and discriminative extremes through different choices of priors.
no code implementations • EMNLP 2017 • Rui Liu, Junjie Hu, Wei Wei, Zi Yang, Eric Nyberg
Deep neural networks for machine comprehension typically utilize only word or character embeddings, without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees.
Ranked #40 on Question Answering on SQuAD1.1 dev
no code implementations • ACL 2017 • Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, William W. Cohen
In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models.
1 code implementation • 6 Nov 2016 • Zhilin Yang, Bhuwan Dhingra, Ye Yuan, Junjie Hu, William W. Cohen, Ruslan Salakhutdinov
Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension.
Ranked #50 on Question Answering on SQuAD1.1 dev
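Since concatenation and scalar weighting are called out as suboptimal, here is a hedged sketch of a learned per-dimension gate between word- and character-level vectors (an illustration of the general idea, not the paper's exact model):

```python
import torch
import torch.nn as nn

class FineGrainedGate(nn.Module):
    """Blend word- and character-level embeddings with a learned
    per-dimension gate instead of concatenation or a single scalar."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, word_emb, char_emb):      # both (B, T, dim)
        g = torch.sigmoid(self.gate(word_emb))  # per-dimension weights in (0, 1)
        return g * word_emb + (1 - g) * char_emb

blend = FineGrainedGate(dim=128)
w, c = torch.randn(2, 10, 128), torch.randn(2, 10, 128)
print(blend(w, c).shape)  # torch.Size([2, 10, 128])
```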
no code implementations • 8 Sep 2016 • Junjie Hu, Jean Oh, Anatole Gershman
Robotic commands in natural language usually contain various spatial descriptions that are semantically similar but syntactically different.