no code implementations • Findings (NAACL) 2022 • Liwen Zhang, Zixia Jia, Wenjuan Han, Zilong Zheng, Kewei Tu
Adversarial attack of structured prediction models faces various challenges such as the difficulty of perturbing discrete words, the sentence quality issue, and the sensitivity of outputs to small perturbations.
no code implementations • 14 Mar 2024 • Ning Cheng, You Li, Jing Gao, Bin Fang, Jinan Xu, Wenjuan Han
Tactility provides crucial support and enhancement for the perception and interaction capabilities of both humans and robots.
no code implementations • 11 Feb 2024 • Peng Wang, Xiang Wei, Fangxu Hu, Wenjuan Han
TransGPT-MM is finetuned on a multi-modal Transportation dataset (MTD) that we manually collected from three areas of the transportation domain: driving tests, traffic signs, and landmarks.
no code implementations • 9 Jan 2024 • Xue Zhang, Xiangyu Shi, Xinyue Lou, Rui Qi, Yufeng Chen, Jinan Xu, Wenjuan Han
Large language models (LLMs) and multimodal large language models (MLLMs) have shown excellent general capabilities, even exhibiting adaptability in many professional domains such as law, economics, transportation, and medicine.
no code implementations • 18 Dec 2023 • Zhi Gao, Yuntao Du, Xintong Zhang, Xiaojian Ma, Wenjuan Han, Song-Chun Zhu, Qing Li
Leveraging large language models (LLMs) to integrate off-the-shelf tools (e.g., visual models and image processing functions) is a promising research direction to build powerful visual assistants for solving diverse visual tasks.
1 code implementation • ACL 2022 • Hai Ye, Hwee Tou Ng, Wenjuan Han
In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer.
no code implementations • 5 Nov 2023 • Jiaxin Shen, Yanyao Liu, ZiMing Wang, Ziyuan Jiao, Yufeng Chen, Wenjuan Han
To facilitate the advancement of research in healthcare robots without human intervention or commands, we introduce the Autonomous Helping Challenge, along with a crowd-sourced large-scale dataset.
1 code implementation • 20 Oct 2023 • Xue Zhang, Songming Zhang, Yunlong Liang, Yufeng Chen, Jian Liu, Wenjuan Han, Jinan Xu
Furthermore, for situations requiring multiple paraphrases for each source sentence, we design a Diverse Templates Search (DTS) algorithm, which can enhance the diversity between paraphrases without sacrificing quality.
2 code implementations • 14 Sep 2023 • Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
In this paper, we address the limitation above by 1) introducing vision-language Model with Multi-Modal In-Context Learning (MMICL), a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts.
Ranked #16 on Visual Reasoning on Winoground
1 code implementation • 3 Jul 2023 • Xiang Wei, Yufeng Chen, Ning Cheng, Xingyu Cui, Jinan Xu, Wenjuan Han
In order to construct or extend entity-centric and event-centric knowledge graphs (KG and EKG), an information extraction (IE) annotation toolkit is essential.
no code implementations • 10 Jun 2023 • Zefan Cai, Baobao Chang, Wenjuan Han
While the emergence of powerful language models along with Chain-of-thought prompting has made automation increasingly ubiquitous, such automation still shows weaknesses in long-term or multi-step logical reasoning.
no code implementations • 22 May 2023 • Yueting Yang, Xintong Zhang, Wenjuan Han
The thinking stage combines the image information and the task description into a prompt for the LLM, which then performs inference with the rationales.
1 code implementation • 14 May 2023 • Songming Zhang, Yunlong Liang, Shuaibo Wang, Wenjuan Han, Jian Liu, Jinan Xu, Yufeng Chen
In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD.
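The contrast between matching the teacher's full per-token distribution and keeping only its top-1 predictions can be sketched as follows; this is a minimal illustrative comparison, not the paper's actual implementation, and all function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over logits."""
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def word_level_kd_loss(student_logits, teacher_logits):
    """Standard word-level KD: cross-entropy of the student against the
    teacher's full output distribution, averaged over token positions."""
    p_teacher = softmax(teacher_logits)
    log_p_student = np.log(softmax(student_logits))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())

def top1_kd_loss(student_logits, teacher_logits):
    """Variant that keeps only the teacher's top-1 prediction at each
    position, reflecting the observation that the transferred knowledge
    concentrates in the teacher's argmax tokens."""
    top1 = teacher_logits.argmax(axis=-1)
    log_p_student = np.log(softmax(student_logits))
    return float(-log_p_student[np.arange(len(top1)), top1].mean())

# Toy example: 2 token positions, vocabulary of size 3.
student = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
teacher = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
print(word_level_kd_loss(student, teacher), top1_kd_loss(student, teacher))
```

When the student already agrees with the teacher's argmax at every position, the top-1 loss is near zero, while the full-distribution loss still penalizes mass the teacher assigns to non-argmax tokens.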
1 code implementation • 20 Feb 2023 • Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, Wenjuan Han
Zero-shot information extraction (IE) aims to build IE systems from unannotated text.
1 code implementation • ICCV 2023 • Jiapeng Li, Ping Wei, Wenjuan Han, Lifeng Fan
In this paper, we propose IntentQA, a novel VideoQA task focusing on video intent reasoning, which has become increasingly important for AI because it equips AI agents with the capability to reason beyond mere recognition in daily tasks.
1 code implementation • 17 Dec 2022 • Zixia Jia, Zhaohui Yan, Wenjuan Han, Zilong Zheng, Kewei Tu
Prior works on joint Information Extraction (IE) typically model instance (e.g., event triggers, entities, roles, relations) interactions by representation enhancement, type dependencies scoring, or global decoding.
no code implementations • 1 Dec 2022 • Yu-Zhe Shi, Manjie Xu, Wenjuan Han, Yixin Zhu
If scientific discovery is one of the main driving forces of human progress, insight is the fuel for the engine, which has long attracted behavior-level research to understand and model its underlying cognitive process.
1 code implementation • 20 Nov 2022 • Yu-Zhe Shi, Manjie Xu, John E. Hopcroft, Kun He, Joshua B. Tenenbaum, Song-Chun Zhu, Ying Nian Wu, Wenjuan Han, Yixin Zhu
Specifically, at the representational level, we seek to answer how the complexity varies when a visual concept is mapped to the representation space.
1 code implementation • 7 Sep 2022 • Yanzeng Li, Zilong Zheng, Wenjuan Han, Lei Zou
Semantic Web technology has successfully facilitated many RDF models with rich data representation methods.
1 code implementation • CVPR 2022 • Chao Lou, Wenjuan Han, Yuhuan Lin, Zilong Zheng
Our goal is to bridge the visual scene graphs and linguistic dependency trees seamlessly.
no code implementations • 28 Oct 2021 • Wenjuan Han, Hwee Tou Ng
However, most existing state-of-the-art GEC approaches are based on similar sequence-to-sequence neural networks, so the gains from combining the outputs of component systems that resemble one another are limited.
no code implementations • ICLR 2022 • Bo Wan, Wenjuan Han, Zilong Zheng, Tinne Tuytelaars
We introduce a new task, unsupervised vision-language (VL) grammar induction.
no code implementations • ACL 2021 • Wenjuan Han, Bo Pang, Ying Nian Wu
Transfer learning with large pretrained transformer-based language models like BERT has become a dominating approach for most NLP tasks.
no code implementations • ACL 2021 • Liwen Zhang, Ge Wang, Wenjuan Han, Kewei Tu
In this paper, we propose a simple yet effective method to adapt unsupervised syntactic dependency parsing methodology for unsupervised discourse dependency parsing.
no code implementations • EACL 2021 • Kewei Tu, Yong Jiang, Wenjuan Han, Yanpeng Zhao
Unsupervised parsing learns a syntactic parser from training sentences without parse tree annotations.
no code implementations • 12 Mar 2021 • Yixian Liu, Liwen Zhang, Wenjuan Han, Yue Zhang, Kewei Tu
We focus on CommonGen, the task of generating text based on a set of concepts, as a representative task of constrained text generation.
no code implementations • COLING 2020 • Erxin Yu, Wenjuan Han, Yuan Tian, Yi Chang
Distantly Supervised Relation Extraction (DSRE) has proven to be effective to find relational facts from texts, but it still suffers from two main problems: the wrong labeling problem and the long-tail problem.
1 code implementation • COLING 2020 • Songlin Yang, Yong Jiang, Wenjuan Han, Kewei Tu
Inspired by second-order supervised dependency parsing, we propose a second-order extension of unsupervised neural dependency models that incorporates grandparent-child or sibling information.
Ranked #1 on Dependency Grammar Induction on WSJ10
no code implementations • COLING 2020 • Wenjuan Han, Yong Jiang, Hwee Tou Ng, Kewei Tu
Syntactic dependency parsing is an important task in natural language processing.
1 code implementation • EMNLP 2020 • Wenjuan Han, Liwen Zhang, Yong Jiang, Kewei Tu
To address these problems, we propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model with feedback from multiple reference models of the same structured prediction task.
1 code implementation • ACL 2020 • Bo Pang, Erik Nijkamp, Wenjuan Han, Linqi Zhou, Yixian Liu, Kewei Tu
Open-domain dialogue generation has gained increasing attention in Natural Language Processing.
no code implementations • IJCNLP 2019 • Wenjuan Han, Ge Wang, Yong Jiang, Kewei Tu
The key to multilingual grammar induction is to couple grammar parameters of different languages together by exploiting the similarity between languages.
no code implementations • IJCNLP 2019 • Yong Jiang, Wenjuan Han, Kewei Tu
Grammar induction aims to discover syntactic structures from unannotated sentences.
no code implementations • ACL 2019 • Wenjuan Han, Yong Jiang, Kewei Tu
In this paper, we propose a novel probabilistic model called discriminative neural dependency model with valence (D-NDMV) that generates a sentence and its parse from a continuous latent representation, which encodes global contextual information of the generated sentence.
Ranked #2 on Dependency Grammar Induction on WSJ10
no code implementations • EMNLP 2017 • Wenjuan Han, Yong Jiang, Kewei Tu
We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction.
no code implementations • EMNLP 2017 • Yong Jiang, Wenjuan Han, Kewei Tu
Unsupervised dependency parsing aims to learn a dependency parser from unannotated sentences.