4 code implementations • 11 May 2019 • Hua Wei, Nan Xu, Huichu Zhang, Guanjie Zheng, Xinshi Zang, Chacha Chen, Wei-Nan Zhang, Yanmin Zhu, Kai Xu, Zhenhui Li
To enable cooperation of traffic signals, in this paper, we propose a model, CoLight, which uses graph attentional networks to facilitate communication.
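The attention-based neighbor aggregation that CoLight uses for communication can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the dimensions, the bilinear scoring function, and all variable names are assumptions.

```python
import numpy as np

def graph_attention(h_self, h_neighbors, W):
    """Aggregate neighboring intersections' observations via attention.

    h_self:      (d,)   embedding of the target intersection
    h_neighbors: (k, d) embeddings of its k neighboring intersections
    W:           (d, d) learned interaction matrix (here fixed for the demo)
    """
    scores = h_neighbors @ W @ h_self          # (k,) relevance of each neighbor
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()          # softmax attention weights
    return weights @ h_neighbors               # (d,) attended neighbor summary

d, k = 4, 3
rng = np.random.default_rng(0)
h = rng.normal(size=d)
neigh = rng.normal(size=(k, d))
out = graph_attention(h, neigh, np.eye(d))
print(out.shape)
```

Each intersection thus receives a learned, traffic-dependent weighting of its neighbors' states rather than a fixed average, which is what enables cooperative signal control.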
2 code implementations • 3 Oct 2023 • Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao
In light of these challenges, we intend to answer this question: Can we develop an approach that can automatically generate stealthy jailbreak prompts?
1 code implementation • 24 Dec 2023 • Xinglin Xiao, Yijie Wang, Nan Xu, Yuqi Wang, Hanxuan Yang, Minzheng Wang, Yin Luo, Lei Wang, Wenji Mao, Daniel Zeng
The difficulty of the information extraction task lies in dealing with the task-specific label schemas and heterogeneous data structures.
1 code implementation • ICCV 2023 • Zhengbo Wang, Jian Liang, Ran He, Nan Xu, Zilei Wang, Tieniu Tan
Thereafter, we fine-tune CLIP with off-the-shelf methods by combining labeled and synthesized features.
1 code implementation • 12 Oct 2021 • Xiangtian Zheng, Nan Xu, Loc Trinh, Dongqi Wu, Tong Huang, S. Sivaranjani, Yan Liu, Le Xie
The electric grid is a key enabling infrastructure for the ambitious transition towards carbon neutrality as we grapple with climate change.
2 code implementations • 29 Nov 2023 • Yijun Yang, Ruiyuan Gao, Xiaosen Wang, Tsung-Yi Ho, Nan Xu, Qiang Xu
In recent years, Text-to-Image (T2I) models have seen remarkable advancements, gaining widespread adoption.
1 code implementation • 23 Oct 2023 • Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma
While recent studies have examined the abilities of large language models on various benchmark tasks, such as question generation, reading comprehension, and multilingual understanding, few have investigated the controllability of large language models on generation tasks.
1 code implementation • 25 May 2022 • Nan Xu, Fei Wang, Bangzheng Li, Mingtao Dong, Muhao Chen
Due to shortcuts from surface patterns to annotated entity labels and biased training, existing entity typing models are subject to the problem of spurious correlations.
1 code implementation • 28 Oct 2023 • Nan Xu, Fei Wang, Mingtao Dong, Muhao Chen
Many discriminative natural language understanding (NLU) tasks have large label spaces.
1 code implementation • 22 May 2023 • Nan Xu, Chunting Zhou, Asli Celikyilmaz, Xuezhe Ma
Given a prefix (context), open-ended generation aims to decode text that is coherent (does not abruptly drift from previous topics) and informative (does not suffer from undesired repetitions).
no code implementations • 12 Sep 2017 • Nan Xu, Yanqing Guo, Jiujun Wang, Xiangyang Luo, Ran He
In this method, we use the subspace representations of different views to adaptively learn a consensus similarity matrix, uncovering the subspace structure while avoiding the noise inherent in the original data.
no code implementations • 12 May 2019 • Guanjie Zheng, Xinshi Zang, Nan Xu, Hua Wei, Zhengyao Yu, Vikash Gayah, Kai Xu, Zhenhui Li
In this paper, we propose to re-examine the RL approaches through the lens of classic transportation theory.
no code implementations • IJCNLP 2019 • Penghui Wei, Nan Xu, Wenji Mao
The bottom component of our framework classifies the stances of tweets in a conversation discussing a rumor via modeling the structural property based on a novel graph convolutional network.
no code implementations • ACL 2020 • Nan Xu, Zhixiong Zeng, Wenji Mao
In a multimodal context, sarcasm is no longer a purely linguistic phenomenon; due to the nature of short social media texts, the opposition is more often manifested via cross-modality expressions.
no code implementations • 27 Sep 2020 • Nan Xu, Oluwaseyi Feyisetan, Abhinav Aggarwal, Zekun Xu, Nathanael Teissier
Deep Neural Networks, despite their great success in diverse domains, are provably sensitive to small perturbations of correctly classified examples, which lead to erroneous predictions.
no code implementations • 25 Sep 2020 • Bruno Henrique Groenner Barbosa, Nan Xu, Hassan Askari, Amir Khajepour
The proposed intelligent tire system is shown to provide reliable information about tire-road interactions even at high slip angles.
no code implementations • 1 Jan 2021 • Nan Xu, Nitin Kamra, Yan Liu
Treatment recommendation is a complex multi-faceted problem with many conflicting objectives, e.g., optimizing the survival rate (or expected lifetime), mitigating negative impacts, reducing financial expenses and time costs, avoiding over-treatment, etc.
1 code implementation • 22 Aug 2019 • Vei Wang, Nan Xu, Jin Cheng Liu, Gang Tang, Wen-Tong Geng
The executable versions of VASPKIT and the related examples, together with the tutorials, are available on its official website, vaspkit.com.
no code implementations • 12 Feb 2021 • Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu
The recent release of large-scale healthcare datasets has greatly propelled the research of data-driven deep learning models for healthcare applications.
no code implementations • 9 Jun 2021 • Nan Xu, Zepeng Tang, Hassan Askari, Jianfeng Zhou, Amir Khajepour
The proposed estimation model is able to estimate the slip ratio continuously and stably using only the acceleration from the intelligent tire system, and the estimated slip ratio range can reach 30%.
no code implementations • 2 Sep 2021 • Nan Xu, Junyan Wang, Yuan Tian, Ruike Zhang, Wenji Mao
Thus, researchers have studied the definition of cross-modal correlation categories and constructed various classification systems and predictive models.
no code implementations • 10 Nov 2021 • Nan Xu, Theodore J. LaGrow, Nmachi Anumba, Azalea Lee, Xiaodi Zhang, Behnaz Yousefi, Yasmine Bassil, Gloria Perrin Clavijo, Vahid Khalilzad Sharghi, Eric Maltbie, Lisa Meyer-Baese, Maysam Nezafati, Wen-Ju Pan, Shella Keilholz
This review begins by examining similarities and differences in anatomical features, acquisition parameters, and preprocessing techniques, as factors that contribute to functional connectivity.
no code implementations • 2 Mar 2022 • Nan Xu, Jingchen Li, Yue Yu, Yang Li, Jinglei Yang
Positive feedback has been collected from pilot tests in several labs.
no code implementations • ACL 2022 • Yuhao Zhang, Hongji Zhu, Yongliang Wang, Nan Xu, Xiaobo Li, Binqiang Zhao
Learning high-quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks.
no code implementations • 5 Mar 2023 • Nan Xu, Yongming Liu
CAMEL utilizes a topology metric defined on the Riemannian manifold, and a unique Riemannian metric for both distance and curvature to enhance its expressibility.
no code implementations • 22 May 2023 • Nan Xu, Hongming Zhang, Jianshu Chen
Existing event-centric NLP models often only apply to the pre-defined ontology, which significantly restricts their generalization capabilities.
no code implementations • 16 Nov 2023 • Nan Xu, Fei Wang, Ben Zhou, Bang Zheng Li, Chaowei Xiao, Muhao Chen
While large language models (LLMs) have demonstrated increasing power, they have also given rise to a wide range of harmful behaviors.
no code implementations • 14 Dec 2023 • Linzhuang Sun, Nan Xu, Jingxuan Wei, Bihui Yu, Liping Bu, Yin Luo
Having the ability to empathize is crucial for accurately representing human behavior during conversations.
no code implementations • 22 Dec 2023 • Yin Luo, Qingchao Kong, Nan Xu, Jia Cao, Bao Hao, Baoyu Qu, Bo Chen, Chao Zhu, Chenyang Zhao, Donglei Zhang, Fan Feng, Feifei Zhao, Hailong Sun, Hanxuan Yang, Haojun Pan, Hongyu Liu, Jianbin Guo, Jiangtao Du, Jingyi Wang, Junfeng Li, Lei Sun, Liduo Liu, Lifeng Dong, Lili Liu, Lin Wang, Liwen Zhang, Minzheng Wang, Pin Wang, Ping Yu, Qingxiao Li, Rui Yan, Rui Zou, Ruiqun Li, Taiwen Huang, Xiaodong Wang, Xiaofei Wu, Xin Peng, Xina Zhang, Xing Fang, Xinglin Xiao, Yanni Hao, Yao Dong, Yigang Wang, Ying Liu, Yongyu Jiang, Yungan Wang, Yuqi Wang, Zhangsheng Wang, Zhaoxin Yu, Zhen Luo, Wenji Mao, Lei Wang, Dajun Zeng
As one of the latest advancements in natural language processing, large language models (LLMs) have achieved human-level language understanding and generation abilities in many real-world tasks, and have even been regarded as a potential path to artificial general intelligence.
no code implementations • 24 Mar 2024 • Qin Liu, Fei Wang, Nan Xu, Tianyi Yan, Tao Meng, Muhao Chen
In this paper, we propose monotonic paraphrasing (MonoPara), an end-to-end decoding strategy that paraphrases given prompts or instructions into lower-perplexity counterparts. It ensembles a paraphrase LM that rewrites the prompt (or instruction) with a target LM (i.e., the prompt or instruction executor) that constrains the generation toward lower perplexity.
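The ensemble decoding idea can be sketched with a toy step that mixes the two models' distributions. Everything here is an illustrative assumption, not the MonoPara code: the tiny vocabulary, the hand-picked logits, and the mixing weight `alpha`.

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "quickly", "<eos>"]

def ensemble_step(paraphrase_logits, target_logits, alpha=0.5):
    """Pick the next token from a weighted mix of the two LMs' log-probs."""
    def log_softmax(x):
        x = x - x.max()
        return x - np.log(np.exp(x).sum())
    combined = (alpha * log_softmax(paraphrase_logits)
                + (1 - alpha) * log_softmax(target_logits))
    return VOCAB[int(np.argmax(combined))]

p = np.array([2.0, 0.5, 0.1, 3.0, 0.0])   # paraphrase LM favors "quickly"
t = np.array([3.0, 0.2, 0.1, -2.0, 0.0])  # target LM assigns it high perplexity
print(ensemble_step(p, t))                # greedy pick under the combined score
```

Because the target LM's log-probabilities enter the combined score, tokens that would raise the rewritten prompt's perplexity are suppressed at every decoding step, steering the paraphrase toward text the target model finds easy to process.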
no code implementations • 2 Apr 2024 • Jingxuan Wei, Nan Xu, Guiyong Chang, Yin Luo, Bihui Yu, Ruifeng Guo
In the fields of computer vision and natural language processing, multimodal chart question-answering, especially involving color, structure, and textless charts, poses significant challenges.