1 code implementation • ACL 2022 • Xin Mao, Meirong Ma, Hao Yuan, Jianchao Zhu, ZongYu Wang, Rui Xie, Wei Wu, Man Lan
Entity alignment (EA) aims to discover the equivalent entity pairs between knowledge graphs (KGs), which is a crucial step for integrating multi-source KGs. For a long time, most researchers have regarded EA as a pure graph representation learning task and focused on improving graph encoders while paying little attention to the decoding process. In this paper, we propose an effective and efficient EA Decoding Algorithm via Third-order Tensor Isomorphism (DATTI). Specifically, we derive two sets of isomorphism equations: (1) Adjacency tensor isomorphism equations and (2) Gramian tensor isomorphism equations. By combining these equations, DATTI can effectively utilize the adjacency and inner correlation isomorphisms of KGs to enhance the decoding process of EA. Extensive experiments on public datasets indicate that our decoding algorithm can deliver significant performance improvements even on the most advanced EA methods, while the extra required time is less than 3 seconds.
1 code implementation • ACL 2022 • Yutao Mou, Keqing He, Yanan Wu, Zhiyuan Zeng, Hong Xu, Huixing Jiang, Wei Wu, Weiran Xu
Discovering Out-of-Domain (OOD) intents is essential for developing new skills in a task-oriented dialogue system.
1 code implementation • ACL 2022 • Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang
CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, which ensures the complexity and quality of the questions.
no code implementations • COLING 2022 • Rui Zheng, Rong Bao, Qin Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
To reduce the potential side effects of using defense modules, we further propose a novel forgetting restricted adversarial training, which filters out bad adversarial examples that impair the performance of original ones.
1 code implementation • Findings (EMNLP) 2021 • Chenxu Lv, Hengtong Lu, Shuyu Lei, Huixing Jiang, Wei Wu, Caixia Yuan, Xiaojie Wang
A reliable clustering algorithm for task-oriented dialogues can help developer analysis and define dialogue tasks efficiently.
1 code implementation • EMNLP 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu
The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).
1 code implementation • COLING 2022 • Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map label spaces of different tasks into the same vocabulary space.
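The entry above describes mapping the label spaces of different tasks into one shared vocabulary so that both token- and sentence-level classification become language-model prediction. Below is a minimal verbalizer sketch of that idea; the label-to-token ids and the logits tensor are made-up stand-ins, not the paper's actual mapping or model.

```python
import torch

# Hypothetical verbalizer: labels from different tasks (sentence-level sentiment,
# token-level entity types, ...) are mapped to tokens in one shared LM vocabulary.
# The token ids below are invented for illustration only.
label_words = {"positive": 2204, "negative": 3893, "PER": 5650, "LOC": 4761}

def predict_label(lm_logits_at_label_slot):
    """Pick the label whose verbalizer token receives the highest LM score."""
    names = list(label_words)
    token_ids = torch.tensor([label_words[n] for n in names])
    scores = lm_logits_at_label_slot[token_ids]
    return names[int(scores.argmax())]

vocab_size = 30522                      # e.g., a BERT-sized vocabulary
logits = torch.randn(vocab_size)        # stand-in for the LM head output at the label slot
print(predict_label(logits))
```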
1 code implementation • NAACL 2022 • Yanan Wu, Keqing He, Yuanmeng Yan, QiXiang Gao, Zhiyuan Zeng, Fujia Zheng, Lulu Zhao, Huixing Jiang, Wei Wu, Weiran Xu
Detecting Out-of-Domain (OOD) or unknown intents from user queries is essential in a task-oriented dialog system.
no code implementations • 19 Mar 2025 • Jia-Nan Li, Jian Guan, Songhao Wu, Wei Wu, Rui Yan
Large language models (LLMs) have traditionally been aligned through one-size-fits-all approaches that assume uniform human preferences, fundamentally overlooking the diversity in user values and needs.
no code implementations • 15 Mar 2025 • Xin Jin, Haisheng Su, Kai Liu, Cong Ma, Wei Wu, Fei Hui, Junchi Yan
Inspired by the impressive performance achieved by State Space Models (SSMs) in 2D vision tasks, in this paper we propose a novel Unified Mamba (UniMamba), which seamlessly integrates the merits of 3D convolution and SSMs in a concise multi-head manner, aiming to perform "local and global" spatial context aggregation efficiently and simultaneously.
no code implementations • 8 Mar 2025 • Qizhe Wu, Huawen Liang, Yuchen Gui, Zhichen Zeng, Zerong He, Linfeng Tao, Xiaotian Wang, Letian Zhao, Zhaoxi Zeng, Wei Yuan, Wei Wu, Xi Jin
Based on this notation and its transformations, we propose four optimization techniques that improve timing, area, and power consumption.
no code implementations • 7 Mar 2025 • Ling Team, Binwei Zeng, Chao Huang, Chao Zhang, Changxin Tian, Cong Chen, dingnan jin, Feng Yu, Feng Zhu, Feng Yuan, Fakang Wang, Gangshan Wang, Guangyao Zhai, HaiTao Zhang, Huizhong Li, Jun Zhou, Jia Liu, Junpeng Fang, Junjie Ou, Jun Hu, Ji Luo, Ji Zhang, Jian Liu, Jian Sha, Jianxue Qian, Jiewei Wu, Junping Zhao, Jianguo Li, Jubao Feng, Jingchao Di, Junming Xu, Jinghua Yao, Kuan Xu, Kewei Du, Longfei Li, Lei Liang, Lu Yu, Li Tang, Lin Ju, Peng Xu, Qing Cui, Song Liu, Shicheng Li, Shun Song, Song Yan, Tengwei Cai, Tianyi Chen, Ting Guo, Ting Huang, Tao Feng, Tao Wu, Wei Wu, Xiaolu Zhang, Xueming Yang, Xin Zhao, Xiaobo Hu, Xin Lin, Yao Zhao, Yilong Wang, Yongzhen Guo, Yuanyuan Wang, Yue Yang, Yang Cao, Yuhao Fu, Yi Xiong, Yanzhe Li, Zhe Li, Zhiqiang Zhang, Ziqi Liu, ZhaoXin Huan, Zujie Wen, Zhenhang Sun, Zhuoxuan Du, Zhengyu He
Ultimately, our experimental findings demonstrate that a 300B MoE LLM can be effectively trained on lower-performance devices while achieving comparable performance to models of a similar scale, including dense and MoE models.
1 code implementation • 4 Mar 2025 • Xueliang Zhao, Wei Wu, Jian Guan, Lingpeng Kong
The ability of large language models to solve complex mathematical problems has progressed significantly, particularly for tasks requiring advanced reasoning.
1 code implementation • 17 Feb 2025 • Jiayang Zhang, Xianyuan Liu, Wei Wu, Sina Tabakhi, Wenrui Fan, Shuo Zhou, Kang Lan Tee, Tuck Seng Wong, Haiping Lu
Virus-like particles (VLPs) are valuable for vaccine development due to their immune-triggering properties.
no code implementations • 13 Feb 2025 • Guhao Feng, Yihan Geng, Jian Guan, Wei Wu, LiWei Wang, Di He
In this paper, we present a rigorous theoretical analysis of a widely used type of diffusion language model, the Masked Diffusion Model (MDM), and find that its effectiveness heavily depends on the target evaluation metric.
no code implementations • 11 Feb 2025 • Sahand Sabour, June M. Liu, Siyang Liu, Chris Z. Yao, Shiyao Cui, Xuanming Zhang, Wen Zhang, Yaru Cao, Advait Bhat, Jian Guan, Wei Wu, Rada Mihalcea, Hongning Wang, Tim Althoff, Tatia M. C. Lee, Minlie Huang
Through a randomized controlled trial with 233 participants, we examined human susceptibility to such manipulation in financial (e.g., purchases) and emotional (e.g., conflict resolution) decision-making contexts.
1 code implementation • 11 Feb 2025 • Wei Wu, Qiuyi Li, Mingyang Li, Kun fu, Fuli Feng, Jieping Ye, Hui Xiong, Zheng Wang
Recent developments in genomic language models have underscored the potential of LLMs in deciphering DNA sequences.
no code implementations • 6 Feb 2025 • Wei Wu, Can Liao, Zizhen Deng, Zhengrui Guo, Jinzhuo Wang
It uses contrastive learning, with segments from the same neuron as positive pairs and those from different neurons as negative pairs.
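To make the pairing scheme above concrete, here is a minimal sketch of an InfoNCE-style contrastive objective where segments from the same neuron are positives and segments from different neurons are negatives; `segment_embeddings` and `neuron_ids` are hypothetical placeholders for encoded activity segments and their source neurons, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def neuron_contrastive_loss(segment_embeddings, neuron_ids, temperature=0.1):
    """InfoNCE-style loss: segments from the same neuron attract,
    segments from different neurons repel (a sketch, not the paper's code)."""
    z = F.normalize(segment_embeddings, dim=-1)           # (N, d) unit vectors
    sim = z @ z.t() / temperature                         # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    same_neuron = neuron_ids.unsqueeze(0) == neuron_ids.unsqueeze(1)
    pos_mask = same_neuron & ~eye                         # positives: same neuron, not itself
    sim = sim.masked_fill(eye, float("-inf"))             # never contrast a segment with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # average log-probability of the positive pairs for each anchor that has one
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss[pos_mask.any(1)].mean()

# toy usage: 8 segments from 4 neurons, 16-dim embeddings
emb = torch.randn(8, 16, requires_grad=True)
ids = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(neuron_contrastive_loss(emb, ids))
```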
no code implementations • 19 Jan 2025 • CongCong Li, Jin Wang, Xiaomeng Wang, Xingchen Zhou, Wei Wu, Yuzhi Zhang, Tongyi Cao
3D car modeling is crucial for applications in autonomous driving systems, virtual and augmented reality, and gaming.
no code implementations • 4 Jan 2025 • Wei Wu, Zizhen Deng, Chi Zhang, Can Liao, Jinzhuo Wang
Addressing the unavoidable bias inherent in supervised aging clocks, we introduce Sundial, a novel framework that models molecular dynamics through a diffusion field, capturing both the population-level aging process and the individual-level relative aging order.
no code implementations • 28 Dec 2024 • Hanjing Zhou, Mingze Yin, Wei Wu, Mingyang Li, Kun fu, Jintai Chen, Jian Wu, Zheng Wang
However, these works were still unable to replicate the extraordinary success of language-supervised visual foundation models due to the ineffective usage of aligned protein-text paired data and the lack of an effective function-informed pre-training paradigm.
1 code implementation • 11 Dec 2024 • Fan Lu, Wei Wu, Kecheng Zheng, Shuailei Ma, Biao Gong, Jiawei Liu, Wei Zhai, Yang Cao, Yujun Shen, Zheng-Jun Zha
Generating detailed captions comprehending text-rich visual content in images has received growing attention for Large Vision-Language Models (LVLMs).
no code implementations • 11 Dec 2024 • Zhuoran Yang, Xi Guo, Chenjing Ding, Chiyu Wang, Wei Wu
Autonomous driving requires robust perception models trained on high-quality, large-scale multi-view driving videos for tasks like 3D object detection, segmentation and trajectory prediction.
no code implementations • 10 Dec 2024 • Shuailei Ma, Kecheng Zheng, Ying WEI, Wei Wu, Fan Lu, Yifei Zhang, Chen-Wei Xie, Biao Gong, Jiapeng Zhu, Yujun Shen
Although text-to-image (T2I) models have recently thrived as visual generative priors, their reliance on high-quality text-image pairs makes scaling up expensive.
no code implementations • 6 Dec 2024 • Qingyuan Li, Bo Zhang, Liang Ye, Yifan Zhang, Wei Wu, Yerui Sun, Lin Ma, Yuchen Xie
The ever-increasing sizes of large language models necessitate distributed solutions for fast inference that exploit multi-dimensional parallelism, where computational loads are split across various accelerators such as GPU clusters.
no code implementations • 2 Dec 2024 • Xi Guo, Chenjing Ding, Haoxuan Dou, Xin Zhang, Weixuan Tang, Wei Wu
Comprehensive experiments in multiple datasets validate InfinityDrive's ability to generate complex and varied scenarios, highlighting its potential as a next-generation driving world model built for the evolving demands of autonomous driving.
no code implementations • 5 Nov 2024 • Wei Wu, Zhuoshi Pan, Chao Wang, Liyi Chen, Yunchu Bai, Kun fu, Zheng Wang, Hui Xiong
With the development of large language models (LLMs), the ability to handle longer contexts has become a key capability for Web applications such as cross-document understanding and LLM-powered search systems.
no code implementations • 4 Nov 2024 • Lingyi Wang, Wei Wu, Fuhui Zhou, Zhijin Qin, Qihui Wu
In this paper, intelligent reflective surface (IRS)-enhanced secure semantic communication (IRS-SSC) is proposed to guarantee the physical layer security from a task-oriented semantic perspective.
1 code implementation • 4 Nov 2024 • Yiheng Zhu, Jialu Wu, Qiuyi Li, Jiahuan Yan, Mingze Yin, Wei Wu, Mingyang Li, Jieping Ye, Zheng Wang, Jian Wu
To fill these gaps, we propose Bridge-IF, a generative diffusion bridge model for inverse folding, which is designed to learn the probabilistic dependency between the distributions of backbone structures and protein sequences.
no code implementations • 30 Oct 2024 • Wei Wu, Liang Tang, Zhongjie Zhao, Chung-Piaw Teo
Stacking, a potent ensemble learning method, leverages a meta-model to harness the strengths of multiple base models, thereby enhancing prediction accuracy.
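As a reminder of how stacking wires base models to a meta-model, here is a small sketch using scikit-learn's stacked ensemble on a toy dataset; the specific base learners and meta-model are arbitrary choices for illustration, not those studied in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Base models whose out-of-fold predictions feed a logistic-regression meta-model.
base_models = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)

X, y = load_breast_cancer(return_X_y=True)
print("stacked accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```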
no code implementations • 15 Oct 2024 • Songyuan Liu, Ziyang Zhang, Runze Yan, Wei Wu, Carl Yang, Jiaying Lu
Large language models (LLMs) have become integral tools for users from various backgrounds.
no code implementations • 13 Oct 2024 • Xinxi Chen, Li Wang, Wei Wu, Qi Tang, Yiyao Liu
In this paper, we propose Honest AI: a novel strategy to fine-tune "small" language models to say "I don't know" to reduce hallucination, along with several alternative RAG approaches.
no code implementations • 7 Oct 2024 • Wei Wu, Kecheng Zheng, Shuailei Ma, Fan Lu, Yuxin Guo, Yifei Zhang, Wei Chen, Qingpei Guo, Yujun Shen, Zheng-Jun Zha
Then, after incorporating corner tokens to aggregate diverse textual information, we manage to help the model catch up to its original level of short text understanding yet greatly enhance its capability of long text understanding.
no code implementations • 4 Oct 2024 • Wei Wu, Chao Wang, Liyi Chen, Mingze Yin, Yiheng Zhu, Kun fu, Jieping Ye, Hui Xiong, Zheng Wang
Recent development of protein language models (pLMs) with supervised fine tuning provides a promising solution to this problem.
1 code implementation • 2 Oct 2024 • Zhengrui Guo, Fangxu Zhou, Wei Wu, Qichen Sun, Lishuang Feng, Jinzhuo Wang, Hao Chen
To this end, we propose BLEND, the behavior-guided neural population dynamics modeling framework via privileged knowledge distillation.
no code implementations • 2 Oct 2024 • Xiang Hu, Zhihao Teng, Jun Zhao, Wei Wu, Kewei Tu
In this paper, we propose a novel attention mechanism based on dynamic context, Grouped Cross Attention (GCA), which can generalize to 1000 times the pre-training context length while maintaining the ability to access distant information with a constant attention window size.
1 code implementation • 29 Sep 2024 • Jia-Nan Li, Jian Guan, Wei Wu, Zhengtao Yu, Rui Yan
Tables are ubiquitous across various domains for concisely representing structured information.
no code implementations • 26 Sep 2024 • Zehao Zhu, Wei Sun, Jun Jia, Wei Wu, Sibin Deng, Kai Li, Ying Chen, Xiongkuo Min, Jia Wang, Guangtao Zhai
For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE, which consists of $42$ source videos collected from real live broadcasts and $1,155$ corresponding distorted ones degraded by a variety of streaming distortions, including conventional streaming distortions such as compression and stalling, as well as live streaming-specific distortions like frame skipping and variable frame rate.
2 code implementations • 19 Sep 2024 • Jiaxin Wen, Jian Guan, Hongning Wang, Wei Wu, Minlie Huang
To train CodePlan, we construct a large-scale dataset of 2M examples that integrate code-form plans with standard prompt-response pairs from existing corpora.
1 code implementation • 15 Sep 2024 • Haisheng Su, Wei Wu, Junchi Yan
Specifically, DiFSD mainly consists of sparse perception, hierarchical interaction and iterative motion planner.
Ranked #10 on Bench2Drive.
no code implementations • 10 Sep 2024 • Yining Yao, Xi Guo, Chenjing Ding, Wei Wu
High-quality driving video generation is crucial for providing training data for autonomous driving models.
no code implementations • 9 Sep 2024 • Chenjing Ding, Chiyu Wang, Boshi Liu, Xi Guo, Weixuan Tang, Wei Wu
Utilizing inference results from a segmentation model, our approach constructs a temporospatially consistent semantic codebook, addressing issues of codebook collapse and imbalanced token semantics.
no code implementations • 9 Sep 2024 • Wei Wu, Xi Guo, Weixuan Tang, Tingxuan Huang, Chiyu Wang, Dongyue Chen, Chenjing Ding
However, existing approaches often struggle with multi-view video generation due to the challenges of integrating 3D information while maintaining spatial-temporal consistency and effectively learning from a unified model.
no code implementations • 7 Sep 2024 • Thomas Yu CHow Tam, Litian Liang, Ke Chen, Haohan Wang, Wei Wu
To bridge this gap, in this study we develop a quantitative disease-focusing strategy that first enhances the interpretability of DL models using saliency maps and brain segmentations; we then propose a disease-focus (DF) score that quantifies how much a DL model focuses on brain areas relevant to AD pathology, based on clinically known MRI-based pathological regions of AD.
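The exact formula of the DF score is not given in this excerpt, so the sketch below uses one plausible, clearly assumed definition: the fraction of total saliency mass that falls inside a binary mask of AD-relevant regions.

```python
import numpy as np

def disease_focus_score(saliency, pathology_mask):
    """Fraction of total saliency mass inside AD-relevant regions.
    This is an illustrative definition, not necessarily the paper's exact formula."""
    saliency = np.abs(saliency).astype(float)
    inside = saliency[pathology_mask.astype(bool)].sum()
    total = saliency.sum() + 1e-12          # avoid division by zero
    return inside / total

# toy 3D volumes: a random saliency map and a mask marking hippocampus-like voxels
rng = np.random.default_rng(0)
sal = rng.random((32, 32, 32))
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:16, 10:16, 10:16] = True
print(f"DF score: {disease_focus_score(sal, mask):.3f}")
```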
1 code implementation • 28 Aug 2024 • Haisheng Su, Feixiang Song, Cong Ma, Wei Wu, Junchi Yan
Reliable embodied perception from an egocentric perspective is challenging yet essential for autonomous navigation technology of intelligent mobile agents.
1 code implementation • 11 Jul 2024 • Yihan Zhang, Xuanshuo Zhang, Wei Wu, Haohan Wang
In order to leverage the fact that brain volume shrinkage happens in AD patients during disease progression, we define a new evaluation metric, the brain volume change score (VCS), by computing the average Pearson correlation between the brain volume changes and the saliency values of a model in different brain regions for each patient.
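Following the description above, a minimal sketch of the VCS computation is given below: per patient, correlate regional brain volume changes with regional saliency values, then average across patients. The toy arrays are made up for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

def volume_change_score(volume_changes, saliency_values):
    """VCS as described: per-patient Pearson correlation between regional brain
    volume changes and regional saliency, averaged over patients (a sketch)."""
    scores = []
    for vc, sal in zip(volume_changes, saliency_values):   # one row per patient
        r, _ = pearsonr(vc, sal)                           # correlate across regions
        scores.append(r)
    return float(np.mean(scores))

# toy data: 5 patients, 20 brain regions each (values are made up)
rng = np.random.default_rng(0)
vol = rng.normal(size=(5, 20))
sal = 0.6 * vol + rng.normal(scale=0.5, size=(5, 20))      # partially correlated saliency
print(f"VCS: {volume_change_score(vol, sal):.3f}")
```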
no code implementations • 9 Jul 2024 • Zhuocheng Gong, Ang Lv, Jian Guan, Junxi Yan, Wei Wu, Huishuai Zhang, Minlie Huang, Dongyan Zhao, Rui Yan
More interestingly, with a fixed parameter budget, MoM-large enables an over 38% increase in depth for computation graphs compared to GPT-2-large, resulting in absolute gains of 1.4 on GLUE and 1 on XSUM.
no code implementations • 4 Jul 2024 • Yuyan Chen, Zhihao Wen, Ge Fan, Zhengyu Chen, Wei Wu, Dayiheng Liu, Zhixu Li, Bang Liu, Yanghua Xiao
Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community.
2 code implementations • 28 Jun 2024 • Chuanqi Cheng, Jian Guan, Wei Wu, Rui Yan
To overcome the challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools for resolving sub-questions.
no code implementations • 21 Jun 2024 • Qingyang Zhu, Xiang Hu, Pengyu Ji, Wei Wu, Kewei Tu
Specifically, the deep model jointly encodes internal structures and representations of words with a mechanism named $\textit{MorphOverriding}$ to ensure the indecomposability of morphemes.
1 code implementation • 24 May 2024 • Wei Wu, Xiaoxin Feng, Ziyan Gao, Yuheng Kan
We have released all the code to promote the exploration of models for motion generation in the autonomous driving field.
1 code implementation • 18 Apr 2024 • Wei Wu, Qingnan Fan, Shuai Qin, Hong Gu, Ruoyu Zhao, Antoni B. Chan
Precise image editing with text-to-image models has attracted increasing interest due to their remarkable generative capabilities and user-friendly nature.
1 code implementation • 28 Mar 2024 • Huanpeng Chu, Wei Wu, Chengjie Zang, Kun Yuan
Diffusion models have revolutionized image synthesis, setting new benchmarks in quality and creativity.
no code implementations • 27 Mar 2024 • Ruoyu Zhao, Qingnan Fan, Fei Kou, Shuai Qin, Hong Gu, Wei Wu, Pengcheng Xu, Mingrui Zhu, Nannan Wang, Xinbo Gao
Two key techniques are introduced into InstructBrush, Attention-based Instruction Optimization and Transformation-oriented Instruction Initialization, to address the limitations of the previous method in terms of inversion effects and instruction generalization.
1 code implementation • 26 Mar 2024 • Wei Wu, Chao Wang, Dazhong Shen, Chuan Qin, Liyi Chen, Hui Xiong
Collaborative filtering methods based on graph neural networks (GNNs) have witnessed significant success in recommender systems (RS), capitalizing on their ability to capture collaborative signals within intricate user-item relationships via message-passing mechanisms.
1 code implementation • 25 Mar 2024 • Kecheng Zheng, Yifei Zhang, Wei Wu, Fan Lu, Shuailei Ma, Xin Jin, Wei Chen, Yujun Shen
Motivated by this, we propose to dynamically sample sub-captions from the text label to construct multiple positive pairs, and introduce a grouping loss to match the embeddings of each sub-caption with its corresponding local image patches in a self-supervised manner.
2 code implementations • 13 Mar 2024 • Xiang Hu, Pengyu Ji, Qingyang Zhu, Wei Wu, Kewei Tu
A syntactic language model (SLM) incrementally generates a sentence with its syntactic tree in a left-to-right manner.
no code implementations • 6 Mar 2024 • Yuling Wang, Xiao Wang, Xiangzhou Huang, Yanhua Yu, Haoyang Li, Mengdi Zhang, Zirui Guo, Wei Wu
The other is that different behaviors have different intent distributions, so another challenge is how to establish their relations for a more explainable recommender system.
no code implementations • CVPR 2024 • Cong Ma, Lei Qiao, Chengkai Zhu, Kai Liu, Zelong Kong, Qing Li, Xueqi Zhou, Yuheng Kan, Wei Wu
Based on HoloVIC, we formulated four tasks to facilitate the development of related research.
no code implementations • 5 Mar 2024 • Chuanqi Cheng, Quan Tu, Shuo Shang, Cunli Mao, Zhengtao Yu, Wei Wu, Rui Yan
Personalized dialogue systems have gained significant attention in recent years for their ability to generate responses in alignment with different personas.
1 code implementation • 22 Feb 2024 • Yuzhe Yang, Yujia Liu, Xin Liu, Avanti Gulhane, Domenico Mastrodicasa, Wei Wu, Edward J Wang, Dushyant W Sahani, Shwetak Patel
Such demographic biases are present across a wide range of pathologies and demographic attributes.
no code implementations • 21 Feb 2024 • Haobo Liu, Zhengyang Qian, Wei Wu, Hongwei Ren, Zhiwei Liu, Leibin Ni
Moreover, a novel FP-DAC is also implemented, which reconstructs FP digital codes into analog values to perform analog computation.
no code implementations • 9 Feb 2024 • Aven-Le Zhou, Yu-Ao Wang, Wei Wu, Kang Zhang
This paper introduces a prompting-free generative approach that empowers users to automatically generate personalized painterly content that incorporates their aesthetic preferences in a customized artistic style.
1 code implementation • 2 Feb 2024 • Jian Guan, Wei Wu, Zujie Wen, Peng Xu, Hongning Wang, Minlie Huang
We present AMOR, an agent framework based on open-source LLMs, which reasons with external knowledge bases and adapts to specific domains through human supervision of the reasoning process.
1 code implementation • 21 Dec 2023 • Qinying Liu, Wei Wu, Kecheng Zheng, Zhan Tong, Jiawei Liu, Yu Liu, Wei Chen, Zilei Wang, Yujun Shen
The crux of learning vision-language models is to extract semantically aligned information from visual and linguistic data.
no code implementations • 2 Dec 2023 • Lingyi Wang, Wei Wu, Fuhui Zhou, Zhaohui Yang, Zhijin Qin
In order to investigate the performance of semantic communication networks, the quality of service for semantic communication (SC-QoS), including the semantic quantization efficiency (SQE) and transmission latency, is proposed for the first time.
1 code implementation • 1 Nov 2023 • Wei Wu, Hao Chang, Zhu Li
One is a difference of Gaussian (DoG) pyramid recovery network (DPRNet) for SIFT detection, and the other is a gradients of Gaussian images recovery network (GGIRNet) for SIFT description.
no code implementations • 9 Oct 2023 • Zhihua Wen, Zhiliang Tian, Wei Wu, Yuxin Yang, Yanqi Shi, Zhen Huang, Dongsheng Li
Finally, we select the most fitting chains of evidence from the evidence forest and integrate them into the generated story, thereby enhancing the narrative's complexity and credibility.
1 code implementation • 28 Sep 2023 • Xiang Hu, Qingyang Zhu, Kewei Tu, Wei Wu
More interestingly, the hierarchical structures induced by ReCAT exhibit strong consistency with human-annotated syntactic trees, indicating good interpretability brought by the CIO layers.
Ranked #4 on Semantic Role Labeling (OntoNotes).
no code implementations • ICCV 2023 • Qiangqiang Wu, Tianyu Yang, Wei Wu, Antoni Chan
The current popular methods for video object segmentation (VOS) implement feature matching through several hand-crafted modules that separately perform feature extraction and matching.
no code implementations • ICCV 2023 • Kecheng Zheng, Wei Wu, Ruili Feng, Kai Zhu, Jiawei Liu, Deli Zhao, Zheng-Jun Zha, Wei Chen, Yujun Shen
To bring the useful knowledge back into light, we first identify a set of parameters that are important to a given downstream task, then attach a binary mask to each parameter, and finally optimize these masks on the downstream data with the parameters frozen.
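The entry above describes attaching a binary mask to each important parameter and optimizing only the masks while the parameters stay frozen. Below is a minimal sketch of that mask-tuning idea for one linear layer, using a straight-through estimator; how important parameters are scored and thresholded is an assumption here, not taken from the paper.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Wrap a frozen linear layer with a learnable binary mask on its weights
    (a sketch of the mask-tuning idea; the scoring scheme is an assumption)."""
    def __init__(self, linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.data.clone(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.data.clone(), requires_grad=False)
        # real-valued scores; the binary mask is their sign, gradients pass straight through
        self.scores = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        hard = (self.scores >= 0).float()                  # binary mask in the forward pass
        mask = hard + self.scores - self.scores.detach()   # straight-through estimator
        return nn.functional.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(nn.Linear(16, 4))
out = layer(torch.randn(2, 16))
out.sum().backward()
print(layer.scores.grad.shape)   # only the mask scores receive gradients
```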
no code implementations • 26 Jul 2023 • Chao Zhang, Xinyu Chen, Wensheng Li, Lixue Liu, Wei Wu, DaCheng Tao
In this paper, we measure the linear separability of hidden layer outputs to study the characteristics of deep neural networks.
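One common way to quantify linear separability of a layer's outputs is to fit a linear probe and report its cross-validated accuracy; the sketch below uses that proxy on synthetic features and is not necessarily the exact measure used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def linear_separability(hidden_outputs, labels):
    """Probe a layer's outputs with a linear classifier; cross-validated accuracy
    serves as a simple linear-separability measure (an illustrative proxy)."""
    probe = LogisticRegression(max_iter=2000)
    return cross_val_score(probe, hidden_outputs, labels, cv=5).mean()

# toy example: 200 samples with 64-dimensional hidden features and binary labels
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
features = rng.normal(size=(200, 64)) + labels[:, None] * 1.5   # class-shifted features
print(f"linear separability: {linear_separability(features, labels):.3f}")
```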
no code implementations • 19 Jul 2023 • Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, Ziqi Ding, Bill Guo, Sireesh Gururaja, Tzu-Sheng Kuo, Jenny T. Liang, Ryan Liu, Ihita Mandal, Jeremiah Milbauer, Xiaolin Ni, Namrata Padmanabhan, Subhashini Ramkumar, Alexis Sudjianto, Jordan Taylor, Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, Chenyang Yang
We reflect on human and LLMs' different sensitivities to instructions, stress the importance of enabling human-facing safeguards for LLMs, and discuss the potential of training humans and LLMs with complementary skill sets.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
no code implementations • 25 Jun 2023 • YuXing Lee, Wei Wu
Due to the point cloud's irregular and unordered geometric structure, conventional knowledge distillation techniques lose a lot of information when applied directly to point cloud tasks.
1 code implementation • 17 Jun 2023 • Weihao Zeng, Lulu Zhao, Keqing He, Ruotong Geng, Jingang Wang, Wei Wu, Weiran Xu
In this paper, we explore the compositional generalization for multi-attribute controllable dialogue generation where a model can learn from seen attribute values and generalize to unseen combinations.
1 code implementation • 14 Jun 2023 • Yuntao Li, Zhenpeng Su, Yutian Li, Hanchu Zhang, Sirui Wang, Wei Wu, Yan Zhang
Translating natural language queries into SQLs in a seq2seq manner has attracted much attention recently.
Ranked #11 on Text-To-SQL (Spider).
no code implementations • 30 May 2023 • Zhuocheng Gong, Jiahao Liu, Qifan Wang, Yang Yang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Rui Yan
While transformer-based pre-trained language models (PLMs) have dominated a number of NLP applications, these models are heavy to deploy and expensive to use.
no code implementations • 26 May 2023 • Gaole Dai, Wei Wu, Ziyu Wang, Jie Fu, Shanghang Zhang, Tiejun Huang
By incorporating hand-designed optimizers as the second component in our hybrid approach, we are able to retain the benefits of learned optimizers while stabilizing the training process and, more importantly, improving testing performance.
1 code implementation • 26 May 2023 • Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Wei Wu, Yunsen Xian, Dongyan Zhao, Kai Chen, Rui Yan
In this paper, we propose a novel approach, RankCSE, for unsupervised sentence representation learning, which incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
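As a pointer to the ranking-distillation component mentioned above, here is a minimal listwise distillation sketch: the student's in-batch similarity distribution is pushed toward the teacher's via a KL term. The temperatures and toy tensors are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def ranking_distillation_loss(student_sim, teacher_sim, tau_s=0.05, tau_t=0.05):
    """Listwise ranking distillation: make the student's in-batch similarity
    distribution match the teacher's (a sketch of the general idea)."""
    p_teacher = F.softmax(teacher_sim / tau_t, dim=-1)
    log_p_student = F.log_softmax(student_sim / tau_s, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# toy similarities between 4 queries and 4 in-batch candidates
student = torch.randn(4, 4, requires_grad=True)
teacher = torch.randn(4, 4)
loss = ranking_distillation_loss(student, teacher)
loss.backward()
print(loss.item())
```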
no code implementations • 18 May 2023 • Chao Wang, HengShu Zhu, Dazhong Shen, Wei Wu, Hui Xiong
In this way, low-rating items are treated as positive samples for modeling intents and as negative samples for modeling preferences.
1 code implementation • 16 Apr 2023 • Zepeng Huai, Yuji Yang, Mengdi Zhang, Zhongyi Zhang, YiChun Li, Wei Wu
(2) From the CDR perspective, not all inter-domain interests are helpful to infer intra-domain interests.
1 code implementation • CVPR 2023 • ZiCheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min, Ying Chen, Guangtao Zhai
User-generated content (UGC) live videos are often bothered by various distortions during capture procedures and thus exhibit diverse visual qualities.
no code implementations • 20 Mar 2023 • Ying Mo, Hongyin Tang, Jiahao Liu, Qifan Wang, Zenglin Xu, Jingang Wang, Wei Wu, Zhoujun Li
There are three types of NER tasks: flat, nested, and discontinuous entity recognition.
no code implementations • 24 Feb 2023 • Yonghao Liu, Di Liang, Fang Fang, Sirui Wang, Wei Wu, Rui Jiang
For each given question, TMA first extracts the relevant concepts from the KG, and then feeds them into a multiway adaptive module to produce a \emph{temporal-specific} representation of the question.
Ranked #13 on Question Answering (TimeQuestions).
no code implementations • 24 Feb 2023 • Chao Xue, Di Liang, Sirui Wang, Wei Wu, Jing Zhang
To alleviate this problem, we propose a novel Dual Path Modeling Framework to enhance the model's ability to perceive subtle differences in sentence pairs by separately modeling affinity and difference semantics.
1 code implementation • 14 Feb 2023 • Chengcheng Han, Renyu Zhu, Jun Kuang, FengJiao Chen, Xiang Li, Ming Gao, Xuezhi Cao, Wei Wu
We design an improved triplet network to map samples and prototype vectors into a low-dimensional space in which they are easier to classify, and propose an adaptive margin for each entity type.
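The per-type adaptive margin can be pictured as a triplet loss whose margin is looked up (and optionally learned) per entity type; the sketch below shows that lookup on toy embeddings and is not the paper's full triplet network.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet_loss(anchor, positive, negative, type_ids, margins):
    """Triplet loss where each entity type gets its own (learnable) margin,
    as a sketch of the adaptive-margin idea described above."""
    d_pos = F.pairwise_distance(anchor, positive)     # distance to same-type example
    d_neg = F.pairwise_distance(anchor, negative)     # distance to other-type example
    margin = margins[type_ids]                        # per-type margin lookup
    return F.relu(d_pos - d_neg + margin).mean()

# toy batch: 6 samples, 3 entity types, 32-dim embeddings
anchor, pos, neg = (torch.randn(6, 32, requires_grad=True) for _ in range(3))
type_ids = torch.tensor([0, 0, 1, 1, 2, 2])
margins = torch.nn.Parameter(torch.full((3,), 0.5))   # one margin per entity type
loss = adaptive_margin_triplet_loss(anchor, pos, neg, type_ids, margins)
loss.backward()
print(loss.item(), margins.grad)
```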
no code implementations • 7 Feb 2023 • Wentao Shi, Junkang Wu, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Wei Wu, Xiangnan He
Specifically, they suffer from two main limitations: 1) existing Graph Convolutional Network (GCN) methods in hyperbolic space rely on tangent space approximation, which would incur approximation error in representation learning, and 2) due to the lack of inner product operation definition in hyperbolic space, existing methods can only measure the plausibility of facts (links) with hyperbolic distance, which is difficult to capture complex data patterns.
1 code implementation • 8 Jan 2023 • Zhengyi Liu, Wei Wu, Yacheng Tan, Guanghui Zhang
To better excavate multi-modal information, we use count-guided multi-modal fusion and modal-guided count enhancement to achieve impressive performance.
1 code implementation • CVPR 2023 • Chuanfu Shen, Chao Fan, Wei Wu, Rui Wang, George Q. Huang, Shiqi Yu
Video-based gait recognition has achieved impressive results in constrained scenarios.
1 code implementation • 14 Nov 2022 • Yong-Lu Li, Hongwei Fan, Zuoyu Qiu, Yiming Dou, Liang Xu, Hao-Shu Fang, Peiyang Guo, Haisheng Su, Dongliang Wang, Wei Wu, Cewu Lu
In daily HOIs, humans often interact with a variety of objects, e.g., holding and touching dozens of household items while cleaning.
2 code implementations • ACL 2022 • Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.
no code implementations • 27 Oct 2022 • Zilin Yuan, Yinghui Li, Yangning Li, Rui Xie, Wei Wu, Hai-Tao Zheng
We note that the distinctness of domain-specific features varies across domains, so in this paper we propose a curriculum learning strategy based on keyword weight ranking to improve the performance of multi-domain text classification models.
no code implementations • 23 Oct 2022 • Jingheng Ye, Yinghui Li, Shirong Ma, Rui Xie, Wei Wu, Hai-Tao Zheng
Chinese Grammatical Error Correction (CGEC) aims to automatically detect and correct grammatical errors contained in Chinese text.
no code implementations • 22 Oct 2022 • Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Wu, Zhoujun Li
A wide range of NLP tasks benefit from the fine-tuning of pretrained language models (PLMs).
1 code implementation • 19 Oct 2022 • Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu
Specifically, we design a K-nearest neighbor contrastive learning (KNCL) objective for representation learning and introduce a KNN-based scoring function for OOD detection.
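A KNN-based OOD score of the kind mentioned above is often computed as the distance from a query embedding to its k-th nearest in-domain neighbor; the sketch below uses that common variant on toy embeddings and is not necessarily the paper's exact scoring function.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_ood_scores(train_features, test_features, k=5):
    """Score each test query by its distance to the k-th nearest in-domain
    neighbor; larger distances suggest OOD (a sketch of KNN-based scoring)."""
    train = train_features / np.linalg.norm(train_features, axis=1, keepdims=True)
    test = test_features / np.linalg.norm(test_features, axis=1, keepdims=True)
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    distances, _ = nn.kneighbors(test)
    return distances[:, -1]                     # distance to the k-th neighbor

# toy embeddings: an in-domain cluster vs. shifted (OOD-like) queries
rng = np.random.default_rng(0)
ind = rng.normal(size=(200, 32))
ood = rng.normal(loc=3.0, size=(20, 32))
scores = knn_ood_scores(ind, np.vstack([ind[:20], ood]))
print("mean IND score:", scores[:20].mean(), "mean OOD score:", scores[20:].mean())
```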
1 code implementation • 17 Oct 2022 • Weihao Zeng, Keqing He, Zechen Wang, Dayuan Fu, Guanting Dong, Ruotong Geng, Pei Wang, Jingang Wang, Chaobo Sun, Wei Wu, Weiran Xu
Recent advances in neural approaches greatly improve task-oriented dialogue (TOD) systems which assist users to accomplish their goals.
1 code implementation • 17 Oct 2022 • Yutao Mou, Keqing He, Pei Wang, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu
For OOD clustering stage, we propose a KCC method to form compact clusters by mining true hard negative samples, which bridges the gap between clustering and representation learning.
no code implementations • 16 Oct 2022 • Jian Song, Di Liang, Rumei Li, Yuntao Li, Sirui Wang, Minlong Peng, Wei Wu, Yongxin Yu
Transformer-based pre-trained models like BERT have achieved great progress on Semantic Sentence Matching.
no code implementations • 10 Oct 2022 • Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Wu, Xiaojun Quan, Dawei Song
Prompt tuning learns soft prompts to condition frozen Pre-trained Language Models (PLMs) for performing downstream tasks in a parameter-efficient manner.
no code implementations • COLING 2022 • Sirui Wang, Di Liang, Jian Song, Yuntao Li, Wei Wu
To alleviate this problem, we propose a novel Dual Attention Enhanced BERT (DABERT) to enhance the ability of BERT to capture fine-grained differences in sentence pairs.
no code implementations • 25 Sep 2022 • Rui Wan, Shuangjie Xu, Wei Wu, Xiaoyi Zou, Tongyi Cao
The whole fusion architecture named Dynamic Cross Attention Network (DCAN) exploits multi-level image features and adapts to multiple representations of point clouds, which allows DCA to serve as a plug-in fusion module.
no code implementations • 20 Sep 2022 • Han Hu, Xingwu Zhu, Fuhui Zhou, Wei Wu, Rose Qingyang Hu, Hongbo Zhu
To effectively exploit the benefits enabled by semantic communication, in this paper, we propose a one-to-many semantic communication system.
1 code implementation • COLING 2022 • Yutao Mou, Keqing He, Yanan Wu, Pei Wang, Jingang Wang, Wei Wu, Yi Huang, Junlan Feng, Weiran Xu
Traditional intent classification models are based on a pre-defined intent set and only recognize limited in-domain (IND) intent classes.
1 code implementation • COLING 2022 • Chen Zhang, Lei Ren, Fang Ma, Jingang Wang, Wei Wu, Dawei Song
Thus, a natural question arises: Is structural bias still a necessity in the context of PLMs?
no code implementations • 31 Aug 2022 • Sirui Wang, Kaiwen Wei, Hongzhi Zhang, Yuntao Li, Wei Wu
Inspired by the human learning process, in this paper, we introduce Imitation DEMOnstration Learning (Imitation-Demo) to strengthen demonstration learning via explicitly imitating human review behaviour, which includes: (1) contrastive learning mechanism to concentrate on the similar demonstrations.
no code implementations • 31 Aug 2022 • Keqing He, Jingang Wang, Chaobo Sun, Wei Wu
In this paper, we propose a novel unified knowledge prompt pre-training framework, UFA (\textbf{U}nified Model \textbf{F}or \textbf{A}ll Tasks), for customer service dialogues.
1 code implementation • 30 Aug 2022 • ZiCheng Zhang, Wei Sun, Yucheng Zhu, Xiongkuo Min, Wei Wu, Ying Chen, Guangtao Zhai
To tackle the challenge of point cloud quality assessment (PCQA), many PCQA methods have been proposed to evaluate the visual quality levels of point clouds by assessing the rendered static 2D projections.
no code implementations • COLING 2022 • Borun Chen, Hongyin Tang, Jiahao Bu, Kai Zhang, Jingang Wang, Qifan Wang, Hai-Tao Zheng, Wei Wu, Liqian Yu
However, most current models use Chinese characters as inputs and are not able to encode semantic information contained in Chinese words.
1 code implementation • 16 Aug 2022 • Xiao Liu, Shiyu Zhao, Kai Su, Yukuo Cen, Jiezhong Qiu, Mengdi Zhang, Wei Wu, Yuxiao Dong, Jie Tang
In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies.
no code implementations • 1 Aug 2022 • Huixuan Chi, Hao Xu, Hao Fu, Mengya Liu, Mengdi Zhang, Yuji Yang, Qinfen Hao, Wei Wu
In particular: 1) existing methods do not explicitly encode and capture the evolution of short-term preference as sequential methods do; 2) simply using the last few interactions is not enough for modeling the changing trend.
1 code implementation • 23 Jul 2022 • Dong Yang, Fei Jiang, Wei Wu, Xuefei Fang, Muyong Cao
The Kalman filter has been adopted in acoustic echo cancellation due to its robustness to double-talk, fast convergence, and good steady-state performance.
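For readers unfamiliar with Kalman-filter-based echo cancellation, here is a simplified time-domain sketch in which the echo path is the state and the microphone sample is the observation; the tap count and noise variances are arbitrary assumptions, and real systems typically work in the frequency domain.

```python
import numpy as np

def kalman_adaptive_filter(far_end, mic, taps=64, q=1e-4, r=1e-2):
    """Time-domain Kalman-filter echo canceller (a simplified sketch): the echo
    path is the state, the microphone sample is the observation."""
    w = np.zeros(taps)                 # estimated echo-path impulse response
    P = np.eye(taps)                   # state covariance
    out = np.zeros_like(mic)
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]          # most recent far-end samples
        P = P + q * np.eye(taps)               # prediction step (random-walk state)
        e = mic[n] - x @ w                     # innovation = echo-cancelled output
        s = x @ P @ x + r                      # innovation variance
        k = (P @ x) / s                        # Kalman gain
        w = w + k * e                          # update echo-path estimate
        P = P - np.outer(k, x) @ P             # update covariance
        out[n] = e
    return out, w

# toy simulation: the echo is a delayed, attenuated copy of the far-end signal
rng = np.random.default_rng(0)
far = rng.normal(size=4000)
echo = 0.5 * np.concatenate([np.zeros(10), far[:-10]])
mic = echo + 0.01 * rng.normal(size=4000)      # near-end noise, no double-talk
out, w = kalman_adaptive_filter(far, mic)
print("residual echo power:", np.mean(out[2000:] ** 2))
```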
no code implementations • 16 Jul 2022 • Wei Wu, Junlin He, Yu Qiao, Guoheng Fu, Li Liu, Jin Yu
The in-memory approximate nearest neighbor search (ANNS) algorithms have achieved great success for fast high-recall query processing, but are extremely inefficient when handling hybrid queries with unstructured (i.e., feature vectors) and structured (i.e., related attributes) constraints.
no code implementations • 8 Jun 2022 • ZiCheng Zhang, Wei Sun, Wei Wu, Ying Chen, Xiongkuo Min, Guangtao Zhai
Nowadays, mainstream full-reference (FR) metrics are effective at predicting the quality of compressed images at coarse-grained levels (where the bit-rate differences between compressed images are obvious); however, they may perform poorly for fine-grained compressed images whose bit-rate differences are quite subtle.
1 code implementation • 7 Jun 2022 • Ruotian Ma, Yiding Tan, Xin Zhou, Xuanting Chen, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang
Input distribution shift is one of the vital problems in unsupervised domain adaptation (UDA).
1 code implementation • 29 May 2022 • Chen Zhang, Yang Yang, Qifan Wang, Jiahao Liu, Jingang Wang, Wei Wu, Dawei Song
In particular, motivated by the finding that the performance of the student is positively correlated to the scale-performance tradeoff of the teacher assistant, MiniDisc is designed with a $\lambda$-tradeoff to measure the optimality of the teacher assistant without trial distillation to the student.
no code implementations • 24 May 2022 • Yuling Wang, Hao Xu, Yanhua Yu, Mengdi Zhang, Zhenhao Li, Yuji Yang, Wei Wu
This EMR optimization objective is able to derive an iterative updating rule, which can be formalized as an ensemble message passing (EnMP) layer with multi-relations.
1 code implementation • 21 May 2022 • Zhengyi Liu, Zhili Zhang, Wei Wu
The foreground is just the object, while the foreground minus the background is considered as the boundary.
no code implementations • 18 May 2022 • Kai Zhang, Qi Liu, Zhenya Huang, Mingyue Cheng, Kun Zhang, Mengdi Zhang, Wei Wu, Enhong Chen
Existing studies in this task attach more attention to the sequence modeling of sentences while largely ignoring the rich domain-invariant semantics embedded in graph structures (i.e., the part-of-speech tags and dependency relations).
1 code implementation • 11 May 2022 • Chen Zhang, Lei Ren, Jingang Wang, Wei Wu, Dawei Song
Prompt-tuning has shown appealing performance in few-shot classification by virtue of its capability in effectively exploiting pre-trained knowledge.
1 code implementation • CVPR 2022 • Mengzhe He, Yali Wang, Jiaxi Wu, Yiru Wang, Hanqing Li, Bo Li, Weihao Gan, Wei Wu, Yu Qiao
It can adaptively enhance source detector to perceive objects in a target image, by leveraging target proposal contexts from iterative cross-attention.
no code implementations • 24 Apr 2022 • Wei Wu, Bin Li
Data similarity (or distance) computation is a fundamental research topic which fosters a variety of similarity-based machine learning and data mining applications.
no code implementations • 18 Apr 2022 • Jiduan Liu, Jiahao Liu, Yang Yang, Jingang Wang, Wei Wu, Dongyan Zhao, Rui Yan
To enhance the performance of dense retrieval models without loss of efficiency, we propose a GNN-encoder model in which query (passage) information is fused into passage (query) representations via graph neural networks that are constructed by queries and their top retrieved passages.
no code implementations • CVPR 2022 • Jiaxi Wu, Jiaxin Chen, Mengzhe He, Yiru Wang, Bo Li, Bingqi Ma, Weihao Gan, Wei Wu, Yali Wang, Di Huang
Specifically, TRKP adopts the teacher-student framework, where the multi-head teacher network is built to extract knowledge from labeled source domains and guide the student network to learn detectors in unlabeled target domain.
no code implementations • NAACL 2022 • Xueliang Zhao, Tingchen Fu, Chongyang Tao, Wei Wu, Dongyan Zhao, Rui Yan
Grounding dialogue generation by extra knowledge has shown great potentials towards building a system capable of replying with knowledgeable and engaging responses.
no code implementations • Findings (NAACL) 2022 • Ze Yang, Liran Wang, Zhoujin Tian, Wei Wu, Zhoujun Li
Another is that applying existing pre-trained models to this task is tricky because of the structural dependence within the conversation, its informal expression, and so on.
1 code implementation • NAACL 2022 • Lulu Zhao, Fujia Zheng, Weihao Zeng, Keqing He, Weiran Xu, Huixing Jiang, Wei Wu, Yanan Wu
The most advanced abstractive dialogue summarizers lack generalization ability on new domains, and existing research on domain adaptation in summarization generally relies on large-scale pre-training.
1 code implementation • CVPR 2022 • Qiuhong Shen, Lei Qiao, Jinyang Guo, Peixia Li, Xin Li, Bo Li, Weitao Feng, Weihao Gan, Wei Wu, Wanli Ouyang
As unlimited self-supervision signals can be obtained by tracking a video along a cycle in time, we investigate evolving a Siamese tracker by tracking videos forward-backward.
no code implementations • Findings (ACL) 2022 • Kai Zhang, Kun Zhang, Mengdi Zhang, Hongke Zhao, Qi Liu, Wei Wu, Enhong Chen
Aspect-based sentiment analysis (ABSA) predicts sentiment polarity towards a specific aspect in the given sentence.
1 code implementation • 28 Mar 2022 • Sijie Cheng, Zhouhong Gu, Bang Liu, Rui Xie, Wei Wu, Yanghua Xiao
Specifically, i) to fully exploit user behavioral information, we extract candidate hyponymy relations that match user interests from query-click concepts; ii) to enhance the semantic information of new concepts and better detect hyponymy relations, we model concepts and relations through both user-generated content and structural information in existing taxonomies and user click logs, by leveraging Pre-trained Language Models and Graph Neural Network combined with Contrastive Learning; iii) to reduce the cost of dataset construction and overcome data skews, we construct a high-quality and balanced training dataset from existing taxonomy with no supervision.
1 code implementation • 22 Mar 2022 • Xiaoyang Guo, Wei Wu, Anuj Srivastava
Alignment or registration of functions is a fundamental problem in statistical analysis of functions and shapes.
no code implementations • ICCV 2023 • Liang Xu, Ziyang Song, Dongliang Wang, Jing Su, Zhicheng Fang, Chenjing Ding, Weihao Gan, Yichao Yan, Xin Jin, Xiaokang Yang, Wenjun Zeng, Wei Wu
We present a GAN-based Transformer for general action-conditioned 3D human motion generation, including not only single-person actions but also multi-person interactive actions.
1 code implementation • 10 Mar 2022 • BoYu Chen, Peixia Li, Lei Bai, Lei Qiao, Qiuhong Shen, Bo Li, Weihao Gan, Wei Wu, Wanli Ouyang
Exploiting a general-purpose neural architecture to replace hand-wired designs or inductive biases has recently drawn extensive interest.
1 code implementation • 8 Mar 2022 • LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu
Recently, prompt-based methods have achieved significant performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks.
no code implementations • 15 Feb 2022 • Pei Li, Lingyi Wang, Wei Wu, Fuhui Zhou, Baoyun Wang, Qihui Wu
In this paper, we propose a novel graph neural networks (GNN) based approach that can map the considered system into a specific graph structure and achieve the optimal solution in a low complexity manner.
no code implementations • CVPR 2022 • Wei Wu, Jiawei Liu, Kecheng Zheng, Qibin Sun, Zheng-Jun Zha
Image-to-video person re-identification aims to retrieve the same pedestrian as the image-based query from a video-based gallery set.
no code implementations • CVPR 2022 • Xi Guo, Wei Wu, Dongliang Wang, Jing Su, Haisheng Su, Weihao Gan, Jian Huang, Qin Yang
In this paper, we take an early step towards video representation learning of human actions with the help of largescale synthetic videos, particularly for human motion representation enhancement.
1 code implementation • 16 Dec 2021 • Yuntao Li, Hanchu Zhang, Yutian Li, Sirui Wang, Wei Wu, Yan Zhang
Conversational text-to-SQL aims at converting multi-turn natural language queries into their corresponding SQL (Structured Query Language) representations.
Ranked #2 on Text-To-SQL (SParC).
no code implementations • 8 Dec 2021 • Dan Li, Yang Yang, Hongyin Tang, Jingang Wang, Tong Xu, Wei Wu, Enhong Chen
With the boom of pre-trained transformers, representation-based models built on Siamese transformer encoders have become mainstream techniques for efficient text matching.
1 code implementation • 7 Dec 2021 • Shoubin Yu, Zhongyin Zhao, Haoshu Fang, Andong Deng, Haisheng Su, Dongliang Wang, Weihao Gan, Cewu Lu, Wei Wu
Different from pixel-based anomaly detection methods, pose-based methods utilize highly-structured skeleton data, which decreases the computational burden and also avoids the negative impact of background noise.
Ranked #2 on Video Anomaly Detection (HR-ShanghaiTech).
1 code implementation • 27 Nov 2021 • Kecheng Zheng, Jiawei Liu, Wei Wu, Liang Li, Zheng-Jun Zha
The calibrated person representation is subtly decomposed into the identity-relevant feature, domain feature, and the remaining entangled one.
no code implementations • 25 Oct 2021 • Lulu Zhao, Fujia Zheng, Keqing He, Weihao Zeng, Yuejie Lei, Huixing Jiang, Wei Wu, Weiran Xu, Jun Guo, Fanyu Meng
Previous dialogue summarization datasets mainly focus on open-domain chitchat dialogues, while summarization datasets for the broadly used task-oriented dialogue haven't been explored yet.
no code implementations • 16 Sep 2021 • Zihao Zhao, Jiawei Chen, Sheng Zhou, Xiangnan He, Xuezhi Cao, Fuzheng Zhang, Wei Wu
To sufficiently exploit such important information for recommendation, it is essential to disentangle the benign popularity bias caused by item quality from the harmful popularity bias caused by conformity.
no code implementations • 14 Sep 2021 • Jinlong Ruan, Wei Wu, Jiebo Luo
The stock market is volatile and complicated, especially in 2020.
1 code implementation • EMNLP 2021 • Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, Ji-Rong Wen
To solve this issue, various data augmentation techniques are proposed to improve the robustness of PLMs.
2 code implementations • 22 Aug 2021 • Junkang Wu, Wentao Shi, Xuezhi Cao, Jiawei Chen, Wenqiang Lei, Fuzheng Zhang, Wei Wu, Xiangnan He
Knowledge graph completion (KGC) has become a focus of attention across deep learning community owing to its excellent contribution to numerous downstream tasks.
1 code implementation • ACM Transactions on Information Systems 2021 • Ruijian Xu, Chongyang Tao, Jiazhan Feng, Wei Wu, Rui Yan, Dongyan Zhao
To tackle these challenges, we propose a representation-interaction-matching framework that explores multiple types of deep interactive representations to build context-response matching models for response selection.
no code implementations • 5 Aug 2021 • Spiridon Penev, Pavel V. Shevchenko, Wei Wu
In the worst case scenario, the optimal robust strategy can be obtained in a semi-analytical form as a solution of a system of nonlinear equations.
2 code implementations • ACL 2022 • Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, Maosong Sun
Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.
no code implementations • 27 Jul 2021 • Haisheng Su, Peiqin Zhuang, Yukun Li, Dongliang Wang, Weihao Gan, Wei Wu, Yu Qiao
This technical report presents an overview of our solution used in the submission to 2021 HACS Temporal Action Localization Challenge on both Supervised Learning Track and Weakly-Supervised Learning Track.
no code implementations • ACL 2021 • Xiangyu Xi, Wei Ye, Shikun Zhang, Quanxiu Wang, Huixing Jiang, Wei Wu
Capturing interactions among event arguments is an essential step towards robust event argument extraction (EAE).
no code implementations • 2 Jun 2021 • Haisheng Su, Jinyuan Feng, Dongliang Wang, Weihao Gan, Wei Wu, Yu Qiao
Specifically, SME aims to highlight the motion-sensitive area through local-global motion modeling, where saliency alignment and pyramidal feature difference are conducted successively between neighboring frames to capture motion dynamics with less noise caused by misaligned backgrounds.
no code implementations • 31 May 2021 • Hao Zhang, Fuhui Zhou, Qihui Wu, Wei Wu, Rose Qingyang Hu
Moreover, a novel loss function that combines the center loss and the cross entropy loss is exploited to learn both discriminative and separable features in order to further improve the classification performance.
1 code implementation • 29 May 2021 • Wei Wu, Bin Li, Chuan Luo, Wolfgang Nejdl
Networks are ubiquitous in the real world.
1 code implementation • ACL 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, Weiran Xu
Learning high-quality sentence representations benefits a wide range of natural language processing tasks.
no code implementations • ACL 2021 • Hongyin Tang, Xingwu Sun, Beihong Jin, Jingang Wang, Fuzheng Zhang, Wei Wu
Recently, the retrieval models based on dense representations have been gradually applied in the first stage of the document retrieval tasks, showing better performance than traditional sparse vector space models.
no code implementations • CVPR 2021 • Jiawei Liu, Zheng-Jun Zha, Wei Wu, Kecheng Zheng, Qibin Sun
The key factor for video person re-identification is to effectively exploit both spatial and temporal clues from video sequences.
Ranked #10 on Video Deinterlacing (MSU Deinterlacer Benchmark).
1 code implementation • CVPR 2021 • Zhiwu Qing, Haisheng Su, Weihao Gan, Dongliang Wang, Wei Wu, Xiang Wang, Yu Qiao, Junjie Yan, Changxin Gao, Nong Sang
In this paper, we propose Temporal Context Aggregation Network (TCANet) to generate high-quality action proposals through "local and global" temporal context aggregation and complementary as well as progressive boundary refinement.
Ranked #10 on Temporal Action Localization (ActivityNet-1.3).
3 code implementations • ICCV 2021 • Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, Wei Wu
Motivated by the success of Transformers in natural language processing (NLP) tasks, some attempts (e.g., ViT and DeiT) have emerged to apply Transformers to the vision domain.
Ranked #1 on Image Classification (Oxford-IIIT Pets).
no code implementations • 18 Mar 2021 • Jinghao Zhou, Bo Li, Peng Wang, Peixia Li, Weihao Gan, Wei Wu, Junjie Yan, Wanli Ouyang
Visual Object Tracking (VOT) can be seen as an extended task of Few-Shot Learning (FSL).
no code implementations • 18 Mar 2021 • Jinghao Zhou, Bo Li, Lei Qiao, Peng Wang, Weihao Gan, Wei Wu, Junjie Yan, Wanli Ouyang
Visual Object Tracking (VOT) has synchronous needs for both robustness and accuracy.
1 code implementation • NAACL 2021 • Jiahao Bu, Lei Ren, Shuang Zheng, Yang Yang, Jingang Wang, Fuzheng Zhang, Wei Wu
Aspect category sentiment analysis (ACSA) and review rating prediction (RP) are two essential tasks to detect the fine-to-coarse sentiment polarities.
1 code implementation • CVPR 2021 • Lanyun Zhu, Deyi Ji, Shiping Zhu, Weihao Gan, Wei Wu, Junjie Yan
In this paper, we fully take advantages of the low-level texture features and propose a novel Statistical Texture Learning Network (STLNet) for semantic segmentation.
no code implementations • 23 Dec 2020 • Letian Zhao, Rui Xu, Tianqi Wang, Teng Tian, Xiaotian Wang, Wei Wu, Chio-in Ieong, Xi Jin
The size of deep neural networks (DNNs) grows rapidly as the complexity of the machine learning algorithm increases.
no code implementations • 12 Dec 2020 • Yu Zhang, Tao Zhou, Wei Wu, Hua Xie, Hongru Zhu, Guoxu Zhou, Andrzej Cichocki
With the encoded label matrix, we devise a novel multi-task learning algorithm by exploiting the subclass relationship to jointly optimize the EEG pattern features from the uncovered subclasses.
no code implementations • 8 Dec 2020 • Deyi Ji, Haoran Wang, Hanzhe Hu, Weihao Gan, Wei Wu, Junjie Yan
Most existing re-identification methods focus on learning robust and discriminative features with deep convolution networks.
no code implementations • 3 Dec 2020 • Hanjia Lyu, Wei Wu, Junda Wang, Viet Duong, Xiyang Zhang, Jiebo Luo
People who have the worst personal pandemic experience are more likely to hold the anti-vaccine opinion.
no code implementations • 19 Nov 2020 • Yufan Zhao, Wei Wu, Can Xu
We study knowledge-grounded dialogue generation with pre-trained language models.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi
Our framework consists of a neural generator and a symbolic executor that, respectively, transforms a natural-language question into a sequence of primitive actions, and executes them over the knowledge base to compute the answer.
no code implementations • 24 Oct 2020 • Gexin Huang, Jiawen Liang, Ke Liu, Chang Cai, Zhenghui Gu, Feifei Qi, Yuan Qing Li, Zhu Liang Yu, Wei Wu
Electromagnetic source imaging (ESI) requires solving a highly ill-posed inverse problem.
1 code implementation • EMNLP 2020 • Xueliang Zhao, Wei Wu, Can Xu, Chongyang Tao, Dongyan Zhao, Rui Yan
We study knowledge-grounded dialogue generation with pre-trained language models.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Ze Yang, Wei Wu, Can Xu, Xinnian Liang, Jiaqi Bai, Liran Wang, Wei Wang, Zhoujun Li
Generating responses following a desired style has great potential to extend applications of open-domain dialogue systems, yet is constrained by the lack of parallel data for training.
no code implementations • 22 Sep 2020 • Weitao Feng, Zhihao Hu, Baopu Li, Weihao Gan, Wei Wu, Wanli Ouyang
Besides, we propose a new MOT evaluation measure, Still Another IDF score (SAIDF), aiming to focus more on identity issues. This new measure may overcome some problems of the previous measures and provide better insight into identity issues in MOT.
1 code implementation • 15 Sep 2020 • Haisheng Su, Weihao Gan, Wei Wu, Yu Qiao, Junjie Yan
In this paper, we present BSN++, a new framework which exploits complementary boundary regressor and relation modeling for temporal proposal generation.
no code implementations • 15 Sep 2020 • Haisheng Su, Jing Su, Dongliang Wang, Weihao Gan, Wei Wu, Mengmeng Wang, Junjie Yan, Yu Qiao
Second, the parameter frequency distribution is further adopted to guide the student network to learn the appearance modeling process from the teacher.
1 code implementation • NeurIPS 2020 • Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, Chongyang Tao
While neural conversation models have shown great potentials towards generating informative and engaging responses via introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain.
no code implementations • 20 Jul 2020 • Haisheng Su, Jinyuan Feng, Hao Shao, Zhenyu Jiang, Manyuan Zhang, Wei Wu, Yu Liu, Hongsheng Li, Junjie Yan
Specifically, in order to generate high-quality proposals, we consider several factors including the video feature encoder, the proposal generator, the proposal-proposal relations, the scale imbalance, and ensemble strategy.
no code implementations • ECCV 2020 • Hanzhe Hu, Deyi Ji, Weihao Gan, Shuai Bai, Wei Wu, Junjie Yan
Specifically, the CDGC module takes the coarse segmentation result as class mask to extract node features for graph construction and performs dynamic graph convolutions on the constructed graph to learn the feature aggregation and weight allocation.
1 code implementation • ACL 2020 • Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li
In this paper, we present CorefQA, an accurate and extensible approach for the coreference resolution task.
Ranked #3 on Coreference Resolution (CoNLL 2012, using extra training data).
no code implementations • ICLR 2021 • Ruozi Huang, Huang Hu, Wei Wu, Kei Sawada, Mi Zhang, Daxin Jiang
In this paper, we formalize the music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture to efficiently process long sequences of music features and capture the fine-grained correspondence between music and dance.
Ranked #1 on Motion Synthesis (BRACE).
no code implementations • 9 Jun 2020 • Wei Wu, Yu Shi, Xukun Li, Yukun Zhou, Peng Du, Shuangzhi Lv, Tingbo Liang, Jifang Sheng
For the segmented masks of the intact lung and infected regions, the best method achieved mean Dice similarity coefficients of 0.972 and 0.757 on our test benchmark.
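For reference, the Dice similarity coefficient reported above is the standard overlap measure 2|A∩B| / (|A| + |B|); a minimal computation on toy masks is sketched below.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A n B| / (|A| + |B|), the metric reported in the entry above."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# toy 2D masks standing in for lung / infected-region segmentations
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 15:45] = True
print(f"Dice: {dice_coefficient(pred, gt):.3f}")
```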
no code implementations • CVPR 2020 • Jie Yang, Jiarou Fan, Yiru Wang, Yige Wang, Weihao Gan, Lin Liu, Wei Wu
Attribute recognition is a crucial but challenging task due to viewpoint changes, illumination variations and appearance diversities, etc.
no code implementations • 11 May 2020 • Geng Zhan, Dan Xu, Guo Lu, Wei Wu, Chunhua Shen, Wanli Ouyang
Existing anchor-based and anchor-free object detectors in multi-stage or one-stage pipelines have achieved very promising detection performance.
no code implementations • 8 May 2020 • Manchao Zhang, Yi Xie, Jie Zhang, Weichen Wang, Chunwang Wu, Ting Chen, Wei Wu, Pingxing Chen
Decoherence induced by laser frequency noise is one of the most important obstacles in quantum information processing.
no code implementations • 4 Apr 2020 • Ze Yang, Wei Wu, Huang Hu, Can Xu, Wei Wang, Zhoujun Li
Thus, we propose learning a response generation model with both image-grounded dialogues and textual dialogues by assuming that the visual scene information at the time of a conversation can be represented by an image, and trying to recover the latent images of the textual dialogues through text-to-image generation techniques.
no code implementations • EMNLP 2020 • Yufan Zhao, Can Xu, Wei Wu, Lei Yu
We study multi-turn response generation for open-domain dialogues.
no code implementations • 2 Mar 2020 • Hao Wang, Bin Guo, Wei Wu, Zhiwen Yu
Text generation systems have made massive, promising progress driven by deep learning techniques and have been widely applied in our daily life.
no code implementations • ICLR 2020 • Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, Rui Yan
In such a low-resource setting, we devise a disentangled response decoder in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model.
no code implementations • 21 Feb 2020 • Xiaowei Xu, Xiangao Jiang, Chunlian Ma, Peng Du, Xukun Li, Shuangzhi Lv, Liang Yu, Yanfei Chen, Junwei Su, Guanjing Lang, Yongtao Li, Hong Zhao, Kaijin Xu, Lingxiang Ruan, Wei Wu
We found that the real time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swab has a relatively low positive rate in the early stage to determine COVID-19 (named by the World Health Organization).
no code implementations • ICML 2020 • Duo Chai, Wei Wu, Qinghong Han, Fei Wu, Jiwei Li
We observe significant performance boosts over strong baselines on a wide range of text classification tasks including single-label classification, multi-label classification and multi-aspect sentiment analysis.
no code implementations • ICLR 2020 • Feng Liang, Chen Lin, Ronghao Guo, Ming Sun, Wei Wu, Junjie Yan, Wanli Ouyang
However, the classification allocation pattern is usually adopted directly by object detectors, which is proven to be sub-optimal.
1 code implementation • 5 Nov 2019 • Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li
In this paper, we present an accurate and extensible approach for the coreference resolution task.
no code implementations • IJCNLP 2019 • Jia Li, Chongyang Tao, Wei Wu, Yansong Feng, Dongyan Zhao, Rui Yan
We study how to sample negative examples to automatically construct a training set for effective model learning in retrieval-based dialogue systems.
no code implementations • CVPR 2020 • Xiang Li, Chen Lin, Chuming Li, Ming Sun, Wei Wu, Junjie Yan, Wanli Ouyang
In this paper, we analyse existing weight sharing one-shot NAS approaches from a Bayesian point of view and identify the posterior fading problem, which compromises the effectiveness of shared weights.
no code implementations • 5 Oct 2019 • Wei Wu, Xukun Li, Peng Du, Guanjing Lang, Min Xu, Kaijin Xu, Lanjuan Li
The best model was selected to annotate the spatial location of lesions and classify them into miliary, infiltrative, caseous, tuberculoma and cavitary types simultaneously. Then the Noisy-Or Bayesian function was used to generate an overall infection probability. Finally, a quantitative diagnostic report was exported. The results showed that the recall and precision rates, from the perspective of a single lesion region of PTB, were 85.9% and 89.2%, respectively.
no code implementations • IJCNLP 2019 • Ze Yang, Can Xu, Wei Wu, Zhoujun Li
Automatic news comment generation is a new testbed for techniques of natural language generation.
1 code implementation • IJCNLP 2019 • Ze Yang, Wei Wu, Jian Yang, Can Xu, Zhoujun Li
Since paired data are no longer enough to train a neural generation model, we consider leveraging the large scale of unpaired data that are much easier to obtain, and propose response generation with both paired and unpaired data.