1 code implementation • 27 Sep 2017 • Xingyi Cheng, Ruiqing Zhang, Jie Zhou, Wei Xu
Several pioneering approaches have been proposed that rely on traffic observations of the target location and its adjacent regions, but their accuracy is somewhat limited because they do not mine the road topology.
5 code implementations • 20 Dec 2018 • Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, LiFeng Wang, Changcheng Li, Maosong Sun
Many learning tasks require dealing with graph data, which contains rich relational information among elements.
2 code implementations • ICCV 2023 • Wenliang Zhao, Yongming Rao, Zuyan Liu, Benlin Liu, Jie Zhou, Jiwen Lu
In this paper, we propose VPD (Visual Perception with a pre-trained Diffusion model), a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks.
Ranked #7 on Referring Expression Segmentation on RefCOCO val
2 code implementations • CVPR 2023 • Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, Jiwen Lu
To lift image features to the 3D TPV space, we further propose a transformer-based TPV encoder (TPVFormer) to obtain the TPV features effectively.
Ranked #1 on Prediction Of Occupancy Grid Maps on nuScenes
1 code implementation • 31 Jul 2023 • Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun
Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction.
Ranked #3 on Trajectory Planning on ToolBench
1 code implementation • Findings (ACL) 2021 • Tianyu Gao, Xu Han, Keyue Qiu, Yuzhuo Bai, Zhiyu Xie, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Distantly supervised (DS) relation extraction (RE) has attracted much attention in the past few years as it can utilize large-scale auto-labeled data.
2 code implementations • 9 Apr 2024 • Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, Xinrong Zhang, Zheng Leng Thai, Kaihuo Zhang, Chongyi Wang, Yuan Yao, Chenyang Zhao, Jie Zhou, Jie Cai, Zhongwu Zhai, Ning Ding, Chao Jia, Guoyang Zeng, Dahai Li, Zhiyuan Liu, Maosong Sun
For data scaling, we introduce a Warmup-Stable-Decay (WSD) learning rate scheduler (LRS), conducive to continuous training and domain adaptation.
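The Warmup-Stable-Decay idea in this entry can be sketched as a plain schedule function. The phase fractions, peak/min rates, and linear ramp shapes below are illustrative assumptions, not the paper's exact hyperparameters:

```python
def wsd_lr(step, total_steps, peak_lr=1e-3, min_lr=1e-5,
           warmup_frac=0.1, decay_frac=0.1):
    """Warmup-Stable-Decay: linear warmup, a long constant plateau,
    then a short final decay (parameter names are hypothetical)."""
    warmup_steps = int(total_steps * warmup_frac)
    decay_steps = int(total_steps * decay_frac)
    stable_end = total_steps - decay_steps
    if step < warmup_steps:                      # warmup phase
        return peak_lr * step / max(warmup_steps, 1)
    if step < stable_end:                        # stable phase
        return peak_lr
    frac = (step - stable_end) / max(decay_steps, 1)
    return peak_lr + (min_lr - peak_lr) * frac   # decay phase

lrs = [wsd_lr(s, 1000) for s in range(1000)]
```

The constant plateau is what makes the schedule convenient for continuous training and domain adaptation: more data can be consumed during the stable phase without re-planning a cosine horizon in advance.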
1 code implementation • 21 Aug 2023 • Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou
Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.
7 code implementations • 28 Jul 2022 • Yongming Rao, Wenliang Zhao, Yansong Tang, Jie Zhou, Ser-Nam Lim, Jiwen Lu
In this paper, we show that the key ingredients behind the vision Transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework.
Ranked #20 on Semantic Segmentation on ADE20K
2 code implementations • NeurIPS 2023 • Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, Jifeng Dai
We hope this model can set a new baseline for generalist vision and language models.
2 code implementations • CVPR 2023 • Chenyu Yang, Yuntao Chen, Hao Tian, Chenxin Tao, Xizhou Zhu, Zhaoxiang Zhang, Gao Huang, Hongyang Li, Yu Qiao, Lewei Lu, Jie Zhou, Jifeng Dai
The proposed method is verified with a wide spectrum of traditional and modern image backbones and achieves new SoTA results on the large-scale nuScenes dataset.
Ranked #5 on 3D Object Detection on Rope3D
1 code implementation • 11 May 2023 • Yujia Qin, Zihan Cai, Dian Jin, Lan Yan, Shihao Liang, Kunlun Zhu, Yankai Lin, Xu Han, Ning Ding, Huadong Wang, Ruobing Xie, Fanchao Qi, Zhiyuan Liu, Maosong Sun, Jie Zhou
We recruit annotators to search for relevant information using our interface and then answer questions.
1 code implementation • 12 Aug 2021 • Jiarui Fang, Zilin Zhu, Shenggui Li, Hui Su, Yang Yu, Jie Zhou, Yang You
PatrickStar uses the CPU-GPU heterogeneous memory space to store the model data.
1 code implementation • IJCNLP 2019 • Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
We present FewRel 2.0, a more challenging task to investigate two aspects of few-shot relation classification models: (1) Can they adapt to a new domain with only a handful of instances?
2 code implementations • ICCV 2023 • Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
Towards a more comprehensive perception of a 3D scene, in this paper, we propose a SurroundOcc method to predict the 3D occupancy with multi-camera images.
4 code implementations • ACL 2019 • Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zheng-Hao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun
Multiple entities in a document generally exhibit complex inter-sentence relations, and cannot be well handled by existing relation extraction (RE) methods that typically focus on extracting intra-sentence relations for single entity pairs.
Ranked #59 on Relation Extraction on DocRED
1 code implementation • 5 Aug 2023 • Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu
Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub: https://github.com/icalk-nlp/EduChat, Hugging Face: https://huggingface.co/ecnu-icalk).
1 code implementation • ACL 2022 • Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun
In recent years, large-scale pre-trained language models (PLMs) containing billions of parameters have achieved promising results on various NLP tasks.
1 code implementation • NeurIPS 2021 • Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh
Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input.
Ranked #3 on Efficient ViTs on ImageNet-1K (With LV-ViT-S)
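The dynamic token sparsification in this entry can be sketched as ranking tokens by a per-token importance score and keeping only the highest-scoring fraction at each stage. The random scores below stand in for the paper's learned prediction module; shapes, seed, and the keep ratio are illustrative:

```python
import numpy as np

def prune_tokens(tokens, scores, keep_ratio):
    """Keep the top-k tokens ranked by a per-token importance score,
    preserving their original order (the score is a stand-in for a
    learned token-importance predictor)."""
    k = max(1, int(tokens.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])   # top-k indices, original order
    return tokens[keep], scores[keep]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))   # 196 ViT patch tokens, dim 64
scores = rng.random(196)              # hypothetical predictor outputs
for _ in range(3):                    # prune progressively at 3 stages
    tokens, scores = prune_tokens(tokens, scores, keep_ratio=0.7)
print(tokens.shape)                   # (66, 64): roughly a third remain
```

Applying the same keep ratio at several stages is what makes the pruning progressive: each stage discards tokens the predictor deems redundant, compounding the compute savings.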
1 code implementation • 4 Jul 2022 • Yongming Rao, Zuyan Liu, Wenliang Zhao, Jie Zhou, Jiwen Lu
We extend our method to hierarchical models including CNNs and hierarchical vision Transformers as well as more complex dense prediction tasks that require structured feature maps by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations.
1 code implementation • ICCV 2021 • Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, Jie Zhou
In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr that adopts a transformer encoder-decoder architecture for point cloud completion.
Ranked #1 on Point Cloud Completion on ShapeNet (Chamfer Distance L2 metric)
1 code implementation • 11 Jan 2023 • Xumin Yu, Yongming Rao, Ziyi Wang, Jiwen Lu, Jie Zhou
In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Ranked #2 on Point Cloud Completion on ShapeNet
1 code implementation • CVPR 2022 • Yongming Rao, Wenliang Zhao, Guangyi Chen, Yansong Tang, Zheng Zhu, Guan Huang, Jie Zhou, Jiwen Lu
In this work, we present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
2 code implementations • CVPR 2022 • Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, Jiwen Lu
Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers.
Ranked #13 on Few-Shot 3D Point Cloud Classification on ModelNet40 5-way (10-shot) (using extra training data)
Tasks: 3D Point Cloud Linear Classification, Few-Shot 3D Point Cloud Classification, +2 more
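The Masked Point Modeling pre-training in this entry starts by hiding a fraction of point patches, BERT-style. A minimal sketch of the masking step follows; the mask ratio, patch grouping, and seed are illustrative, and the downstream objective (predicting discrete tokens for masked patches) is omitted:

```python
import numpy as np

def mask_point_patches(patches, mask_ratio=0.6, seed=0):
    """Randomly mask a fraction of point patches; the pre-training
    objective then reconstructs tokens for the masked patches."""
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=int(n * mask_ratio), replace=False)] = True
    return patches[~mask], mask          # visible patches + boolean mask

patches = np.random.default_rng(1).normal(size=(64, 32, 3))  # 64 patches of 32 points
visible, mask = mask_point_patches(patches)
print(visible.shape, int(mask.sum()))    # (26, 32, 3) 38
```

Only the visible patches are encoded; the model is trained to recover representations of the 38 masked ones, mirroring BERT's masked-token objective in 3D.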
2 code implementations • CVPR 2020 • Cheng Ma, Yongming Rao, Yean Cheng, Ce Chen, Jiwen Lu, Jie Zhou
In this paper, we propose a structure-preserving super-resolution method to alleviate the above issue while maintaining the merits of GAN-based methods to generate perceptually pleasant details.
Ranked #46 on Image Super-Resolution on Urban100 - 4x upscaling
1 code implementation • 26 Sep 2021 • Cheng Ma, Yongming Rao, Jiwen Lu, Jie Zhou
Firstly, we propose SPSR with gradient guidance (SPSR-G) by exploiting gradient maps of images to guide the recovery in two aspects.
1 code implementation • ICCV 2021 • Yi Wei, Shaohui Liu, Yongming Rao, Wang Zhao, Jiwen Lu, Jie Zhou
In this work, we present a new multi-view depth estimation method that utilizes both conventional reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF).
1 code implementation • CVPR 2023 • Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, Jiwen Lu
In this way, the proposed DiffTalk is capable of producing high-quality talking head videos in synchronization with the source audio, and more importantly, it can be naturally generalized across different identities without any further fine-tuning.
4 code implementations • NeurIPS 2021 • Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou
Recent advances in self-attention and pure multi-layer perceptrons (MLP) models for vision have shown great potential in achieving promising performance with fewer inductive biases.
Ranked #9 on Image Classification on Stanford Cars (using extra training data)
3 code implementations • 4 Apr 2020 • Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou
The aspect-based sentiment analysis (ABSA) task remains a long-standing challenge: it aims to extract the aspect term and then identify its sentiment orientation. Previous approaches typically neglect or insufficiently model the explicit syntactic structure of a sentence, which reflects the syntactic properties of natural language and is hence intuitively crucial for aspect term extraction and sentiment recognition.
Tasks: Aspect-Based Sentiment Analysis, Aspect-Based Sentiment Analysis (ABSA), +3 more
1 code implementation • 19 May 2022 • Yunpeng Zhang, Zheng Zhu, Wenzhao Zheng, JunJie Huang, Guan Huang, Jie Zhou, Jiwen Lu
Specifically, BEVerse first performs shared feature extraction and lifting to generate 4D BEV representations from multi-timestamp and multi-view images.
Ranked #15 on Robust Camera Only 3D Object Detection on nuScenes-C
3 code implementations • CVPR 2021 • Yunpeng Zhang, Jiwen Lu, Jie Zhou
The precise localization of 3D objects from a single image without depth information is a highly challenging problem.
Ranked #8 on Monocular 3D Object Detection on KITTI Cars Moderate
1 code implementation • 24 Jul 2022 • Shuai Shen, Wanhua Li, Zheng Zhu, Yueqi Duan, Jie Zhou, Jiwen Lu
Thus the facial radiance field can be flexibly adjusted to the new identity with few reference images.
1 code implementation • 11 Jan 2024 • Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, Lewei Lu, Jie Zhou, Jifeng Dai
The advancements in speed and efficiency of DCNv4, combined with its robust performance across diverse vision tasks, show its potential as a foundational building block for future vision models.
1 code implementation • CVPR 2020 • Cheng Ma, Zhenyu Jiang, Yongming Rao, Jiwen Lu, Jie Zhou
In this paper, we propose a deep face super-resolution (FSR) method with iterative collaboration between two recurrent networks which focus on facial image recovery and landmark estimation respectively.
2 code implementations • 24 May 2019 • Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, Xu Sun
Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.
Ranked #1 on Unsupervised Text Style Transfer on GYAFC
1 code implementation • 7 Apr 2022 • Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Yongming Rao, Guan Huang, Jiwen Lu, Jie Zhou
In this paper, we propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
1 code implementation • 21 Nov 2023 • Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, Jiwen Lu
Our SelfOcc outperforms the previous best method SceneRF by 58.7% using a single frame as input on SemanticKITTI and is the first self-supervised work that produces reasonable 3D occupancy for surround cameras on nuScenes.
1 code implementation • ACL 2019 • Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, Jie Zhou
To properly train the utterance rewriter, we collect a new dataset with human annotations and introduce a Transformer-based utterance rewriting architecture using the pointer network.
2 code implementations • CVPR 2020 • Ziwei Wang, Ziyi Wu, Jiwen Lu, Jie Zhou
Conventional network binarization methods directly quantize the weights and activations in one-stage or two-stage detectors with constrained representational capacity, so that the information redundancy in the networks causes numerous false positives and degrades the performance significantly.
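The constrained representational capacity mentioned in this entry comes from mapping weights to just two values. A minimal sketch of classic XNOR-Net-style binarization (shown only to illustrate the constraint, not this paper's detection-specific method):

```python
import numpy as np

def binarize(weights):
    """XNOR-Net-style binarization: weights become alpha * sign(w),
    with a per-row scale alpha = mean(|w|) minimizing the L2
    approximation error."""
    alpha = np.abs(weights).mean(axis=1, keepdims=True)  # per-row scale
    return alpha * np.sign(weights)

w = np.array([[0.5, -0.25, 0.75],
              [-0.1, 0.2, -0.3]])
wb = binarize(w)   # row scales: 0.5 and 0.2; each row keeps only sign + scale
```

Every row collapses to a single magnitude with per-element signs, which is exactly the information loss that produces the redundancy and false positives the entry discusses.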
2 code implementations • ACL 2019 • Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, LiFeng Wang, Changcheng Li, Maosong Sun
Fact verification (FV) is a challenging task that requires retrieving relevant evidence from plain text and using that evidence to verify given claims.
Ranked #7 on Fact Verification on FEVER
2 code implementations • CVPR 2022 • Tianpei Gu, Guangyi Chen, Junlong Li, Chunze Lin, Yongming Rao, Jie Zhou, Jiwen Lu
Human behavior is inherently indeterminate, which requires a pedestrian trajectory prediction system to model the multi-modality of future motion states.
1 code implementation • 18 Jan 2024 • Changyao Tian, Xizhou Zhu, Yuwen Xiong, Weiyun Wang, Zhe Chen, Wenhai Wang, Yuntao Chen, Lewei Lu, Tong Lu, Jie Zhou, Hongsheng Li, Yu Qiao, Jifeng Dai
Developing generative models for interleaved image-text data has both research and practical value.
1 code implementation • CVPR 2023 • Muheng Li, Yueqi Duan, Jie Zhou, Jiwen Lu
With rising industrial attention to 3D virtual modeling technology, generating novel 3D content based on specified conditions (e.g., text) has become a hot issue.
2 code implementations • 22 Dec 2021 • Liang Pan, Tong Wu, Zhongang Cai, Ziwei Liu, Xumin Yu, Yongming Rao, Jiwen Lu, Jie Zhou, Mingye Xu, Xiaoyuan Luo, Kexue Fu, Peng Gao, Manning Wang, Yali Wang, Yu Qiao, Junsheng Zhou, Xin Wen, Peng Xiang, Yu-Shen Liu, Zhizhong Han, Yuanjie Yan, Junyi An, Lifa Zhu, Changwei Lin, Dongrui Liu, Xin Li, Francisco Gómez-Fernández, Qinlong Wang, Yang Yang
Based on the MVP dataset, this paper reports methods and results in the Multi-View Partial Point Cloud Challenge 2021 on Completion and Registration.
2 code implementations • CVPR 2019 • Wenzhao Zheng, Zhaodong Chen, Jiwen Lu, Jie Zhou
This paper presents a hardness-aware deep metric learning (HDML) framework.
Ranked #30 on Metric Learning on CUB-200-2011 (using extra training data)
1 code implementation • EMNLP 2020 • Xiaozhi Wang, Ziqi Wang, Xu Han, Wangyi Jiang, Rong Han, Zhiyuan Liu, Juanzi Li, Peng Li, Yankai Lin, Jie Zhou
Most existing datasets exhibit the following issues that limit further development of ED: (1) Data scarcity.
1 code implementation • Findings (ACL) 2021 • Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, Jie Zhou
Visual dialogue is a challenging task since it needs to answer a series of coherent questions on the basis of understanding the visual environment.
1 code implementation • ICCV 2021 • Yongming Rao, Guangyi Chen, Jiwen Lu, Jie Zhou
Unlike most existing methods that learn visual attention based on conventional likelihood, we propose to learn the attention with counterfactual causality, which provides a tool to measure the attention quality and a powerful supervisory signal to guide the learning process.
Ranked #8 on Vehicle Re-Identification on VehicleID Medium
3 code implementations • CVPR 2023 • Shiyi Zhang, Wenxun Dai, Sujia Wang, Xiangwei Shen, Jiwen Lu, Jie Zhou, Yansong Tang
Action quality assessment (AQA) has become an emerging topic since it can be extensively applied in numerous scenarios.
2 code implementations • 10 Apr 2024 • Yijin Liu, Fandong Meng, Jie Zhou
Recently, dynamic computation methods have shown notable acceleration for Large Language Models (LLMs) by skipping several layers of computations through elaborate heuristics or additional predictors.
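The layer-skipping idea this entry refers to can be sketched with per-layer gate scores that decide whether each layer executes. The gates, threshold, and toy layers below are illustrative stand-ins for the learned predictors or heuristics such methods use:

```python
def forward_with_skipping(x, layers, gates, threshold=0.5):
    """Run a layer stack but skip any layer whose gate score falls
    below a threshold; gates stand in for learned skip predictors."""
    executed = 0
    for layer, gate in zip(layers, gates):
        if gate >= threshold:
            x = layer(x)
            executed += 1
    return x, executed

layers = [lambda v, i=i: v + i for i in range(4)]  # toy "layers"
gates = [0.9, 0.2, 0.8, 0.4]                       # only layers 0 and 2 pass
y, n = forward_with_skipping(1.0, layers, gates)
print(y, n)  # 3.0 2
```

Half the layers are skipped here, which is where the acceleration comes from: compute scales with the number of executed layers, not the depth of the model.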
2 code implementations • 11 Apr 2024 • Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou
In this paper, we suggest that code comments are the natural logic pivot between natural language and code language and propose using comments to boost the code generation ability of code LLMs.
1 code implementation • EMNLP 2021 • Shuhuai Ren, Jinchao Zhang, Lei Li, Xu Sun, Jie Zhou
Data augmentation aims to enrich training samples for alleviating the overfitting issue in low-resource or class-imbalanced situations.
1 code implementation • 11 Apr 2024 • Chaoqun He, Renjie Luo, Shengding Hu, Yuanqian Zhao, Jie Zhou, Hanghao Wu, Jiajie Zhang, Xu Han, Zhiyuan Liu, Maosong Sun
The rapid development of LLMs calls for a lightweight and easy-to-use framework for swift evaluation deployment.
1 code implementation • 4 Aug 2022 • Ziyi Wang, Xumin Yu, Yongming Rao, Jie Zhou, Jiwen Lu
Nowadays, pre-training big models on large-scale datasets has become a crucial topic in deep learning.
Ranked #16 on 3D Point Cloud Classification on ScanObjectNN (using extra training data)
1 code implementation • 14 Jul 2017 • Fu Li, Chuang Gan, Xiao Liu, Yunlong Bian, Xiang Long, Yandong Li, Zhichao Li, Jie Zhou, Shilei Wen
This paper describes our solution for the video recognition task of the Google Cloud and YouTube-8M Video Understanding Challenge, which ranked 3rd place.
1 code implementation • CVPR 2020 • Yongming Rao, Jiwen Lu, Jie Zhou
Based on this hypothesis, we propose to learn point cloud representation by bidirectional reasoning between the local structures at different abstraction hierarchies and the global shape without human supervision.
1 code implementation • 23 May 2023 • Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.
1 code implementation • 3 Jul 2020 • Ganqu Cui, Jie Zhou, Cheng Yang, Zhiyuan Liu
Experimental results show that AGE consistently outperforms state-of-the-art graph embedding methods considerably on these tasks.
Ranked #6 on Node Clustering on Cora
1 code implementation • EMNLP 2020 • Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, Jie Zhou
We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks.
Ranked #23 on Relation Extraction on TACRED
1 code implementation • 24 Mar 2021 • Shuai Shen, Wanhua Li, Zheng Zhu, Guan Huang, Dalong Du, Jiwen Lu, Jie Zhou
To address the dilemma of large-scale training and efficient inference, we propose the STructure-AwaRe Face Clustering (STAR-FC) method.
1 code implementation • CVPR 2021 • Shuai Shen, Wanhua Li, Zheng Zhu, Guan Huang, Dalong Du, Jiwen Lu, Jie Zhou
To address the dilemma of large-scale training and efficient inference, we propose the STructure-AwaRe Face Clustering (STAR-FC) method.
1 code implementation • 28 Mar 2022 • Yi Wei, Zibu Wei, Yongming Rao, Jiaxin Li, Jie Zhou, Jiwen Lu
In this paper, we propose the LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
3 code implementations • 17 Oct 2022 • Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, Jinsong Su
Meanwhile, we inject two types of perturbations into the retrieved pairs for robust training.
1 code implementation • 31 Aug 2023 • Sicheng Zuo, Wenzhao Zheng, Yuanhui Huang, Jie Zhou, Jiwen Lu
To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively and a PointOcc model to process them efficiently.
1 code implementation • ACL 2021 • Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, Jie Zhou
Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on the large-scale pre-trained language models.
1 code implementation • CVPR 2022 • Jinglin Xu, Yongming Rao, Xumin Yu, Guangyi Chen, Jie Zhou, Jiwen Lu
Most existing action quality assessment methods rely on the deep features of an entire video to predict the score, which is less reliable due to the non-transparent inference process and poor interpretability.
2 code implementations • ACL 2019 • Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, Jie Zhou
Document Grounded Conversations is a task to generate dialogue responses when chatting about the content of a given document.
1 code implementation • 11 Jan 2022 • Bin Xia, Yucheng Hang, Yapeng Tian, Wenming Yang, Qingmin Liao, Jie Zhou
To demonstrate the effectiveness of ENLCA, we build an architecture called Efficient Non-Local Contrastive Network (ENLCN) by adding a few of our modules in a simple backbone.
1 code implementation • CVPR 2022 • Muheng Li, Lei Chen, Yueqi Duan, Zhilan Hu, Jianjiang Feng, Jie Zhou, Jiwen Lu
The generated text prompts are paired with corresponding video clips, and together co-train the text encoder and the video encoder via a contrastive approach.
Ranked #4 on Action Segmentation on GTEA (using extra training data)
1 code implementation • Findings (ACL) 2022 • Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
In this work, we study the computational patterns of FFNs and observe that most inputs only activate a tiny ratio of neurons of FFNs.
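The sparse-activation observation in this entry can be checked with a few lines: run a ReLU FFN on a batch and measure what fraction of hidden units fire. Dimensions, the random inputs, and the negative bias below are illustrative, chosen only to make the sparsity visible:

```python
import numpy as np

def activation_ratio(x, W1, b1):
    """Fraction of FFN hidden units with nonzero ReLU output for a
    batch of inputs."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU FFN hidden layer
    return float((h > 0).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))       # batch of 8 token representations
W1 = rng.normal(size=(16, 64))     # FFN up-projection
b1 = np.full(64, -4.0)             # shifts most pre-activations below zero
ratio = activation_ratio(x, W1, b1)
print(ratio)                       # well under half the neurons fire
```

In trained Transformers the effect is far stronger than in this toy setup, which is what motivates conditionally computing only the activated neurons.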
1 code implementation • IJCNLP 2019 • Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, Xiang Ren
Existing event extraction methods classify each argument role independently, ignoring the conceptual correlations between different argument roles.
1 code implementation • NAACL 2022 • Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, Jie Zhou
To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work.
1 code implementation • ICCV 2021 • Wenliang Zhao, Yongming Rao, Ziyi Wang, Jiwen Lu, Jie Zhou
Our method is model-agnostic, which can be applied to off-the-shelf backbone networks and metric learning methods.
Ranked #16 on Metric Learning on CUB-200-2011
1 code implementation • CVPR 2023 • Weijie Su, Xizhou Zhu, Chenxin Tao, Lewei Lu, Bin Li, Gao Huang, Yu Qiao, Xiaogang Wang, Jie Zhou, Jifeng Dai
It has been proved that combining multiple pre-training strategies and data from various modalities/sources can greatly boost the training of large-scale models.
Ranked #2 on Semantic Segmentation on ADE20K (using extra training data)
1 code implementation • ACL 2021 • Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, Jie Zhou
Pre-trained Language Models (PLMs) have shown superior performance on various downstream Natural Language Processing (NLP) tasks.
1 code implementation • 5 May 2023 • Xiuwei Xu, Zhihao Sun, Ziwei Wang, Hongmin Liu, Jie Zhou, Jiwen Lu
Specifically, we theoretically derive a dynamic spatial pruning (DSP) strategy to prune the redundant spatial representation of 3D scene in a cascade manner according to the distribution of objects.
1 code implementation • ACL 2021 • Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, Jie Zhou
Event extraction (EE) has considerably benefited from pre-trained language models (PLMs) by fine-tuning.
1 code implementation • 19 Mar 2024 • Zuyan Liu, Yuhao Dong, Yongming Rao, Jie Zhou, Jiwen Lu
In the realm of vision-language understanding, the proficiency of models in interpreting and reasoning over visual content has become a cornerstone for numerous applications.
Ranked #41 on Visual Question Answering on MM-Vet
1 code implementation • 10 Nov 2022 • Xiaowei Hu, Min Shi, Weiyun Wang, Sitong Wu, Linjie Xing, Wenhai Wang, Xizhou Zhu, Lewei Lu, Jie Zhou, Xiaogang Wang, Yu Qiao, Jifeng Dai
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs, but performance differences persist among different STMs.
1 code implementation • ACL 2019 • Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie Zhou
Current state-of-the-art systems for sequence labeling are typically based on the family of Recurrent Neural Networks (RNNs).
Ranked #17 on Named Entity Recognition (NER) on CoNLL 2003 (English) (using extra training data)
1 code implementation • ACL 2022 • Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Hyperbolic neural networks have shown great potential for modeling complex data.
1 code implementation • 12 Jan 2022 • Bin Xia, Yapeng Tian, Yucheng Hang, Wenming Yang, Qingmin Liao, Jie Zhou
To improve matching efficiency, we design a novel Embedded PatchMatch scheme with random samples propagation, which involves end-to-end training with asymptotically linear computational cost in the input size.
1 code implementation • CVPR 2023 • Wenliang Zhao, Yongming Rao, Weikang Shi, Zuyan Liu, Jie Zhou, Jiwen Lu
Unlike previous work that relies on carefully designed network architectures and loss functions to fuse the information from the source and target faces, we reformulate face swapping as a conditional inpainting task, performed by a powerful diffusion model guided by the desired face attributes (e.g., identity and landmarks).
3 code implementations • 21 Jul 2016 • Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, Wei Xu
While question answering (QA) with neural networks, i.e., neural QA, has achieved promising results in recent years, the lack of a large-scale real-world QA dataset is still a challenge for developing and evaluating neural QA systems.
1 code implementation • ICCV 2021 • Guangyi Chen, Junlong Li, Jiwen Lu, Jie Zhou
Most existing methods learn to predict future trajectories by behavior clues from history trajectories and interaction clues from environments.
1 code implementation • 13 Sep 2020 • Yucheng Hang, Qingmin Liao, Wenming Yang, Yupeng Chen, Jie Zhou
The adaptive spatial attention branch (ASAB) and the adaptive channel attention branch (ACAB) constitute the adaptive dual attention module (ADAM), which can capture the long-range spatial and channel-wise contextual information to expand the receptive field and distinguish different types of information for more effective feature representations.
1 code implementation • 14 Nov 2022 • Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou
It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.
1 code implementation • CVPR 2021 • Yi Wei, Ziyi Wang, Yongming Rao, Jiwen Lu, Jie Zhou
In this paper, we propose a Point-Voxel Recurrent All-Pairs Field Transforms (PV-RAFT) method to estimate scene flow from point clouds.
1 code implementation • 11 Oct 2021 • Zhen Yang, Fandong Meng, Yingxue Zhang, Ernan Li, Jie Zhou
To break this limitation, we create a benchmark data set for TS, called \emph{WeTS}, which contains golden corpus annotated by expert translators on four translation directions.
1 code implementation • ICCV 2021 • Xumin Yu, Yongming Rao, Wenliang Zhao, Jiwen Lu, Jie Zhou
Assessing action quality is challenging due to the subtle differences between videos and large variations in scores.
Ranked #2 on Action Quality Assessment on MTL-AQA
1 code implementation • 1 Feb 2020 • Zekang Li, Zongjia Li, Jinchao Zhang, Yang Feng, Cheng Niu, Jie Zhou
Audio-Visual Scene-Aware Dialog (AVSD) is a task to generate responses when chatting about a given video, which is organized as a track of the 8th Dialog System Technology Challenge (DSTC8).
1 code implementation • NeurIPS 2023 • Yinan Liang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu
Due to the high price and heavy energy consumption of GPUs, deploying deep models on IoT devices such as microcontrollers makes significant contributions to ecological AI.
1 code implementation • 10 Mar 2023 • Jie Zhou, Xianshuai Cao, Wenhao Li, Lin Bo, Kun Zhang, Chuan Luo, Qian Yu
Multi-scenario & multi-task learning has been widely applied to many recommendation systems in industrial applications, wherein an effective and practical approach is to carry out multi-scenario transfer learning on the basis of the Mixture-of-Expert (MoE) architecture.
1 code implementation • NeurIPS 2021 • Deli Chen, Yankai Lin, Guangxiang Zhao, Xuancheng Ren, Peng Li, Jie Zhou, Xu Sun
The class imbalance problem, as an important issue in learning node representations, has drawn increasing attention from the community.
1 code implementation • 28 May 2023 • Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou
Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.
1 code implementation • 17 Nov 2022 • Haojun Jiang, Jianke Zhang, Rui Huang, Chunjiang Ge, Zanlin Ni, Jiwen Lu, Jie Zhou, Shiji Song, Gao Huang
However, as pre-trained models are scaling up, fully fine-tuning them on text-video retrieval datasets has a high risk of overfitting.
1 code implementation • 26 Jul 2022 • Cheng Ma, Jingyi Zhang, Jie Zhou, Jiwen Lu
On the other hand, we propose a parallel network which includes two branches of cascaded lookup tables which process different components of the input low-resolution images.
1 code implementation • 4 Sep 2021 • Zhengcong Fei, Zekang Li, Jinchao Zhang, Yang Feng, Jie Zhou
Compared to previous dialogue tasks, MOD is much more challenging since it requires the model to understand the multimodal elements as well as the emotions behind them.
1 code implementation • 16 Nov 2022 • Yong Hu, Fandong Meng, Jie Zhou
In fact, most Chinese input is produced with the pinyin input method, so studying the spelling errors that arise in this process is more practical and valuable.
1 code implementation • 11 Aug 2021 • Guangyi Chen, Tianpei Gu, Jiwen Lu, Jin-An Bao, Jie Zhou
Experimental results demonstrate the superiority of our method, which outperforms the state-of-the-art methods by a large margin with limited computational cost.
Ranked #21 on Person Re-Identification on MSMT17
1 code implementation • ACL 2019 • Fuli Luo, Peng Li, Pengcheng Yang, Jie Zhou, Yutong Tan, Baobao Chang, Zhifang Sui, Xu Sun
In this paper, we focus on the task of fine-grained text sentiment transfer (FGST).
1 code implementation • 18 Dec 2020 • An Tao, Yueqi Duan, Yi Wei, Jiwen Lu, Jie zhou
Most existing point cloud instance and semantic segmentation methods rely heavily on strong supervision signals, which require point-level labels for every point in the scene.
1 code implementation • 22 Aug 2022 • Yunpeng Zhang, Wenzhao Zheng, Zheng Zhu, Guan Huang, Jie zhou, Jiwen Lu
First, we extract multi-scale features and generate the perspective object proposals on each monocular image.
1 code implementation • CVPR 2022 • Ziyi Wang, Yongming Rao, Xumin Yu, Jie zhou, Jiwen Lu
Conventional point cloud semantic segmentation methods usually employ an encoder-decoder architecture, where mid-level features are locally aggregated to extract geometric information.
2 code implementations • 9 May 2022 • Wenzhao Zheng, Chengkun Wang, Jie zhou, Jiwen Lu
This paper proposes an introspective deep metric learning (IDML) framework for uncertainty-aware comparisons of images.
2 code implementations • 11 Sep 2023 • Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie zhou, Jiwen Lu
This paper proposes an introspective deep metric learning (IDML) framework for uncertainty-aware comparisons of images.
1 code implementation • Findings (ACL) 2022 • Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie zhou
In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.
1 code implementation • 4 Sep 2020 • Huan Lin, Fandong Meng, Jinsong Su, Yongjing Yin, Zhengyuan Yang, Yubin Ge, Jie zhou, Jiebo Luo
In particular, we represent the input image with global and regional visual features and introduce two parallel DCCNs to model multimodal context vectors with visual features at different granularities.
Ranked #3 on Multimodal Machine Translation on Multi30K
1 code implementation • ICCV 2021 • Wenzhao Zheng, Borui Zhang, Jiwen Lu, Jie zhou
This paper presents a deep relational metric learning (DRML) framework for image clustering and retrieval.
1 code implementation • ACL 2022 • Yunlong Liang, Fandong Meng, Jinan Xu, Yufeng Chen, Jie zhou
In this work, we introduce a new task named Multimodal Chat Translation (MCT), aiming to generate more accurate translations with the help of the associated dialogue history and visual context.
1 code implementation • 7 Mar 2023 • Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie zhou
In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt ChatGPT to evaluate the generated results of NLG models.
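The evaluation setup described above can be sketched as a prompt-building helper. The wording, the 1-5 scale, and the function name are illustrative assumptions, not the paper's released prompt:

```python
def build_eval_prompt(task, aspect, source, output):
    # Hypothetical prompt template in the spirit described above: ask an
    # LLM to score one generated output on one aspect of one task. The
    # wording and 1-5 scale are assumptions, not the paper's actual prompt.
    return (
        f"You will act as a human evaluator for a {task} system.\n"
        f"Rate the following output for {aspect} on a scale of 1-5 "
        f"and reply with the number only.\n\n"
        f"Source:\n{source}\n\nOutput:\n{output}\n\nScore:"
    )
```

The returned string would then be sent to the model once per (task, aspect, output) triple, so scores for different aspects stay independent.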
1 code implementation • 17 May 2021 • Yi Wei, Shang Su, Jiwen Lu, Jie zhou
To tackle this problem, we propose frustum-aware geometric reasoning (FGR) to detect vehicles in point clouds without any 3D annotations.
1 code implementation • EMNLP 2021 • Lei LI, Yankai Lin, Shuhuai Ren, Peng Li, Jie zhou, Xu sun
Knowledge distillation (KD) has been proven effective for compressing large-scale pre-trained language models.
1 code implementation • CVPR 2020 • Yansong Tang, Zanlin Ni, Jiahuan Zhou, Danyang Zhang, Jiwen Lu, Ying Wu, Jie zhou
Assessing action quality from videos has attracted growing attention in recent years.
Ranked #4 on Action Quality Assessment on AQA-7
2 code implementations • ICCV 2021 • Yongming Rao, Benlin Liu, Yi Wei, Jiwen Lu, Cho-Jui Hsieh, Jie zhou
In particular, we propose to generate random layouts of a scene by making use of the objects in the synthetic CAD dataset and learn the 3D scene representation by applying object-level contrastive learning on two random scenes generated from the same set of synthetic objects.
2 code implementations • CVPR 2022 • Xiuwei Xu, Yifan Wang, Yu Zheng, Yongming Rao, Jie zhou, Jiwen Lu
In this paper, we propose a weakly-supervised approach for 3D object detection, which makes it possible to train a strong 3D detector with position-level annotations (i.e., annotations of object centers).
1 code implementation • 10 Jul 2023 • Jiali Zeng, Fandong Meng, Yongjing Yin, Jie zhou
Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning.
2 code implementations • NAACL 2022 • Yujia Qin, Yankai Lin, Jing Yi, Jiajie Zhang, Xu Han, Zhengyan Zhang, Yusheng Su, Zhiyuan Liu, Peng Li, Maosong Sun, Jie zhou
Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how could knowledge distillation serve as auxiliary supervision during pre-training to efficiently learn larger PLMs.
1 code implementation • COLING 2022 • Duzhen Zhang, Zhen Yang, Fandong Meng, Xiuyi Chen, Jie zhou
Causal Emotion Entailment (CEE) aims to discover the potential causes behind an emotion in a conversational utterance.
Ranked #3 on Causal Emotion Entailment on RECCON
1 code implementation • 10 Oct 2022 • Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie zhou
To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets.
1 code implementation • ACL 2021 • Fusheng Wang, Jianhao Yan, Fandong Meng, Jie zhou
As an active research field in NMT, knowledge distillation is widely applied to enhance the model's performance by transferring teacher model's knowledge on each training sample.
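The per-sample teacher-knowledge transfer mentioned here is usually realized as a word-level distillation loss. A minimal sketch, assuming the common cross-entropy/KL interpolation form (not necessarily this paper's exact objective):

```python
import math

def kd_token_loss(student_probs, teacher_probs, gold_index, alpha=0.5):
    # Cross-entropy on the gold token, interpolated with KL divergence
    # pulling the student's distribution toward the teacher's.
    # `alpha` balances the two terms; the name is illustrative.
    ce = -math.log(student_probs[gold_index])
    kl = sum(t * math.log(t / s)
             for t, s in zip(teacher_probs, student_probs) if t > 0)
    return alpha * ce + (1 - alpha) * kl
```

When student and teacher agree, the KL term vanishes and only the gold-token cross-entropy remains.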
1 code implementation • 5 Aug 2021 • Zhongjin Luo, Jie zhou, Heming Zhu, Dong Du, Xiaoguang Han, Hongbo Fu
In this work, we propose SimpModeling, a novel sketch-based system for helping users, especially amateurs, easily model 3D animalmorphic heads, a prevalent kind of head in character design.
1 code implementation • 20 Nov 2023 • Xin Zhang, Yingze Song, Tingting Song, Degang Yang, Yichen Ye, Jie zhou, Liming Zhang
In response to the above questions, the Alterable Kernel Convolution (AKConv) is explored in this work, which gives the convolution kernel an arbitrary number of parameters and arbitrary sampled shapes to provide richer options for the trade-off between network overhead and performance.
1 code implementation • CVPR 2022 • Borui Zhang, Wenzhao Zheng, Jie zhou, Jiwen Lu
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Ranked #3 on Metric Learning on CARS196 (using extra training data)
2 code implementations • IJCNLP 2019 • Qiu Ran, Yankai Lin, Peng Li, Jie zhou, Zhiyuan Liu
Numerical reasoning, such as addition, subtraction, sorting, and counting, is a critical skill in human reading comprehension that has not been well considered in existing machine reading comprehension (MRC) systems.
Ranked #10 on Question Answering on DROP Test
1 code implementation • 1 Aug 2023 • Bohao Fan, Siqi Wang, Wenxuan Guo, Wenzhao Zheng, Jianjiang Feng, Jie zhou
In this article, we propose Human-M3, an outdoor multi-modal multi-view multi-person human pose database that includes not only multi-view RGB videos of outdoor scenes but also the corresponding point clouds.
1 code implementation • 29 Sep 2020 • Yusheng Su, Xu Han, Zhengyan Zhang, Peng Li, Zhiyuan Liu, Yankai Lin, Jie zhou, Maosong Sun
In this paper, we propose a novel framework named Coke to dynamically select contextual knowledge and embed knowledge context according to textual context for PLMs, which can avoid the effect of redundant and ambiguous knowledge in KGs that cannot match the input text.
1 code implementation • CVPR 2021 • Wanhua Li, Xiaoke Huang, Jiwen Lu, Jianjiang Feng, Jie zhou
An ordinal distribution constraint is proposed to exploit the ordinal nature of regression.
Ranked #2 on Age Estimation on Adience
Aesthetics Quality Assessment • Age And Gender Classification +3
2 code implementations • 11 Feb 2022 • Jiaan Wang, Fandong Meng, Ziyao Lu, Duo Zheng, Zhixu Li, Jianfeng Qu, Jie zhou
We present ClidSum, a benchmark dataset for building cross-lingual summarization systems on dialogue documents.
2 code implementations • CVPR 2023 • Chenxin Tao, Xizhou Zhu, Weijie Su, Gao Huang, Bin Li, Jie zhou, Yu Qiao, Xiaogang Wang, Jifeng Dai
Driven by these analyses, we propose Siamese Image Modeling (SiameseIM), which predicts the dense representations of an augmented view based on another masked view from the same image but with different augmentations.
1 code implementation • ICCV 2023 • Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie zhou, Jiwen Lu
The pretrain-finetune paradigm in modern computer vision facilitates the success of self-supervised learning, which tends to achieve better transferability than supervised learning.
1 code implementation • ACL 2020 • Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie zhou, Jiebo Luo
Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images.
1 code implementation • CVPR 2021 • Shuyan Li, Xiu Li, Jiwen Lu, Jie zhou
Most existing unsupervised video hashing methods are built on unidirectional models with less reliable training objectives, which underuse the correlations among frames and the similarity structure between videos.
1 code implementation • ACL 2022 • Pei Ke, Hao Zhou, Yankai Lin, Peng Li, Jie zhou, Xiaoyan Zhu, Minlie Huang
Existing reference-free metrics have obvious limitations for evaluating controlled text generation models.
1 code implementation • ICCV 2023 • Ziyi Wang, Xumin Yu, Yongming Rao, Jie zhou, Jiwen Lu
In this paper, we propose a novel 3D-to-2D generative pre-training method that is adaptable to any point cloud model.
Ranked #6 on 3D Part Segmentation on ShapeNet-Part
1 code implementation • Findings (ACL) 2022 • Yujia Qin, Jiajie Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, Jie zhou
We evaluate ELLE with streaming data from 5 domains on BERT and GPT.
1 code implementation • 6 Jun 2022 • Wanhua Li, Xiaoke Huang, Zheng Zhu, Yansong Tang, Xiu Li, Jie zhou, Jiwen Lu
In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space.
Ranked #1 on Few-shot Age Estimation on MORPH Album2
1 code implementation • COLING 2020 • Jie zhou, Junfeng Tian, Rui Wang, Yuanbin Wu, Wenming Xiao, Liang He
However, due to the variety of users' emotional expressions across domains, fine-tuning the pre-trained models on the source domain tends to overfit, leading to inferior results on the target domain.
2 code implementations • IJCNLP 2019 • Yijin Liu, Fandong Meng, Jinchao Zhang, Jie zhou, Yufeng Chen, Jinan Xu
Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing works.
Ranked #1 on Slot Filling on CAIS
1 code implementation • ACL 2021 • Hui Jiang, Chulun Zhou, Fandong Meng, Biao Zhang, Jie zhou, Degen Huang, Qingqiang Wu, Jinsong Su
Due to the great potential in facilitating software development, code generation has attracted increasing attention recently.
1 code implementation • NAACL 2022 • Linzhi Wu, Pengjun Xie, Jie zhou, Meishan Zhang, Chunping Ma, Guangwei Xu, Min Zhang
Prior research has mainly resorted to heuristic rule-based constraints to reduce the noise for specific self-augmentation methods individually.
1 code implementation • 21 Oct 2022 • Lanrui Wang, Jiangnan Li, Zheng Lin, Fandong Meng, Chenxu Yang, Weiping Wang, Jie zhou
We use a fine-grained encoding strategy that is more sensitive to the emotion dynamics (emotion flow) in conversations to predict the emotion-intent characteristic of the response.
1 code implementation • 11 Oct 2022 • Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie zhou, Wenge Rong, Zhang Xiong
This paper proposes the Mixture of Attention Heads (MoA), a new architecture that combines multi-head attention with the MoE mechanism.
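The MoA idea of combining multi-head attention with MoE-style routing can be sketched in a few lines. This is a simplified illustration: the per-head projection matrices stand in for full attention heads, and all names (`moa_layer`, `head_projs`, `gate_w`) are assumptions, not the paper's API:

```python
import numpy as np

def moa_layer(x, head_projs, gate_w, top_k=2):
    # Sketch of Mixture-of-Attention-Heads routing: a gate scores every
    # candidate head per token, only the top-k heads are evaluated, and
    # their outputs are mixed by renormalized gate probabilities.
    logits = x @ gate_w                                   # (seq, n_heads)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)                 # softmax gate
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-top_k:]               # route token t
        norm = probs[t, top].sum()
        for h in top:                                     # sparse mixture
            out[t] += (probs[t, h] / norm) * (x[t] @ head_projs[h])
    return out
```

Because only `top_k` of the heads run per token, capacity grows with the number of heads while per-token compute stays roughly constant.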
1 code implementation • 10 Feb 2023 • Jie zhou, Qian Yu, Chuan Luo, Jing Zhang
In recent years, thanks to the rapid development of deep learning (DL), DL-based multi-task learning (MTL) has made significant progress, and it has been successfully applied to recommendation systems (RS).
1 code implementation • 5 Dec 2023 • Wenxuan Guo, Zhiyu Pan, Yingping Liang, Ziheng Xi, Zhi Chen Zhong, Jianjiang Feng, Jie zhou
Camera-based person re-identification (ReID) systems have been widely applied in the field of public security.
1 code implementation • 17 Apr 2024 • Xin Li, Kun Yuan, Yajing Pei, Yiting Lu, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai, Jianhui Sun, Tianyi Wang, Lei LI, Han Kong, Wenxuan Wang, Bing Li, Cheng Luo, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, Fangyuan Kong, Haotian Fan, Yifang Xu, Haoran Xu, Mengduo Yang, Jie zhou, Jiaze Li, Shijie Wen, Mai Xu, Da Li, Shunyu Yao, Jiazhi Du, WangMeng Zuo, Zhibo Li, Shuai He, Anlong Ming, Huiyuan Fu, Huadong Ma, Yong Wu, Fie Xue, Guozhi Zhao, Lina Du, Jie Guo, Yu Zhang, huimin zheng, JunHao Chen, Yue Liu, Dulan Zhou, Kele Xu, Qisheng Xu, Tao Sun, Zhixiang Ding, Yuhang Hu
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA), where various excellent solutions were submitted and evaluated on KVQ, a dataset collected from the popular short-form video platform Kuaishou/Kwai.
1 code implementation • 9 Dec 2020 • Yunlong Liang, Fandong Meng, Ying Zhang, Jinan Xu, Yufeng Chen, Jie zhou
Firstly, we design a Heterogeneous Graph-Based Encoder to represent the conversation content (i.e., the dialogue history, its emotion flow, facial expressions, audio, and speakers' personalities) with a heterogeneous graph neural network, and then predict suitable emotions for feedback.
1 code implementation • 2 May 2022 • Jiangnan Li, Fandong Meng, Zheng Lin, Rui Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie zhou
Conversational Causal Emotion Entailment aims to detect causal utterances for a non-neutral targeted utterance from a conversation.
Ranked #1 on Causal Emotion Entailment on RECCON
1 code implementation • 18 Jul 2022 • Wanhua Li, Zhexuan Cao, Jianjiang Feng, Jie zhou, Jiwen Lu
As each sample is annotated with multiple attribute labels, these "words" will naturally form an unordered but meaningful "sentence", which depicts the semantic information of the corresponding sample.
1 code implementation • 31 Jan 2024 • Chujie Zheng, Fan Yin, Hao Zhou, Fandong Meng, Jie zhou, Kai-Wei Chang, Minlie Huang, Nanyun Peng
Prepending model inputs with safety prompts is a common practice for safeguarding large language models (LLMs) from complying with queries that contain harmful intents.
1 code implementation • 18 Dec 2019 • Feilong Chen, Fandong Meng, Jiaming Xu, Peng Li, Bo Xu, Jie zhou
Visual Dialog is a vision-language task that requires an AI agent to engage in a conversation with humans grounded in an image.
1 code implementation • EMNLP 2021 • Yuan YAO, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie zhou, Maosong Sun
Existing relation extraction (RE) methods typically focus on extracting relational facts between entity pairs within single sentences or documents.
1 code implementation • COLING 2022 • Zichun Yu, Tianyu Gao, Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie zhou
Prompting, which casts downstream applications as language modeling tasks, has been shown to be sample-efficient compared to standard fine-tuning with pre-trained models.
3 code implementations • ACL 2019 • Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, Jie zhou
Non-Autoregressive Transformer (NAT) aims to accelerate the Transformer model through discarding the autoregressive mechanism and generating target words independently, which fails to exploit the target sequential information.
1 code implementation • 21 Nov 2019 • Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, Jie zhou
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup through generating target words independently and simultaneously.
1 code implementation • CL (ACL) 2021 • Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Jie zhou
Non-Autoregressive Neural Machine Translation (NAT) removes the autoregressive mechanism and achieves significant decoding speedup through generating target words independently and simultaneously.
1 code implementation • ICCV 2021 • Ziwei Wang, Han Xiao, Jiwen Lu, Jie zhou
On the contrary, our GMPQ searches for a mixed-quantization policy that can be generalized to large-scale datasets with only a small amount of data, so that the search cost is significantly reduced without performance degradation.
1 code implementation • 24 Jan 2023 • Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie zhou, Wenge Rong, Zhang Xiong
Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME).
1 code implementation • 23 Mar 2023 • Xiaoke Huang, Yiji Cheng, Yansong Tang, Xiu Li, Jie zhou, Jiwen Lu
Moreover, only minutes of optimization is enough for plausible reconstruction results.
1 code implementation • IJCNLP 2019 • Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, Jie zhou
Aspect based sentiment analysis (ABSA) aims to identify the sentiment polarity towards the given aspect in a sentence, while previous models typically exploit an aspect-independent (weakly associative) encoder for sentence representation generation.
Aspect-Based Sentiment Analysis • Aspect-Based Sentiment Analysis (ABSA) +1
1 code implementation • SEMEVAL 2020 • Qian Zhao, Siyu Tao, Jie zhou, LinLin Wang, Xin Lin, Liang He
As a result, this model performs quite well in both validation and explanation.
1 code implementation • 15 Jun 2021 • Ganqu Cui, Yufeng Du, Cheng Yang, Jie zhou, Liang Xu, Xing Zhou, Xingyi Cheng, Zhiyuan Liu
The recent emergence of contrastive learning approaches facilitates the application on graph representation learning (GRL), introducing graph contrastive learning (GCL) into the literature.
1 code implementation • 4 Jul 2021 • Linqing Zhao, Jiwen Lu, Jie zhou
To address this, we employ a late fusion strategy where we first learn the geometric and contextual similarities between the input and back-projected (from 2D pixels) point clouds and utilize them to guide the fusion of two modalities to further exploit complementary information.
Ranked #20 on Semantic Segmentation on ScanNet
1 code implementation • Findings (ACL) 2021 • Ying Zhang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie zhou
In this paper, we tackle the problem by transferring knowledge from three aspects, i.e., domain, language, and task, and strengthening connections among them.
1 code implementation • ICCV 2021 • Ziwei Wang, Yunsong Wang, Ziyi Wu, Jiwen Lu, Jie zhou
In this paper, we propose an instance similarity learning (ISL) method for unsupervised feature representation.
1 code implementation • ACL 2021 • Wenkai Yang, Yankai Lin, Peng Li, Jie zhou, Xu sun
In this work, we point out a potential problem in current backdoor attacking research: its evaluation ignores the stealthiness of backdoor attacks, and most existing backdoor attacking methods are not stealthy to either system deployers or system users.
1 code implementation • CVPR 2022 • Han Xiao, Ziwei Wang, Zheng Zhu, Jie zhou, Jiwen Lu
Differentiable architecture search (DARTS) acquires the optimal architectures by optimizing the architecture parameters with gradient descent, which significantly reduces the search cost.
1 code implementation • 11 Oct 2022 • Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie zhou
In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance.
1 code implementation • ICCV 2023 • Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie zhou, Jiwen Lu
Data mixing strategies (e.g., CutMix) have shown the ability to greatly improve the performance of convolutional neural networks (CNNs).
1 code implementation • 23 Aug 2023 • Yijin Liu, Xianfeng Zeng, Fandong Meng, Jie zhou
Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization, through instruction fine-tuning.
1 code implementation • EMNLP 2020 • Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie zhou, Dong Yu
The vanilla NMT model usually adopts trivial equal-weighted objectives for target tokens with different frequencies and tends to generate more high-frequency tokens and fewer low-frequency tokens compared with the golden token distribution.
1 code implementation • 29 Jul 2023 • Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie zhou, Xu sun
As large language models (LLMs) generate texts with increasing fluency and realism, there is a growing need to identify the source of texts to prevent the abuse of LLMs.
1 code implementation • Findings (ACL) 2021 • Zekang Li, Jinchao Zhang, Zhengcong Fei, Yang Feng, Jie zhou
Employing human judges to purposely interact with chatbots to check their capacities is costly and inefficient, and it is difficult to get rid of subjective bias.
1 code implementation • CVPR 2021 • Wenzhao Zheng, Chengkun Wang, Jiwen Lu, Jie zhou
In this paper, we propose a deep compositional metric learning (DCML) framework for effective and generalizable similarity measurement between images.
1 code implementation • EMNLP 2021 • Wenkai Yang, Yankai Lin, Peng Li, Jie zhou, Xu sun
Motivated by this observation, we construct a word-based robustness-aware perturbation to distinguish poisoned samples from clean samples to defend against the backdoor attacks on natural language processing (NLP) models.
1 code implementation • 11 Oct 2022 • Lei LI, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie zhou, Xu sun
We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.
1 code implementation • ACL 2020 • Qiu Ran, Yankai Lin, Peng Li, Jie zhou
By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetitive and missing token errors.
1 code implementation • EMNLP 2021 • Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie zhou
Its core motivation is to simulate the inference scene during training by replacing ground-truth tokens with predicted tokens, thus bridging the gap between training and inference.
1 code implementation • ACL 2022 • Yunlong Liang, Fandong Meng, Chulun Zhou, Jinan Xu, Yufeng Chen, Jinsong Su, Jie zhou
The goal of cross-lingual summarization (CLS) is to convert a document in one language (e.g., English) to a summary in another one (e.g., Chinese).
1 code implementation • ACL 2019 • Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie zhou, Xu sun
We propose a novel model to separate the generation into two stages: key fact prediction and surface realization.
1 code implementation • 24 Jan 2020 • Jiachen Xu, Jingyu Gong, Jie zhou, Xin Tan, Yuan Xie, Lizhuang Ma
Besides local features, global information plays an essential role in semantic segmentation, while recent works usually fail to explicitly extract the meaningful global information and make full use of it.
1 code implementation • ECCV 2020 • Wanhua Li, Yueqi Duan, Jiwen Lu, Jianjiang Feng, Jie zhou
Human beings are fundamentally sociable: we generally organize our social lives in terms of relations with other people.
Ranked #1 on Visual Social Relationship Recognition on PIPA
1 code implementation • 7 Sep 2023 • Chujie Zheng, Hao Zhou, Fandong Meng, Jie zhou, Minlie Huang
This work shows that modern LLMs are vulnerable to option position changes in MCQs due to their inherent "selection bias", namely, they prefer to select specific option IDs as answers (like "Option A").
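The selection-bias phenomenon described here can be probed by permuting the options and tallying which option ID gets picked. A minimal sketch, where `answer_fn` is a hypothetical stand-in for the model (it maps an ordered option list to the index it selects); this is an illustration of the probing idea, not the paper's measurement protocol:

```python
from collections import Counter
from itertools import permutations

def probe_selection_bias(answer_fn, options):
    # Present the same options in every order and count which option *ID*
    # (A/B/...) is chosen. An unbiased model keeps the chosen *content*
    # stable, so its ID histogram is uniform; a histogram skewed toward
    # one ID (e.g., always "A") signals selection bias.
    ids = "ABCDEFGH"[:len(options)]
    picked = Counter()
    for perm in permutations(options):
        picked[ids[answer_fn(list(perm))]] += 1
    return picked
```

For example, a degenerate model that always answers the first slot yields a histogram concentrated entirely on "A".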
1 code implementation • 20 Mar 2018 • Shaohui Liu, Yi Wei, Jiwen Lu, Jie zhou
Unlike most existing evaluation frameworks which transfer the representation of ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation.
1 code implementation • 15 Oct 2021 • Yujia Qin, Xiaozhi Wang, Yusheng Su, Yankai Lin, Ning Ding, Jing Yi, Weize Chen, Zhiyuan Liu, Juanzi Li, Lei Hou, Peng Li, Maosong Sun, Jie zhou
In the experiments, we study diverse few-shot NLP tasks and surprisingly find that in a 250-dimensional subspace found with 100 tasks, by only tuning 250 free parameters, we can recover 97% and 83% of the full prompt tuning performance for 100 seen tasks (using different training data) and 20 unseen tasks, respectively, showing great generalization ability of the found intrinsic task subspace.
1 code implementation • CVPR 2023 • Chengkun Wang, Wenzhao Zheng, Junlong Li, Jie zhou, Jiwen Lu
Learning a generalizable and comprehensive similarity metric to depict the semantic discrepancies between images is the foundation of many computer vision tasks.
1 code implementation • 28 May 2023 • Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie zhou
In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.
2 code implementations • 20 Nov 2023 • Bohao Fan, Wenzhao Zheng, Jianjiang Feng, Jie zhou
In recent years, point cloud perception tasks have been garnering increasing attention.
Ranked #1 on 3D Human Pose Estimation on SLOPER4D
1 code implementation • ACL 2022 • Songming Zhang, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jian Liu, Jie zhou
Token-level adaptive training approaches can alleviate the token imbalance problem and thus improve neural machine translation, through re-weighting the losses of different target tokens based on specific statistical metrics (e.g., token frequency or mutual information).
1 code implementation • 9 Oct 2022 • Siyu Lai, Zhen Yang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie zhou
Word alignment which aims to extract lexicon translation equivalents between source and target sentences, serves as a fundamental tool for natural language processing.
1 code implementation • 23 Feb 2024 • Shunyu Liu, Jie zhou, Qunxi Zhu, Qin Chen, Qingchun Bai, Jun Xiao, Liang He
Aspect-Based Sentiment Analysis (ABSA) stands as a crucial task in predicting the sentiment polarity associated with identified aspects within text.
Aspect-Based Sentiment Analysis • Aspect-Based Sentiment Analysis (ABSA) +1
1 code implementation • Findings (ACL) 2021 • Jie Zhou, Shengding Hu, Xin Lv, Cheng Yang, Zhiyuan Liu, Wei Xu, Jie Jiang, Juanzi Li, Maosong Sun
Based on the datasets, we propose novel tasks such as multi-hop knowledge abstraction (MKA), multi-hop knowledge concretization (MKC) and then design a comprehensive benchmark.
1 code implementation • 14 Sep 2021 • Xueyao Zhang, Jinchao Zhang, Yao Qiu, Li Wang, Jie zhou
Experimental results reveal that, compared to existing methods, HAT has a much better understanding of structure and can also improve the quality of generated music, especially in form and texture.
1 code implementation • EMNLP 2021 • Shaopeng Lai, Ante Wang, Fandong Meng, Jie zhou, Yubin Ge, Jiali Zeng, Junfeng Yao, Degen Huang, Jinsong Su
Dominant sentence ordering models can be classified into pairwise ordering models and set-to-sequence models.
1 code implementation • NAACL 2022 • Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie zhou
Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability.
1 code implementation • 14 Feb 2024 • Cheng Qian, Bingxiang He, Zhong Zhuang, Jia Deng, Yujia Qin, Xin Cong, Zhong Zhang, Jie zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
1 code implementation • 17 Feb 2024 • Wenkai Yang, Xiaohan Bi, Yankai Lin, Sishuo Chen, Jie zhou, Xu sun
We first formulate a general framework of agent backdoor attacks, then we present a thorough analysis on the different forms of agent backdoor attacks.
1 code implementation • 9 Dec 2020 • Jun Wan, Zhihui Lai, Jing Li, Jie zhou, Can Gao
Recently, heatmap regression has been widely explored in facial landmark detection and obtained remarkable performance.
1 code implementation • 7 Feb 2021 • Yusheng Su, Xu Han, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, Peng Li, Jie zhou, Maosong Sun
We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.