Search Results for author: Jie Zhou

Found 457 papers, 249 papers with code

TAKE: Topic-shift Aware Knowledge sElection for Dialogue Generation

1 code implementation COLING 2022 Chenxu Yang, Zheng Lin, Jiangnan Li, Fandong Meng, Weiping Wang, Lanrui Wang, Jie Zhou

The knowledge selector generally constructs a query based on the dialogue context and selects the most appropriate knowledge to help response generation.

Dialogue Generation Knowledge Distillation +1

CodRED: A Cross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild

1 code implementation EMNLP 2021 Yuan Yao, Jiaju Du, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, Maosong Sun

Existing relation extraction (RE) methods typically focus on extracting relational facts between entity pairs within single sentences or documents.

Relation Extraction

Constructing Emotional Consensus and Utilizing Unpaired Data for Empathetic Dialogue Generation

no code implementations Findings (EMNLP) 2021 Lei Shen, Jinchao Zhang, Jiao Ou, Xiaofang Zhao, Jie Zhou

To address the above issues, we propose a dual-generative model, Dual-Emp, to simultaneously construct the emotional consensus and utilize some external unpaired data.

Dialogue Generation

RSGT: Relational Structure Guided Temporal Relation Extraction

no code implementations COLING 2022 Jie Zhou, Shenpo Dong, Hongkui Tu, Xiaodong Wang, Yong Dou

In this paper, we propose RSGT: Relational Structure Guided Temporal Relation Extraction to extract the relational structure features that can fit for both inter-sentence and intra-sentence relations.

Natural Language Understanding Temporal Relation Classification

Do Pre-trained Models Benefit Knowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach

1 code implementation Findings (ACL) 2022 Xin Lv, Yankai Lin, Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

In recent years, pre-trained language models (PLMs) have been shown to capture factual knowledge from massive texts, which encourages the proposal of PLM-based knowledge graph completion (KGC) models.

Knowledge Graph Completion Link Prediction

MovieChats: Chat like Humans in a Closed Domain

no code implementations EMNLP 2020 Hui Su, Xiaoyu Shen, Zhou Xiao, Zheng Zhang, Ernie Chang, Cheng Zhang, Cheng Niu, Jie Zhou

In this work, we take a close look at the movie domain and present a large-scale high-quality corpus with fine-grained annotations in hope of pushing the limit of movie-domain chatbots.

Chatbot Retrieval

Temporal Coherence or Temporal Motion: Which is More Critical for Video-based Person Re-identification?

no code implementations ECCV 2020 Guangyi Chen, Yongming Rao, Jiwen Lu, Jie Zhou

Specifically, we disentangle the video representation into the temporal coherence and motion parts and randomly change the scale of the temporal motion features as the adversarial noise.

Video-Based Person Re-Identification

Deep Credible Metric Learning for Unsupervised Domain Adaptation Person Re-identification

no code implementations ECCV 2020 Guangyi Chen, Yuhao Lu, Jiwen Lu, Jie Zhou

Experimental results demonstrate that our DCML method explores credible and valuable training data and improves the performance of unsupervised domain adaptation.

Metric Learning Person Re-Identification +2

Unsupervised Dependency Graph Network

1 code implementation ACL 2022 Yikang Shen, Shawn Tan, Alessandro Sordoni, Peng Li, Jie Zhou, Aaron Courville

We introduce a new model, the Unsupervised Dependency Graph Network (UDGN), that can induce dependency structures from raw corpora and the masked language modeling task.

Language Modelling Masked Language Modeling +2

Divide and Denoise: Learning from Noisy Labels in Fine-Grained Entity Typing with Cluster-Wise Loss Correction

no code implementations ACL 2022 Kunyuan Pang, Haoyu Zhang, Jie Zhou, Ting Wang

In this work, we propose a clustering-based loss correction framework named Feature Cluster Loss Correction (FCLC), to address these two problems.

Entity Typing

Deep Hashing with Active Pairwise Supervision

no code implementations ECCV 2020 Ziwei Wang, Quan Zheng, Jiwen Lu, Jie Zhou

In this paper, we propose a Deep Hashing method with Active Pairwise Supervision (DH-APS).

Deep Hashing

BMInf: An Efficient Toolkit for Big Model Inference and Tuning

1 code implementation ACL 2022 Xu Han, Guoyang Zeng, Weilin Zhao, Zhiyuan Liu, Zhengyan Zhang, Jie Zhou, Jun Zhang, Jia Chao, Maosong Sun

In recent years, large-scale pre-trained language models (PLMs) containing billions of parameters have achieved promising results on various NLP tasks.

Quantization Scheduling

Rotation-robust Intersection over Union for 3D Object Detection

no code implementations ECCV 2020 Yu Zheng, Danyang Zhang, Sinan Xie, Jiwen Lu, Jie Zhou

In this paper, we propose a Rotation-robust Intersection over Union (RIoU) for 3D object detection, which aims to jointly learn the overlap of rotated bounding boxes.

3D Object Detection object-detection
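
For intuition about the quantity RIoU builds on, here is a minimal sketch of plain IoU for axis-aligned 3D boxes (the degenerate zero-rotation case), assuming boxes are given as `(xmin, ymin, zmin, xmax, ymax, zmax)`; the rotated case the paper addresses additionally requires intersecting oriented polygons.

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes, each (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):  # overlap length along x, y, z
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0  # no overlap along this axis
        inter *= hi - lo

    def volume(box):
        return (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])

    union = volume(a) + volume(b) - inter
    return inter / union

# Two unit cubes overlapping in half of their volume along x -> IoU = 1/3.
score = iou_3d((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1))
```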

Structural Deep Metric Learning for Room Layout Estimation

no code implementations ECCV 2020 Wenzhao Zheng, Jiwen Lu, Jie Zhou

We employ a metric model and a layout encoder to map the RGB images and the ground-truth layouts to the embedding space, respectively, and a layout decoder to map the embeddings to the corresponding layouts, where the whole framework is trained in an end-to-end manner.

Metric Learning Room Layout Estimation

On the Syntax and Semantics of Verbs of Cheating

no code implementations CCL 2021 Shan Wang, Jie Zhou

"Deception is a common social phenomenon, but research on verbs of cheating is very limited. This paper selects simple sentences containing 'cheating'-type verbs and performs large-scale syntactic and semantic dependency analyses on them. The study shows that when 'cheating'-type verbs act as dependents, they can serve as different syntactic constituents and semantic roles, while exhibiting high similarity in syntactic function. As governors, these verbs show different syntactic co-occurrence patterns when taking on different syntactic functions. Semantically, this paper describes and explains in detail the semantic dependency characteristics of these verbs along dimensions such as semantic density, agent and patient roles, situational roles, and event relations. Although the syntax and semantics of 'cheating'-type verbs are diverse, the dominant pattern is the subject-verb-object construction, in which the most common semantic collocation is an agent performing a deceptive act on an affected party and thereby influencing that party. Combining dependency grammar and frame semantics, and integrating quantitative statistics with qualitative analysis, this study deepens research on verbal cues of deceptive behavior and on verbs of speaking."

Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation

no code implementations EMNLP 2020 Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, Jie Zhou

Here, we deal with these issues on two aspects: (1) We enhance the prior selection module with the necessary posterior information obtained from the specially designed Posterior Information Prediction Module (PIPM); (2) We propose a Knowledge Distillation Based Training Strategy (KDBTS) to train the decoder with the knowledge selected from the prior distribution, removing the exposure bias of knowledge selection.

Dialogue Generation Knowledge Distillation

Fingerprint Matching with Localized Deep Representation

no code implementations 30 Nov 2023 Yongjie Duan, Zhiyu Pan, Jianjiang Feng, Jie Zhou

The matching scores produced by LDRF also exhibit intuitive statistical characteristics, which led us to propose a matching score normalization technique to mitigate the uncertainty in cases of very small overlapping areas.

SelfOcc: Self-Supervised Vision-Based 3D Occupancy Prediction

1 code implementation 21 Nov 2023 Yuanhui Huang, Wenzhao Zheng, Borui Zhang, Jie Zhou, Jiwen Lu

Our SelfOcc outperforms the previous best method SceneRF by 58.7% using a single frame as input on SemanticKITTI and is the first self-supervised work that produces reasonable 3D occupancy for surround cameras on nuScenes.

Autonomous Driving Monocular Depth Estimation

AKConv: Convolutional Kernel with Arbitrary Sampled Shapes and Arbitrary Number of Parameters

1 code implementation 20 Nov 2023 Xin Zhang, Yingze Song, Tingting Song, Degang Yang, Yichen Ye, Jie Zhou, Liming Zhang

In response to the above questions, the Alterable Kernel Convolution (AKConv) is explored in this work, which gives the convolution kernel an arbitrary number of parameters and arbitrary sampled shapes to provide richer options for the trade-off between network overhead and performance.

object-detection Object Detection

LiDAR-HMR: 3D Human Mesh Recovery from LiDAR

2 code implementations 20 Nov 2023 Bohao Fan, Wenzhao Zheng, Jianjiang Feng, Jie Zhou

In recent years, point cloud perception tasks have been garnering increasing attention.

Human Mesh Recovery

MAVEN-Arg: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation

no code implementations 15 Nov 2023 Xiaozhi Wang, Hao Peng, Yong Guan, Kaisheng Zeng, Jianhui Chen, Lei Hou, Xu Han, Yankai Lin, Zhiyuan Liu, Ruobing Xie, Jie Zhou, Juanzi Li

Understanding events in texts is a core objective of natural language understanding, which requires detecting event occurrences, extracting event arguments, and analyzing inter-event relationships.

Event Argument Extraction Event Detection +3

Enabling Large Language Models to Learn from Rules

no code implementations 15 Nov 2023 Wenkai Yang, Yankai Lin, Jie Zhou, Ji-Rong Wen

That is, humans can grasp the new tasks or knowledge quickly and generalize well given only a detailed rule and a few optional examples.

RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge

no code implementations 14 Nov 2023 Yi Liu, Lianzhe Huang, Shicheng Li, Sishuo Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun

Therefore, to evaluate the ability of LLMs to discern the reliability of external knowledge, we create a benchmark from existing knowledge bases.

counterfactual Knowledge Graphs +2

Eval-GCSC: A New Metric for Evaluating ChatGPT's Performance in Chinese Spelling Correction

1 code implementation 14 Nov 2023 Kunting Li, Yong Hu, Shaolei Wang, Hanhan Ma, Liang He, Fandong Meng, Jie Zhou

However, in the Chinese Spelling Correction (CSC) task, we observe a discrepancy: while ChatGPT performs well under human evaluation, it scores poorly according to traditional metrics.

Semantic Similarity Semantic Textual Similarity +1

TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models

no code implementations 8 Nov 2023 Zhen Yang, Yingxue Zhang, Fandong Meng, Jie Zhou

Specifically, for the input from any modality, TEAL first discretizes it into a token sequence with the off-the-shelf tokenizer and embeds the token sequence into a joint embedding space with a learnable embedding matrix.
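
The tokenize-then-embed pipeline this abstract describes can be illustrated with a toy sketch (not the paper's actual tokenizer or embedding matrix, both of which are assumptions here for illustration): nearest-neighbor discretization against a scalar codebook, followed by a table lookup.

```python
def tokenize(values, codebook):
    """Discretize continuous inputs to the ids of their nearest codebook entries."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - v))
            for v in values]

def embed(token_ids, table):
    """Map a discrete token sequence into the embedding space via table lookup."""
    return [table[t] for t in token_ids]

codebook = [0.0, 0.5, 1.0]                    # toy "tokenizer" codebook
table = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # one embedding row per token id
ids = tokenize([0.1, 0.9], codebook)          # nearest entries: 0.0 and 1.0
vectors = embed(ids, table)
```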

Improving Machine Translation with Large Language Models: A Preliminary Study with Cooperative Decoding

no code implementations 6 Nov 2023 Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou

Contemporary translation engines built upon the encoder-decoder framework have reached a high level of development, while the emergence of Large Language Models (LLMs) has disrupted their position by offering the potential for achieving superior translation quality.

Machine Translation NMT +1

Universal Multi-modal Multi-domain Pre-trained Recommendation

no code implementations 3 Nov 2023 Wenqi Sun, Ruobing Xie, Shuqing Bian, Wayne Xin Zhao, Jie Zhou

There is a rapidly-growing research interest in modeling user preferences via pre-training multi-domain interactions for recommender systems.

Recommendation Systems

Plot Retrieval as an Assessment of Abstract Semantic Association

no code implementations 3 Nov 2023 Shicheng Xu, Liang Pang, Jiangnan Li, Mo Yu, Fandong Meng, Huawei Shen, Xueqi Cheng, Jie Zhou

Readers usually only give an abstract and vague description as the query based on their own understanding, summaries, or speculations of the plot, which requires the retrieval model to have a strong ability to estimate the abstract semantic associations between the query and candidate plots.

Information Retrieval Retrieval

Exploring Unified Perspective For Fast Shapley Value Estimation

1 code implementation 2 Nov 2023 Borui Zhang, Baotong Tian, Wenzhao Zheng, Jie Zhou, Jiwen Lu

Shapley values have emerged as a widely accepted and trustworthy tool, grounded in theoretical axioms, for addressing challenges posed by black-box models like deep neural networks.
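
Exact Shapley values require enumerating all coalitions, which is exponential in the number of features; the classic baseline that fast estimators are measured against is Monte Carlo permutation sampling. A minimal sketch of that baseline (illustrative only, not the paper's estimator):

```python
import random

def shapley_monte_carlo(value_fn, n_features, n_samples=500, seed=0):
    """Estimate Shapley values by averaging marginal contributions
    over randomly sampled feature orderings."""
    rng = random.Random(seed)
    phi = [0.0] * n_features
    features = list(range(n_features))
    for _ in range(n_samples):
        perm = features[:]
        rng.shuffle(perm)
        coalition = set()
        prev = value_fn(coalition)
        for f in perm:
            coalition.add(f)
            cur = value_fn(coalition)
            phi[f] += cur - prev  # marginal contribution of f in this ordering
            prev = cur
    return [p / n_samples for p in phi]

# Toy additive game: the Shapley value of each feature equals its weight.
weights = [3.0, 1.0, 2.0]
vals = shapley_monte_carlo(lambda s: sum(weights[i] for i in s), n_features=3)
```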

MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory

1 code implementation NeurIPS 2023 Yinan Liang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu

Due to the high price and heavy energy consumption of GPUs, deploying deep models on IoT devices such as microcontrollers makes significant contributions for ecological AI.

Image Classification

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules

1 code implementation 24 Oct 2023 Chaojun Xiao, Yuqi Luo, Wenbin Zhang, Pengle Zhang, Xu Han, Yankai Lin, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Pre-trained language models (PLMs) have achieved remarkable results on NLP tasks but at the expense of huge parameter sizes and the consequent computational costs.

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language

no code implementations 20 Oct 2023 Zekai Qu, Ruobing Xie, Chaojun Xiao, Yuan Yao, Zhiyuan Liu, Fengzong Lian, Zhanhui Kang, Jie Zhou

With the capabilities of pre-trained language models (PLMs) widely verified in various NLP tasks, pioneering efforts attempt to explore the possible cooperation of the general textual information in PLMs with the personalized behavioral information in user historical behavior sequences to enhance sequential recommendation (SR).

Informativeness Language Modelling +1

Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models

no code implementations 19 Oct 2023 Weize Chen, Xiaoyue Xu, Xu Han, Yankai Lin, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise.

DCRNN: A Deep Cross approach based on RNN for Partial Parameter Sharing in Multi-task Learning

no code implementations 18 Oct 2023 Jie Zhou, Qian Yu

The model has three innovations: 1) It adopts the idea of cross networks and uses an RNN network to cross-process the features, thereby effectively improving the expressive ability of the model; 2) It innovatively proposes the structure of partial parameter sharing; 3) It can effectively capture the potential correlation between different tasks to optimize the efficiency and methods for learning different tasks.

Multi-Task Learning Recommendation Systems

XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners

1 code implementation 9 Oct 2023 Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Fang Guo, Qinglin Qi, Jie Zhou, Yue Zhang

During the selection of unlabeled data, we combine the predictive uncertainty of the encoder and the explanation score of the decoder to acquire informative data for annotation.

Active Learning text-classification +1

C^2M-DoT: Cross-modal consistent multi-view medical report generation with domain transfer network

no code implementations 9 Oct 2023 Ruizhi Wang, Xiangtao Wang, Jie Zhou, Thomas Lukasiewicz, Zhenghua Xu

In addition, word-level optimization based on numbers ignores the semantics of reports and medical images, and the generated reports often cannot achieve good performance.

Contrastive Learning Medical Report Generation

Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information

1 code implementation 8 Oct 2023 Yun Luo, Zhen Yang, Fandong Meng, Yingjie Li, Jie Zhou, Yue Zhang

However, we observe that merely concatenating sentences in a contextual window does not fully utilize contextual information and can sometimes lead to excessive attention on less informative sentences.

Skip-Plan: Procedure Planning in Instructional Videos via Condensed Action Space Learning

no code implementations ICCV 2023 Zhiheng Li, Wenjia Geng, Muheng Li, Lei Chen, Yansong Tang, Jiwen Lu, Jie Zhou

By this means, our model explores all sorts of reliable sub-relations within an action sequence in the condensed action space.

TCOVIS: Temporally Consistent Online Video Instance Segmentation

1 code implementation ICCV 2023 Junlong Li, Bingyao Yu, Yongming Rao, Jie Zhou, Jiwen Lu

The core of our method consists of a global instance assignment strategy and a spatio-temporal enhancement module, which improve the temporal consistency of the features from two aspects.

Instance Segmentation Semantic Segmentation +1

Introspective Deep Metric Learning

2 code implementations 11 Sep 2023 Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

This paper proposes an introspective deep metric learning (IDML) framework for uncertainty-aware comparisons of images.

Image Retrieval Metric Learning

AMLP:Adaptive Masking Lesion Patches for Self-supervised Medical Image Segmentation

no code implementations 8 Sep 2023 Xiangtao Wang, Ruizhi Wang, Jie Zhou, Thomas Lukasiewicz, Zhenghua Xu

The proposed strategies effectively address limitations in applying masked modeling to medical images, tailored to capturing fine lesion details vital for segmentation tasks.

Image Segmentation Medical Image Segmentation +3

Large Language Models Are Not Robust Multiple Choice Selectors

no code implementations 7 Sep 2023 Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, Minlie Huang

Through extensive empirical analyses with 20 LLMs on three benchmarks, we pinpoint that this behavioral bias primarily stems from LLMs' token bias, where the model a priori assigns more probabilistic mass to specific option ID tokens (e.g., A/B/C/D) when predicting answers from the option IDs.

Multiple-choice Selection bias
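
One way to surface such option-ID bias, in the spirit of the paper's analysis (the helper below is hypothetical, not the paper's code), is to rotate the option contents through the ID slots and check whether a scorer keeps picking the same ID regardless of content:

```python
from collections import Counter

def detect_id_bias(score_fn, options, ids=("A", "B", "C", "D")):
    """Cycle option contents through the ID slots and tally the picked ID.

    A content-driven model should pick a different ID on each rotation;
    a consistently repeated ID indicates token (option-ID) bias.
    """
    picks = Counter()
    for shift in range(len(options)):
        rotated = options[shift:] + options[:shift]
        scores = {ids[i]: score_fn(ids[i], rotated[i]) for i in range(len(options))}
        picks[max(scores, key=scores.get)] += 1
    return picks

# Toy scorer that always prefers the token "A" regardless of content.
biased = lambda opt_id, text: 1.0 if opt_id == "A" else 0.0
picks = detect_id_bias(biased, ["cat", "dog", "fish", "bird"])
```

Here the biased scorer picks "A" on all four rotations, the signature of a pure token bias.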

Exploring the Robustness of Human Parsers Towards Common Corruptions

no code implementations 2 Sep 2023 Sanyi Zhang, Xiaochun Cao, Rui Wang, Guo-Jun Qi, Jie Zhou

The experimental results show that the proposed method demonstrates good universality which can improve the robustness of the human parsing models and even the semantic segmentation models when facing various image common corruptions.

Data Augmentation Human Parsing +1

PointOcc: Cylindrical Tri-Perspective View for Point-based 3D Semantic Occupancy Prediction

1 code implementation 31 Aug 2023 Sicheng Zuo, Wenzhao Zheng, Yuanhui Huang, Jie Zhou, Jiwen Lu

To address this, we propose a cylindrical tri-perspective view to represent point clouds effectively and comprehensively and a PointOcc model to process them efficiently.

Autonomous Driving Segmentation +1

Improving Translation Faithfulness of Large Language Models via Augmenting Instructions

1 code implementation 24 Aug 2023 Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

The experimental results demonstrate significant improvements in translation performance with SWIE based on BLOOMZ-3b, particularly in zero-shot and long text translations due to reduced instruction forgetting risk.

Instruction Following Machine Translation +2

Instruction Position Matters in Sequence Generation with Large Language Models

1 code implementation 23 Aug 2023 Yijin Liu, Xianfeng Zeng, Fandong Meng, Jie Zhou

Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization, through instruction fine-tuning.

Instruction Following Translation

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors

1 code implementation 21 Aug 2023 Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

Autonomous agents empowered by Large Language Models (LLMs) have undergone significant improvements, enabling them to generalize across a broad spectrum of tasks.

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

1 code implementation 17 Aug 2023 Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang

Moreover, we find that ALPACA can maintain more knowledge and capacity compared with LLAMA during the continual fine-tuning, which implies that general instruction tuning can help mitigate the forgetting phenomenon of LLMs in the further fine-tuning process.

Reading Comprehension

Towards Multiple References Era -- Addressing Data Leakage and Limited Reference Diversity in NLG Evaluation

1 code implementation 6 Aug 2023 Xianfeng Zeng, Yijin Liu, Fandong Meng, Jie Zhou

To address this issue, we propose to utilize \textit{multiple references} to enhance the consistency between these metrics and human evaluations.

Text Generation
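
The core idea, scoring a hypothesis against several references and keeping the best match, can be sketched with a simple unigram-F1 metric (illustrative only; the paper works with standard NLG evaluation metrics, and this toy metric is an assumption for the sketch):

```python
def unigram_f1(hyp, ref):
    """Unigram overlap F1 between two whitespace-tokenized strings."""
    h, r = hyp.split(), ref.split()
    pool, common = list(r), 0
    for tok in h:  # count clipped token matches
        if tok in pool:
            pool.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    p, rc = common / len(h), common / len(r)
    return 2 * p * rc / (p + rc)

def multi_ref_score(hyp, refs):
    """Score against several references and keep the best match."""
    return max(unigram_f1(hyp, r) for r in refs)
```

With a single reference, a perfectly valid paraphrase can score poorly; adding references that cover more valid phrasings lets the maximum recover the credit.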

EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education

1 code implementation 5 Aug 2023 Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu

Currently, EduChat is available online as an open-source project, with its code, data, and model parameters available on platforms (e.g., GitHub https://github.com/icalk-nlp/EduChat, Hugging Face https://huggingface.co/ecnu-icalk).

Chatbot Language Modelling +1

Human-M3: A Multi-view Multi-modal Dataset for 3D Human Pose Estimation in Outdoor Scenes

1 code implementation 1 Aug 2023 Bohao Fan, Siqi Wang, Wenxuan Guo, Wenzhao Zheng, Jianjiang Feng, Jie Zhou

In this article, we propose Human-M3, an outdoor multi-modal multi-view multi-person human pose database which includes not only multi-view RGB videos of outdoor scenes but also corresponding pointclouds.

3D Human Pose Estimation

ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs

1 code implementation 31 Jul 2023 Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, Maosong Sun

Based on ToolBench, we fine-tune LLaMA to obtain an LLM ToolLLaMA, and equip it with a neural API retriever to recommend appropriate APIs for each instruction.

Towards Codable Watermarking for Injecting Multi-bit Information to LLM

no code implementations 29 Jul 2023 Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie Zhou, Xu Sun

As large language models (LLMs) generate texts with increasing fluency and realism, there is a growing need to identify the source of texts to prevent the abuse of LLMs.
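
A minimal sketch of one underlying idea in multi-bit text watermarking (illustrative only, not the paper's scheme): hash the previous token to partition the vocabulary, and let the payload bit choose which half is favored ("green") during generation, so a detector can recover the bit by counting which half the emitted tokens fall into.

```python
import hashlib

def green_list(prev_token, bit, vocab_size):
    """Hash-partition the vocabulary given the previous token; the payload bit
    selects which half counts as 'green' (to be favored during sampling)."""
    greens = []
    for tok in range(vocab_size):
        # Deterministic pseudo-random bit for the (context, token) pair.
        h = hashlib.sha256(f"{prev_token}:{tok}".encode()).digest()[0] & 1
        if h == bit:
            greens.append(tok)
    return greens

# The two payload bits induce complementary halves of a toy 16-token vocabulary.
g0 = green_list(prev_token=7, bit=0, vocab_size=16)
g1 = green_list(prev_token=7, bit=1, vocab_size=16)
```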

TIM: Teaching Large Language Models to Translate with Comparison

1 code implementation 10 Jul 2023 Jiali Zeng, Fandong Meng, Yongjing Yin, Jie Zhou

Open-sourced large language models (LLMs) have demonstrated remarkable efficacy in various tasks with instruction tuning.


Soft Language Clustering for Multilingual Model Pre-training

no code implementations 13 Jun 2023 Jiali Zeng, Yufan Jiang, Yongjing Yin, Yi Jing, Fandong Meng, Binghuai Lin, Yunbo Cao, Jie Zhou

Multilingual pre-trained language models have demonstrated impressive (zero-shot) cross-lingual transfer abilities, however, their performance is hindered when the target language has distant typology from source languages or when pre-training data is limited in size.

Clustering Question Answering +4

Towards Accurate Data-free Quantization for Diffusion Models

no code implementations 30 May 2023 Changyuan Wang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu

On the contrary, we design group-wise quantization functions for activation discretization in different timesteps and sample the optimal timestep for informative calibration image generation, so that our quantized diffusion model can reduce the discretization errors with negligible computational overhead.

Data Free Quantization Image Generation
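
The group-wise quantization functions mentioned above can be illustrated with a simple min-max uniform quantize-dequantize sketch over contiguous groups (an assumption for illustration; the paper's functions are designed per diffusion timestep):

```python
def quantize_groupwise(x, group_size=4, n_bits=8):
    """Uniform min-max quantize-dequantize applied independently per group,
    so each group gets its own scale and suffers less from outliers."""
    levels = (1 << n_bits) - 1
    out = []
    for i in range(0, len(x), group_size):
        group = x[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / levels if hi > lo else 1.0
        # Round to the nearest of 2^n_bits levels, then map back to real values.
        out.extend(round((v - lo) / scale) * scale + lo for v in group)
    return out

activations = [0.0, 0.7, 1.9, 3.0, -1.0, 0.2, 0.4, 0.1]
recovered = quantize_groupwise(activations, group_size=4, n_bits=8)
```

At 8 bits each value is reconstructed to within half a quantization step of its group's scale.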

Emergent Modularity in Pre-trained Transformers

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Chaojun Xiao, Xiaozhi Wang, Xu Han, Zhiyuan Liu, Ruobing Xie, Maosong Sun, Jie Zhou

In analogy to human brains, we consider two main characteristics of modularity: (1) functional specialization of neurons: we evaluate whether each neuron is mainly specialized in a certain function, and find that the answer is yes.

Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning

1 code implementation 28 May 2023 Weize Chen, Xu Han, Yankai Lin, Zhiyuan Liu, Maosong Sun, Jie Zhou

Since it is non-trivial to directly model the intermediate states and design a running cost function, we propose to use latent stochastic bridges to regularize the intermediate states and use the regularization as the running cost of PETs.

Plug-and-Play Knowledge Injection for Pre-trained Language Models

1 code implementation 28 May 2023 Zhengyan Zhang, Zhiyuan Zeng, Yankai Lin, Huadong Wang, Deming Ye, Chaojun Xiao, Xu Han, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

Experimental results on three knowledge-driven NLP tasks show that existing injection methods are not suitable for the new paradigm, while map-tuning effectively improves the performance of downstream models.

Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning

no code implementations 23 May 2023 Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun

In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks.

A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition

1 code implementation 21 May 2023 Limao Xiong, Jie Zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan

Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.

named-entity-recognition Named Entity Recognition +2

GFDC: A Granule Fusion Density-Based Clustering with Evidential Reasoning

no code implementations 20 May 2023 Mingjie Cai, Zhishan Wu, Qingguo Li, Feng Xu, Jie Zhou

Further, three novel granule fusion strategies are utilized to combine granules into stable cluster structures, helping to detect clusters with arbitrary shapes.


Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion

no code implementations 20 May 2023 Yun Luo, Xiaotian Lin, Zhen Yang, Fandong Meng, Jie Zhou, Yue Zhang

It is seldom considered to adapt the decision boundary for new representations, so in this paper we propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning (SCCL). In our method, a contrastive loss is used to directly learn representations for different tasks, and a limited number of data samples are saved as the classification criterion.

Classification Continual Learning +1

Personality Understanding of Fictional Characters during Book Reading

1 code implementation 17 May 2023 Mo Yu, Jiangnan Li, Shunyu Yao, Wenjie Pang, Xiaochen Zhou, Zhou Xiao, Fandong Meng, Jie Zhou

As readers engage with a story, their understanding of a character evolves based on new events and information; and multiple fine-grained aspects of personalities can be perceived.

Towards Unifying Multi-Lingual and Cross-Lingual Summarization

no code implementations 16 May 2023 Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou

In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model could process documents in any language and generate their summaries also in any language.

Language Modelling Text Summarization

Recyclable Tuning for Continual Pre-training

1 code implementation 15 May 2023 Yujia Qin, Cheng Qian, Xu Han, Yankai Lin, Huadong Wang, Ruobing Xie, Zhiyuan Liu, Maosong Sun, Jie Zhou

In pilot studies, we find that after continual pre-training, the upgraded PLM remains compatible with the outdated adapted weights to some extent.

RC3: Regularized Contrastive Cross-lingual Cross-modal Pre-training

no code implementations 13 May 2023 Chulun Zhou, Yunlong Liang, Fandong Meng, Jinan Xu, Jinsong Su, Jie Zhou

In this paper, we propose Regularized Contrastive Cross-lingual Cross-modal (RC^3) pre-training, which further exploits more abundant weakly-aligned multilingual image-text pairs.

Contrastive Learning Machine Translation

WeLayout: WeChat Layout Analysis System for the ICDAR 2023 Competition on Robust Layout Segmentation in Corporate Documents

no code implementations 11 May 2023 Mingliang Zhang, Zhen Cao, Juntao Liu, LiQiang Niu, Fandong Meng, Jie Zhou

Our approach effectively demonstrates the benefits of combining query-based and anchor-free models for achieving robust layout segmentation in corporate documents.

Bayesian Optimization Segmentation

Investigating Forgetting in Pre-Trained Representations Through Continual Learning

no code implementations 10 May 2023 Yun Luo, Zhen Yang, Xuefeng Bai, Fandong Meng, Jie Zhou, Yue Zhang

Intuitively, the representation forgetting can influence the general knowledge stored in pre-trained language models (LMs), but the concrete effect is still unclear.

Continual Learning General Knowledge

Diffusion Theory as a Scalpel: Detecting and Purifying Poisonous Dimensions in Pre-trained Language Models Caused by Backdoor or Bias

no code implementations 8 May 2023 Zhiyuan Zhang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun

To settle this issue, we propose the Fine-purifying approach, which utilizes the diffusion theory to study the dynamic process of fine-tuning for finding potentially poisonous dimensions.

Attacking Pre-trained Recommendation

1 code implementation 6 May 2023 Yiqing Wu, Ruobing Xie, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Jie Zhou, Yongjun Xu, Qing He

Recently, a series of pioneering studies have shown the potency of pre-trained models in sequential recommendation, illuminating the path toward building an omniscient unified pre-trained recommendation model for different downstream recommendation tasks.

Sequential Recommendation

DSPDet3D: Dynamic Spatial Pruning for 3D Small Object Detection

1 code implementation 5 May 2023 Xiuwei Xu, Zhihao Sun, Ziwei Wang, Hongmin Liu, Jie Zhou, Jiwen Lu

We organize two benchmarks on ScanNet and TO-SCENE dataset to evaluate the ability of fine-grained 3D object detection, where our DSPDet3D improves the detection performance of small objects to a new level while achieving leading inference speed compared with existing 3D object detection methods.

3D Object Detection object-detection +1

BranchNorm: Robustly Scaling Extremely Deep Transformers

no code implementations 4 May 2023 Yijin Liu, Xianfeng Zeng, Fandong Meng, Jie Zhou

Recently, DeepNorm has scaled Transformers to extreme depths (i.e., 1000 layers) and revealed the promising potential of deep scaling.

Unified Model Learning for Various Neural Machine Translation

no code implementations 4 May 2023 Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou

Specifically, we propose a "versatile" model, i.e., the Unified Model Learning for NMT (UMLNMT), that works with data from different tasks and can translate well in multiple settings simultaneously, in principle as many as desired.

Document Translation Machine Translation +2

Learning Accurate Performance Predictors for Ultrafast Automated Model Compression

1 code implementation 13 Apr 2023 Ziwei Wang, Jiwen Lu, Han Xiao, Shengyu Liu, Jie Zhou

On the contrary, we obtain the optimal efficient networks by directly optimizing the compression policy with an accurate performance predictor, where the ultrafast automated model compression for various computational cost constraint is achieved without complex compression policy search and evaluation.

Image Classification Model Compression +3

Triple Sequence Learning for Cross-domain Recommendation

no code implementations 11 Apr 2023 Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Jie Zhou

To address this issue, we present a novel framework, termed triple sequence learning for cross-domain recommendation (Tri-CDR), which jointly models the source, target, and mixed behavior sequences to highlight the global and target preference and precisely model the triple correlation in CDR.

Contrastive Learning

Binarizing Sparse Convolutional Networks for Efficient Point Cloud Analysis

no code implementations CVPR 2023 Xiuwei Xu, Ziwei Wang, Jie Zhou, Jiwen Lu

In this paper, we propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis.

Binarization Quantization

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

no code implementations 18 Mar 2023 Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang

GPT-series models such as GPT-3, CodeX, InstructGPT, and ChatGPT have gained considerable attention due to their exceptional natural language processing capabilities.

Natural Language Understanding

SurroundOcc: Multi-Camera 3D Occupancy Prediction for Autonomous Driving

2 code implementations ICCV 2023 Yi Wei, Linqing Zhao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

Towards a more comprehensive perception of a 3D scene, in this paper, we propose a SurroundOcc method to predict the 3D occupancy with multi-camera images.

3D Object Detection Autonomous Driving +2

Precise Facial Landmark Detection by Reference Heatmap Transformer

no code implementations 14 Mar 2023 Jun Wan, Jun Liu, Jie Zhou, Zhihui Lai, Linlin Shen, Hang Sun, Ping Xiong, Wenwen Min

Most facial landmark detection methods predict landmarks by mapping the input facial appearance features to landmark heatmaps and have achieved promising results.

Facial Landmark Detection

HiNet: Novel Multi-Scenario & Multi-Task Learning with Hierarchical Information Extraction

1 code implementation 10 Mar 2023 Jie Zhou, Xianshuai Cao, Wenhao Li, Lin Bo, Kun Zhang, Chuan Luo, Qian Yu

Multi-scenario & multi-task learning has been widely applied to many recommendation systems in industrial applications, wherein an effective and practical approach is to carry out multi-scenario transfer learning on the basis of the Mixture-of-Expert (MoE) architecture.

Multi-Task Learning Recommendation Systems

Is ChatGPT a Good NLG Evaluator? A Preliminary Study

1 code implementation 7 Mar 2023 Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou

In detail, we regard ChatGPT as a human evaluator and give task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt ChatGPT to evaluate the generated results of NLG models.

Story Generation

Unleashing Text-to-Image Diffusion Models for Visual Perception

2 code implementations ICCV 2023 Wenliang Zhao, Yongming Rao, Zuyan Liu, Benlin Liu, Jie Zhou, Jiwen Lu

In this paper, we propose VPD (Visual Perception with a pre-trained Diffusion model), a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks.

Ranked #4 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Denoising Image Segmentation +3

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks

no code implementations 1 Mar 2023 Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie Zhou, Tao Gui, Qi Zhang, Xuanjing Huang

The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.

Natural Language Inference Natural Language Understanding +1

Zero-Shot Cross-Lingual Summarization via Large Language Models

no code implementations 28 Feb 2023 Jiaan Wang, Yunlong Liang, Fandong Meng, Beiqi Zou, Zhixu Li, Jianfeng Qu, Jie Zhou

Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language.


A Flexible Multi-view Multi-modal Imaging System for Outdoor Scenes

no code implementations 21 Feb 2023 Meng Zhang, Wenxuan Guo, Bohao Fan, Yifan Chen, Jianjiang Feng, Jie Zhou

The experimental results show that multi-view point clouds greatly improve 3D object detection and tracking accuracy across complex and varied outdoor environments.

3D Object Detection object-detection

Tri-Perspective View for Vision-Based 3D Semantic Occupancy Prediction

2 code implementations CVPR 2023 Yuanhui Huang, Wenzhao Zheng, Yunpeng Zhang, Jie Zhou, Jiwen Lu

To lift image features to the 3D TPV space, we further propose a transformer-based TPV encoder (TPVFormer) to obtain the TPV features effectively.

3D Semantic Scene Completion Autonomous Driving

Feature Decomposition for Reducing Negative Transfer: A Novel Multi-task Learning Method for Recommender System

1 code implementation 10 Feb 2023 Jie Zhou, Qian Yu, Chuan Luo, Jing Zhang

In recent years, thanks to the rapid development of deep learning (DL), DL-based multi-task learning (MTL) has made significant progress, and it has been successfully applied to recommendation systems (RS).

Multi-Task Learning Recommendation Systems

A Multi-task Multi-stage Transitional Training Framework for Neural Chat Translation

no code implementations 27 Jan 2023 Chulun Zhou, Yunlong Liang, Fandong Meng, Jie Zhou, Jinan Xu, Hongji Wang, Min Zhang, Jinsong Su

To address these issues, in this paper, we propose a multi-task multi-stage transitional (MMT) training framework, where an NCT model is trained using the bilingual chat translation dataset and additional monolingual dialogues.

NMT Translation

Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning

no code implementations 25 Jan 2023 Wenkai Yang, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun

Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively in a data privacy-preserving manner.

Federated Learning Privacy Preserving

When to Trust Aggregated Gradients: Addressing Negative Client Sampling in Federated Learning

no code implementations 25 Jan 2023 Wenkai Yang, Yankai Lin, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun

Federated Learning has become a widely-used framework which allows learning a global model on decentralized local datasets under the condition of protecting local data privacy.

Federated Learning text-classification +1

Transformer-Patcher: One Mistake worth One Neuron

1 code implementation 24 Jan 2023 Zeyu Huang, Yikang Shen, Xiaofeng Zhang, Jie Zhou, Wenge Rong, Zhang Xiong

Our method outperforms previous fine-tuning and HyperNetwork-based methods and achieves state-of-the-art performance for Sequential Model Editing (SME).

Model Editing

AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers

1 code implementation 11 Jan 2023 Xumin Yu, Yongming Rao, Ziyi Wang, Jiwen Lu, Jie Zhou

In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.

Denoising Inductive Bias +1

DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation

1 code implementation CVPR 2023 Shuai Shen, Wenliang Zhao, Zibin Meng, Wanhua Li, Zheng Zhu, Jie Zhou, Jiwen Lu

In this way, the proposed DiffTalk is capable of producing high-quality talking head videos in synchronization with the source audio, and more importantly, it can be naturally generalized across different identities without any further fine-tuning.

Denoising Talking Head Generation

CLIP-Cluster: CLIP-Guided Attribute Hallucination for Face Clustering

no code implementations ICCV 2023 Shuai Shen, Wanhua Li, Xiaobing Wang, Dafeng Zhang, Zhezhu Jin, Jie Zhou, Jiwen Lu

Furthermore, we develop a neighbor-aware proxy generator that fuses the features describing various attributes into a proxy feature to build a bridge among different sub-clusters and reduce the intra-class variance.

Clustering Face Clustering

Deep Factorized Metric Learning

1 code implementation CVPR 2023 Chengkun Wang, Wenzhao Zheng, Junlong Li, Jie Zhou, Jiwen Lu

Learning a generalizable and comprehensive similarity metric to depict the semantic discrepancies between images is the foundation of many computer vision tasks.

Image Classification Metric Learning

DiffSwap: High-Fidelity and Controllable Face Swapping via 3D-Aware Masked Diffusion

1 code implementation CVPR 2023 Wenliang Zhao, Yongming Rao, Weikang Shi, Zuyan Liu, Jie Zhou, Jiwen Lu

Unlike previous work that relies on carefully designed network architectures and loss functions to fuse the information from the source and target faces, we reformulate the face swapping as a conditional inpainting task, performed by a powerful diffusion model guided by the desired face attributes (e.g., identity and landmarks).

Face Swapping

Deep learning for size-agnostic inverse design of random-network 3D printed mechanical metamaterials

no code implementations 22 Dec 2022 Helda Pahlavani, Kostas Tsifoutis-Kazolis, Prerak Mody, Jie Zhou, Mohammad J. Mirzaali, Amir A. Zadpoor

Practical applications of mechanical metamaterials often involve solving inverse problems where the objective is to find the (multiple) microarchitectures that give rise to a given set of properties.

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint

1 code implementation 18 Dec 2022 Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu

Deep learning has revolutionized human society, yet the black-box nature of deep neural networks hinders further application to reliability-demanded industries.

Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization

1 code implementation 15 Dec 2022 Yunlong Liang, Fandong Meng, Jinan Xu, Jiaan Wang, Yufeng Chen, Jie Zhou

However, less attention has been paid to the visual features from the perspective of the summary, which may limit the model performance, especially in the low- and zero-resource scenarios.

Abstractive Text Summarization

Understanding Translationese in Cross-Lingual Summarization

no code implementations 14 Dec 2022 Jiaan Wang, Fandong Meng, Yunlong Liang, Tingyi Zhang, Jiarong Xu, Zhixu Li, Jie Zhou

In detail, we find that (1) the translationese in documents or summaries of test sets might lead to the discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets would harm model performance in real-world applications; (3) though machine-translated documents involve translationese, they are very useful for building CLS systems on low-resource languages under specific training strategies.

FLAG3D: A 3D Fitness Activity Dataset with Language Instruction

no code implementations CVPR 2023 Yansong Tang, Jinpeng Liu, Aoyang Liu, Bin Yang, Wenxun Dai, Yongming Rao, Jiwen Lu, Jie Zhou, Xiu Li

With its continuously thriving popularity around the world, fitness activity analytics has become an emerging research topic in computer vision.

Action Generation Action Recognition +2

DC-MBR: Distributional Cooling for Minimum Bayesian Risk Decoding

no code implementations 8 Dec 2022 Jianhao Yan, Jin Xu, Fandong Meng, Jie Zhou, Yue Zhang

In this work, we show that the issue arises from the inconsistency of label smoothing between the token-level and sequence-level distributions.

Machine Translation NMT

Diffusion-SDF: Text-to-Shape via Voxelized Diffusion

1 code implementation CVPR 2023 Muheng Li, Yueqi Duan, Jie Zhou, Jiwen Lu

With the rising industrial attention to 3D virtual modeling technology, generating novel 3D content based on specified conditions (e.g., text) has become a hot issue.

Rephrasing the Reference for Non-Autoregressive Machine Translation

no code implementations 30 Nov 2022 Chenze Shao, Jinchao Zhang, Jie Zhou, Yang Feng

In response to this problem, we introduce a rephraser to provide a better training target for NAT by rephrasing the reference sentence according to the NAT output.

Machine Translation Translation

Findings of the WMT 2022 Shared Task on Translation Suggestion

no code implementations 30 Nov 2022 Zhen Yang, Fandong Meng, Yingxue Zhang, Ernan Li, Jie Zhou

We report the result of the first edition of the WMT shared task on Translation Suggestion (TS).

Machine Translation Translation

AutoCAD: Automatically Generating Counterfactuals for Mitigating Shortcut Learning

1 code implementation 29 Nov 2022 Jiaxin Wen, Yeshuang Zhu, Jinchao Zhang, Jie Zhou, Minlie Huang

Recent studies have shown the impressive efficacy of counterfactually augmented data (CAD) for reducing NLU models' reliance on spurious features and improving their generalizability.

Summer: WeChat Neural Machine Translation Systems for the WMT22 Biomedical Translation Task

no code implementations 28 Nov 2022 Ernan Li, Fandong Meng, Jie Zhou

This paper introduces WeChat's participation in the WMT 2022 shared biomedical translation task on Chinese-to-English.

Machine Translation Translation

SGCE-Font: Skeleton Guided Channel Expansion for Chinese Font Generation

no code implementations 26 Nov 2022 Jie Zhou, Yefei Wang, Yiyang Yuan, Qing Huang, Jinshan Zeng

Numerical results show that the mode collapse issue suffered by the well-known CycleGAN can be effectively alleviated by equipping it with the proposed SGCE module, and that the CycleGAN equipped with SGCE outperforms state-of-the-art models in terms of four important evaluation metrics and visualization quality.

Font Generation

Reconstructing high-order sequence features of dynamic functional connectivity networks based on diversified covert attention patterns for Alzheimer's disease classification

no code implementations 19 Nov 2022 Zhixiang Zhang, Biao Jie, Zhengdong Wang, Jie Zhou, Yang Yang

Recent studies have applied deep learning methods such as convolutional recurrent neural networks (CRNs) and Transformers to the classification of brain diseases such as Alzheimer's disease (AD) based on dynamic functional connectivity networks (dFCNs), achieving better performance than traditional machine learning methods.


Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information

1 code implementation CVPR 2023 Weijie Su, Xizhou Zhu, Chenxin Tao, Lewei Lu, Bin Li, Gao Huang, Yu Qiao, Xiaogang Wang, Jie Zhou, Jifeng Dai

It has been proved that combining multiple pre-training strategies and data from various modalities/sources can greatly boost the training of large-scale models.

Ranked #2 on Object Detection on LVIS v1.0 minival (using extra training data)

Image Classification Long-tailed Object Detection +3

Cross-Modal Adapter for Text-Video Retrieval

1 code implementation 17 Nov 2022 Haojun Jiang, Jianke Zhang, Rui Huang, Chunjiang Ge, Zanlin Ni, Jiwen Lu, Jie Zhou, Shiji Song, Gao Huang

However, as pre-trained models are scaling up, fully fine-tuning them on text-video retrieval datasets has a high risk of overfitting.

Retrieval Video Retrieval

CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME

1 code implementation 16 Nov 2022 Yong Hu, Fandong Meng, Jie Zhou

In fact, most Chinese input relies on the pinyin input method, so studying the spelling errors arising in this process is more practical and valuable.

Spelling Correction

Probabilistic Deep Metric Learning for Hyperspectral Image Classification

1 code implementation 15 Nov 2022 Chengkun Wang, Wenzhao Zheng, Xian Sun, Jiwen Lu, Jie Zhou

We propose to learn a global probabilistic distribution for each pixel in the patch and a probabilistic metric to model the distance between distributions.

Classification Hyperspectral Image Classification +1

MAVEN-ERE: A Unified Large-scale Dataset for Event Coreference, Temporal, Causal, and Subevent Relation Extraction

1 code implementation 14 Nov 2022 Xiaozhi Wang, Yulin Chen, Ning Ding, Hao Peng, Zimu Wang, Yankai Lin, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Peng Li, Jie Zhou

It contains 103,193 event coreference chains, 1,216,217 temporal relations, 57,992 causal relations, and 15,841 subevent relations, which is larger than existing datasets of all the ERE tasks by at least an order of magnitude.

Event Relation Extraction Relation Extraction

Demystify Transformers & Convolutions in Modern Image Deep Networks

1 code implementation 10 Nov 2022 Xiaowei Hu, Min Shi, Weiyun Wang, Sitong Wu, Linjie Xing, Wenhai Wang, Xizhou Zhu, Lewei Lu, Jie Zhou, Xiaogang Wang, Yu Qiao, Jifeng Dai

Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs, but performance differences persist among different STMs.

Image Deep Networks Spatial Token Mixer

Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind

1 code implementation 9 Nov 2022 Mo Yu, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Jing Li, Yue Yu, Jie Zhou

When reading a story, humans can rapidly understand new fictional characters with a few observations, mainly by drawing analogy to fictional and real people they met before in their lives.

Meta-Learning Metric Learning

Counterfactual Data Augmentation via Perspective Transition for Open-Domain Dialogues

1 code implementation 30 Oct 2022 Jiao Ou, Jinchao Zhang, Yang Feng, Jie Zhou

The dialogue data admits a wide variety of responses for a given dialogue history, especially responses with different semantics.

counterfactual Counterfactual Inference +1

Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension

no code implementations 26 Oct 2022 Jiangnan Li, Mo Yu, Fandong Meng, Zheng Lin, Peng Fu, Weiping Wang, Jie Zhou

Although these tasks are effective, there are still pressing problems: (1) randomly masking speakers regardless of the question cannot map the speaker mentioned in the question to the corresponding speaker in the dialogue, and it ignores the speaker-centric nature of utterances.

Reading Comprehension

Exploring Mode Connectivity for Pre-trained Language Models

1 code implementation 25 Oct 2022 Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

(3) How does the PLM's task knowledge change along the path connecting two minima?

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning

1 code implementation 24 Oct 2022 Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou

To fathom the mystery, we hypothesize that the adaptations of different DETs could all be reparameterized as low-dimensional optimizations in a unified optimization subspace, which could be found by jointly decomposing independent solutions of different DETs.

Empathetic Dialogue Generation via Sensitive Emotion Recognition and Sensible Knowledge Selection

1 code implementation 21 Oct 2022 Lanrui Wang, Jiangnan Li, Zheng Lin, Fandong Meng, Chenxu Yang, Weiping Wang, Jie Zhou

We use a fine-grained encoding strategy that is more sensitive to the emotion dynamics (emotion flow) in conversations to predict the emotion-intent characteristics of the response.

Dialogue Generation Emotion Recognition +2

ROSE: Robust Selective Fine-tuning for Pre-trained Language Models

1 code implementation 18 Oct 2022 Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, Rui Jiang

Even though large-scale language models have achieved excellent performance, they remain vulnerable to various adversarial attacks.

Adversarial Robustness

Towards Robust k-Nearest-Neighbor Machine Translation

3 code implementations 17 Oct 2022 Hui Jiang, Ziyao Lu, Fandong Meng, Chulun Zhou, Jie Zhou, Degen Huang, Jinsong Su

Meanwhile, we inject two types of perturbations into the retrieved pairs for robust training.

Machine Translation NMT +1

Cerebrovascular Segmentation via Vessel Oriented Filtering Network

no code implementations 17 Oct 2022 Zhanqiang Guo, Yao Luan, Jianjiang Feng, Wangsheng Lu, Yin Yin, Guangming Yang, Jie Zhou

Accurate cerebrovascular segmentation from Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) is of great significance in diagnosis and treatment of cerebrovascular pathology.


Dynamics-aware Adversarial Attack of Adaptive Neural Networks

1 code implementation 15 Oct 2022 An Tao, Yueqi Duan, Yingqi Wang, Jiwen Lu, Jie Zhou

To address this issue, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.

Adversarial Attack

Token-Label Alignment for Vision Transformers

1 code implementation ICCV 2023 Han Xiao, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

Data mixing strategies (e.g., CutMix) have shown the ability to greatly improve the performance of convolutional neural networks (CNNs).

Image Classification Semantic Segmentation +1

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

1 code implementation 11 Oct 2022 Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting the performance.

Natural Language Understanding

Mixture of Attention Heads: Selecting Attention Heads Per Token

1 code implementation 11 Oct 2022 Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie Zhou, Wenge Rong, Zhang Xiong

This paper proposes the Mixture of Attention Heads (MoA), a new architecture that combines multi-head attention with the MoE mechanism.

Language Modelling Machine Translation +1

From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models

1 code implementation 11 Oct 2022 Lei Li, Yankai Lin, Xuancheng Ren, Guangxiang Zhao, Peng Li, Jie Zhou, Xu Sun

We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.

OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions

1 code implementation ICCV 2023 Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu

The pretrain-finetune paradigm in modern computer vision facilitates the success of self-supervised learning, which tends to achieve better transferability than supervised learning.

Image Classification object-detection +3

Towards Robust Visual Question Answering: Making the Most of Biased Samples via Contrastive Learning

1 code implementation 10 Oct 2022 Qingyi Si, Yuanxin Liu, Fandong Meng, Zheng Lin, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

However, these models reveal a trade-off: the improvements on OOD data come at a severe cost to performance on the in-distribution (ID) data (which is dominated by the biased samples).

Contrastive Learning Question Answering +1

Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA

1 code implementation 10 Oct 2022 Qingyi Si, Fandong Meng, Mingyu Zheng, Zheng Lin, Yuanxin Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou

To overcome this limitation, we propose a new dataset that considers varying types of shortcuts by constructing different distribution shifts in multiple OOD test sets.

Question Answering Test +1

Cross-Align: Modeling Deep Cross-lingual Interactions for Word Alignment

1 code implementation 9 Oct 2022 Siyu Lai, Zhen Yang, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou

Word alignment, which aims to extract lexical translation equivalents between source and target sentences, serves as a fundamental tool for natural language processing.

Language Modelling Translation +1

Causal Intervention-based Prompt Debiasing for Event Argument Extraction

no code implementations 4 Oct 2022 Jiaju Lin, Jie Zhou, Qin Chen

Prompt-based methods have become increasingly popular among information extraction tasks, especially in low-data scenarios.

Event Argument Extraction

An Information Minimization Based Contrastive Learning Model for Unsupervised Sentence Embeddings Learning

1 code implementation COLING 2022 Shaobin Chen, Jie Zhou, Yuling Sun, Liang He

To address this problem, we present an information minimization based contrastive learning (InforMin-CL) model for unsupervised sentence representation learning, which retains useful information and discards redundant information by maximizing the mutual information and minimizing the information entropy between positive instances.

Contrastive Learning Semantic Textual Similarity +2