Search Results for author: Xingxing Zhang

Found 36 papers, 11 papers with code

HVS-Inspired Signal Degradation Network for Just Noticeable Difference Estimation

1 code implementation • 16 Aug 2022 • Jian Jin, Yuan Xue, Xingxing Zhang, Lili Meng, Yao Zhao, Weisi Lin

However, they have a major drawback: the generated JND is assessed in the real-world signal domain rather than in the perceptual domain of the human brain.

Diagnosing Ensemble Few-Shot Classifiers

no code implementations • 9 Jun 2022 • Weikai Yang, Xi Ye, Xingxing Zhang, Lanxi Xiao, Jiazhi Xia, Zhongyuan Wang, Jun Zhu, Hanspeter Pfister, Shixia Liu

The base learners and labeled samples (shots) in an ensemble few-shot classifier greatly affect the model performance.

Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization

no code implementations • ACL 2022 • Ruipeng Jia, Xingxing Zhang, Yanan Cao, Shi Wang, Zheng Lin, Furu Wei

In zero-shot multilingual extractive text summarization, a model is typically trained on an English summarization dataset and then applied to summarization datasets in other languages.

Extractive Summarization · Extractive Text Summarization

Memory Replay with Data Compression for Continual Learning

1 code implementation • ICLR 2022 • Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase their amount that can be stored in the memory buffer.

Autonomous Driving · class-incremental learning +5
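The trade-off MRDC exploits can be illustrated with a toy replay buffer: compressing old samples shrinks each one, so more of them fit in a fixed memory budget. The sketch below is only an illustrative stand-in; the paper compresses replayed images (for example with lossy codecs), while here lossless zlib on highly redundant byte strings plays that role.

```python
import random
import zlib

BUFFER_BYTES = 4_000  # illustrative fixed memory budget for the replay buffer

# Toy "images": 1,000 bytes each with heavy redundancy, so they compress well.
samples = [bytes([random.randrange(4)]) * 1_000 for _ in range(20)]

def fill_buffer(samples, compress=False):
    """Store samples (optionally compressed) until the budget is exhausted."""
    buffer, used = [], 0
    for s in samples:
        stored = zlib.compress(s) if compress else s
        if used + len(stored) > BUFFER_BYTES:
            break  # buffer full: older samples beyond this point are dropped
        buffer.append(stored)
        used += len(stored)
    return buffer

raw_count = len(fill_buffer(samples, compress=False))
compressed_count = len(fill_buffer(samples, compress=True))
```

With the same budget, compression lets the buffer hold several times more old samples, which is the quantity side of the quality/quantity trade-off the paper studies.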

Unsupervised Summarization with Customized Granularities

no code implementations • 29 Jan 2022 • Ming Zhong, Yang Liu, Suyu Ge, Yuning Mao, Yizhu Jiao, Xingxing Zhang, Yichong Xu, Chenguang Zhu, Michael Zeng, Jiawei Han

We take events as the basic semantic units of the source documents and propose to rank these events by their salience.

Abstractive Text Summarization

Auto-Weighted Layer Representation Based View Synthesis Distortion Estimation for 3-D Video Coding

no code implementations • 7 Jan 2022 • Jian Jin, Xingxing Zhang, Lili Meng, Weisi Lin, Jie Liang, Huaxiang Zhang, Yao Zhao

Experimental results show that the VSD can be accurately estimated with the weights learnt by the nonlinear mapping function once its associated S-VSDs are available.

Sequence Level Contrastive Learning for Text Summarization

no code implementations • 8 Sep 2021 • Shusheng Xu, Xingxing Zhang, Yi Wu, Furu Wei

In this paper, we propose a contrastive learning model for supervised abstractive text summarization, in which we view a document, its gold summary, and its model-generated summaries as different views of the same mean representation and maximize the similarities between them during training.

Abstractive Text Summarization · Contrastive Learning +2
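The multi-view idea can be sketched numerically: embed the three views of one example and use their (negated) average pairwise cosine similarity as the training signal to minimize. The vectors below are made-up placeholders for real encoder outputs, so this shows only the shape of the objective, not the paper's model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings of the three "views" of one training example.
doc_vec = [0.9, 0.1, 0.3]
gold_summary_vec = [0.8, 0.2, 0.25]
generated_summary_vec = [0.7, 0.3, 0.2]

# Minimizing this loss pushes the three views toward each other:
views = [doc_vec, gold_summary_vec, generated_summary_vec]
pairs = [(0, 1), (0, 2), (1, 2)]
loss = -sum(cosine(views[i], views[j]) for i, j in pairs) / len(pairs)
```

Since all three placeholder vectors point in similar directions, the loss is already close to its minimum of -1; training would drive real embeddings the same way.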

Double Low-Rank Representation With Projection Distance Penalty for Clustering

no code implementations • CVPR 2021 • Zhiqiang Fu, Yao Zhao, Dongxia Chang, Xingxing Zhang, Yiming Wang

This paper presents a novel, simple yet robust self-representation method, i.e., Double Low-Rank Representation with Projection Distance penalty (DLRRPD), for clustering.

Attention Temperature Matters in Abstractive Summarization Distillation

1 code implementation • ACL 2022 • Shengqiang Zhang, Xingxing Zhang, Hangbo Bao, Furu Wei

In this paper, we find simply manipulating attention temperatures in Transformers can make pseudo labels easier to learn for student models.

Abstractive Text Summarization
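A minimal sketch of what manipulating attention temperature means: divide the scaled dot-product logits by a temperature before the softmax, so a temperature above 1 flattens the attention distribution (smoother targets for a distilled student). The scores and the exact placement of the temperature are illustrative assumptions, not the paper's recipe.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(scores, d_k, temperature=1.0):
    # Scaled dot-product attention with an extra temperature term:
    # softmax(q.k / (sqrt(d_k) * temperature)).
    return softmax([s / (math.sqrt(d_k) * temperature) for s in scores])

scores = [8.0, 2.0, 1.0, 0.5]  # hypothetical q.k scores for one query
sharp = attention_weights(scores, d_k=64, temperature=0.5)
default = attention_weights(scores, d_k=64, temperature=1.0)
smooth = attention_weights(scores, d_k=64, temperature=2.0)
```

Raising the temperature shrinks the gap between the largest and smallest weights, which is the knob the paper turns to make pseudo labels easier to learn.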

Auto-weighted low-rank representation for clustering

no code implementations • 26 Apr 2021 • Zhiqiang Fu, Yao Zhao, Dongxia Chang, Xingxing Zhang, Yiming Wang

In this paper, a novel unsupervised low-rank representation model, i.e., Auto-weighted Low-Rank Representation (ALRR), is proposed to construct a more favorable similarity graph (SG) for clustering.

Representation Learning

Just Noticeable Difference for Deep Machine Vision

no code implementations • 16 Feb 2021 • Jian Jin, Xingxing Zhang, Xin Fu, Huan Zhang, Weisi Lin, Jian Lou, Yao Zhao

Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.

Image Classification · Neural Network Security +1

Unsupervised Fine-tuning for Text Clustering

no code implementations • COLING 2020 • Shaohan Huang, Furu Wei, Lei Cui, Xingxing Zhang, Ming Zhou

Fine-tuning with pre-trained language models (e.g., BERT) has achieved great success in many language understanding tasks in supervised settings (e.g., text classification).

text-classification · Text Classification +1

Improving the Efficiency of Grammatical Error Correction with Erroneous Span Detection and Correction

no code implementations • EMNLP 2020 • Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou

We propose a novel language-independent approach to improve the efficiency for Grammatical Error Correction (GEC) by dividing the task into two subtasks: Erroneous Span Detection (ESD) and Erroneous Span Correction (ESC).

Grammatical Error Correction
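The two-stage split can be mocked up in a few lines: a detector flags erroneous spans, and a corrector rewrites only the flagged spans while everything else is copied through, which is where the efficiency gain over full-sentence rewriting comes from. Both stages below are hypothetical dictionary-based stand-ins for the paper's neural ESD and ESC models.

```python
# Hypothetical lookup standing in for a learned corrector.
CORRECTIONS = {"goed": "went", "tomorrows": "tomorrow"}

def detect_spans(tokens):
    # Stage 1 (ESD): return indices of tokens judged erroneous.
    return [i for i, tok in enumerate(tokens) if tok in CORRECTIONS]

def correct_spans(tokens, spans):
    # Stage 2 (ESC): rewrite only the flagged spans; the rest is copied.
    out = list(tokens)
    for i in spans:
        out[i] = CORRECTIONS[out[i]]
    return out

tokens = "I goed home".split()
spans = detect_spans(tokens)          # -> [1]
corrected = correct_spans(tokens, spans)  # -> ["I", "went", "home"]
```

On a mostly correct sentence, the expensive correction stage touches only a small fraction of the tokens.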

Taking Modality-free Human Identification as Zero-shot Learning

no code implementations • 2 Oct 2020 • Zhizhe Liu, Xingxing Zhang, Zhenfeng Zhu, Shuai Zheng, Yao Zhao, Jian Cheng

There have been numerous methods proposed for human identification, such as face identification, person re-identification, and gait identification.

Event Detection · Face Identification +3

Pre-training for Abstractive Document Summarization by Reinstating Source Text

no code implementations • EMNLP 2020 • Yanyan Zou, Xingxing Zhang, Wei Lu, Furu Wei, Ming Zhou

The main idea is that, given an input text artificially constructed from a document, a model is pre-trained to reinstate the original document.

Abstractive Text Summarization · Document Summarization

From Anchor Generation to Distribution Alignment: Learning a Discriminative Embedding Space for Zero-Shot Recognition

no code implementations • 10 Feb 2020 • Fuzhen Li, Zhenfeng Zhu, Xingxing Zhang, Jian Cheng, Yao Zhao

In zero-shot learning (ZSL), the samples to be classified are usually projected into side information templates such as attributes.

Zero-Shot Learning

To See in the Dark: N2DGAN for Background Modeling in Nighttime Scene

no code implementations • 12 Dec 2019 • Zhenfeng Zhu, Yingying Meng, Deqiang Kong, Xingxing Zhang, Yandong Guo, Yao Zhao

Due to poor illumination and uneven lighting, nighttime images have lower contrast and higher noise than their daytime counterparts of the same scene, which seriously limits the performance of conventional background modeling methods.

Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning

1 code implementation • CVPR 2020 • Shuai Zheng, Zhenfeng Zhu, Xingxing Zhang, Zhizhe Liu, Jian Cheng, Yao Zhao

Graph representation learning aims to encode all nodes of a graph into low-dimensional vectors that serve as the input to many computer vision tasks.

Graph Representation Learning

Understand Dynamic Regret with Switching Cost for Online Decision Making

no code implementations • 28 Nov 2019 • Yawei Zhao, Qian Zhao, Xingxing Zhang, En Zhu, Xinwang Liu, Jianping Yin

We provide a new theoretical analysis framework, which yields an interesting observation: the relation between the switching cost and the dynamic regret differs between the OA and OCO settings.

Decision Making

DualVD: An Adaptive Dual Encoding Model for Deep Visual Understanding in Visual Dialogue

1 code implementation • 17 Nov 2019 • Xiaoze Jiang, Jing Yu, Zengchang Qin, Yingying Zhuang, Xingxing Zhang, Yue Hu, Qi Wu

More importantly, we can tell which modality (visual or semantic) has more contribution in answering the current question by visualizing the gate values.

Question Answering · Visual Dialog +1

Defensive Few-shot Adversarial Learning

no code implementations • 16 Nov 2019 • Wenbin Li, Lei Wang, Xingxing Zhang, Jing Huo, Yang Gao, Jiebo Luo

In this paper, instead of assuming such a distribution consistency, we propose to make this assumption at a task-level in the episodic training paradigm in order to better transfer the defense knowledge.

Adversarial Defense · Few-Shot Learning

ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries

no code implementations • 24 Oct 2019 • Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu

In this paper, we take an initial attempt, and propose a generic formulation to provide a systematical solution (named ATZSL) for learning a robust ZSL model.

Image Captioning · Object Recognition +1

Hierarchical Prototype Learning for Zero-Shot Recognition

no code implementations • 24 Oct 2019 • Xingxing Zhang, Shupeng Gui, Zhenfeng Zhu, Yao Zhao, Ji Liu

Specifically, HPL is able to obtain discriminability on both seen and unseen class domains by learning visual prototypes respectively under the transductive setting.

Image Captioning · Object Recognition +1

ProLFA: Representative Prototype Selection for Local Feature Aggregation

1 code implementation • 24 Oct 2019 • Xingxing Zhang, Zhenfeng Zhu, Yao Zhao

Given a set of hand-crafted local features, acquiring a global representation via aggregation is a promising technique to boost computational efficiency and improve task performance.

Prototype Selection

Convolutional Prototype Learning for Zero-Shot Recognition

no code implementations • 22 Oct 2019 • Zhizhe Liu, Xingxing Zhang, Zhenfeng Zhu, Shuai Zheng, Yao Zhao, Jian Cheng

The key to ZSL is to transfer knowledge from the seen to the unseen classes via auxiliary class attribute vectors.

Image Captioning · Object Recognition +1

HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization

no code implementations • ACL 2019 • Xingxing Zhang, Furu Wei, Ming Zhou

Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods.

Document Summarization · Extractive Summarization +1

Neural Latent Extractive Document Summarization

no code implementations • EMNLP 2018 • Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou

Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods), given that most summarization datasets only have document-summary pairs.

Document Summarization · Extractive Document Summarization +2

Dependency Parsing as Head Selection

1 code implementation • EACL 2017 • Xingxing Zhang, Jianpeng Cheng, Mirella Lapata

Conventional graph-based dependency parsers guarantee a tree structure both during training and inference.

Dependency Parsing
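Head selection can be sketched as each word independently picking its most probable head (including an artificial ROOT), in contrast to graph-based parsers that enforce a tree structure; independent choices are not guaranteed to form one. The score matrix below is invented for illustration; in the paper such scores come from a neural scorer.

```python
def select_heads(score_matrix):
    """score_matrix[i][j] = score of word i choosing position j as its head.
    Each word picks its head independently via argmax."""
    return [max(range(len(row)), key=row.__getitem__) for row in score_matrix]

# Sentence "She reads books"; column 0 is the artificial ROOT,
# columns 1-3 correspond to the three words.
scores = [
    [0.1, 0.0, 0.8, 0.1],  # "She"   -> head "reads" (column 2)
    [0.7, 0.1, 0.0, 0.2],  # "reads" -> head ROOT    (column 0)
    [0.0, 0.1, 0.9, 0.0],  # "books" -> head "reads" (column 2)
]
heads = select_heads(scores)  # -> [2, 0, 2]
```

Here the independent decisions happen to form a valid tree rooted at "reads"; in general a post-hoc step (or the model's own learned regularities) is needed when they do not.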

Top-down Tree Long Short-Term Memory Networks

1 code implementation • NAACL 2016 • Xingxing Zhang, Liang Lu, Mirella Lapata

Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks.

Dependency Parsing · Sentence Completion
