Search Results for author: Yong Jiang

Found 78 papers, 36 papers with code

Adaptive Frequency Learning in Two-branch Face Forgery Detection

no code implementations 27 Mar 2022 Neng Wang, Yang Bai, Kun Yu, Yong Jiang, Shu-Tao Xia, Yan Wang

Face forgery has attracted increasing attention in recent applications of computer vision.

CausPref: Causal Preference Learning for Out-of-Distribution Recommendation

1 code implementation 8 Feb 2022 Yue He, Zimu Wang, Peng Cui, Hao Zou, Yafeng Zhang, Qiang Cui, Yong Jiang

Despite the tremendous development of recommender systems owing to the growing capability of machine learning in recent years, current recommender systems are still vulnerable to distribution shifts of users and items in realistic scenarios, leading to sharp performance declines in testing environments.

Recommendation Systems

Few-Shot Backdoor Attacks on Visual Object Tracking

1 code implementation ICLR 2022 Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia

Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.

Autonomous Driving Backdoor Attack +1

ITA: Image-Text Alignments for Multi-Modal Named Entity Recognition

1 code implementation 13 Dec 2021 Xinyu Wang, Min Gui, Yong Jiang, Zixia Jia, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

As text representations play the most important role in MNER, in this paper we propose Image-Text Alignments (ITA) to align image features into the textual space, so that the attention mechanism in transformer-based pretrained textual embeddings can be better utilized.

named-entity-recognition Named Entity Recognition

Defending against Model Stealing via Verifying Embedded External Features

1 code implementation ICML Workshop AML 2021 Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, Xiaochun Cao

In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features.

Style Transfer

Clustering Effect of (Linearized) Adversarial Robust Models

1 code implementation 25 Nov 2021 Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang

Adversarial robustness has received increasing attention along with the study of adversarial examples.

Adversarial Robustness Domain Adaptation

Maximize the Exploration of Congeneric Semantics for Weakly Supervised Semantic Segmentation

no code implementations 8 Oct 2021 Ke Zhang, Sihong Chen, Qi Ju, Yong Jiang, Yucong Li, Xin He

The graph network, established with patches as nodes, can maximize the mutual learning of similar objects.

Weakly-Supervised Semantic Segmentation

Deep Dirichlet Process Mixture Models

no code implementations 29 Sep 2021 Naiqi Li, Wenjie Li, Yong Jiang, Shu-Tao Xia

In this paper we propose the deep Dirichlet process mixture (DDPM) model, which is an unsupervised method that simultaneously performs clustering and feature learning.

MuVER: Improving First-Stage Entity Retrieval with Multi-View Entity Representations

1 code implementation EMNLP 2021 Xinyin Ma, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Weiming Lu

Entity retrieval, which aims at disambiguating mentions to canonical entities from massive KBs, is essential for many tasks in natural language processing.

Entity Linking Entity Retrieval +1

DGEM: A New Dual-modal Graph Embedding Method in Recommendation System

no code implementations 9 Aug 2021 Huimin Zhou, Qing Li, Yong Jiang, Rongwei Yang, Zhuyun Qi

In current deep learning based recommendation systems, an embedding method is generally employed to convert high-dimensional sparse feature vectors into low-dimensional dense feature vectors.

Graph Embedding
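To make the embedding step described above concrete, here is a toy sketch (my own illustration, not DGEM's code) that maps high-dimensional sparse ids into low-dimensional dense vectors through a learned lookup table:

```python
import torch
import torch.nn as nn

# 1,000,000-way sparse id space mapped to 64-dimensional dense vectors.
n_items, dim = 1_000_000, 64
embedding = nn.Embedding(n_items, dim)

item_ids = torch.tensor([3, 42, 999_999])  # sparse categorical features
dense_vectors = embedding(item_ids)        # dense output of shape (3, 64)
```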

Risk Minimization for Zero-shot Sequence Labeling

no code implementations ACL 2021 Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

In this paper, we propose a novel unified framework for zero-shot sequence labeling with minimum risk training and design a new decomposable risk function that models the relations between the predicted labels from the source models and the true labels.

Multi-View Cross-Lingual Structured Prediction with Minimum Supervision

no code implementations ACL 2021 Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages.

Cross-Lingual Transfer Structured Prediction +1

WeClick: Weakly-Supervised Video Semantic Segmentation with Click Annotations

no code implementations 7 Jul 2021 Peidong Liu, Zibin He, Xiyu Yan, Yong Jiang, Shutao Xia, Feng Zheng, Maowei Hu

In this work, we propose an effective weakly-supervised video semantic segmentation pipeline with click annotations, called WeClick, which saves laborious annotation effort by segmenting an instance of the semantic class with only a single click.

Knowledge Distillation Model Compression +2

Towards Emotional Support Dialog Systems

1 code implementation ACL 2021 Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, Minlie Huang

Emotional support is a crucial ability for many conversation scenarios, including social interactions, mental health support, and customer service chats.

Diversifying Dialog Generation via Adaptive Label Smoothing

1 code implementation ACL 2021 Yida Wang, Yinhe Zheng, Yong Jiang, Minlie Huang

Neural dialogue generation models trained with the one-hot target distribution suffer from the over-confidence issue, which leads to poor generation diversity as widely reported in the literature.

Dialogue Generation
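For context, below is a minimal sketch of conventional (non-adaptive) label smoothing, the kind of target softening the paper's adaptive variant builds on; the function name and epsilon value are my own choices, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def label_smoothed_nll(logits, target, eps=0.1):
    # Mix the one-hot target with a uniform distribution to curb over-confidence.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # -log p(target)
    uniform = -log_probs.mean(dim=-1)                              # mean of -log p(k)
    return ((1 - eps) * nll + eps * uniform).mean()

# Example: a batch of 2 decoding positions over a 5-word vocabulary.
logits = torch.randn(2, 5)
target = torch.tensor([1, 3])
loss = label_smoothed_nll(logits, target)
```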

Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning

1 code implementation ACL 2021 Xinyu Wang, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

We find empirically that the contextual representations computed on the retrieval-based input view, constructed through the concatenation of a sentence and its external contexts, can achieve significantly improved performance compared to the original input view based only on the sentence.

named-entity-recognition Named Entity Recognition +1
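A minimal sketch of the retrieval-based input view described above, assuming an off-the-shelf Hugging Face encoder (xlm-roberta-base here); this is my own illustration rather than the paper's released code. The sentence and its retrieved external contexts are encoded jointly, and only the token representations of the original sentence are kept for tagging:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

def encode_with_context(sentence, external_contexts):
    context = " ".join(external_contexts)
    # Pair encoding: sentence in the first segment, retrieved context in the second.
    batch = tokenizer(sentence, context, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (1, seq_len, dim)
    # Keep only positions belonging to the first segment (the sentence itself).
    sentence_mask = torch.tensor([sid == 0 for sid in batch.sequence_ids(0)])
    return hidden[0][sentence_mask]  # contextual representations used for tagging
```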

Backdoor Attack in the Physical World

no code implementations 6 Apr 2021 Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia

We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.

Backdoor Attack

Unsupervised Natural Language Parsing (Introductory Tutorial)

no code implementations EACL 2021 Kewei Tu, Yong Jiang, Wenjuan Han, Yanpeng Zhao

Unsupervised parsing learns a syntactic parser from training sentences without parse tree annotations.

A Benchmark and Comprehensive Survey on Knowledge Graph Entity Alignment via Representation Learning

1 code implementation 28 Mar 2021 Rui Zhang, Bayu Distiawan Trisedy, Miao Li, Yong Jiang, Jianzhong Qi

In the last few years, the interest in knowledge bases has grown exponentially in both the research community and the industry due to their essential role in AI applications.

Entity Alignment Representation Learning

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation ICLR 2021 Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

Hidden Backdoor Attack against Semantic Segmentation Models

no code implementations 6 Mar 2021 Yiming Li, YanJie Li, Yalei Lv, Yong Jiang, Shu-Tao Xia

Deep neural networks (DNNs) are vulnerable to the backdoor attack, which intends to embed hidden backdoors in DNNs by poisoning training data.

Autonomous Driving Backdoor Attack +1

Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search

1 code implementation ICLR 2021 Peidong Liu, Gengwei Zhang, Bochao Wang, Hang Xu, Xiaodan Liang, Yong Jiang, Zhenguo Li

For object detection, the well-established classification and regression loss functions have been carefully designed by considering diverse learning challenges.

object-detection Object Detection

FenceBox: A Platform for Defeating Adversarial Examples with Data Augmentation Techniques

1 code implementation 3 Dec 2020 Han Qiu, Yi Zeng, Tianwei Zhang, Yong Jiang, Meikang Qiu

As more and more advanced adversarial attack methods have been developed, a number of corresponding defense solutions have been designed to enhance the robustness of DNN models.

Adversarial Attack Data Augmentation

Stochastic Deep Gaussian Processes over Graphs

1 code implementation NeurIPS 2020 Naiqi Li, Wenjie Li, Jifeng Sun, Yinghua Gao, Yong Jiang, Shu-Tao Xia

In this paper we propose Stochastic Deep Gaussian Processes over Graphs (DGPG), which are deep structure models that learn the mappings between input and output signals in graph domains.

Gaussian Processes Variational Inference

Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities

no code implementations NeurIPS 2020 Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, Yi Ma

The optimization problems associated with training generative adversarial neural networks can be largely reduced to certain {\em non-monotone} variational inequality problems (VIPs), whereas existing convergence results are mostly based on monotone or strongly monotone assumptions.
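For reference, a variational inequality problem over a feasible set $\mathcal{Z}$ with operator $F$ asks for a point $z^* \in \mathcal{Z}$ such that $\langle F(z^*), z - z^* \rangle \ge 0$ for all $z \in \mathcal{Z}$; monotonicity would additionally require $\langle F(z) - F(z'), z - z' \rangle \ge 0$ for all $z, z' \in \mathcal{Z}$. The paper targets coherent VIPs for which this monotonicity assumption is dropped.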

Neural Latent Dependency Model for Sequence Labeling

no code implementations 10 Nov 2020 Yang Zhou, Yong Jiang, Zechuan Hu, Kewei Tu

One limitation of linear chain CRFs is their inability to model long-range dependencies between labels.

Natural Language Processing

Reducing the Annotation Effort for Video Object Segmentation Datasets

no code implementations 2 Nov 2020 Paul Voigtlaender, Lishu Luo, Chun Yuan, Yong Jiang, Bastian Leibe

We use a deep convolutional network to automatically create pseudo-labels on a pixel level from much cheaper bounding box annotations and investigate how far such pseudo-labels can carry us for training state-of-the-art VOS approaches.

Semantic Segmentation Video Object Segmentation +1

Second-Order Unsupervised Neural Dependency Parsing

1 code implementation COLING 2020 Songlin Yang, Yong Jiang, Wenjuan Han, Kewei Tu

Inspired by second-order supervised dependency parsing, we propose a second-order extension of unsupervised neural dependency models that incorporates grandparent-child or sibling information.

Dependency Grammar Induction

Backdoor Attack against Speaker Verification

1 code implementation 22 Oct 2020 Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.

Backdoor Attack Speaker Verification

Open-sourced Dataset Protection via Backdoor Watermarking

1 code implementation 12 Oct 2020 Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probabilities that the suspicious third-party model produces on the target class for benign samples and their correspondingly watermarked samples (i.e., images with the trigger).

Image Classification
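A hedged sketch of the hypothesis-test-guided verification step summarized above; pairing target-class posteriors with a one-sided paired t-test is my reading of that summary, and the function and argument names are illustrative rather than the paper's API:

```python
import numpy as np
from scipy import stats

def dataset_looks_watermarked(p_benign, p_watermarked, alpha=0.05):
    """p_benign / p_watermarked: target-class posterior probabilities from the
    suspicious model, one entry per (benign, trigger-stamped) image pair."""
    # One-sided paired test: if the model was trained on the watermarked dataset,
    # trigger-stamped images should receive higher target-class probability.
    stat, p_two_sided = stats.ttest_rel(p_watermarked, p_benign)
    p_one_sided = p_two_sided / 2 if stat > 0 else 1 - p_two_sided / 2
    return p_one_sided < alpha

# Example with synthetic numbers.
rng = np.random.default_rng(0)
benign = rng.uniform(0.0, 0.2, size=100)
watermarked = np.clip(benign + rng.uniform(0.5, 0.8, size=100), 0.0, 1.0)
print(dataset_looks_watermarked(benign, watermarked))  # True -> suspicious model
```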

Adversarial Attack and Defense of Structured Prediction Models

1 code implementation EMNLP 2020 Wenjuan Han, Liwen Zhang, Yong Jiang, Kewei Tu

To address these problems, we propose a novel and unified framework that learns to attack a structured prediction model using a sequence-to-sequence model with feedback from multiple reference models of the same structured prediction task.

Adversarial Attack Dependency Parsing +3

Improving Query Efficiency of Black-box Adversarial Attack

1 code implementation ECCV 2020 Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks; however, they are at risk from adversarial examples, which can be easily generated when the target model is accessible to an attacker (the white-box setting).

Adversarial Attack

Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning

no code implementations 21 Aug 2020 Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Shu-Tao Xia

Interpretability and effectiveness are two essential and indispensable requirements for adopting machine learning methods in reality.

Knowledge Distillation

Neural Network-based Automatic Factor Construction

no code implementations 14 Aug 2020 Jie Fang, Jian-Wu Lin, Shu-Tao Xia, Yong Jiang, Zhikang Xia, Xiang Liu

This paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a tailored neural network framework that can automatically construct diversified financial factors based on financial domain knowledge and a variety of neural network structures.

Time Series

A Large-Scale Chinese Short-Text Conversation Dataset

3 code implementations 10 Aug 2020 Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, Minlie Huang

The cleaned dataset and the pre-training models will facilitate the research of short-text conversation modeling.

Dialogue Generation Short-Text Conversation

Backdoor Learning: A Survey

1 code implementation 17 Jul 2020 Yiming Li, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs), so that the attacked models perform well on benign samples, whereas their predictions will be maliciously changed if the hidden backdoor is activated by attacker-specified triggers.

Backdoor Attack Data Poisoning

An Empirical Comparison of Unsupervised Constituency Parsing Methods

no code implementations ACL 2020 Jun Li, Yifan Cao, Jiong Cai, Yong Jiang, Kewei Tu

Unsupervised constituency parsing aims to learn a constituency parser from a training corpus without parse tree annotations.

Constituency Parsing

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization

no code implementations NeurIPS 2020 Chaobing Song, Yong Jiang, Yi Ma

Meanwhile, VRADA matches the lower bound of the general convex setting up to a $\log\log n$ factor and matches the lower bounds in both regimes $n\le \Theta(\kappa)$ and $n\gg \kappa$ of the strongly convex setting, where $\kappa$ denotes the condition number.

Rethinking the Trigger of Backdoor Attack

no code implementations 9 Apr 2020 Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model will be maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.

Backdoor Attack

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation 16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, robust training (RT), by jointly minimizing two separate risks ($R_{stand}$ and $R_{rob}$), which are with respect to the benign examples and their neighborhoods, respectively.

Adversarial Defense Adversarial Robustness
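As a hedged reading of that objective (notation mine), the joint training problem can be written as $\min_{f} R_{stand}(f) + \lambda \, R_{rob}(f)$, where $R_{stand}$ is the standard risk on benign examples, $R_{rob}$ penalizes prediction changes within a neighborhood of each example, and $\lambda$ is an assumed trade-off weight not given in this summary.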

Alpha Discovery Neural Network based on Prior Knowledge

no code implementations 26 Dec 2019 Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Zhikang Xia, Xiang Liu, Yong Jiang

This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators based on prior knowledge.

Time Series

Automatic Financial Feature Construction

no code implementations 8 Dec 2019 Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Yong Jiang

According to the universal approximation theorem for neural networks, pre-training can drive a more effective and explainable evolution process.

Data Augmentation Time Series

Visual Privacy Protection via Mapping Distortion

1 code implementation 5 Nov 2019 Yiming Li, Peidong Liu, Yong Jiang, Shu-Tao Xia

The privacy of visual classification data lies largely in the mapping between an image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios.

Deep Flow Collaborative Network for Online Visual Tracking

no code implementations 5 Nov 2019 Peidong Liu, Xiyu Yan, Yong Jiang, Shu-Tao Xia

Deep learning-based visual tracking algorithms such as MDNet achieve high performance by leveraging the feature extraction ability of a deep neural network.

Optical Flow Estimation Visual Tracking

A Regularization-based Framework for Bilingual Grammar Induction

no code implementations IJCNLP 2019 Yong Jiang, Wenjuan Han, Kewei Tu

Grammar induction aims to discover syntactic structures from unannotated sentences.

Multilingual Grammar Induction with Continuous Language Identification

no code implementations IJCNLP 2019 Wenjuan Han, Ge Wang, Yong Jiang, Kewei Tu

The key to multilingual grammar induction is to couple grammar parameters of different languages together by exploiting the similarity between languages.

Language Identification

Adversarial Defense via Local Flatness Regularization

no code implementations 27 Oct 2019 Jia Xu, Yiming Li, Yong Jiang, Shu-Tao Xia

In this paper, we define the local flatness of the loss surface as the maximum value of the chosen norm of the gradient with respect to the input within a neighborhood centered on the benign sample, and discuss the relationship between the local flatness and adversarial vulnerability.

Adversarial Defense
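A hedged formalization of that definition (notation mine, not quoted from the paper): $\mathrm{LF}(x) = \max_{x' \in \mathcal{B}(x, \epsilon)} \lVert \nabla_{x'} L(f(x'), y) \rVert$, where $\mathcal{B}(x, \epsilon)$ is the neighborhood centered on the benign sample $x$, $L$ is the loss, and $\lVert \cdot \rVert$ is the chosen norm; this is presumably the quantity the proposed regularization suppresses during training.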

Bidirectional Transition-Based Dependency Parsing

1 code implementation AAAI 2019 Yunzhe Yuan, Yong Jiang, Kewei Tu

Traditionally, a transition-based dependency parser processes an input sentence and predicts a sequence of parsing actions in a left-to-right manner.

Transition-Based Dependency Parsing

$t$-$k$-means: A Robust and Stable $k$-means Variant

1 code implementation 17 Jul 2019 Yiming Li, Yang Zhang, Qingtao Tang, Weipeng Huang, Yong Jiang, Shu-Tao Xia

The $k$-means algorithm is one of the most classical clustering methods and has been widely and successfully used in signal processing.

Enhancing Unsupervised Generative Dependency Parser with Contextual Information

no code implementations ACL 2019 Wenjuan Han, Yong Jiang, Kewei Tu

In this paper, we propose a novel probabilistic model called discriminative neural dependency model with valence (D-NDMV) that generates a sentence and its parse from a continuous latent representation, which encodes global contextual information of the generated sentence.

Constituency Grammar Induction Dependency Grammar Induction +1

DAL: Dual Adversarial Learning for Dialogue Generation

no code implementations WS 2019 Shaobo Cui, Rongzhong Lian, Di Jiang, Yuanfeng Song, Siqi Bao, Yong Jiang

DAL is the first work to innovatively utilize the duality between query generation and response generation to avoid safe responses and increase the diversity of the generated responses.

Dialogue Generation Response Generation

Unified Acceleration of High-Order Algorithms under Hölder Continuity and Uniform Convexity

no code implementations 3 Jun 2019 Chaobing Song, Yong Jiang, Yi Ma

In this general convex setting, we propose a concise unified acceleration framework (UAF), which reconciles the two different high-order acceleration approaches, one by Nesterov and Baes [29, 3, 33] and one by Monteiro and Svaiter [25].

Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness

no code implementations 14 Mar 2019 Jiawang Bai, Yiming Li, Jiawei Li, Yong Jiang, Shu-Tao Xia

How to obtain a model with good interpretability and performance has always been an important research topic.

Knowledge Distillation

Multinomial Random Forest: Toward Consistency and Privacy-Preservation

no code implementations 10 Mar 2019 Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Chun Li, Shu-Tao Xia

Despite the impressive performance of random forests (RF), their theoretical properties have not been thoroughly understood.

General Classification

Fully Implicit Online Learning

no code implementations 25 Sep 2018 Chaobing Song, Ji Liu, Han Liu, Yong Jiang, Tong Zhang

Regularized online learning is widely used in machine learning applications.

online learning

Generative Stock Question Answering

no code implementations 21 Apr 2018 Zhaopeng Tu, Yong Jiang, Xiaojiang Liu, Lei Shu, Shuming Shi

We study the problem of stock-related question answering (StockQA): automatically generating answers to stock-related questions, just as professional stock analysts provide action recommendations on stocks upon users' requests.

Question Answering

Semi-supervised Structured Prediction with Neural CRF Autoencoder

1 code implementation EMNLP 2017 Xiao Zhang, Yong Jiang, Hao Peng, Kewei Tu, Dan Goldwasser

In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems.

Part-Of-Speech Tagging POS +1

Maximum A Posteriori Inference in Sum-Product Networks

no code implementations 16 Aug 2017 Jun Mei, Yong Jiang, Kewei Tu

For the theoretical part, we reduce general MAP inference to its special case without evidence and hidden variables; we also show that it is NP-hard to approximate the MAP problem to $2^{n^\epsilon}$ for fixed $0 \leq \epsilon < 1$, where $n$ is the input size.

CRF Autoencoder for Unsupervised Dependency Parsing

1 code implementation EMNLP 2017 Jiong Cai, Yong Jiang, Kewei Tu

The encoder part of our model is discriminative and globally normalized which allows us to use rich features as well as universal linguistic priors.

Dependency Grammar Induction Unsupervised Dependency Parsing

Dependency Grammar Induction with Neural Lexicalization and Big Training Data

no code implementations EMNLP 2017 Wenjuan Han, Yong Jiang, Kewei Tu

We study the impact of big models (in terms of the degree of lexicalization) and big data (in terms of the training corpus size) on dependency grammar induction.

Dependency Grammar Induction

Latent Dependency Forest Models

no code implementations 8 Sep 2016 Shanbo Chu, Yong Jiang, Kewei Tu

Probabilistic modeling is one of the foundations of modern machine learning and artificial intelligence.
