Search Results for author: Shuai Zhang

Found 108 papers, 42 papers with code

ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer

no code implementations ACL 2022 Ningning Wang, Guobing Gan, Peng Zhang, Shuai Zhang, Junqiu Wei, Qun Liu, Xin Jiang

Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness.

Clustering Machine Translation +4

Facilitating NSFW Text Detection in Open-Domain Dialogue Systems via Knowledge Distillation

1 code implementation18 Sep 2023 Huachuan Qiu, Shuai Zhang, Hongliang He, Anqi Li, Zhenzhong Lan

NSFW (Not Safe for Work) content, in the context of a dialogue, can have severe side effects on users in open-domain dialogue systems.

Chatbot Knowledge Distillation +1

InfeRE: Step-by-Step Regex Generation via Chain of Inference

1 code implementation8 Aug 2023 Shuai Zhang, Xiaodong Gu, Yuting Chen, Beijun Shen

Particularly, InfeRE outperforms the popular tree-based generation approach by 18.1% and 11.3% on the two datasets, respectively, in terms of DFA@5 accuracy.

Text Matching

A Benchmark for Understanding Dialogue Safety in Mental Health Support

1 code implementation31 Jul 2023 Huachuan Qiu, Tong Zhao, Anqi Li, Shuai Zhang, Hongliang He, Zhenzhong Lan

Our study reveals that ChatGPT struggles to detect safety categories with detailed safety definitions in a zero- and few-shot paradigm, whereas the fine-tuned model proves to be more suitable.

MAS: Towards Resource-Efficient Federated Multiple-Task Learning

1 code implementation ICCV 2023 Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang

Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.

Federated Learning

Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models

1 code implementation17 Jul 2023 Huachuan Qiu, Shuai Zhang, Anqi Li, Hongliang He, Zhenzhong Lan

We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions).

Non-Convex Optimizations for Machine Learning with Theoretical Guarantee: Robust Matrix Completion and Neural Network Learning

no code implementations28 Jun 2023 Shuai Zhang

Despite the recent development in machine learning, most learning systems are still regarded as "black boxes", whose performance cannot be understood or derived analytically.

Low-Rank Matrix Completion

A Semi-Paired Approach For Label-to-Image Translation

no code implementations23 Jun 2023 George Eskandar, Shuai Zhang, Mohamed Abdelsamad, Mark Youssef, Diandian Guo, Bin Yang

Data efficiency, or the ability to generalize from a few labeled data, remains a major challenge in deep learning.

Image-to-Image Translation Translation

Co-design Hardware and Algorithm for Vector Search

no code implementations19 Jun 2023 Wenqi Jiang, Shigang Li, Yu Zhu, Johannes De Fine Licht, Zhenhao He, Runbin Shi, Cedric Renggli, Shuai Zhang, Theodoros Rekatsinas, Torsten Hoefler, Gustavo Alonso

Vector search has emerged as the foundation for large-scale information retrieval and machine learning systems, with search engines like Google and Bing processing tens of thousands of queries per second on petabyte-scale document datasets by evaluating vector similarities between encoded query texts and web documents.

Information Retrieval Retrieval
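
The core operation this paper accelerates can be sketched in a few lines: score every document embedding against each query embedding and keep the top-k. This is only an illustrative brute-force baseline (the shapes, seed, and function name are mine, not from the paper); real systems use approximate indexes and, here, hardware co-design.

```python
import numpy as np

def brute_force_search(queries, docs, k=3):
    """Return the indices of the k most similar documents per query,
    using cosine similarity between L2-normalised embeddings."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = q @ d.T                          # (num_queries, num_docs)
    # argsort on negated scores = descending order; keep the top k
    return np.argsort(-scores, axis=1)[:, :k]

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 64))                     # toy document embeddings
queries = docs[:5] + 0.01 * rng.normal(size=(5, 64))   # near-duplicate queries
top = brute_force_search(queries, docs, k=3)
```

Each query's best match is the document it was perturbed from, which is the behaviour an exact search must reproduce and an approximate one trades off.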

Rethinking Document-Level Relation Extraction: A Reality Check

no code implementations15 Jun 2023 Jing Li, Yequan Wang, Shuai Zhang, Min Zhang

Recently, numerous efforts have continued to push up performance boundaries of document-level relation extraction (DocRE) and have claimed significant progress in DocRE.

Document-level Relation Extraction

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

1 code implementation7 Jun 2023 Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen

In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction.

Learning Music Sequence Representation from Text Supervision

no code implementations31 May 2023 Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li

Music representation learning is notoriously difficult because of the complex human-related concepts contained in sequences of numerical signals.

Contrastive Learning Representation Learning

SMILE: Single-turn to Multi-turn Inclusive Language Expansion via ChatGPT for Mental Health Support

1 code implementation30 Apr 2023 Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, Zhenzhong Lan

There has been an increasing research interest in developing specialized dialogue systems that can offer mental health support.

MobileInst: Video Instance Segmentation on the Mobile

no code implementations30 Mar 2023 Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang

To address those issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices.

Instance Segmentation Semantic Segmentation +1

Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks

no code implementations6 Feb 2023 Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu

Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.

Sparse Learning

SKDBERT: Compressing BERT via Stochastic Knowledge Distillation

no code implementations26 Nov 2022 Zixiang Ding, Guoqing Jiang, Shuai Zhang, Lin Guo, Wei Lin

In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT.

Knowledge Distillation Language Modelling

Neural Methods for Logical Reasoning Over Knowledge Graphs

1 code implementation ICLR 2022 Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, Ce Zhang

We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries.

Benchmarking Knowledge Graphs +1

Profiling Television Watching Behaviour Using Bayesian Hierarchical Joint Models for Time-to-Event and Count Data

1 code implementation6 Sep 2022 Rafael A. Moral, Zhi Chen, Shuai Zhang, Sally McClean, Gabriel R. Palma, Brahim Allan, Ian Kegel

The model drastically reduces the dimensionality of the data from thousands of observations per customer to 11 customer-level parameter estimates and random effects.


Smart Multi-tenant Federated Learning

no code implementations9 Jul 2022 Weiming Zhuang, Yonggang Wen, Shuai Zhang

In this work, we propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.

Federated Learning

Learning and generalization of one-hidden-layer neural networks, going beyond standard Gaussian data

no code implementations7 Jul 2022 Hongkang Li, Shuai Zhang, Meng Wang

In addition, for the first time, this paper characterizes the impact of the input distributions on the sample complexity and the learning rate.

Chat-to-Design: AI Assisted Personalized Fashion Design

no code implementations3 Jul 2022 Weiming Zhuang, Chongjie Ye, Ying Xu, Pengzhi Mao, Shuai Zhang

In this demo, we present Chat-to-Design, a new multimodal interaction system for personalized fashion design.

Natural Language Understanding Retrieval

Optimizing Performance of Federated Person Re-identification: Benchmarking and Analysis

2 code implementations24 May 2022 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang

Based on these insights, we propose three optimization approaches: (1) We adopt knowledge distillation to facilitate the convergence of FedReID by better transferring knowledge from clients to the server; (2) We introduce client clustering to improve the performance of large datasets by aggregating clients with similar data distributions; (3) We propose cosine distance weight to elevate performance by dynamically updating the weights for aggregation depending on how well models are trained in clients.

Benchmarking Federated Learning +2
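
The third optimization above, "cosine distance weight", can be sketched as aggregation weights derived from how far each client's local training moved its model from the current global model. This is a hypothetical simplification of the paper's mechanism (flattened parameter vectors, my own `aggregate` helper), not the authors' implementation.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two flattened parameter vectors."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def aggregate(global_params, client_params):
    """Federated averaging where a client's weight grows with how far its
    local training moved it from the global model (a proxy for how well
    it trained this round)."""
    dists = np.array([cosine_distance(global_params, p) for p in client_params])
    w = dists / dists.sum()                  # normalise to a convex combination
    return sum(wi * p for wi, p in zip(w, client_params))

g = np.ones(4)                               # toy global model
clients = [np.array([1.0, 1.0, 1.0, 2.0]),   # barely-moved client
           np.array([2.0, 0.0, 0.0, 1.0])]   # strongly-moved client
new_global = aggregate(g, clients)
```

With these toy vectors the second client receives most of the aggregation weight, which is the dynamic re-weighting the snippet describes.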

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP

no code implementations23 May 2022 Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang

Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weaknesses in terms of interpretability.

Reading Comprehension Sentiment Analysis

Modelling graph dynamics in fraud detection with "Attention"

1 code implementation22 Apr 2022 Susie Xi Rao, Clémence Lanfranchi, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang

At online retail platforms, detecting fraudulent accounts and transactions is crucial to improve customer experience, minimize loss, and avoid unauthorized transactions.

Fraud Detection

Federated Unsupervised Domain Adaptation for Face Recognition

no code implementations9 Apr 2022 Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi

To address this problem, we propose federated unsupervised domain adaptation for face recognition, FedFR.

Clustering Face Recognition +2

Divergence-aware Federated Self-Supervised Learning

1 code implementation ICLR 2022 Weiming Zhuang, Yonggang Wen, Shuai Zhang

Using the framework, our study uncovers unique insights of FedSSL: 1) stop-gradient operation, previously reported to be essential, is not always necessary in FedSSL; 2) retaining local knowledge of clients in FedSSL is particularly beneficial for non-IID data.

Federated Learning Federated Unsupervised Learning +1

Reducing language context confusion for end-to-end code-switching automatic speech recognition

no code implementations28 Jan 2022 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, JianHua Tao, Yu Ting Yeung, Liqun Deng

We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model based on the Equivalence Constraint (EC) Theory.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations21 Jan 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

An Efficient Pruning Process with Locality Aware Exploration and Dynamic Graph Editing for Subgraph Matching

no code implementations22 Dec 2021 Zite Jiang, Boxiao Liu, Shuai Zhang, Xingzhong Hou, Mengting Yuan, Haihang You

Subgraph matching is an NP-complete problem that extracts isomorphic embeddings of a query graph $q$ in a data graph $G$.

Self-Instantiated Recurrent Units with Dynamic Soft Recursion

no code implementations NeurIPS 2021 Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan Guo Wei, Shuai Zhang

On the other hand, the extent of the Self-IRU recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling dynamic soft recursion depth at each time step.

Inductive Bias

POLLA: Enhancing the Local Structure Awareness in Long Sequence Spatial-temporal Modeling

1 code implementation TIST 2021 Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li

Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks

no code implementations12 Oct 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.

How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations ICLR 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification

1 code implementation14 Aug 2021 Weiming Zhuang, Yonggang Wen, Shuai Zhang

We present FedUReID, a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.

Federated Learning Unsupervised Person Re-Identification

Collaborative Unsupervised Visual Representation Learning from Decentralized Data

1 code implementation ICCV 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, Shuai Yi

In this framework, each party trains models from unlabeled data independently using contrastive learning with an online network and a target network.

Contrastive Learning Federated Learning +3

Knowledge Router: Learning Disentangled Representations for Knowledge Graphs

no code implementations NAACL 2021 Shuai Zhang, Xi Rao, Yi Tay, Ce Zhang

To this end, this paper proposes to learn disentangled representations of KG entities - a new method that disentangles the inner latent properties of KG entities.

Knowledge Graphs Representation Learning

A Sequence-to-Set Network for Nested Named Entity Recognition

1 code implementation19 May 2021 Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, Yueting Zhuang

We utilize a non-autoregressive decoder to predict the final set of entities in one pass, in which we are able to capture dependencies between entities.

named-entity-recognition Named Entity Recognition +2

EasyFL: A Low-code Federated Learning Platform For Dummies

1 code implementation17 May 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang

However, these platforms are complex to use and require a deep understanding of FL, which imposes high barriers to entry for beginners, limits the productivity of researchers, and compromises deployment efficiency.

Federated Learning Privacy Preserving

Towards Unsupervised Domain Adaptation for Deep Face Recognition under Privacy Constraints via Federated Learning

no code implementations17 May 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi

To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across two domains.

Clustering Face Recognition +2

Locate and Label: A Two-stage Identifier for Nested Named Entity Recognition

1 code implementation ACL 2021 Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, Weiming Lu

Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, ignorance of boundary information, under-utilization of the spans that partially match with entities, and difficulties in long entity recognition.

Chinese Named Entity Recognition named-entity-recognition +3

FSR: Accelerating the Inference Process of Transducer-Based Models by Applying Fast-Skip Regularization

no code implementations7 Apr 2021 Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen

It takes a lot of computation and time to predict the blank tokens, but only the non-blank tokens will appear in the final output sequence.

speech-recognition Speech Recognition

TSNAT: Two-Step Non-Autoregressive Transformer Models for Speech Recognition

1 code implementation4 Apr 2021 Zhengkun Tian, Jiangyan Yi, JianHua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen, Xuefei Liu

To address these two problems, we propose a new model named the two-step non-autoregressive transformer (TSNAT), which improves the performance and accelerates the convergence of the NAR model by learning prior knowledge from a parameter-sharing AR model.

speech-recognition Speech Recognition +1

Switch Spaces: Learning Product Spaces with Sparse Gating

no code implementations17 Feb 2021 Shuai Zhang, Yi Tay, Wenqi Jiang, Da-Cheng Juan, Ce Zhang

In order for learned representations to be effective and efficient, it is ideal that the geometric inductive bias aligns well with the underlying structure of the data.

Inductive Bias Knowledge Graph Completion +1

Learning One-hidden-layer Neural Networks on Gaussian Mixture Models with Guaranteed Generalizability

no code implementations1 Jan 2021 Hongkang Li, Shuai Zhang, Meng Wang

Instead of following the conventional and restrictive assumption in the literature that the input features follow the standard Gaussian distribution, this paper, for the first time, analyzes a more general and practical scenario in which the input features follow a Gaussian mixture model of finitely many Gaussian distributions with various means and variances.

Binary Classification

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks

no code implementations NeurIPS 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, as the algorithm for training a sparse neural network is specified as (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned model weights in the hidden layer.

Suspicious Massive Registration Detection via Dynamic Heterogeneous Graph Neural Networks

no code implementations20 Dec 2020 Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang

Massive account registration has raised concerns on risk management in e-commerce companies, especially when registration increases rapidly within a short time frame.


xFraud: Explainable Fraud Transaction Detection

1 code implementation24 Nov 2020 Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Zhiyao Chen, Yinan Shan, Yang Zhao, Ce Zhang

At online retail platforms, it is crucial to actively detect the risks of transactions to improve customer experience and minimize financial loss.

Explainable Models Fraud Detection +1

Learning User Representations with Hypercuboids for Recommender Systems

3 code implementations11 Nov 2020 Shuai Zhang, Huoyu Liu, Aston Zhang, Yue Hu, Ce Zhang, Yumeng Li, Tanchao Zhu, Shaojian He, Wenwu Ou

Furthermore, we present two variants of hypercuboids to enhance the capability in capturing the diversities of user interests.

Collaborative Filtering Recommendation Systems

Decoupling Pronunciation and Language for End-to-end Code-switching Automatic Speech Recognition

no code implementations28 Oct 2020 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Ye Bai, JianHua Tao, Zhengqi Wen

In this paper, we propose a decoupled transformer model to use monolingual paired data and unpaired text data to alleviate the problem of code-switching data shortage.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

One In A Hundred: Select The Best Predicted Sequence from Numerous Candidates for Streaming Speech Recognition

no code implementations28 Oct 2020 Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen

Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and the two-stage inference method into the streaming CTC model.

Language Modelling speech-recognition +1

MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions

no code implementations12 Oct 2020 Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso

MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups.

Recommendation Systems

Improving Network Slimming with Nonconvex Regularization

1 code implementation3 Oct 2020 Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.

Image Classification object-detection +3
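
Network slimming's pruning step, which the regularization above enables, can be sketched simply: the T$\ell_1$ penalty drives many batch-norm scaling factors toward zero during training, and channels whose factors stay small are removed. The selection helper below is my own illustrative sketch (the penalty itself is not implemented here), not the paper's code.

```python
import numpy as np

def select_channels(gamma, keep_ratio=0.5):
    """Keep the channels whose batch-norm scaling factors have the largest
    magnitudes; the rest are pruned from the network."""
    k = max(1, int(round(keep_ratio * len(gamma))))
    order = np.argsort(-np.abs(gamma))   # channels sorted by |gamma|, descending
    return np.sort(order[:k])            # surviving channel indices, in order

# Toy scaling factors after sparsity-inducing training: three are near zero
gamma = np.array([0.9, 0.01, 0.5, 0.002, 0.3, 0.0005])
kept = select_channels(gamma, keep_ratio=0.5)
```

The near-zero channels (indices 1, 3, 5) are dropped, shrinking memory storage while, per the snippet, preserving accuracy after fine-tuning.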

Clustering COVID-19 Lung Scans

no code implementations5 Sep 2020 Jacob Householder, Andrew Householder, John Paul Gomez-Reed, Fredrick Park, Shuai Zhang

While tests do exist for COVID-19, the goal of our research is to explore other methods of identifying infected individuals.


A Practical Chinese Dependency Parser Based on A Large-scale Dataset

2 code implementations2 Sep 2020 Shuai Zhang, Lijie Wang, Ke Sun, Xinyan Xiao

DDParser extends the graph-based biaffine parser to accommodate the characteristics of the Chinese dataset.

Dependency Parsing

Exploring particle dynamics during self-organization processes via rotationally invariant latent representations

no code implementations2 Sep 2020 Sergei V. Kalinin, Shuai Zhang, Mani Valleti, Harley Pyles, David Baker, James J. De Yoreo, Maxim Ziatdinov

The dynamics of complex ordering systems with active rotational degrees of freedom, exemplified by protein self-assembly, are explored using a machine learning workflow that combines deep-learning-based semantic segmentation with rotationally invariant variational-autoencoder-based analysis of orientation and shape evolution.

Soft Condensed Matter

Performance Optimization for Federated Person Re-identification via Benchmark Analysis

2 code implementations26 Aug 2020 Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin, Dongzhan Zhou, Shuai Zhang, Shuai Yi

Then we propose two optimization methods: (1) To address the unbalanced weight problem, we propose a new method to dynamically change the weights according to the scale of model changes in clients in each training round; (2) To facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated from client models on a public dataset.

Federated Learning Knowledge Distillation +2

TensorCoder: Dimension-Wise Attention via Tensor Representation for Natural Language Modeling

no code implementations28 Jul 2020 Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu

Transformer has been widely-used in many Natural Language Processing (NLP) tasks and the scaled dot-product attention between tokens is a core module of Transformer.

Language Modelling Machine Translation +2

Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

no code implementations ICML 2020 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.

Binary Classification General Classification +1

Spike-Triggered Non-Autoregressive Transformer for End-to-End Speech Recognition

no code implementations16 May 2020 Zhengkun Tian, Jiangyan Yi, Jian-Hua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen

To address this problem and improve the inference speed, we propose a spike-triggered non-autoregressive transformer model for end-to-end speech recognition, which introduces a CTC module to predict the length of the target sequence and accelerate the convergence.

Machine Translation speech-recognition +2

TRP: Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation30 Apr 2020 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.
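
The low-rank decomposition that TRP iterates with training can be illustrated by its basic building block, truncated SVD, which gives the best rank-r approximation of a weight matrix in Frobenius norm (Eckart-Young). The sketch below is a generic decomposition step under that assumption, not the paper's training loop.

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of W in Frobenius norm, via truncated
    SVD -- the decomposition step low-rank compression methods build on."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Scale the leading left singular vectors by the singular values,
    # then project back through the leading right singular vectors.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(1)
# A toy weight matrix that is exactly rank 2, mimicking a network whose
# training has already driven it toward low-rank structure
W = rng.normal(size=(32, 2)) @ rng.normal(size=(2, 16))
W_approx = low_rank_approx(W, rank=2)
err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
```

When the trained weights are already (near-)low-rank, as TRP encourages, the truncation error is negligible, which is why the fine-tuning step after decomposition can be eliminated.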

Rnn-transducer with language bias for end-to-end Mandarin-English code-switching speech recognition

no code implementations19 Feb 2020 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Jian-Hua Tao, Ye Bai

Recently, language identity information has been utilized to improve the performance of end-to-end code-switching (CS) speech recognition.

Language Identification speech-recognition +1

$\ell_0$ Regularized Structured Sparsity Convolutional Neural Networks

no code implementations17 Dec 2019 Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.

Synchronous Transformers for End-to-End Speech Recognition

no code implementations6 Dec 2019 Zhengkun Tian, Jiangyan Yi, Ye Bai, Jian-Hua Tao, Shuai Zhang, Zhengqi Wen

Once a fixed-length chunk of the input sequence is processed by the encoder, the decoder begins to predict symbols immediately.

speech-recognition Speech Recognition

Integrating Knowledge into End-to-End Speech Recognition from External Text-Only Data

no code implementations4 Dec 2019 Ye Bai, Jiangyan Yi, Jian-Hua Tao, Zhengqi Wen, Zhengkun Tian, Shuai Zhang

To alleviate the above two issues, we propose a unified method called LST (Learn Spelling from Teachers) to integrate knowledge into an AED model from the external text-only data and leverage the whole context in a sentence.

Language Modelling Sequence-To-Sequence Speech Recognition +1

Weakly-Supervised Degree of Eye-Closeness Estimation

no code implementations24 Oct 2019 Eyasu Mequanint, Shuai Zhang, Bijan Forutanpour, Yingyong Qi, Ning Bi

To alleviate this issue, we propose a weakly-supervised method that utilizes the accurate annotations from the synthetic dataset to learn an accurate degree of eye openness, and the weakly labeled (open or closed) real-world eye dataset to control the domain shift.

DeGNN: Characterizing and Improving Graph Neural Networks with Graph Decomposition

no code implementations10 Oct 2019 Xupeng Miao, Nezihe Merve Gürel, Wentao Zhang, Zhichao Han, Bo Li, Wei Min, Xi Rao, Hansheng Ren, Yinan Shan, Yingxia Shao, Yujie Wang, Fan Wu, Hui Xue, Yaming Yang, Zitao Zhang, Yang Zhao, Shuai Zhang, Yujing Wang, Bin Cui, Ce Zhang

Despite the wide application of Graph Convolutional Network (GCN), one major limitation is that it does not benefit from the increasing depth and suffers from the oversmoothing problem.

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation9 Oct 2019 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.

Holographic Factorization Machines for Recommendation

1 code implementation AAAI 2019 Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh

Factorization Machines (FMs) are a class of popular algorithms that have been widely adopted for collaborative filtering and recommendation tasks.

Collaborative Filtering Retrieval

A Tensorized Transformer for Language Modeling

1 code implementation NeurIPS 2019 Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou

In this paper, based on the ideas of tensor decomposition and parameters sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD).

Language Modelling Machine Translation +2

Fully Decoupled Neural Network Learning Using Delayed Gradients

1 code implementation21 Jun 2019 Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin

Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.

Quaternion Collaborative Filtering for Recommendation

no code implementations6 Jun 2019 Shuai Zhang, Lina Yao, Lucas Vinh Tran, Aston Zhang, Yi Tay

All in all, we conduct extensive experiments on six real-world datasets, demonstrating the effectiveness of Quaternion algebra in recommender systems.

Collaborative Filtering Inductive Bias +2

DeepRec: An Open-source Toolkit for Deep Learning based Recommendation

4 code implementations25 May 2019 Shuai Zhang, Yi Tay, Lina Yao, Bin Wu, Aixin Sun

In this toolkit, we have implemented a number of deep learning based recommendation algorithms using Python and the widely used deep learning package TensorFlow.

Sequential Recommendation

Quaternion Knowledge Graph Embeddings

1 code implementation NeurIPS 2019 Shuai Zhang, Yi Tay, Lina Yao, Qi Liu

In this work, we move beyond the traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings.

Knowledge Graph Embedding Knowledge Graph Embeddings +1
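
The hypercomplex representations above build on the Hamilton product of quaternions, which (unlike complex multiplication) is non-commutative and encodes 3D-style rotation. The sketch below shows the product and a QuatE-style scoring shape on single quaternions; the toy `score` function and its rotate-then-inner-product form are my reading of the approach, and real models use vectors of quaternions per entity.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of two quaternions, each given as (a, b, c, d)
    for a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def score(head, rel, tail):
    """Rotate the head entity by the unit relation quaternion, then take
    the inner product with the tail entity."""
    rel = rel / np.linalg.norm(rel)   # normalise to a unit quaternion
    return hamilton(head, rel) @ tail

# Sanity check of the algebra: i * j = k
k = hamilton(np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0]))
```

Non-commutativity (i·j = k but j·i = -k) is what lets such models capture asymmetric relations that symmetric inner products cannot.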

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations ICLR 2019 Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.
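
The object analyzed here, a straight-through estimator, can be sketched minimally: the forward pass applies a hard quantizer whose true gradient is zero almost everywhere, and the backward pass substitutes a surrogate ("coarse") gradient. The clipped-identity surrogate below is one common choice, shown for illustration; the paper studies which such choices make the coarse gradient a descent direction.

```python
import numpy as np

def quantize(x):
    """Forward pass: binarize activations with a hard sign."""
    return np.where(x >= 0, 1.0, -1.0)

def ste_grad(upstream, x, clip=1.0):
    """Backward pass: sign() has zero gradient almost everywhere, so the
    straight-through estimator passes the upstream gradient through,
    zeroed outside |x| <= clip (a clipped-identity coarse gradient)."""
    return upstream * (np.abs(x) <= clip)

x = np.array([-2.0, -0.5, 0.3, 1.7])
y = quantize(x)                    # quantized forward activations
g = ste_grad(np.ones(4), x)        # coarse gradient w.r.t. x
```

Gradients survive only where the pre-activation lies inside the clipping region, which keeps saturated units from receiving spurious updates.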

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

no code implementations24 Jan 2019 Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.

Graph Matching

DAC: Data-free Automatic Acceleration of Convolutional Networks

1 code implementation20 Dec 2018 Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi

A complex deep learning model with high accuracy runs slowly on resource-limited devices, while a light-weight model that runs much faster loses accuracy.

Image Classification Multi-Person Pose Estimation +2

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation6 Dec 2018 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.


DNQ: Dynamic Network Quantization

no code implementations6 Dec 2018 Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong

Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.


Next Item Recommendation with Self-Attention

no code implementations20 Aug 2018 Shuai Zhang, Yi Tay, Lina Yao, Aixin Sun

In this paper, we propose a novel sequence-aware recommendation model.

Metric Learning

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations15 Aug 2018 Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.

Binarization Quantization

GrCAN: Gradient Boost Convolutional Autoencoder with Neural Decision Forest

no code implementations21 Jun 2018 Manqing Dong, Lina Yao, Xianzhi Wang, Boualem Benatallah, Shuai Zhang

We develop a gradient boost module and embed it into the proposed convolutional autoencoder with neural decision forest to improve the performance.

Self-Attentive Neural Collaborative Filtering

no code implementations17 Jun 2018 Yi Tay, Shuai Zhang, Luu Anh Tuan, Siu Cheung Hui

This paper has been withdrawn as we discovered a bug in our tensorflow implementation that involved accidental mixing of vectors across batches.

Collaborative Filtering

NeuRec: On Nonlinear Transformation for Personalized Ranking

no code implementations8 May 2018 Shuai Zhang, Lina Yao, Aixin Sun, Sen Wang, Guodong Long, Manqing Dong

Modeling user-item interaction patterns is an important task for personalized recommendations.

Recommendation Systems

Metric Factorization: Recommendation beyond Matrix Factorization

2 code implementations13 Feb 2018 Shuai Zhang, Lina Yao, Yi Tay, Xiwei Xu, Xiang Zhang, Liming Zhu

In the past decade, matrix factorization has been extensively researched and has become one of the most popular techniques for personalized recommendations.

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

2 code implementations19 Jan 2018 Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.


Stacked Kernel Network

no code implementations25 Nov 2017 Shuai Zhang, Jian-Xin Li, Pengtao Xie, Yingchun Zhang, Minglai Shao, Haoyi Zhou, Mengyi Yan

Similar to DNNs, an SKN is composed of multiple layers of hidden units, but each layer is parameterized by an RKHS function rather than a finite-dimensional vector.

Deep Learning based Recommender System: A Survey and New Perspectives

8 code implementations24 Jul 2017 Shuai Zhang, Lina Yao, Aixin Sun, Yi Tay

This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems.

Information Retrieval Recommendation Systems +1

Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

no code implementations19 Dec 2016 Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin

We present LBW-Net, an efficient optimization based method for quantization and training of the low bit-width convolutional neural networks (CNNs).

object-detection Object Detection +1

Mimicking the Kane-Mele type spin orbit interaction by spin-flexural phonon coupling in graphene devices

no code implementations28 Mar 2015 Zhanbin Bai, Rui Wang, Yazhou Zhou, Tianru Wu, Jianlei Ge, Jing Li, Yuyuan Qin, Fucong Fei, Lu Cao, Xuefeng Wang, Xinran Wang, Shuai Zhang, Liling Sun, You Song, Fengqi Song

In the effort to enhance the spin-orbit interaction (SOI) of graphene in pursuit of dissipationless quantum spin Hall devices, a unique Kane-Mele-type SOI and high-mobility samples are desired.

Mesoscale and Nanoscale Physics
