Search Results for author: Shuai Zhang

Found 154 papers, 53 papers with code

De-Bias for Generative Extraction in Unified NER Task

no code implementations ACL 2022 Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, Weiming Lu

Named entity recognition (NER) is a fundamental task to recognize specific types of entities from a given sentence.

Attribute Data Augmentation +4

ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer

no code implementations ACL 2022 Ningning Wang, Guobing Gan, Peng Zhang, Shuai Zhang, Junqiu Wei, Qun Liu, Xin Jiang

Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness.

Clustering Machine Translation +4

Beyond Examples: High-level Automated Reasoning Paradigm in In-Context Learning via MCTS

no code implementations 27 Nov 2024 Jinyang Wu, Mingkuan Feng, Shuai Zhang, Feihu Che, Zengqi Wen, JianHua Tao

In-context Learning (ICL) enables large language models (LLMs) to tackle downstream tasks through sophisticated prompting and high-quality demonstrations.

In-Context Learning Math +1

Multimodal Instruction Tuning with Hybrid State Space Models

no code implementations 13 Nov 2024 Jianing Zhou, Han Li, Shuai Zhang, Ning Xie, Ruijie Wang, Xiaohan Nie, Sheng Liu, Lingyun Wang

Remarkably, our model enhances inference efficiency for high-resolution images and high-frame-rate videos by about 4 times compared to current models, with efficiency gains increasing as image resolution or video frames rise.

Mamba State Space Models

Unraveling the Gradient Descent Dynamics of Transformers

no code implementations 12 Nov 2024 Bingqing Song, Boran Han, Shuai Zhang, Jie Ding, Mingyi Hong

While the Transformer architecture has achieved remarkable success across various domains, a thorough theoretical foundation explaining its optimization dynamics is yet to be fully developed.

PalmBench: A Comprehensive Benchmark of Compressed Large Language Models on Mobile Platforms

no code implementations 5 Oct 2024 Yilong Li, Jingyu Liu, Hao Zhang, M Badri Narayanan, Utkarsh Sharma, Shuai Zhang, Pan Hu, Yijing Zeng, Jayaram Raghuram, Suman Banerjee

Deploying large language models (LLMs) locally on mobile devices is advantageous in scenarios where transmitting data to remote cloud servers is either undesirable due to privacy concerns or impractical due to unreliable network connections.

Benchmarking Quantization

CausalVE: Face Video Privacy Encryption via Causal Video Prediction

no code implementations 28 Sep 2024 Yubo Huang, Wenhao Feng, Xin Lai, Zixi Wang, Jingzehua Xu, Shuai Zhang, Hongjie He, Fan Chen

We obtain cover images by adopting a diffusion model to achieve face swapping with face guidance. We then use the speech sequence features and spatiotemporal sequence features of the secret video for dynamic video inference and prediction, obtaining a cover video with the same number of frames as the secret video.

Face Swapping Recommendation Systems +1

Dynamic 2D Gaussians: Geometrically accurate radiance fields for dynamic objects

1 code implementation 21 Sep 2024 Shuai Zhang, Guanjun Wu, Xinggang Wang, Bin Feng, Wenyu Liu

In this paper, we propose a novel representation that can reconstruct accurate meshes from sparse image input, named Dynamic 2D Gaussians (D-2DGS).

LHQ-SVC: Lightweight and High Quality Singing Voice Conversion Modeling

no code implementations 13 Sep 2024 Yubo Huang, Xin Lai, Muyang Ye, Anran Zhu, Zixi Wang, Jingzehua Xu, Shuai Zhang, Zhiyuan Zhou, Weijie Niu

Singing Voice Conversion (SVC) has emerged as a significant subfield of Voice Conversion (VC), enabling the transformation of one singer's voice into another while preserving musical elements such as melody, rhythm, and timbre.

Voice Conversion

Enhancing Information Freshness: An AoI Optimized Markov Decision Process Dedicated In the Underwater Task

1 code implementation 4 Sep 2024 Jingzehua Xu, Yimian Ding, Yiyuan Yang, Guanwen Xie, Shuai Zhang

However, underwater tasks often fail due to the observation delay caused by acoustic communication in the Internet of Underwater Things.

Decision Making Reinforcement Learning (RL)

Large Language Models as Efficient Reward Function Searchers for Custom-Environment Multi-Objective Reinforcement Learning

no code implementations 4 Sep 2024 Guanwen Xie, Jingzehua Xu, Yiyuan Yang, Yimian Ding, Shuai Zhang

Achieving the effective design and improvement of reward functions in reinforcement learning (RL) tasks with complex custom environments and multiple requirements presents considerable challenges.

Long-Context Understanding Multi-Objective Reinforcement Learning +1

Pandora's Box or Aladdin's Lamp: A Comprehensive Analysis Revealing the Role of RAG Noise in Large Language Models

no code implementations 24 Aug 2024 Jinyang Wu, Feihu Che, Chuyuan Zhang, JianHua Tao, Shuai Zhang, Pengpeng Shao

Retrieval-Augmented Generation (RAG) has emerged as a crucial method for addressing hallucinations in large language models (LLMs).

RAG Retrieval

CDFGNN: a Systematic Design of Cache-based Distributed Full-Batch Graph Neural Network Training with Communication Reduction

no code implementations 1 Aug 2024 Shuai Zhang, Zite Jiang, Haihang You

Combined with communication quantization and the hierarchical GP algorithm, CDFGNN outperforms the state-of-the-art distributed full-batch training frameworks by 30.39% in our experiments.

Graph Neural Network Quantization

AFIDAF: Alternating Fourier and Image Domain Adaptive Filters as an Efficient Alternative to Attention in ViTs

no code implementations 16 Jul 2024 Yunling Zheng, Zeyi Xu, Fanghui Xue, Biao Yang, Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

We propose and demonstrate an alternating Fourier and image domain filtering approach for feature extraction as an efficient alternative for building a vision backbone without computationally intensive attention.

object-detection Object Detection

ASRRL-TTS: Agile Speaker Representation Reinforcement Learning for Text-to-Speech Speaker Adaptation

no code implementations 7 Jul 2024 Ruibo Fu, Xin Qi, Zhengqi Wen, JianHua Tao, Tao Wang, Chunyu Qiang, Zhiyong Wang, Yi Lu, Xiaopeng Wang, Shuchen Shi, Yukun Liu, Xuefei Liu, Shuai Zhang

The results indicate that the ASRRL method significantly outperforms traditional fine-tuning approaches, achieving higher speaker similarity and better overall speech quality with limited reference speeches.

Sentence Text to Speech

Fake News Detection and Manipulation Reasoning via Large Vision-Language Models

no code implementations 2 Jul 2024 Ruihan Jin, Ruibo Fu, Zhengqi Wen, Shuai Zhang, Yukun Liu, JianHua Tao

To support the research, we introduce a benchmark for fake news detection and manipulation reasoning, referred to as Human-centric and Fact-related Fake News (HFFN).

Binary Classification Fake News Detection +1

Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis

no code implementations 24 Jun 2024 Hongkang Li, Meng Wang, Shuai Zhang, Sijia Liu, Pin-Yu Chen

Efficient training and inference algorithms, such as low-rank adaption and model pruning, have shown impressive performance for learning Transformer-based large foundation models.

MINT: a Multi-modal Image and Narrative Text Dubbing Dataset for Foley Audio Content Planning and Generation

1 code implementation 15 Jun 2024 Ruibo Fu, Shuchen Shi, Hongming Guo, Tao Wang, Chunyu Qiang, Zhengqi Wen, JianHua Tao, Xin Qi, Yi Lu, Xiaopeng Wang, Zhiyong Wang, Yukun Liu, Xuefei Liu, Shuai Zhang, Guanjun Li

Despite advancements in AIGC technologies for text and image generation, foley audio dubbing remains rudimentary due to difficulties in cross-modal scene matching and content correlation.

AudioCaps Image Generation

Transferring Knowledge from Large Foundation Models to Small Downstream Models

no code implementations 11 Jun 2024 Shikai Qiu, Boran Han, Danielle C. Maddix, Shuai Zhang, Yuyang Wang, Andrew Gordon Wilson

Furthermore, AFT reliably translates improvement in pre-trained models into improvement in downstream performance, even if the downstream model is over $50\times$ smaller, and can effectively transfer complementary information learned by multiple pre-trained models.

Transfer Learning

Discovering Bias in Latent Space: An Unsupervised Debiasing Approach

no code implementations 5 Jun 2024 Dyah Adila, Shuai Zhang, Boran Han, Yuyang Wang

The question-answering (QA) capabilities of foundation models are highly sensitive to prompt variations, rendering their performance susceptible to superficial, non-meaning-altering changes.

Question Answering

ADR-BC: Adversarial Density Weighted Regression Behavior Cloning

no code implementations 28 May 2024 Ziqi Zhang, Zifeng Zhuang, Donglin Wang, Jingzehua Xu, Miao Liu, Shuai Zhang

Meanwhile, as a one-step behavior cloning framework, ADR-BC avoids the cumulative bias associated with multi-step RL frameworks.

Imitation Learning Q-Learning +2

SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning

no code implementations 24 May 2024 Shuai Zhang, Heshan Devaka Fernando, Miao Liu, Keerthiram Murugesan, Songtao Lu, Pin-Yu Chen, Tianyi Chen, Meng Wang

This paper studies the transfer reinforcement learning (RL) problem where multiple RL problems have different reward functions but share the same underlying transition dynamics.

Deep Reinforcement Learning Q-Learning +2

Can large language models understand uncommon meanings of common words?

no code implementations 9 May 2024 Jinyang Wu, Feihu Che, Xinxin Zheng, Shuai Zhang, Ruihan Jin, Shuai Nie, Pengpeng Shao, JianHua Tao

Large language models (LLMs) like ChatGPT have shown significant advancements across diverse natural language understanding (NLU) tasks, including intelligent dialogue and autonomous agents.

Natural Language Understanding

FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data

no code implementations 7 May 2024 Shusen Jing, Anlan Yu, Shuai Zhang, Songyang Zhang

One unique challenge of federated self-supervised learning (FedSSL) is that the global objective of FedSSL usually does not equal the weighted sum of local SSL objectives.

Federated Learning Self-Supervised Learning

CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving

1 code implementation 26 Apr 2024 Pei Chen, Boran Han, Shuai Zhang

Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task.

KS-LLM: Knowledge Selection of Large Language Models with Evidence Document for Question Answering

no code implementations 24 Apr 2024 Xinxin Zheng, Feihu Che, Jinyang Wu, Shuai Zhang, Shuai Nie, Kang Liu, JianHua Tao

Large language models (LLMs) suffer from the hallucination problem and face significant challenges when applied to knowledge-intensive tasks.

Hallucination Question Answering +2

Gaussian Pancakes: Geometrically-Regularized 3D Gaussian Splatting for Realistic Endoscopic Reconstruction

1 code implementation 9 Apr 2024 Sierra Bonilla, Shuai Zhang, Dimitrios Psychogyios, Danail Stoyanov, Francisco Vasconcelos, Sophia Bano

Within colorectal cancer diagnostics, conventional colonoscopy techniques face critical limitations, including a limited field of view and a lack of depth information, which can impede the detection of precancerous lesions.

Novel View Synthesis Simultaneous Localization and Mapping +1

Jump Self-attention: Capturing High-order Statistics in Transformers

no code implementations journal 2024 Haoyi Zhou, Siyang Xiao, Shanghang Zhang, Jieqi Peng, Shuai Zhang, JianXin Li

However, the strong assumption that elements are directly attentive to each other limits the performance of tasks with high-order dependencies such as natural language understanding and image captioning.

Image Captioning Natural Language Understanding

Bridging Remote Sensors with Multisensor Geospatial Foundation Models

1 code implementation CVPR 2024 Boran Han, Shuai Zhang, Xingjian Shi, Markus Reichstein

A key discovery of our research is that representations derived from natural images are not always compatible with the distinct characteristics of geospatial remote sensors, underscoring the limitations of existing representations in this field.

Cloud Removal Diversity +1

TOGS: Gaussian Splatting with Temporal Opacity Offset for Real-Time 4D DSA Rendering

1 code implementation 28 Mar 2024 Shuai Zhang, Huangxuan Zhao, Zhenghong Zhou, Guanjun Wu, Chuansheng Zheng, Xinggang Wang, Wenyu Liu

To overcome these limitations, we propose TOGS, a Gaussian splatting method with opacity offset over time, which can effectively improve the rendering quality and speed of 4D DSA.

Facilitating Pornographic Text Detection for Open-Domain Dialogue Systems via Knowledge Distillation of Large Language Models

1 code implementation 20 Mar 2024 Huachuan Qiu, Shuai Zhang, Hongliang He, Anqi Li, Zhenzhong Lan

Pornographic content occurring in human-machine interaction dialogues can cause severe side effects for users in open-domain dialogue systems.

Chatbot Knowledge Distillation +1

PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design

no code implementations 8 Mar 2024 Wenqi Jiang, Shuai Zhang, Boran Han, Jie Wang, Bernie Wang, Tim Kraska

Retrieval-augmented generation (RAG) can enhance the generation quality of large language models (LLMs) by incorporating external token databases.

RAG Retrieval

Understanding the Therapeutic Relationship between Counselors and Clients in Online Text-based Counseling using LLMs

no code implementations 19 Feb 2024 Anqi Li, Yu Lu, Nirui Song, Shuai Zhang, Lizhi Ma, Zhenzhong Lan

Through further LLM-based evaluations on additional conversations, our findings underscore the challenges counselors face in cultivating strong online relationships with clients.

Unveiling the Secrets of Engaging Conversations: Factors that Keep Users Hooked on Role-Playing Dialog Agents

no code implementations 18 Feb 2024 Shuai Zhang, Yu Lu, Junwen Liu, JIA YU, Huachuan Qiu, Yuming Yan, Zhenzhong Lan

With the growing humanlike nature of dialog agents, people are now engaging in extended conversations that can stretch from brief moments to substantial periods of time.

Foundation Models for Recommender Systems: A Survey and New Perspectives

no code implementations 17 Feb 2024 Chengkai Huang, Tong Yu, Kaige Xie, Shuai Zhang, Lina Yao, Julian McAuley

Recently, Foundation Models (FMs), with their extensive knowledge bases and complex architectures, have offered unique opportunities within the realm of recommender systems (RSs).

Recommendation Systems Representation Learning

MolTC: Towards Molecular Relational Modeling In Language Models

1 code implementation 6 Feb 2024 Junfeng Fang, Shuai Zhang, Chang Wu, Zhengyi Yang, Zhiyuan Liu, Sihang Li, Kun Wang, Wenjie Du, Xiang Wang

Molecular Relational Learning (MRL), aiming to understand interactions between molecular pairs, plays a pivotal role in advancing biochemical research.

Relational Reasoning

Multi-agent Reinforcement Learning for Energy Saving in Multi-Cell Massive MIMO Systems

no code implementations 5 Feb 2024 Tianzhang Cai, Qichen Wang, Shuai Zhang, Özlem Tuğfe Demir, Cicek Cavdar

We develop a multi-agent reinforcement learning (MARL) algorithm to minimize the total energy consumption of multiple massive MIMO (multiple-input multiple-output) base stations (BSs) in a multi-cell network while preserving the overall quality-of-service (QoS) by making decisions on the multi-level advanced sleep modes (ASMs) and antenna switching of these BSs.

Multi-agent Reinforcement Learning

Context-Former: Stitching via Latent Conditioned Sequence Modeling

no code implementations 29 Jan 2024 Ziqi Zhang, Jingzehua Xu, Jinxin Liu, Zifeng Zhuang, Donglin Wang, Miao Liu, Shuai Zhang

Offline reinforcement learning (RL) algorithms can learn better decision-making compared to behavior policies by stitching the suboptimal trajectories to derive more optimal ones.

D4RL Imitation Learning +2

CaMML: Context-Aware Multimodal Learner for Large Models

1 code implementation 6 Jan 2024 Yixin Chen, Shuai Zhang, Boran Han, Tong He, Bo Li

In this work, we introduce the Context-Aware MultiModal Learner (CaMML) for tuning large multimodal models (LMMs).

Visual Question Answering

A dynamical clipping approach with task feedback for Proximal Policy Optimization

1 code implementation 12 Dec 2023 Ziqi Zhang, Jingzehua Xu, Zifeng Zhuang, Hongyin Zhang, Jinxin Liu, Donglin Wang, Shuai Zhang

Unlike previous clipping approaches, we propose a bi-level proximal policy optimization objective that can dynamically adjust the clipping bound to better reflect the preference (maximizing Return) of these RL tasks.

Language Modelling Large Language Model +1

Quality and Quantity: Unveiling a Million High-Quality Images for Text-to-Image Synthesis in Fashion Design

no code implementations 19 Nov 2023 JIA YU, Lichao Zhang, Zijie Chen, Fayu Pan, Miaomiao Wen, Yuming Yan, Fangsheng Weng, Shuai Zhang, Lili Pan, Zhenzhong Lan

Moreover, to foster standardization in the T2I-based fashion design field, we propose a new benchmark comprising multiple datasets for evaluating the performance of fashion design models.

Image Generation

Optimizing RGB-D Semantic Segmentation through Multi-Modal Interaction and Pooling Attention

no code implementations 19 Nov 2023 Shuai Zhang, Minghong Xie

Semantic segmentation of RGB-D images involves understanding the appearance and spatial relationships of objects within a scene, which requires careful consideration of various factors.

Decoder Segmentation +1

ConceptPsy: A Benchmark Suite with Conceptual Comprehensiveness in Psychology

no code implementations 16 Nov 2023 Junlei Zhang, Hongliang He, Nirui Song, Zhanchao Zhou, Shuyuan He, Shuai Zhang, Huachuan Qiu, Anqi Li, Yong Dai, Lizhi Ma, Zhenzhong Lan

The critical field of psychology necessitates a comprehensive benchmark to enhance the evaluation and development of domain-specific Large Language Models (LLMs).

MMLU Multiple-choice

Facilitating NSFW Text Detection in Open-Domain Dialogue Systems via Knowledge Distillation

1 code implementation 18 Sep 2023 Huachuan Qiu, Shuai Zhang, Hongliang He, Anqi Li, Zhenzhong Lan

NSFW (Not Safe for Work) content, in the context of a dialogue, can have severe side effects on users in open-domain dialogue systems.

Chatbot Knowledge Distillation +1

InfeRE: Step-by-Step Regex Generation via Chain of Inference

1 code implementation 8 Aug 2023 Shuai Zhang, Xiaodong Gu, Yuting Chen, Beijun Shen

Particularly, InfeRE outperforms the popular tree-based generation approach by 18.1% and 11.3% on the two datasets, respectively, in terms of DFA@5 accuracy.

Text Matching

A Benchmark for Understanding Dialogue Safety in Mental Health Support

1 code implementation 31 Jul 2023 Huachuan Qiu, Tong Zhao, Anqi Li, Shuai Zhang, Hongliang He, Zhenzhong Lan

Our study reveals that ChatGPT struggles to detect safety categories with detailed safety definitions in a zero- and few-shot paradigm, whereas the fine-tuned model proves to be more suitable.

MAS: Towards Resource-Efficient Federated Multiple-Task Learning

no code implementations ICCV 2023 Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang

Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.

Federated Learning

Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models

1 code implementation 17 Jul 2023 Huachuan Qiu, Shuai Zhang, Anqi Li, Hongliang He, Zhenzhong Lan

We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions).

Non-Convex Optimizations for Machine Learning with Theoretical Guarantee: Robust Matrix Completion and Neural Network Learning

no code implementations 28 Jun 2023 Shuai Zhang

Despite the recent development in machine learning, most learning systems are still "black boxes", whose performance cannot be understood or derived analytically.

Low-Rank Matrix Completion

A Semi-Paired Approach For Label-to-Image Translation

no code implementations 23 Jun 2023 George Eskandar, Shuai Zhang, Mohamed Abdelsamad, Mark Youssef, Diandian Guo, Bin Yang

Data efficiency, or the ability to generalize from a few labeled data, remains a major challenge in deep learning.

Image-to-Image Translation Translation

Co-design Hardware and Algorithm for Vector Search

1 code implementation 19 Jun 2023 Wenqi Jiang, Shigang Li, Yu Zhu, Johannes De Fine Licht, Zhenhao He, Runbin Shi, Cedric Renggli, Shuai Zhang, Theodoros Rekatsinas, Torsten Hoefler, Gustavo Alonso

Vector search has emerged as the foundation for large-scale information retrieval and machine learning systems, with search engines like Google and Bing processing tens of thousands of queries per second on petabyte-scale document datasets by evaluating vector similarities between encoded query texts and web documents.

Information Retrieval Retrieval

Rethinking Document-Level Relation Extraction: A Reality Check

no code implementations 15 Jun 2023 Jing Li, Yequan Wang, Shuai Zhang, Min Zhang

Recently, numerous efforts have continued to push up performance boundaries of document-level relation extraction (DocRE) and have claimed significant progress in DocRE.

Document-level Relation Extraction Relation

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

1 code implementation 7 Jun 2023 Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen

In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction.

Learning Music Sequence Representation from Text Supervision

no code implementations 31 May 2023 Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li

Music representation learning is notoriously difficult for its complex human-related concepts contained in the sequence of numerical signals.

Contrastive Learning Representation Learning

SMILE: Single-turn to Multi-turn Inclusive Language Expansion via ChatGPT for Mental Health Support

1 code implementation 30 Apr 2023 Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, Zhenzhong Lan

Further, we employ our method to generate a large-scale, lifelike, and diverse dialogue dataset named SMILECHAT, consisting of 55k dialogues.

Chatbot

MobileInst: Video Instance Segmentation on the Mobile

no code implementations 30 Mar 2023 Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang

To address those issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices.

Decoder Instance Segmentation +3

Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks

no code implementations 6 Feb 2023 Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu

Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.

Sparse Learning

SKDBERT: Compressing BERT via Stochastic Knowledge Distillation

no code implementations 26 Nov 2022 Zixiang Ding, Guoqing Jiang, Shuai Zhang, Lin Guo, Wei Lin

In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT.

Knowledge Distillation Language Modelling

Neural Methods for Logical Reasoning Over Knowledge Graphs

1 code implementation ICLR 2022 Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, Ce Zhang

We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries.

Benchmarking Knowledge Graphs +2

Profiling Television Watching Behaviour Using Bayesian Hierarchical Joint Models for Time-to-Event and Count Data

1 code implementation 6 Sep 2022 Rafael A. Moral, Zhi Chen, Shuai Zhang, Sally McClean, Gabriel R. Palma, Brahim Allan, Ian Kegel

The model drastically reduces the dimensionality of the data from thousands of observations per customer to 11 customer-level parameter estimates and random effects.

Descriptive

Smart Multi-tenant Federated Learning

no code implementations 9 Jul 2022 Weiming Zhuang, Yonggang Wen, Shuai Zhang

In this work, we propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.

Federated Learning

Learning and generalization of one-hidden-layer neural networks, going beyond standard Gaussian data

no code implementations 7 Jul 2022 Hongkang Li, Shuai Zhang, Meng Wang

In addition, for the first time, this paper characterizes the impact of the input distributions on the sample complexity and the learning rate.

Chat-to-Design: AI Assisted Personalized Fashion Design

no code implementations 3 Jul 2022 Weiming Zhuang, Chongjie Ye, Ying Xu, Pengzhi Mao, Shuai Zhang

In this demo, we present Chat-to-Design, a new multimodal interaction system for personalized fashion design.

Natural Language Understanding Retrieval

Optimizing Performance of Federated Person Re-identification: Benchmarking and Analysis

2 code implementations 24 May 2022 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang

Based on these insights, we propose three optimization approaches: (1) We adopt knowledge distillation to facilitate the convergence of FedReID by better transferring knowledge from clients to the server; (2) We introduce client clustering to improve the performance of large datasets by aggregating clients with similar data distributions; (3) We propose cosine distance weight to elevate performance by dynamically updating the weights for aggregation depending on how well models are trained in clients.

Benchmarking Federated Learning +2

A Fine-grained Interpretability Evaluation Benchmark for Neural NLP

no code implementations 23 May 2022 Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang

Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weaknesses in terms of interpretability.

Reading Comprehension Sentiment Analysis

Modelling graph dynamics in fraud detection with "Attention"

1 code implementation 22 Apr 2022 Susie Xi Rao, Clémence Lanfranchi, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang

At online retail platforms, detecting fraudulent accounts and transactions is crucial to improve customer experience, minimize loss, and avoid unauthorized transactions.

Fraud Detection Graph Neural Network

Federated Unsupervised Domain Adaptation for Face Recognition

no code implementations 9 Apr 2022 Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi

To address this problem, we propose federated unsupervised domain adaptation for face recognition, FedFR.

Clustering Face Recognition +2

Divergence-aware Federated Self-Supervised Learning

1 code implementation ICLR 2022 Weiming Zhuang, Yonggang Wen, Shuai Zhang

Using the framework, our study uncovers unique insights of FedSSL: 1) stop-gradient operation, previously reported to be essential, is not always necessary in FedSSL; 2) retaining local knowledge of clients in FedSSL is particularly beneficial for non-IID data.

Federated Learning Federated Unsupervised Learning +2

Reducing language context confusion for end-to-end code-switching automatic speech recognition

no code implementations 28 Jan 2022 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, JianHua Tao, Yu Ting Yeung, Liqun Deng

We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model based on the Equivalence Constraint (EC) Theory.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations 21 Jan 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

An Efficient Pruning Process with Locality Aware Exploration and Dynamic Graph Editing for Subgraph Matching

no code implementations 22 Dec 2021 Zite Jiang, Boxiao Liu, Shuai Zhang, Xingzhong Hou, Mengting Yuan, Haihang You

Subgraph matching is an NP-complete problem that extracts isomorphic embeddings of a query graph $q$ in a data graph $G$.

Self-Instantiated Recurrent Units with Dynamic Soft Recursion

no code implementations NeurIPS 2021 Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan Guo Wei, Shuai Zhang

On the other hand, the extent of the Self-IRU recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling dynamic soft recursion depth at each time step.

Inductive Bias

POLLA: Enhancing the Local Structure Awareness in Long Sequence Spatial-temporal Modeling

1 code implementation TIST 2021 Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li

Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.

Decoder

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks

no code implementations 12 Oct 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.

How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis

no code implementations ICLR 2022 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.

Collaborative Unsupervised Visual Representation Learning from Decentralized Data

1 code implementation ICCV 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, Shuai Yi

In this framework, each party trains models from unlabeled data independently using contrastive learning with an online network and a target network.

Contrastive Learning Federated Learning +3

Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification

no code implementations 14 Aug 2021 Weiming Zhuang, Yonggang Wen, Shuai Zhang

We present FedUReID, a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.

Federated Learning Unsupervised Person Re-Identification

Knowledge Router: Learning Disentangled Representations for Knowledge Graphs

no code implementations NAACL 2021 Shuai Zhang, Xi Rao, Yi Tay, Ce Zhang

To this end, this paper proposes to learn disentangled representations of KG entities - a new method that disentangles the inner latent properties of KG entities.

Knowledge Graphs Representation Learning

A Sequence-to-Set Network for Nested Named Entity Recognition

1 code implementation 19 May 2021 Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, Yueting Zhuang

We utilize a non-autoregressive decoder to predict the final set of entities in one pass, in which we are able to capture dependencies between entities.

Decoder named-entity-recognition +3

Towards Unsupervised Domain Adaptation for Deep Face Recognition under Privacy Constraints via Federated Learning

no code implementations 17 May 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi

To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across two domains.

Clustering Face Recognition +2

EasyFL: A Low-code Federated Learning Platform For Dummies

1 code implementation 17 May 2021 Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang

However, these platforms are complex to use and require a deep understanding of FL, which imposes high barriers to entry for beginners, limits the productivity of researchers, and compromises deployment efficiency.

Federated Learning Privacy Preserving

Locate and Label: A Two-stage Identifier for Nested Named Entity Recognition

1 code implementation ACL 2021 Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, Weiming Lu

Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, ignorance of boundary information, under-utilization of the spans that partially match with entities, and difficulties in long entity recognition.

Chinese Named Entity Recognition named-entity-recognition +3

FSR: Accelerating the Inference Process of Transducer-Based Models by Applying Fast-Skip Regularization

no code implementations 7 Apr 2021 Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen

It takes a lot of computation and time to predict the blank tokens, but only the non-blank tokens will appear in the final output sequence.

Decoder Position +2

TSNAT: Two-Step Non-Autoregressive Transformer Models for Speech Recognition

1 code implementation 4 Apr 2021 Zhengkun Tian, Jiangyan Yi, JianHua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen, Xuefei Liu

To address these two problems, we propose a new model named the two-step non-autoregressive transformer (TSNAT), which improves the performance and accelerates the convergence of the NAR model by learning prior knowledge from a parameters-sharing AR model.

Decoder speech-recognition +2

Switch Spaces: Learning Product Spaces with Sparse Gating

no code implementations 17 Feb 2021 Shuai Zhang, Yi Tay, Wenqi Jiang, Da-Cheng Juan, Ce Zhang

In order for learned representations to be effective and efficient, it is ideal that the geometric inductive bias aligns well with the underlying structure of the data.

Inductive Bias Knowledge Graph Completion +1

Learning One-hidden-layer Neural Networks on Gaussian Mixture Models with Guaranteed Generalizability

no code implementations 1 Jan 2021 Hongkang Li, Shuai Zhang, Meng Wang

Instead of following the conventional and restrictive assumption in the literature that the input features follow the standard Gaussian distribution, this paper, for the first time, analyzes a more general and practical scenario in which the input features follow a Gaussian mixture model of a finite number of Gaussian distributions with various means and variances.

Binary Classification

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks

no code implementations NeurIPS 2021 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

Moreover, as the algorithm for training a sparse neural network is specified as (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned model weights in the hidden layer.

Suspicious Massive Registration Detection via Dynamic Heterogeneous Graph Neural Networks

no code implementations 20 Dec 2020 Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang

Massive account registration has raised concerns on risk management in e-commerce companies, especially when registration increases rapidly within a short time frame.

Graph Neural Network Management

xFraud: Explainable Fraud Transaction Detection

1 code implementation 24 Nov 2020 Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Zhiyao Chen, Yinan Shan, Yang Zhao, Ce Zhang

At online retail platforms, it is crucial to actively detect the risks of transactions to improve customer experience and minimize financial loss.

Explainable Models Fraud Detection +2

Learning User Representations with Hypercuboids for Recommender Systems

3 code implementations 11 Nov 2020 Shuai Zhang, Huoyu Liu, Aston Zhang, Yue Hu, Ce Zhang, Yumeng Li, Tanchao Zhu, Shaojian He, Wenwu Ou

Furthermore, we present two variants of hypercuboids to enhance the capability in capturing the diversities of user interests.

Collaborative Filtering Recommendation Systems

Decoupling Pronunciation and Language for End-to-end Code-switching Automatic Speech Recognition

no code implementations 28 Oct 2020 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Ye Bai, JianHua Tao, Zhengqi Wen

In this paper, we propose a decoupled transformer model to use monolingual paired data and unpaired text data to alleviate the problem of code-switching data shortage.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

One In A Hundred: Select The Best Predicted Sequence from Numerous Candidates for Streaming Speech Recognition

no code implementations 28 Oct 2020 Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen

Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and the two-stage inference method into the streaming CTC model.

Decoder Diversity +3

MicroRec: Efficient Recommendation Inference by Hardware and Data Structure Solutions

no code implementations 12 Oct 2020 Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso

MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups.

Recommendation Systems

Improving Network Slimming with Nonconvex Regularization

1 code implementation 3 Oct 2020 Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.

Image Classification object-detection +3

Clustering COVID-19 Lung Scans

no code implementations 5 Sep 2020 Jacob Householder, Andrew Householder, John Paul Gomez-Reed, Fredrick Park, Shuai Zhang

While tests do exist for COVID-19, the goal of our research is to explore other methods of identifying infected individuals.

Clustering

Exploring particle dynamics during self-organization processes via rotationally invariant latent representations

no code implementations 2 Sep 2020 Sergei V. Kalinin, Shuai Zhang, Mani Valleti, Harley Pyles, David Baker, James J. De Yoreo, Maxim Ziatdinov

The dynamic of complex ordering systems with active rotational degrees of freedom exemplified by protein self-assembly is explored using a machine learning workflow that combines deep learning-based semantic segmentation and rotationally invariant variational autoencoder-based analysis of orientation and shape evolution.

Soft Condensed Matter

A Practical Chinese Dependency Parser Based on A Large-scale Dataset

2 code implementations 2 Sep 2020 Shuai Zhang, Lijie Wang, Ke Sun, Xinyan Xiao

DDParser extends the graph-based biaffine parser to accommodate the characteristics of the Chinese dataset.

Dependency Parsing

Performance Optimization for Federated Person Re-identification via Benchmark Analysis

2 code implementations 26 Aug 2020 Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin, Dongzhan Zhou, Shuai Zhang, Shuai Yi

Then we propose two optimization methods: (1) To address the unbalanced weight problem, we propose a new method to dynamically change the weights according to the scale of model changes in clients in each training round; (2) To facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated from client models on a public dataset.

Federated Learning Knowledge Distillation +2

TensorCoder: Dimension-Wise Attention via Tensor Representation for Natural Language Modeling

no code implementations 28 Jul 2020 Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu

Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is a core module of Transformer.

Language Modelling Machine Translation +2

Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case

no code implementations ICML 2020 Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong

In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.

Binary Classification General Classification +1

Spike-Triggered Non-Autoregressive Transformer for End-to-End Speech Recognition

no code implementations 16 May 2020 Zhengkun Tian, Jiangyan Yi, Jian-Hua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen

To address this problem and improve the inference speed, we propose a spike-triggered non-autoregressive transformer model for end-to-end speech recognition, which introduces a CTC module to predict the length of the target sequence and accelerate the convergence.

Machine Translation speech-recognition +2

TRP: Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 30 Apr 2020 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.

Rnn-transducer with language bias for end-to-end Mandarin-English code-switching speech recognition

no code implementations 19 Feb 2020 Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Jian-Hua Tao, Ye Bai

Recently, language identity information has been utilized to improve the performance of end-to-end code-switching (CS) speech recognition.

Language Identification speech-recognition +1

$\ell_0$ Regularized Structured Sparsity Convolutional Neural Networks

no code implementations 17 Dec 2019 Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.

Synchronous Transformers for End-to-End Speech Recognition

no code implementations 6 Dec 2019 Zhengkun Tian, Jiangyan Yi, Ye Bai, Jian-Hua Tao, Shuai Zhang, Zhengqi Wen

Once a fixed-length chunk of the input sequence is processed by the encoder, the decoder begins to predict symbols immediately.

Decoder speech-recognition +1

Integrating Knowledge into End-to-End Speech Recognition from External Text-Only Data

no code implementations 4 Dec 2019 Ye Bai, Jiangyan Yi, Jian-Hua Tao, Zhengqi Wen, Zhengkun Tian, Shuai Zhang

To alleviate the above two issues, we propose a unified method called LST (Learn Spelling from Teachers) to integrate knowledge into an AED model from the external text-only data and leverage the whole context in a sentence.

Language Modelling Sentence +2

Weakly-Supervised Degree of Eye-Closeness Estimation

no code implementations 24 Oct 2019 Eyasu Mequanint, Shuai Zhang, Bijan Forutanpour, Yingyong Qi, Ning Bi

To alleviate this issue, we propose a weakly-supervised method which utilizes the accurate annotations from the synthetic data set to learn the accurate degree of eye openness, and the weakly labeled (open or closed) real-world eye data set to control the domain shift.

DeGNN: Characterizing and Improving Graph Neural Networks with Graph Decomposition

no code implementations 10 Oct 2019 Xupeng Miao, Nezihe Merve Gürel, Wentao Zhang, Zhichao Han, Bo Li, Wei Min, Xi Rao, Hansheng Ren, Yinan Shan, Yingxia Shao, Yujie Wang, Fan Wu, Hui Xue, Yaming Yang, Zitao Zhang, Yang Zhao, Shuai Zhang, Yujing Wang, Bin Cui, Ce Zhang

Despite the wide application of Graph Convolutional Network (GCN), one major limitation is that it does not benefit from the increasing depth and suffers from the oversmoothing problem.

Graph Neural Network

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 9 Oct 2019 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.

Holographic Factorization Machines for Recommendation

1 code implementation AAAI 2019 Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh

Factorization Machines (FMs) are a class of popular algorithms that have been widely adopted for collaborative filtering and recommendation tasks.

Collaborative Filtering Retrieval

A Tensorized Transformer for Language Modeling

1 code implementation NeurIPS 2019 Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou

In this paper, based on the ideas of tensor decomposition and parameters sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD).

Decoder Language Modelling +3

Fully Decoupled Neural Network Learning Using Delayed Gradients

1 code implementation 21 Jun 2019 Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin

Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.

Quaternion Collaborative Filtering for Recommendation

no code implementations 6 Jun 2019 Shuai Zhang, Lina Yao, Lucas Vinh Tran, Aston Zhang, Yi Tay

All in all, we conduct extensive experiments on six real-world datasets, demonstrating the effectiveness of Quaternion algebra in recommender systems.

Collaborative Filtering Inductive Bias +2

DeepRec: An Open-source Toolkit for Deep Learning based Recommendation

4 code implementations 25 May 2019 Shuai Zhang, Yi Tay, Lina Yao, Bin Wu, Aixin Sun

In this toolkit, we have implemented a number of deep learning based recommendation algorithms using Python and the widely used deep learning package - Tensorflow.

Deep Learning Sequential Recommendation

Quaternion Knowledge Graph Embeddings

1 code implementation NeurIPS 2019 Shuai Zhang, Yi Tay, Lina Yao, Qi Liu

In this work, we move beyond the traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings.

Knowledge Graph Embedding Knowledge Graph Embeddings +1

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations ICLR 2019 Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.

Negation

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

no code implementations 24 Jan 2019 Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.

Graph Matching

DAC: Data-free Automatic Acceleration of Convolutional Networks

1 code implementation 20 Dec 2018 Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi

A complex deep learning model with high accuracy runs slowly on resource-limited devices, while a light-weight model that runs much faster loses accuracy.

Image Classification Multi-Person Pose Estimation +2

Trained Rank Pruning for Efficient Deep Neural Networks

1 code implementation 6 Dec 2018 Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong

We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.

Quantization

DNQ: Dynamic Network Quantization

no code implementations 6 Dec 2018 Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong

Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.

Quantization

Next Item Recommendation with Self-Attention

no code implementations 20 Aug 2018 Shuai Zhang, Yi Tay, Lina Yao, Aixin Sun

In this paper, we propose a novel sequence-aware recommendation model.

Metric Learning

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations 15 Aug 2018 Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.

Binarization Quantization

GrCAN: Gradient Boost Convolutional Autoencoder with Neural Decision Forest

no code implementations 21 Jun 2018 Manqing Dong, Lina Yao, Xianzhi Wang, Boualem Benatallah, Shuai Zhang

We develop a gradient boost module and embed it into the proposed convolutional autoencoder with neural decision forest to improve the performance.

Self-Attentive Neural Collaborative Filtering

no code implementations 17 Jun 2018 Yi Tay, Shuai Zhang, Luu Anh Tuan, Siu Cheung Hui

This paper has been withdrawn as we discovered a bug in our tensorflow implementation that involved accidental mixing of vectors across batches.

Collaborative Filtering

NeuRec: On Nonlinear Transformation for Personalized Ranking

no code implementations 8 May 2018 Shuai Zhang, Lina Yao, Aixin Sun, Sen Wang, Guodong Long, Manqing Dong

Modeling user-item interaction patterns is an important task for personalized recommendations.

Recommendation Systems

Metric Factorization: Recommendation beyond Matrix Factorization

2 code implementations 13 Feb 2018 Shuai Zhang, Lina Yao, Yi Tay, Xiwei Xu, Xiang Zhang, Liming Zhu

In the past decade, matrix factorization has been extensively researched and has become one of the most popular techniques for personalized recommendations.