no code implementations • Findings (ACL) 2022 • Shuai Zhang, Wang Lijie, Xinyan Xiao, Hua Wu
Syntactic information has been proven useful for transformer-based pre-trained language models.
no code implementations • ACL 2022 • Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, Weiming Lu
Named entity recognition (NER) is a fundamental task to recognize specific types of entities from a given sentence.
no code implementations • ACL 2022 • Ningning Wang, Guobing Gan, Peng Zhang, Shuai Zhang, Junqiu Wei, Qun Liu, Xin Jiang
Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness.
1 code implementation • 18 Sep 2023 • Huachuan Qiu, Shuai Zhang, Hongliang He, Anqi Li, Zhenzhong Lan
NSFW (Not Safe for Work) content, in the context of a dialogue, can have severe side effects on users in open-domain dialogue systems.
1 code implementation • 8 Aug 2023 • Shuai Zhang, Xiaodong Gu, Yuting Chen, Beijun Shen
Particularly, InfeRE outperforms the popular tree-based generation approach by 18.1% and 11.3% on the two datasets, respectively, in terms of DFA@5 accuracy.
1 code implementation • 31 Jul 2023 • Huachuan Qiu, Tong Zhao, Anqi Li, Shuai Zhang, Hongliang He, Zhenzhong Lan
Our study reveals that ChatGPT struggles to detect safety categories with detailed safety definitions in a zero- and few-shot paradigm, whereas the fine-tuned model proves to be more suitable.
1 code implementation • ICCV 2023 • Weiming Zhuang, Yonggang Wen, Lingjuan Lyu, Shuai Zhang
Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks.
1 code implementation • 17 Jul 2023 • Huachuan Qiu, Shuai Zhang, Anqi Li, Hongliang He, Zhenzhong Lan
We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions).
no code implementations • 11 Jul 2023 • Sikai Bai, Shuaicheng Li, Weiming Zhuang, Jie Zhang, Song Guo, Kunlin Yang, Jun Hou, Shuai Zhang, Junyu Gao, Shuai Yi
Theoretically, we show the convergence guarantee of the dual regulators.
no code implementations • 28 Jun 2023 • Shuai Zhang
Despite recent developments in machine learning, most learning systems remain "black boxes" whose performance cannot be understood or derived.
1 code implementation • 27 Jun 2023 • Anqi Li, Lizhi Ma, Yaling Mei, Hongliang He, Shuai Zhang, Huachuan Qiu, Zhenzhong Lan
Communication success relies heavily on reading participants' reactions.
no code implementations • 23 Jun 2023 • George Eskandar, Shuai Zhang, Mohamed Abdelsamad, Mark Youssef, Diandian Guo, Bin Yang
Data efficiency, or the ability to generalize from a few labeled data, remains a major challenge in deep learning.
no code implementations • 19 Jun 2023 • Wenqi Jiang, Shigang Li, Yu Zhu, Johannes De Fine Licht, Zhenhao He, Runbin Shi, Cedric Renggli, Shuai Zhang, Theodoros Rekatsinas, Torsten Hoefler, Gustavo Alonso
Vector search has emerged as the foundation for large-scale information retrieval and machine learning systems, with search engines like Google and Bing processing tens of thousands of queries per second on petabyte-scale document datasets by evaluating vector similarities between encoded query texts and web documents.
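To ground the core operation, here is a minimal brute-force top-k similarity search in NumPy; systems like the one studied here approximate this with specialized indexes and hardware, so the sketch only fixes the baseline semantics (all names are illustrative):

```python
import numpy as np

def topk_search(query, doc_embeddings, k=10):
    """Exhaustive vector search: score every document embedding against the
    encoded query and return the indices of the k most similar documents.
    This exact baseline is what large-scale systems approximate."""
    scores = doc_embeddings @ query                # dot-product similarity
    top = np.argpartition(scores, -k)[-k:]         # unordered top-k indices
    return top[np.argsort(scores[top])[::-1]]      # sorted best-first
```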
no code implementations • 15 Jun 2023 • Jing Li, Yequan Wang, Shuai Zhang, Min Zhang
Recently, numerous efforts have continued to push the performance boundaries of document-level relation extraction (DocRE) and have claimed significant progress.
1 code implementation • 7 Jun 2023 • Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen
In deep learning, mixture-of-experts (MoE) activates one or a few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction.
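As a rough illustration of per-token expert activation, a minimal top-k MoE routing sketch in NumPy (the router, experts, and shapes are placeholder assumptions, not the paper's setup):

```python
import numpy as np

def moe_forward(x, router_W, experts, k=1):
    """Route each token to its top-k experts and mix their outputs.
    x: (n_tokens, d); router_W: (d, n_experts); experts: list of callables.
    Only k experts run per token, which is where the compute savings come from."""
    logits = x @ router_W
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        top = np.argsort(logits[i])[-k:]                 # k largest router scores
        gate = np.exp(logits[i][top] - logits[i][top].max())
        gate /= gate.sum()                               # softmax over chosen experts
        for g, e in zip(gate, top):
            out[i] += g * experts[e](token)
    return out
```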
no code implementations • 31 May 2023 • Tianyu Chen, Yuan Xie, Shuai Zhang, Shaohan Huang, Haoyi Zhou, JianXin Li
Music representation learning is notoriously difficult because of the complex human-related concepts contained in sequences of numerical signals.
1 code implementation • IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2022 • George Eskandar, Mohamed Abdelsamad, Karim Armanious, Shuai Zhang, Bin Yang
Semantic Image Synthesis (SIS) is a subclass of image-to-image translation where a semantic layout is used to generate a photorealistic image.
Multimodal Unsupervised Image-To-Image Translation • Translation
1 code implementation • 30 Apr 2023 • Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, Zhenzhong Lan
There has been an increasing research interest in developing specialized dialogue systems that can offer mental health support.
1 code implementation • 10 Apr 2023 • Shuhuai Ren, Aston Zhang, Yi Zhu, Shuai Zhang, Shuai Zheng, Mu Li, Alex Smola, Xu sun
This work proposes POMP, a prompt pre-training method for vision-language models.
Ranked #1 on Open Vocabulary Semantic Segmentation on PascalVOC-20 (hIoU metric)
no code implementations • 30 Mar 2023 • Renhong Zhang, Tianheng Cheng, Shusheng Yang, Haoyi Jiang, Shuai Zhang, Jiancheng Lyu, Xin Li, Xiaowen Ying, Dashan Gao, Wenyu Liu, Xinggang Wang
To address those issues, we present MobileInst, a lightweight and mobile-friendly framework for video instance segmentation on mobile devices.
no code implementations • 6 Feb 2023 • Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu
Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.
no code implementations • 26 Nov 2022 • Zixiang Ding, Guoqing Jiang, Shuai Zhang, Lin Guo, Wei Lin
In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT.
1 code implementation • ICLR 2022 • Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, Ce Zhang
We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries.
1 code implementation • 6 Sep 2022 • Rafael A. Moral, Zhi Chen, Shuai Zhang, Sally McClean, Gabriel R. Palma, Brahim Allan, Ian Kegel
The model drastically reduces the dimensionality of the data from thousands of observations per customer to 11 customer-level parameter estimates and random effects.
no code implementations • 9 Jul 2022 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
In this work, we propose a smart multi-tenant FL system, MuFL, to effectively coordinate and execute simultaneous training activities.
no code implementations • 7 Jul 2022 • Hongkang Li, Shuai Zhang, Meng Wang
In addition, for the first time, this paper characterizes the impact of the input distributions on the sample complexity and the learning rate.
no code implementations • 3 Jul 2022 • Weiming Zhuang, Chongjie Ye, Ying Xu, Pengzhi Mao, Shuai Zhang
In this demo, we present Chat-to-Design, a new multimodal interaction system for personalized fashion design.
2 code implementations • 24 May 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
Based on these insights, we propose three optimization approaches: (1) We adopt knowledge distillation to facilitate the convergence of FedReID by better transferring knowledge from clients to the server; (2) We introduce client clustering to improve the performance of large datasets by aggregating clients with similar data distributions; (3) We propose cosine distance weight to elevate performance by dynamically updating the weights for aggregation depending on how well models are trained in clients.
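A minimal sketch of the third idea, dynamically weighted aggregation on the server; how each client's weight is derived from cosine distance is not spelled out here, so `scores` is an assumed stand-in for that signal:

```python
import numpy as np

def weighted_aggregate(client_params, scores):
    """Server-side aggregation with dynamic per-client weights, replacing the
    data-size weights of plain FedAvg. client_params: list of flattened
    parameter vectors; scores: assumed per-client training-quality signal
    (e.g. derived from a cosine distance between local and global models)."""
    w = np.asarray(scores, dtype=float)
    w /= w.sum()                       # normalize to a convex combination
    return sum(wi * p for wi, p in zip(w, client_params))
```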
no code implementations • 23 May 2022 • Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang
Based on this benchmark, we conduct experiments on three typical models with three saliency methods, and unveil their strengths and weakness in terms of interpretability.
1 code implementation • 22 Apr 2022 • Susie Xi Rao, Clémence Lanfranchi, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang
At online retail platforms, detecting fraudulent accounts and transactions is crucial to improve customer experience, minimize loss, and avoid unauthorized transactions.
no code implementations • 9 Apr 2022 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To address this problem, we propose federated unsupervised domain adaptation for face recognition, FedFR.
1 code implementation • ICLR 2022 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
Using the framework, our study uncovers unique insights of FedSSL: 1) stop-gradient operation, previously reported to be essential, is not always necessary in FedSSL; 2) retaining local knowledge of clients in FedSSL is particularly beneficial for non-IID data.
no code implementations • 17 Feb 2022 • Jiangyan Yi, Ruibo Fu, JianHua Tao, Shuai Nie, Haoxin Ma, Chenglong Wang, Tao Wang, Zhengkun Tian, Ye Bai, Cunhang Fan, Shan Liang, Shiming Wang, Shuai Zhang, Xinrui Yan, Le Xu, Zhengqi Wen, Haizhou Li, Zheng Lian, Bin Liu
Audio deepfake detection is an emerging topic, which was included in the ASVspoof 2021 challenge.
no code implementations • 28 Jan 2022 • Shuai Zhang, Jiangyan Yi, Zhengkun Tian, JianHua Tao, Yu Ting Yeung, Liqun Deng
We propose a language-related attention mechanism to reduce multilingual context confusion for the E2E code-switching ASR model based on the Equivalence Constraint (EC) Theory.
Automatic Speech Recognition (ASR)
no code implementations • 21 Jan 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
no code implementations • 22 Dec 2021 • Zite Jiang, Boxiao Liu, Shuai Zhang, Xingzhong Hou, Mengting Yuan, Haihang You
Subgraph matching is an NP-complete problem that extracts isomorphic embeddings of a query graph $q$ in a data graph $G$.
no code implementations • NeurIPS 2021 • Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan Guo Wei, Shuai Zhang
On the other hand, the extent of the Self-IRU recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling dynamic soft recursion depth at each time step.
1 code implementation • TIST 2021 • Haoyi Zhou, Hao Peng, Jieqi Peng, Shuai Zhang, JianXin Li
Extensive experiments are conducted on five large-scale datasets, which demonstrate that our method achieves state-of-the-art performance and validates the effectiveness brought by local structure information.
no code implementations • 12 Oct 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned weights in the hidden layer.
no code implementations • ICLR 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
1 code implementation • 14 Aug 2021 • Weiming Zhuang, Yonggang Wen, Shuai Zhang
We present FedUReID, a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.
1 code implementation • ICCV 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang, Shuai Yi
In this framework, each party trains models from unlabeled data independently using contrastive learning with an online network and a target network.
no code implementations • ACL 2021 • Aston Zhang, Alvin Chan, Yi Tay, Jie Fu, Shuohang Wang, Shuai Zhang, Huajie Shao, Shuochao Yao, Roy Ka-Wei Lee
Orthogonality constraints encourage matrices to be orthogonal for numerical stability.
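For concreteness, the soft form such constraints usually take in practice is the penalty ||W^T W - I||_F^2 added to the training loss (a generic regularizer, not necessarily this paper's exact formulation):

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality constraint ||W^T W - I||_F^2: zero exactly when the
    columns of W are orthonormal, so minimizing it pushes W toward an
    orthogonal matrix for numerical stability."""
    d = W.shape[1]
    return np.sum((W.T @ W - np.eye(d)) ** 2)
```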
no code implementations • NAACL 2021 • Shuai Zhang, Xi Rao, Yi Tay, Ce Zhang
To this end, this paper proposes to learn disentangled representations of KG entities - a new method that disentangles the inner latent properties of KG entities.
1 code implementation • 19 May 2021 • Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, Yueting Zhuang
We utilize a non-autoregressive decoder to predict the final set of entities in one pass, in which we are able to capture dependencies between entities.
Ranked #6 on Nested Named Entity Recognition on ACE 2005
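Training a one-pass set decoder like the one above typically pairs predicted entity slots with gold entities via bipartite matching; a minimal sketch of that step (whether this paper uses exactly Hungarian matching over this cost is an assumption of the sketch):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_slots_to_entities(cost):
    """Optimal one-to-one assignment between predicted entity slots (rows)
    and gold entities (columns); cost[i, j] is an assumed per-pair loss.
    Returns the matching and its total cost, which training minimizes."""
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()
```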
1 code implementation • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Shuai Zhang
However, these platforms are complex to use and require a deep understanding of FL, which imposes high barriers to entry for beginners, limits the productivity of researchers, and compromises deployment efficiency.
no code implementations • 17 May 2021 • Weiming Zhuang, Xin Gan, Yonggang Wen, Xuesen Zhang, Shuai Zhang, Shuai Yi
To this end, FedFR forms an end-to-end training pipeline: (1) pre-train in the source domain; (2) predict pseudo labels by clustering in the target domain; (3) conduct domain-constrained federated learning across two domains.
1 code implementation • ACL 2021 • Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, Weiming Lu
Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, ignorance of boundary information, under-utilization of the spans that partially match with entities, and difficulties in long entity recognition.
Ranked #6 on Nested Named Entity Recognition on GENIA
Chinese Named Entity Recognition • Named Entity Recognition
no code implementations • 7 Apr 2021 • Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen
It takes a lot of computation and time to predict the blank tokens, but only the non-blank tokens will appear in the final output sequence.
1 code implementation • 4 Apr 2021 • Zhengkun Tian, Jiangyan Yi, JianHua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen, Xuefei Liu
To address these two problems, we propose a new model named the two-step non-autoregressive transformer (TSNAT), which improves the performance and accelerates the convergence of the NAR model by learning prior knowledge from a parameter-sharing AR model.
3 code implementations • 17 Feb 2021 • Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu
Recent works have demonstrated reasonable success of representation learning in hypercomplex space.
no code implementations • 17 Feb 2021 • Shuai Zhang, Yi Tay, Wenqi Jiang, Da-Cheng Juan, Ce Zhang
In order for learned representations to be effective and efficient, it is ideal that the geometric inductive bias aligns well with the underlying structure of the data.
no code implementations • 15 Feb 2021 • Ye Bai, Jiangyan Yi, JianHua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang
Based on this idea, we propose a non-autoregressive speech recognition model called LASO (Listen Attentively, and Spell Once).
no code implementations • 1 Jan 2021 • Hongkang Li, Shuai Zhang, Meng Wang
Instead of following the conventional and restrictive assumption in the literature that the input features follow the standard Gaussian distribution, this paper, for the first time, analyzes the more general and practical scenario in which the input features follow a Gaussian mixture model with a finite number of components of varying means and variances.
no code implementations • 1 Jan 2021 • Yi Tay, Yikang Shen, Alvin Chan, Aston Zhang, Shuai Zhang
This paper explores an intriguing idea of recursively parameterizing recurrent nets.
no code implementations • NeurIPS 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, when the algorithm for training a sparse neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned model weights in the hidden layer.
no code implementations • ICLR 2021 • Aston Zhang, Yi Tay, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Hui, Jie Fu
Recent works have demonstrated reasonable success of representation learning in hypercomplex space.
no code implementations • 20 Dec 2020 • Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang
Massive account registration has raised concerns on risk management in e-commerce companies, especially when registration increases rapidly within a short time frame.
6 code implementations • 14 Dec 2020 • Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, JianXin Li, Hui Xiong, Wancai Zhang
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning.
Ranked #5 on Time Series Forecasting on ETTh2 (168)
1 code implementation • 24 Nov 2020 • Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Zhiyao Chen, Yinan Shan, Yang Zhao, Ce Zhang
At online retail platforms, it is crucial to actively detect the risks of transactions to improve customer experience and minimize financial loss.
3 code implementations • 11 Nov 2020 • Shuai Zhang, Huoyu Liu, Aston Zhang, Yue Hu, Ce Zhang, Yumeng Li, Tanchao Zhu, Shaojian He, Wenwu Ou
Furthermore, we present two variants of hypercuboids to enhance the capability in capturing the diversities of user interests.
no code implementations • 28 Oct 2020 • Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Ye Bai, JianHua Tao, Zhengqi Wen
In this paper, we propose a decoupled transformer model to use monolingual paired data and unpaired text data to alleviate the problem of code-switching data shortage.
Automatic Speech Recognition (ASR)
no code implementations • 28 Oct 2020 • Zhengkun Tian, Jiangyan Yi, Ye Bai, JianHua Tao, Shuai Zhang, Zhengqi Wen
Inspired by the success of two-pass end-to-end models, we introduce a transformer decoder and the two-stage inference method into the streaming CTC model.
no code implementations • 12 Oct 2020 • Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso
MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups.
1 code implementation • 3 Oct 2020 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.
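For reference, the transformed $\ell_1$ penalty in this line of work is usually defined, for a shape parameter $a > 0$, as

```latex
T\ell_1(w) = \sum_i \rho_a(w_i), \qquad \rho_a(t) = \frac{(a+1)\,|t|}{a + |t|},
```

which behaves like $\ell_0$ as $a \to 0^{+}$ and like $\ell_1$ as $a \to \infty$ (the standard $T\ell_1$ definition; the paper's exact normalization may differ).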
no code implementations • 5 Sep 2020 • Jacob Householder, Andrew Householder, John Paul Gomez-Reed, Fredrick Park, Shuai Zhang
While tests do exist for COVID-19, the goal of our research is to explore other methods of identifying infected individuals.
no code implementations • 3 Sep 2020 • Bin Huang, Yuanyang Du, Shuai Zhang, Wenfei Li, Jun Wang, Jian Zhang
RNAs play crucial and versatile roles in biological processes.
2 code implementations • 2 Sep 2020 • Shuai Zhang, Lijie Wang, Ke Sun, Xinyan Xiao
DDParser extends the graph-based biaffine parser to accommodate the characteristics of the Chinese dataset.
no code implementations • 2 Sep 2020 • Sergei V. Kalinin, Shuai Zhang, Mani Valleti, Harley Pyles, David Baker, James J. De Yoreo, Maxim Ziatdinov
The dynamics of complex ordering systems with active rotational degrees of freedom, exemplified by protein self-assembly, are explored using a machine learning workflow that combines deep-learning-based semantic segmentation with rotationally invariant variational-autoencoder analysis of orientation and shape evolution.
Soft Condensed Matter
2 code implementations • 26 Aug 2020 • Weiming Zhuang, Yonggang Wen, Xuesen Zhang, Xin Gan, Daiying Yin, Dongzhan Zhou, Shuai Zhang, Shuai Yi
Then we propose two optimization methods: (1) To address the unbalanced weight problem, we propose a new method to dynamically change the weights according to the scale of model changes in clients in each training round; (2) To facilitate convergence, we adopt knowledge distillation to refine the server model with knowledge generated from client models on a public dataset.
no code implementations • 28 Jul 2020 • Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu
Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is a core module of Transformer.
no code implementations • ICML 2020 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
no code implementations • 18 Jun 2020 • Shuai Zhang, Xiaoyan Xin, Yang Wang, Yachong Guo, Qiuqiao Hao, Xianfeng Yang, Jun Wang, Jian Zhang, Bing Zhang, Wei Wang
The model provides automated recognition of given scans and generation of reports.
no code implementations • 16 May 2020 • Zhengkun Tian, Jiangyan Yi, Jian-Hua Tao, Ye Bai, Shuai Zhang, Zhengqi Wen
To address this problem and improve the inference speed, we propose a spike-triggered non-autoregressive transformer model for end-to-end speech recognition, which introduces a CTC module to predict the length of the target sequence and accelerate the convergence.
no code implementations • 11 May 2020 • Ye Bai, Jiangyan Yi, Jian-Hua Tao, Zhengkun Tian, Zhengqi Wen, Shuai Zhang
Without beam search, the one-pass propagation greatly reduces the inference time cost of LASO.
1 code implementation • 30 Apr 2020 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
The TRP trained network inherently has a low-rank structure, and is approximated with negligible performance loss, thus eliminating the fine-tuning process after low rank decomposition.
no code implementations • 19 Feb 2020 • Shuai Zhang, Jiangyan Yi, Zhengkun Tian, Jian-Hua Tao, Ye Bai
Recently, language identity information has been utilized to improve the performance of end-to-end code-switching (CS) speech recognition.
no code implementations • 17 Dec 2019 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.
no code implementations • 6 Dec 2019 • Zhengkun Tian, Jiangyan Yi, Ye Bai, Jian-Hua Tao, Shuai Zhang, Zhengqi Wen
Once a fixed-length chunk of the input sequence is processed by the encoder, the decoder begins to predict symbols immediately.
no code implementations • 4 Dec 2019 • Ye Bai, Jiangyan Yi, Jian-Hua Tao, Zhengqi Wen, Zhengkun Tian, Shuai Zhang
To alleviate the above two issues, we propose a unified method called LST (Learn Spelling from Teachers) to integrate knowledge into an AED model from the external text-only data and leverage the whole context in a sentence.
Language Modelling • Sequence-To-Sequence Speech Recognition
no code implementations • 24 Oct 2019 • Eyasu Mequanint, Shuai Zhang, Bijan Forutanpour, Yingyong Qi, Ning Bi
To alleviate this issue, we propose a weakly-supervised method which utilizes the accurate annotation from the synthetic data set, to learn accurate degree of eye openness, and the weakly labeled (open or closed) real world eye data set to control the domain shift.
no code implementations • 10 Oct 2019 • Xupeng Miao, Nezihe Merve Gürel, Wentao Zhang, Zhichao Han, Bo Li, Wei Min, Xi Rao, Hansheng Ren, Yinan Shan, Yingxia Shao, Yujie Wang, Fan Wu, Hui Xue, Yaming Yang, Zitao Zhang, Yang Zhao, Shuai Zhang, Yujing Wang, Bin Cui, Ce Zhang
Despite the wide application of Graph Convolutional Network (GCN), one major limitation is that it does not benefit from the increasing depth and suffers from the oversmoothing problem.
1 code implementation • 9 Oct 2019 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
no code implementations • 25 Sep 2019 • Yi Tay, Aston Zhang, Shuai Zhang, Alvin Chan, Luu Anh Tuan, Siu Cheung Hui
We propose R2D2 layers, a new neural block for training efficient NLP models.
1 code implementation • AAAI 2019 • Yi Tay, Shuai Zhang, Anh Tuan Luu, Siu Cheung Hui, Lina Yao, Tran Dang Quang Vinh
Factorization Machines (FMs) are a class of popular algorithms that have been widely adopted for collaborative filtering and recommendation tasks.
1 code implementation • NeurIPS 2019 • Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou
In this paper, based on the ideas of tensor decomposition and parameters sharing, we propose a novel self-attention model (namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD).
1 code implementation • 21 Jun 2019 • Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin
Training neural networks with back-propagation (BP) requires a sequential passing of activations and gradients, which forces the network modules to work in a synchronous fashion.
1 code implementation • ACL 2019 • Yi Tay, Aston Zhang, Luu Anh Tuan, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, Siu Cheung Hui
Many state-of-the-art neural models for NLP are heavily parameterized and thus memory inefficient.
no code implementations • 6 Jun 2019 • Shuai Zhang, Lina Yao, Lucas Vinh Tran, Aston Zhang, Yi Tay
All in all, we conduct extensive experiments on six real-world datasets, demonstrating the effectiveness of Quaternion algebra in recommender systems.
4 code implementations • 25 May 2019 • Shuai Zhang, Yi Tay, Lina Yao, Bin Wu, Aixin Sun
In this toolkit, we have implemented a number of deep learning based recommendation algorithms using Python and the widely used deep learning package - Tensorflow.
1 code implementation • NeurIPS 2019 • Shuai Zhang, Yi Tay, Lina Yao, Qi Liu
In this work, we move beyond the traditional complex-valued representations, introducing more expressive hypercomplex representations to model entities and relations for knowledge graph embeddings.
Ranked #4 on Link Prediction on FB15k
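The algebra underlying such hypercomplex embeddings is the quaternion Hamilton product; a minimal sketch (standard quaternion arithmetic, not the paper's full scoring function):

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) and q = (e, f, g, h),
    i.e. (a + bi + cj + dk)(e + fi + gj + hk). Its non-commutativity is what
    gives quaternion embeddings their extra expressiveness."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a*e - b*f - c*g - d*h,    # real part
        a*f + b*e + c*h - d*g,    # i
        a*g - b*h + c*e + d*f,    # j
        a*h + b*g - c*f + d*e,    # k
    ])
```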
no code implementations • ICLR 2019 • Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin
We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.
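A minimal sketch of one common straight-through estimator for 1-bit weights, the kind of coarse-gradient choice the analysis concerns (this clipped-identity proxy is one choice among those the paper compares, named here as an assumption):

```python
import numpy as np

def quantized_forward(w):
    """Forward pass uses the quantized (here 1-bit, sign) weights."""
    return np.sign(w)

def coarse_grad(grad_wrt_quantized, w, clip=1.0):
    """Straight-through estimator: backpropagate through sign() as if it were
    the identity, zeroed where |w| > clip. This 'coarse gradient' is the
    object whose correlation with the population gradient the paper studies."""
    return grad_wrt_quantized * (np.abs(w) <= clip)
```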
no code implementations • 24 Jan 2019 • Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.
1 code implementation • 20 Dec 2018 • Xin Li, Shuai Zhang, Bolan Jiang, Yingyong Qi, Mooi Choo Chuah, Ning Bi
A complex deep learning model with high accuracy runs slowly on resource-limited devices, while a light-weight model that runs much faster loses accuracy.
1 code implementation • 6 Dec 2018 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
We propose Trained Rank Pruning (TRP), which iterates low rank approximation and training.
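The projection half of that iteration can be sketched as a truncated SVD applied to each weight matrix between training steps (a minimal sketch; the full alternating training procedure is omitted):

```python
import numpy as np

def rank_truncate(W, rank):
    """Best rank-`rank` approximation of W (in Frobenius norm) via SVD.
    Applying this periodically during training keeps the learned weights
    close to low-rank, so the final decomposition loses little accuracy."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```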
no code implementations • 6 Dec 2018 • Yuhui Xu, Shuai Zhang, Yingyong Qi, Jiaxian Guo, Weiyao Lin, Hongkai Xiong
Network quantization is an effective method for the deployment of neural networks on memory and energy constrained mobile devices.
no code implementations • 5 Sep 2018 • Lucas Vinh Tran, Yi Tay, Shuai Zhang, Gao Cong, Xiao-Li Li
This paper investigates the notion of learning user and item representations in non-Euclidean space.
Ranked #1 on Recommendation Systems on MovieLens 20M (HR@10 metric)
no code implementations • 20 Aug 2018 • Shuai Zhang, Yi Tay, Lina Yao, Aixin Sun
In this paper, we propose a novel sequence-aware recommendation model.
no code implementations • 15 Aug 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.
no code implementations • 21 Jun 2018 • Manqing Dong, Lina Yao, Xianzhi Wang, Boualem Benatallah, Shuai Zhang
We develop a gradient boost module and embed it into the proposed convolutional autoencoder with neural decision forest to improve the performance.
no code implementations • 17 Jun 2018 • Yi Tay, Shuai Zhang, Luu Anh Tuan, Siu Cheung Hui
This paper has been withdrawn as we discovered a bug in our tensorflow implementation that involved accidental mixing of vectors across batches.
no code implementations • 8 May 2018 • Shuai Zhang, Lina Yao, Aixin Sun, Sen Wang, Guodong Long, Manqing Dong
Modeling user-item interaction patterns is an important task for personalized recommendations.
2 code implementations • 13 Feb 2018 • Shuai Zhang, Lina Yao, Yi Tay, Xiwei Xu, Xiang Zhang, Liming Zhu
In the past decade, matrix factorization has been extensively researched and has become one of the most popular techniques for personalized recommendations.
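As background for the entry above, one SGD step of classic matrix factorization, which models a rating as a dot product of user and item factors (the textbook technique, with illustrative names):

```python
import numpy as np

def mf_sgd_step(U, V, i, j, r_ij, lr=0.01, reg=0.1):
    """One stochastic update for matrix factorization with L2 regularization:
    approximate the observed rating r_ij by U[i] @ V[j]."""
    u_i = U[i].copy()                  # cache so both updates use the old values
    err = r_ij - u_i @ V[j]
    U[i] += lr * (err * V[j] - reg * u_i)
    V[j] += lr * (err * u_i - reg * V[j])
    return err
```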
2 code implementations • 19 Jan 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.
no code implementations • 25 Nov 2017 • Shuai Zhang, Jian-Xin Li, Pengtao Xie, Yingchun Zhang, Minglai Shao, Haoyi Zhou, Mengyi Yan
Similar to DNNs, a SKN is composed of multiple layers of hidden units, but each parameterized by an RKHS function rather than a finite-dimensional vector.
8 code implementations • 24 Jul 2017 • Shuai Zhang, Lina Yao, Aixin Sun, Yi Tay
This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems.
no code implementations • 19 Dec 2016 • Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin
We present LBW-Net, an efficient optimization based method for quantization and training of the low bit-width convolutional neural networks (CNNs).
no code implementations • 28 Mar 2015 • Zhanbin Bai, Rui Wang, Yazhou Zhou, Tianru Wu, Jianlei Ge, Jing Li, Yuyuan Qin, Fucong Fei, Lu Cao, Xuefeng Wang, Xinran Wang, Shuai Zhang, Liling Sun, You Song, Fengqi Song
In efforts to enhance the spin-orbit interaction (SOI) of graphene in pursuit of dissipationless quantum spin Hall devices, a unique Kane-Mele-type SOI and high-mobility samples are desired.
Mesoscale and Nanoscale Physics