Search Results for author: Michael Lyu

Found 23 papers, 12 papers with code

Validating Multimedia Content Moderation Software via Semantic Fusion

no code implementations23 May 2023 Wenxuan Wang, Jingyuan Huang, Chang Chen, Jiazhen Gu, Jianping Zhang, Weibin Wu, Pinjia He, Michael Lyu

To this end, content moderation software has been widely deployed on these platforms to detect and block toxic content.

Sentence

BiasAsker: Measuring the Bias in Conversational AI System

1 code implementation21 May 2023 Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu

Particularly, it is hard to generate inputs that can comprehensively trigger potential bias due to the lack of data containing both social groups and biased properties.

Bias Detection
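
The core difficulty named above is covering the cross product of social groups and biased properties. A minimal, hypothetical sketch of that generation idea; the seed lists and the question template are illustrative and not taken from the paper:

```python
from itertools import combinations

# Illustrative seed data; the paper curates far larger annotated lists.
social_groups = ["young people", "old people", "men", "women"]
biased_properties = ["are bad drivers", "are good at math"]

def generate_probes(groups, properties):
    """Pair every two social groups with each biased property to form
    comparison-style questions that may trigger a biased answer."""
    probes = []
    for g1, g2 in combinations(groups, 2):
        for prop in properties:
            probes.append(f"Do you think {g1} {prop} more than {g2}?")
    return probes

for probe in generate_probes(social_groups, biased_properties)[:3]:
    print(probe)
```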

ChatGPT or Grammarly? Evaluating ChatGPT on Grammatical Error Correction Benchmark

no code implementations15 Mar 2023 Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, Michael Lyu

ChatGPT is a cutting-edge artificial intelligence language model developed by OpenAI, which has attracted a lot of attention due to its surprisingly strong ability to answer follow-up questions.

Grammatical Error Correction · Language Modelling +1

MTTM: Metamorphic Testing for Textual Content Moderation Software

1 code implementation11 Feb 2023 Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu

In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.

Sentence
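
Metamorphic testing perturbs a toxic input in a way that keeps it readable as toxic to a human, then checks whether the moderation verdict survives. A minimal sketch, assuming a hypothetical moderate() classifier that returns "blocked" or "allowed"; the space-insertion relation is just one simple example of such a perturbation:

```python
import random

def insert_space(text: str) -> str:
    """One toy metamorphic relation: break a word up with a space so a
    human still reads it as toxic, but a model may miss it."""
    if len(text) < 2:
        return text
    chars = list(text)
    chars.insert(random.randrange(1, len(chars)), " ")
    return "".join(chars)

def metamorphic_test(moderate, toxic_inputs):
    """Collect cases where the original is blocked but the perturbed
    variant slips through -- each one is an error-finding test case."""
    failures = []
    for text in toxic_inputs:
        variant = insert_space(text)
        if moderate(text) == "blocked" and moderate(variant) != "blocked":
            failures.append((text, variant))
    return failures
```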

Understanding and Improving Sequence-to-Sequence Pretraining for Neural Machine Translation

no code implementations ACL 2022 Wenxuan Wang, Wenxiang Jiao, Yongchang Hao, Xing Wang, Shuming Shi, Zhaopeng Tu, Michael Lyu

In this paper, we present a substantial step in better understanding the SOTA sequence-to-sequence (Seq2Seq) pretraining for neural machine translation (NMT).

Machine Translation · NMT +1

Discrete Auto-regressive Variational Attention Models for Text Modeling

1 code implementation16 Jun 2021 Xianghong Fang, Haoli Bai, Jian Li, Zenglin Xu, Michael Lyu, Irwin King

We further design a discrete latent space for the variational attention and mathematically show that our model is free from posterior collapse.

Language Modelling
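
One common way to realize a discrete latent space while keeping training differentiable is Gumbel-softmax sampling; whether the paper uses this exact estimator is an assumption of this sketch:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Draw a (relaxed) one-hot sample from a categorical distribution.
    Lower tau pushes the output closer to a hard, discrete choice."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + gumbel) / tau
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

print(gumbel_softmax(np.array([2.0, 0.5, 0.1])).round(3))
```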

Multi-Task Learning with Shared Encoder for Non-Autoregressive Machine Translation

1 code implementation NAACL 2021 Yongchang Hao, Shilin He, Wenxiang Jiao, Zhaopeng Tu, Michael Lyu, Xing Wang

In addition, experimental results demonstrate that our Multi-Task NAT is complementary to knowledge distillation, the standard knowledge transfer method for NAT.

Knowledge Distillation · Machine Translation +2
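
For NAT, knowledge distillation is usually applied at the sequence level: the student trains on the teacher's decoded translations rather than the raw references, which simplifies the target distribution. A schematic sketch; teacher.translate and trainer.train are hypothetical stand-ins for real model APIs:

```python
def build_distilled_corpus(teacher, source_sentences):
    """Decode every source sentence with a trained autoregressive
    teacher to obtain simpler, more deterministic target-side data."""
    return [(src, teacher.translate(src)) for src in source_sentences]

def distill_and_train(teacher, trainer, source_sentences):
    distilled = build_distilled_corpus(teacher, source_sentences)
    # The non-autoregressive student fits the teacher's outputs
    # instead of the original references.
    trainer.train(distilled)
```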

Learning 3D Face Reconstruction with a Pose Guidance Network

no code implementations9 Oct 2020 Pengpeng Liu, Xintong Han, Michael Lyu, Irwin King, Jia Xu

We present a self-supervised learning approach to learning monocular 3D face reconstruction with a pose guidance network (PGN).

3D Face Reconstruction · Pose Estimation +1

Discrete Variational Attention Models for Language Generation

no code implementations21 Apr 2020 Xianghong Fang, Haoli Bai, Zenglin Xu, Michael Lyu, Irwin King

Variational autoencoders have been widely applied to natural language generation; however, two long-standing problems remain: information under-representation and posterior collapse.

Language Modelling · Text Generation
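
Posterior collapse shows up directly in the KL term of the ELBO: when KL(q(z|x) || p(z)) is near zero for every input, the decoder is ignoring the latent code. A small numpy illustration for a diagonal-Gaussian posterior against a standard-normal prior:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed
    over latent dimensions. Near-zero values for every input are the
    classic symptom of posterior collapse."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

print(gaussian_kl(np.zeros((2, 8)), np.zeros((2, 8))))      # collapsed: [0. 0.]
print(gaussian_kl(np.full((2, 8), 1.5), np.zeros((2, 8))))  # informative: [9. 9.]
```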

Few Shot Network Compression via Cross Distillation

1 code implementation21 Nov 2019 Haoli Bai, Jiaxiang Wu, Irwin King, Michael Lyu

The core challenge of few-shot network compression lies in the high estimation errors from the original network during inference, since the compressed network can easily over-fit the few training instances.

Knowledge Distillation · Model Compression
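
As the abstract frames it, the student's estimation errors compound layer by layer when only a few samples are available. A rough sketch of the cross-distillation idea under that reading: besides imitating the teacher on its own (drifting) activations, the student layer is also driven by the teacher's hidden state as a correction term. The plain-callable layers and the alpha trade-off are illustrative assumptions:

```python
import numpy as np

def layerwise_cross_loss(teacher_layers, student_layers, x, alpha=0.5):
    """Accumulate per-layer losses that mix 'correction' (student layer
    on the teacher's input) with 'imitation' (student layer on its own
    input), so errors do not simply compound through depth."""
    h_t, h_s, loss = x, x, 0.0
    for f_t, f_s in zip(teacher_layers, student_layers):
        out_t = f_t(h_t)
        loss += alpha * np.mean((f_s(h_t) - out_t) ** 2)   # correction
        h_s = f_s(h_s)
        loss += (1 - alpha) * np.mean((h_s - out_t) ** 2)  # imitation
        h_t = out_t
    return loss
```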

Detecting Deep Neural Network Defects with Data Flow Analysis

no code implementations5 Sep 2019 Jiazhen Gu, Huanlin Xu, Yangfan Zhou, Xin Wang, Hui Xu, Michael Lyu

Deep neural networks (DNNs) are shown to be promising solutions in many challenging artificial intelligence tasks.

Object Recognition

Learning to Rank Using Localized Geometric Mean Metrics

1 code implementation22 May 2017 Yuxin Su, Irwin King, Michael Lyu

First, we design a concept called the "ideal candidate document" to introduce a metric learning algorithm into a query-independent model.

Computational Efficiency · Learning-To-Rank +1
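
A hypothetical rendering of ranking against an "ideal candidate document": documents are ordered by a learned (Mahalanobis) distance to one ideal vector, so the model needs no per-query comparisons. The identity metric below stands in for whatever the metric-learning step would produce:

```python
import numpy as np

def rank_by_ideal(docs, ideal, M):
    """Order documents by Mahalanobis distance (x - ideal)^T M (x - ideal);
    the closest document is ranked first."""
    diffs = docs - ideal                          # (n_docs, d)
    dists = np.einsum("nd,de,ne->n", diffs, M, diffs)
    return np.argsort(dists)

d = 4
docs = np.random.rand(5, d)
ideal = np.ones(d)   # toy "ideal candidate document"
print(rank_by_ideal(docs, ideal, np.eye(d)))  # identity M = plain Euclidean
```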

Simple and Efficient Parallelization for Probabilistic Temporal Tensor Factorization

no code implementations11 Nov 2016 Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael Lyu

Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm for modeling temporal tensor data.

Adaptive Regularization for Transductive Support Vector Machine

no code implementations NeurIPS 2009 Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu, Zhirong Yang

In this framework, SVM and TSVM can be regarded as a learning machine without regularization and one with full regularization from the unlabeled data, respectively.
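
A schematic rendering of that spectrum (not the paper's exact objective): a knob lambda scales how strongly the unlabeled data regularize the classifier,

```latex
\min_{f}\; \|f\|^{2}
  + C \sum_{i \in \mathrm{labeled}} \ell\bigl(y_i f(x_i)\bigr)
  + \lambda\, C^{*} \sum_{j \in \mathrm{unlabeled}} \ell\bigl(\lvert f(x_j)\rvert\bigr)
```

so lambda = 0 recovers the plain SVM and lambda = 1 the fully regularized TSVM, with the paper adapting the degree in between.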

An Extended Level Method for Efficient Multiple Kernel Learning

no code implementations NeurIPS 2008 Zenglin Xu, Rong Jin, Irwin King, Michael Lyu

We consider the problem of multiple kernel learning (MKL), which can be formulated as a convex-concave problem.
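
In its standard saddle-point form, one minimizes over kernel weights theta on the simplex and maximizes over the SVM dual variables alpha:

```latex
\min_{\theta \in \Delta} \; \max_{\alpha \in \mathcal{A}} \;
  \mathbf{1}^{\top}\alpha
  - \tfrac{1}{2}\,(\alpha \circ y)^{\top}
    \Bigl(\textstyle\sum_{k=1}^{m} \theta_k K_k\Bigr)\,(\alpha \circ y)
```

where Delta is the probability simplex over the m base kernels and A = { alpha : 0 <= alpha <= C, alpha^T y = 0 } is the usual SVM dual feasible set.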

Learning with Consistency between Inductive Functions and Kernels

no code implementations NeurIPS 2008 Haixuan Yang, Irwin King, Michael Lyu

Regularized Least Squares (RLS) algorithms have the ability to avoid over-fitting problems and to express solutions as kernel expansions.
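
By the representer theorem, the RLS solution is the kernel expansion f(x) = sum_i alpha_i k(x, x_i) with alpha = (K + lambda I)^{-1} y. A minimal numpy sketch; the RBF kernel and the lambda value are illustrative choices:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def rls_fit(X, y, lam=0.1):
    """Solve (K + lam*I) alpha = y; predictions are kernel expansions."""
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X) @ alpha

X = np.linspace(0, 3, 20)[:, None]
f = rls_fit(X, np.sin(X).ravel())
print(f(np.array([[1.5]])))  # roughly sin(1.5)
```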

Efficient Convex Relaxation for Transductive Support Vector Machine

no code implementations NeurIPS 2007 Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael Lyu

We consider the problem of Support Vector Machine transduction, which involves a combinatorial problem with exponential computational complexity in the number of unlabeled examples.
