1 code implementation • 25 May 2023 • Yuejiao Fei, Leyang Cui, Sen Yang, Wai Lam, Zhenzhong Lan, Shuming Shi
Grammatical error correction systems improve written communication by detecting and correcting language mistakes.
1 code implementation • 24 May 2023 • Junlei Zhang, Zhenzhong Lan, Junxian He
Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings.
1 code implementation • 12 May 2023 • Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang
Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performance in learning unsupervised sentence embeddings.
1 code implementation • 30 Apr 2023 • Huachuan Qiu, Hongliang He, Shuai Zhang, Anqi Li, Zhenzhong Lan
There has been increasing research interest in developing specialized dialogue systems that can offer mental health support.
no code implementations • 7 Mar 2022 • Anqi Li, Jingsong Ma, Lizhi Ma, Pengfei Fang, Hongliang He, Zhenzhong Lan
However, these methods often demand large-scale, high-quality counseling data, which are difficult to collect.
no code implementations • 23 Nov 2021 • Junlei Zhang, Zhenzhong Lan
The corresponding outputs, two sentence embeddings derived from the same sentence with different dropout masks, can be used to build a positive pair.
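The line above describes the core trick concisely; as a concrete illustration, here is a minimal PyTorch sketch of dropout-based positive pairs, assuming a Hugging Face BERT encoder. The model name, [CLS]-token pooling, and the 0.05 temperature are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch of dropout-as-augmentation: running the same batch through the
# encoder twice with dropout active yields two different embeddings per
# sentence, which form a positive pair for contrastive learning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.train()  # keep dropout enabled so the two passes differ

sentences = ["A man is playing guitar.", "The weather is nice today."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

z1 = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, pass 1
z2 = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, pass 2

# InfoNCE objective: (z1[i], z2[i]) is a positive pair; the other
# in-batch sentences act as negatives. 0.05 is a common temperature.
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
loss = F.cross_entropy(sim, torch.arange(len(sentences)))
loss.backward()
```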
3 code implementations • NeurIPS 2021 • Mingjian Zhu, Kai Han, Enhua Wu, Qiulin Zhang, Ying Nie, Zhenzhong Lan, Yunhe Wang
To this end, we propose a novel dynamic-resolution network (DRNet) in which the input resolution is determined dynamically based on each input sample.
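The sketch below illustrates the per-sample dynamic-resolution idea, not DRNet's actual implementation: a small, hypothetical router predicts a resolution for each image, and the image is resized before entering the backbone. Module names, candidate resolutions, and the argmax routing are all assumptions (a real system would need a differentiable or RL-style router).

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Illustrative per-sample dynamic resolution: a cheap router predicts a
# resolution index for each image; the image is resized accordingly and
# only then passed to the recognition backbone.
resolutions = [96, 128, 224]
router = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(8),
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 8 * 8, len(resolutions)),
)
backbone = models.resnet18(num_classes=10)

x = torch.randn(4, 3, 224, 224)
choice = router(x).argmax(dim=-1)  # hard routing; not differentiable as-is
logits = []
for img, idx in zip(x, choice):
    r = resolutions[idx]
    img = F.interpolate(img.unsqueeze(0), size=(r, r),
                        mode="bilinear", align_corners=False)
    logits.append(backbone(img))
logits = torch.cat(logits)  # (4, 10), one prediction per sample
```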
1 code implementation • 2 Jun 2021 • Chiyu Song, Hongliang He, Haofei Yu, Pengfei Fang, Leyang Cui, Zhenzhong Lan
The current state-of-the-art ranking methods mainly use an encoding paradigm called Cross-Encoder, which separately encodes each context-candidate pair and ranks the candidates according to their fitness scores.
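As a schematic of the Cross-Encoder paradigm described above, the sketch below scores each (context, candidate) pair with a BERT-style sequence classifier and sorts candidates by score. The model name and example texts are placeholders, and the single-score head would have to be fine-tuned on ranking data before its scores mean anything.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Cross-Encoder ranking: every (context, candidate) pair is encoded
# in its own forward pass, yielding one fitness score per pair.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one scalar score per pair
)
model.eval()

context = "How do I reset my password?"
candidates = [
    "Click 'Forgot password' on the login page.",
    "The weather is nice today.",
    "Contact support if the reset email never arrives.",
]

with torch.no_grad():
    batch = tokenizer([context] * len(candidates), candidates,
                      padding=True, truncation=True, return_tensors="pt")
    scores = model(**batch).logits.squeeze(-1)

ranked = [candidates[i] for i in scores.argsort(descending=True).tolist()]
```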
1 code implementation • EMNLP 2021 • Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel
The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption.
no code implementations • 14 Oct 2020 • Qingyang Wu, Zhenzhong Lan, Kun Qian, Jing Gu, Alborz Geramifard, Zhou Yu
Transformers have reached remarkable success in sequence modeling.
no code implementations • 29 Sep 2020 • Nan Ding, Xinjie Fan, Zhenzhong Lan, Dale Schuurmans, Radu Soricut
Models based on the Transformer architecture have achieved better accuracy than those based on competing architectures for a large set of tasks.
3 code implementations • COLING 2020 • Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, Zhenzhong Lan
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks.
4 code implementations • 5 Mar 2020 • Noam Shazeer, Zhenzhong Lan, Youlong Cheng, Nan Ding, Le Hou
We introduce "talking-heads attention" - a variation on multi-head attention which includes linearprojections across the attention-heads dimension, immediately before and after the softmax operation. While inserting only a small number of additional parameters and a moderate amount of additionalcomputation, talking-heads attention leads to better perplexities on masked language modeling tasks, aswell as better quality when transfer-learning to language comprehension and question answering tasks.
44 code implementations • ICLR 2020 • Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.
Ranked #1 on Natural Language Inference on QNLI
no code implementations • 23 Sep 2019 • Sebastian Goodman, Zhenzhong Lan, Radu Soricut
Neural models for abstractive summarization tend to achieve the best performance in the presence of highly specialized, summarization-specific modeling add-ons such as pointer-generator, coverage modeling, and inference-time heuristics.
no code implementations • 5 Jul 2017 • Po-Yao Huang, Ye Yuan, Zhenzhong Lan, Lu Jiang, Alexander G. Hauptmann
We report on CMU Informedia Lab's system used in Google's YouTube-8M Video Understanding Challenge.
3 code implementations • 2 Apr 2017 • Yi Zhu, Zhenzhong Lan, Shawn Newsam, Alexander G. Hauptmann
State-of-the-art action recognition approaches rely on traditional optical flow estimation methods to pre-compute motion information for CNNs.
Ranked #18 on Action Recognition on UCF101
no code implementations • 8 Feb 2017 • Yi Zhu, Zhenzhong Lan, Shawn Newsam, Alexander G. Hauptmann
We study the unsupervised learning of CNNs for optical flow estimation using proxy ground truth data.
no code implementations • 25 Jan 2017 • Zhenzhong Lan, Yi Zhu, Alexander G. Hauptmann
We investigate the problem of representing an entire video using CNN features for human action recognition.
no code implementations • 17 Jun 2016 • Shoou-I Yu, Yi Yang, Zhongwen Xu, Shicheng Xu, Deyu Meng, Zexi Mao, Zhigang Ma, Ming Lin, Xuanchong Li, Huan Li, Zhenzhong Lan, Lu Jiang, Alexander G. Hauptmann, Chuang Gan, Xingzhong Du, Xiaojun Chang
The large number of user-generated videos uploaded to the Internet every day has led to many commercial video search engines, which mainly rely on text metadata for search.
no code implementations • 11 Dec 2015 • Zhenzhong Lan, Shoou-I Yu, Alexander G. Hauptmann
We propose two well-motivated ranking-based methods to enhance the performance of current state-of-the-art human activity recognition systems.
no code implementations • 16 Nov 2015 • Zhenzhong Lan, Shoou-I Yu, Ming Lin, Bhiksha Raj, Alexander G. Hauptmann
We approach this problem by first showing that local handcrafted features and Convolutional Neural Networks (CNNs) share the same convolution-pooling network structure.
no code implementations • 15 Oct 2015 • Zhenzhong Lan, Alexander G. Hauptmann
We address the problem of generating video features for action recognition.
no code implementations • 17 May 2015 • Zhenzhong Lan, Dezhong Yao, Ming Lin, Shoou-I Yu, Alexander Hauptmann
First, we propose a two-stream Stacked Convolutional Independent Subspace Analysis (ConvISA) architecture to show that unsupervised learning methods can significantly boost the performance of traditional local features extracted from data-independent models.
1 code implementation • 12 Mar 2015 • Zhiyong Cheng, Daniel Soudry, Zexi Mao, Zhenzhong Lan
In this paper, we investigate the capability of Binary Multilayer Neural Networks (BMNNs) trained with the Expectation BackPropagation (EBP) algorithm on multiclass image classification tasks.
no code implementations • 13 Feb 2015 • Zhenzhong Lan, Xuanchong Li, Ming Lin, Alexander G. Hauptmann
Therefore, they need to occur frequently enough in the videos and to be able to distinguish different types of motion.
no code implementations • NeurIPS 2014 • Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, Alexander Hauptmann
Self-paced learning (SPL) is a recently proposed learning regime, inspired by the learning process of humans and animals, that gradually incorporates samples into training from easy to more complex.
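The classic hard-weight formulation makes this concrete: alternate between selecting samples whose current loss falls below a pace threshold and training only on those, then raise the threshold so harder samples enter later. The model, data, and growth factor below are placeholders.

```python
import torch
import torch.nn.functional as F

# Sketch of hard self-paced learning: v_i = 1 if loss_i < lambda else 0,
# train on the selected (easy) samples, then grow lambda each epoch so
# progressively harder samples are admitted into training.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(256, 10), torch.randn(256, 1)

lam, mu = 0.5, 1.3  # initial pace threshold and its per-epoch growth
for epoch in range(20):
    losses = F.mse_loss(model(X), y, reduction="none").squeeze(-1)
    v = (losses < lam).float()      # binary sample weights (the "pace")
    if v.sum() > 0:
        opt.zero_grad()
        ((v * losses).sum() / v.sum()).backward()
        opt.step()
    lam *= mu                       # admit harder samples next epoch
```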
no code implementations • CVPR 2015 • Zhenzhong Lan, Ming Lin, Xuanchong Li, Alexander G. Hauptmann, Bhiksha Raj
Multi-skIp Feature Stacking (MIFS) compensates for information lost from using differential operators by recapturing information at coarse scales.
no code implementations • 29 Aug 2014 • Zhenzhong Lan, Xuanchong Li, Alexander G. Hauptmann
To achieve temporal scale invariance, we develop a method called temporal scale pyramid (TSP).