Search Results for author: Dejiao Zhang

Found 22 papers, 10 papers with code

Iterative Grassmannian Optimization for Robust Image Alignment

no code implementations • 3 Jun 2013 • Jun He, Dejiao Zhang, Laura Balzano, Tao Tao

t-GRASTA iteratively performs incremental gradient descent constrained to the Grassmann manifold of subspaces in order to simultaneously estimate three components of a collection of images: a low-rank subspace, a sparse part capturing occlusions and foreground objects, and a transformation such as rotation or translation of each image.

Face Recognition

Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation

no code implementations • 24 Jun 2015 • Dejiao Zhang, Laura Balzano

It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex.

Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation From Undersampled Data

no code implementations • 1 Oct 2016 • Dejiao Zhang, Laura Balzano

We study two sampling cases: one where each data vector of the streaming matrix is fully sampled, and one where it is undersampled by a sampling matrix $A_t\in \mathbb{R}^{m\times n}$ with $m\ll n$.

Deep Unsupervised Clustering Using Mixture of Autoencoders

1 code implementation • 21 Dec 2017 • Dejiao Zhang, Yifan Sun, Brian Eriksson, Laura Balzano

Unsupervised clustering is one of the most fundamental challenges in machine learning.

Clustering

Learning to Share: Simultaneous Parameter Tying and Sparsification in Deep Learning

1 code implementation • ICLR 2018 • Dejiao Zhang, Haozhu Wang, Mario Figueiredo, Laura Balzano

The heavy over-parameterization of deep neural networks has motivated a large body of work on reducing model complexity with sparsity-inducing regularizers.

Information Maximization Auto-Encoding

no code implementations • ICLR 2019 • Dejiao Zhang, Tianchen Zhao, Laura Balzano

Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid discrete and continuous representation with the objective of maximizing the mutual information between the data and their representations.

Disentanglement • Informativeness +1

Pairwise Supervised Contrastive Learning of Sentence Representations

1 code implementation • EMNLP 2021 • Dejiao Zhang, Shang-Wen Li, Wei Xiao, Henghui Zhu, Ramesh Nallapati, Andrew O. Arnold, Bing Xiang

Many recent successes in sentence representation learning have been achieved by simply fine-tuning on the Natural Language Inference (NLI) datasets with triplet loss or siamese loss.

Contrastive Learning • Natural Language Inference +4

Virtual Augmentation Supported Contrastive Learning of Sentence Representations

1 code implementation • Findings (ACL) 2022 • Dejiao Zhang, Wei Xiao, Henghui Zhu, Xiaofei Ma, Andrew O. Arnold

We then define an instance discrimination task over this neighborhood and generate the virtual augmentations via adversarial training.

Contrastive Learning • Data Augmentation +2

Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey

no code implementations • 16 Oct 2021 • Xiaokai Wei, Shen Wang, Dejiao Zhang, Parminder Bhatia, Andrew Arnold

This new paradigm has revolutionized the entire field of natural language processing and set new state-of-the-art performance on a wide variety of NLP tasks.

QaNER: Prompting Question Answering Models for Few-shot Named Entity Recognition

1 code implementation • 3 Mar 2022 • Andy T. Liu, Wei Xiao, Henghui Zhu, Dejiao Zhang, Shang-Wen Li, Andrew Arnold

Recently, prompt-based learning for pre-trained language models has succeeded in few-shot Named Entity Recognition (NER) by exploiting prompts as task guidance to increase label efficiency.

Few-shot NER • Named Entity Recognition +2

Learning Dialogue Representations from Consecutive Utterances

1 code implementation • NAACL 2022 • Zhihan Zhou, Dejiao Zhang, Wei Xiao, Nicholas Dingwall, Xiaofei Ma, Andrew O. Arnold, Bing Xiang

In this paper, we introduce Dialogue Sentence Embedding (DSE), a self-supervised contrastive learning method that learns effective dialogue representations suitable for a wide range of dialogue tasks.

Contrastive Learning • Conversational Question Answering +14

Code Representation Learning At Scale

no code implementations • 2 Feb 2024 • Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati, Dan Roth, Xiaofei Ma, Bing Xiang

Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, e.g., code generation.

Code Generation • Contrastive Learning +3

Repoformer: Selective Retrieval for Repository-Level Code Completion

no code implementations • 15 Mar 2024 • Di Wu, Wasi Uddin Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma

Recent advances in retrieval-augmented generation (RAG) have initiated a new era in repository-level code completion.

Code Completion • Retrieval +1
