Search Results for author: Zhili Feng

Found 11 papers, 1 paper with code

Learning-Augmented $k$-means Clustering

no code implementations • 27 Oct 2021 • Jon Ergun, Zhili Feng, Sandeep Silwal, David P. Woodruff, Samson Zhou

$k$-means clustering is a well-studied problem due to its wide applicability.

Non-PSD Matrix Sketching with Applications to Regression and Optimization

no code implementations • 16 Jun 2021 • Zhili Feng, Fred Roosta, David P. Woodruff

In this paper, we present novel dimensionality reduction methods for non-PSD matrices, as well as their "square-roots", which involve matrices with complex entries.

Dimensionality Reduction

Provable Adaptation across Multiway Domains via Representation Learning

no code implementations • 12 Jun 2021 • Zhili Feng, Shaobo Han, Simon S. Du

This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array, and we only have data from a small subset of domains.

Domain Adaptation • Representation Learning

On the Neural Tangent Kernel of Equilibrium Models

no code implementations • 1 Jan 2021 • Zhili Feng, J Zico Kolter

Existing analyses of the neural tangent kernel (NTK) for infinite-depth networks show that the kernel typically becomes degenerate as the number of layers grows.

Joint Reasoning for Temporal and Causal Relations

no code implementations • ACL 2018 • Qiang Ning, Zhili Feng, Hao Wu, Dan Roth

Understanding temporal and causal relations between events is a fundamental natural language understanding task.

Language understanding • Natural Language Understanding

CogCompTime: A Tool for Understanding Time in Natural Language Text

no code implementations • 12 Jun 2019 • Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth

Automatic extraction of temporal information in text is an important component of natural language understanding.

Language understanding • Natural Language Understanding

Does Data Augmentation Lead to Positive Margin?

no code implementations • 8 May 2019 • Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos

Data augmentation (DA) is commonly used during model training, as it significantly reduces test error and improves model robustness.

Data Augmentation

Online learning with graph-structured feedback against adaptive adversaries

no code implementations • 1 Apr 2018 • Zhili Feng, Po-Ling Loh

When the adversary is allowed a bounded memory of size 1, we show that a matching lower bound of $\widetilde\Omega(T^{2/3})$ is achieved in the case of full-information feedback.
