Search Results for author: Zhili Feng

Found 17 papers, 1 paper with code

Online learning with graph-structured feedback against adaptive adversaries

no code implementations 1 Apr 2018 Zhili Feng, Po-Ling Loh

When the adversary is allowed a bounded memory of size 1, we show that a matching lower bound of $\widetilde\Omega(T^{2/3})$ is achieved in the case of full-information feedback.
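Here "regret" should be read as policy regret against a memory-bounded adaptive adversary; a hedged statement of the standard bounded-memory definition (our paraphrase, not text from the paper): for an adversary with memory $m$, a learner playing $x_1, \ldots, x_T$ incurs

$R_T = \sum_{t=m+1}^{T} f_t(x_{t-m}, \ldots, x_t) - \min_{x} \sum_{t=m+1}^{T} f_t(x, \ldots, x),$

and the $\widetilde\Omega(T^{2/3})$ lower bound above concerns this quantity with $m = 1$.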

Does Data Augmentation Lead to Positive Margin?

no code implementations 8 May 2019 Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos

Data augmentation (DA) is commonly used during model training, as it significantly improves test accuracy and model robustness.

Data Augmentation

Joint Reasoning for Temporal and Causal Relations

no code implementations ACL 2018 Qiang Ning, Zhili Feng, Hao Wu, Dan Roth

Understanding temporal and causal relations between events is a fundamental natural language understanding task.

Natural Language Understanding

CogCompTime: A Tool for Understanding Time in Natural Language Text

no code implementations 12 Jun 2019 Qiang Ning, Ben Zhou, Zhili Feng, Haoruo Peng, Dan Roth

Automatic extraction of temporal information in text is an important component of natural language understanding.

Natural Language Understanding

Provable Adaptation across Multiway Domains via Representation Learning

no code implementations ICLR 2022 Zhili Feng, Shaobo Han, Simon S. Du

This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array, and we only have data from a small subset of domains.

Domain Adaptation Representation Learning
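A minimal sketch of the multiway-domain idea described above, under our own illustrative assumptions (not the paper's actual model): each axis of the domain array gets its own factor embedding, and a domain's representation combines the factors for its index, so a representation exists even for index combinations never seen in training.

    import numpy as np

    rng = np.random.default_rng(0)
    n_rows, n_cols, dim = 4, 5, 8          # domains indexed on a 4 x 5 array

    # Per-axis factor embeddings (hypothetical parameterization, for illustration only).
    row_emb = rng.normal(size=(n_rows, dim))
    col_emb = rng.normal(size=(n_cols, dim))

    def domain_representation(i, j):
        """Representation for domain (i, j): combine the two per-axis factors."""
        return np.concatenate([row_emb[i], col_emb[j]])

    # Training data is available only for a small subset of domains ...
    observed = [(0, 0), (0, 1), (1, 0), (2, 3)]
    # ... yet every unseen index combination, e.g. (3, 4), still gets a representation.
    z_unseen = domain_representation(3, 4)
    print(z_unseen.shape)   # (16,)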

Non-PSD Matrix Sketching with Applications to Regression and Optimization

no code implementations 16 Jun 2021 Zhili Feng, Fred Roosta, David P. Woodruff

In this paper, we present novel dimensionality reduction methods for non-PSD matrices, as well as their ``square-roots'', which involve matrices with complex entries.

Dimensionality Reduction Regression
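A small numerical illustration of the complex ``square-root'' idea, assuming a plain Gaussian sketch for concreteness (the paper's actual sketching constructions may differ): an indefinite symmetric $A$ factors as $A = M^\top M$ with complex $M$, and sketching $Mx$ still approximately preserves the quadratic form $x^\top A x$.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 50, 200                      # ambient dimension, sketch size

    # A symmetric but indefinite (non-PSD) matrix.
    B = rng.normal(size=(n, n))
    A = (B + B.T) / 2

    # Complex "square root": A = M.T @ M, with imaginary entries for negative eigenvalues.
    lam, Q = np.linalg.eigh(A)
    M = Q @ np.diag(np.sqrt(lam.astype(complex))) @ Q.T

    x = rng.normal(size=n)
    exact = x @ A @ x                            # true quadratic form
    S = rng.normal(size=(k, n)) / np.sqrt(k)     # Gaussian sketch, E[S.T S] = I
    y = S @ (M @ x)
    approx = (y @ y).real                        # (SMx)^T (SMx) ~ x^T A x
    print(exact, approx)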

Text Descriptions are Compressive and Invariant Representations for Visual Learning

no code implementations 10 Jul 2023 Zhili Feng, Anna Bair, J. Zico Kolter

The method first automatically generates multiple visual descriptions of each class via a large language model (LLM), then uses a vision-language model (VLM) to translate these descriptions into a set of visual feature embeddings for each image, and finally uses sparse logistic regression to select a relevant subset of these features to classify each image.

Descriptive Few-Shot Learning +5
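A hedged sketch of the pipeline described above; generate_class_descriptions, embed_text, and embed_image are hypothetical stand-ins for the LLM and VLM calls, not functions from the paper's code.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def classify_with_descriptions(images, labels, class_names,
                                   generate_class_descriptions,  # LLM call (assumed stand-in)
                                   embed_text, embed_image):     # VLM calls (assumed stand-ins)
        # 1. Generate several textual descriptions per class via an LLM.
        descriptions = [d for c in class_names
                          for d in generate_class_descriptions(c)]
        # 2. Score every image against every description in the VLM's joint space.
        text_emb = np.stack([embed_text(d) for d in descriptions])     # (D, dim)
        img_emb = np.stack([embed_image(x) for x in images])           # (N, dim)
        text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)
        img_emb /= np.linalg.norm(img_emb, axis=1, keepdims=True)
        features = img_emb @ text_emb.T                                # (N, D) description features
        # 3. Sparse (L1-penalized) logistic regression selects a relevant
        #    subset of description features to classify each image.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        clf.fit(features, labels)
        return clf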

Monotone deep Boltzmann machines

no code implementations 11 Jul 2023 Zhili Feng, Ezra Winston, J. Zico Kolter

Deep Boltzmann machines (DBMs), one of the first ``deep'' learning methods ever studied, are multi-layered probabilistic models governed by a pairwise energy function that describes the likelihood of all variables/nodes in the network.
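For context, the pairwise energy function in question has the standard Boltzmann-machine form (textbook notation, not quoted from the paper): for binary units $x \in \{0, 1\}^n$,

$E(x) = -\sum_{i < j} W_{ij} x_i x_j - \sum_i b_i x_i, \qquad p(x) = \frac{\exp(-E(x))}{\sum_{x'} \exp(-E(x'))},$

where in a deep Boltzmann machine $W$ is restricted so that only units in adjacent layers interact.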

On the Neural Tangent Kernel of Equilibrium Models

no code implementations 21 Oct 2023 Zhili Feng, J. Zico Kolter

This work studies the neural tangent kernel (NTK) of the deep equilibrium (DEQ) model, a practical ``infinite-depth'' architecture which directly computes the infinite-depth limit of a weight-tied network via root-finding.
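A minimal sketch of the root-finding view of a DEQ layer, assuming a simple tanh cell and plain fixed-point iteration (our illustrative choices; practical DEQs typically use more sophisticated root solvers): the "infinite-depth" output is the fixed point $z^* = f(z^*, x)$ of a weight-tied update.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16
    W = rng.normal(size=(dim, dim)) * 0.1   # weight-tied layer (scaled so the map contracts)
    U = rng.normal(size=(dim, dim)) * 0.1

    def f(z, x):
        """One weight-tied layer; a DEQ applies this 'infinitely' deep."""
        return np.tanh(W @ z + U @ x)

    def deq_forward(x, tol=1e-8, max_iter=500):
        """Find z* = f(z*, x), i.e. a root of g(z) = f(z, x) - z, by fixed-point iteration."""
        z = np.zeros(dim)
        for _ in range(max_iter):
            z_next = f(z, x)
            if np.linalg.norm(z_next - z) < tol:
                break
            z = z_next
        return z

    x = rng.normal(size=dim)
    z_star = deq_forward(x)
    print(np.linalg.norm(f(z_star, x) - z_star))   # ~0: z_star is the infinite-depth limit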

TOFU: A Task of Fictitious Unlearning for LLMs

no code implementations 11 Jan 2024 Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary C. Lipton, J. Zico Kolter

Large language models trained on massive corpora of data from the web can memorize and reproduce sensitive or private data, raising both legal and ethical concerns.

An Axiomatic Approach to Model-Agnostic Concept Explanations

no code implementations 12 Jan 2024 Zhili Feng, Michal Moshkovitz, Dotan Di Castro, J. Zico Kolter

Concept explanation is a popular approach for examining how human-interpretable concepts impact the predictions of a model.

Model Selection

RankCLIP: Ranking-Consistent Language-Image Pretraining

no code implementations 15 Apr 2024 Yiming Zhang, Zhuokai Zhao, Zhaorun Chen, Zhili Feng, Zenghui Ding, Yining Sun

Amid the ever-evolving development of vision-language models, contrastive language-image pretraining (CLIP) has set new benchmarks in many downstream tasks, such as zero-shot classification, by leveraging self-supervised contrastive learning on large amounts of text-image pairs.

Contrastive Learning
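A minimal sketch of the pairwise contrastive objective that CLIP-style pretraining uses (a standard symmetric InfoNCE form, not RankCLIP's ranking-consistent extension):

    import numpy as np

    def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric InfoNCE loss over a batch of matched image-text pairs."""
        # L2-normalize embeddings, then take cosine-similarity logits.
        img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
        txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
        logits = img @ txt.T / temperature                 # (B, B) similarity matrix
        labels = np.arange(len(img))                       # i-th image matches i-th text

        def cross_entropy(l, labels):
            l = l - l.max(axis=1, keepdims=True)           # stabilized log-softmax
            logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -logp[np.arange(len(labels)), labels].mean()

        # Average of the image->text and text->image directions.
        return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

    rng = np.random.default_rng(0)
    print(clip_contrastive_loss(rng.normal(size=(8, 64)), rng.normal(size=(8, 64))))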

Rethinking LLM Memorization through the Lens of Adversarial Compression

no code implementations 23 Apr 2024 Avi Schwarzschild, Zhili Feng, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter

We outline the limitations of existing notions of memorization and show how the ACR overcomes these challenges by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing the flexibility to measure memorization for arbitrary strings at reasonably low compute cost.

Memorization
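Our hedged reading of the Adversarial Compression Ratio (ACR), paraphrased rather than quoted from the paper: a target string $s$ is compared against the shortest adversarial prompt that makes the model emit $s$ verbatim,

$\mathrm{ACR}(s) = \frac{|s|}{\min\{\, |p| : \text{the model's completion of prompt } p \text{ is } s \,\}},$

with $s$ treated as memorized when $\mathrm{ACR}(s) > 1$, i.e. when the eliciting prompt is shorter than the string itself.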
