Search Results for author: Binghui Peng

Found 21 papers, 2 papers with code

Theoretical limitations of multi-layer Transformer

no code implementations 4 Dec 2024 Lijie Chen, Binghui Peng, Hongxun Wu

We also introduce a new proof technique that iteratively finds a certain $\textit{indistinguishable}$ $\textit{decomposition}$ of all possible inputs, which we use to prove lower bounds in this model.

Decoder

The complexity of approximate (coarse) correlated equilibrium for incomplete information games

no code implementations 4 Jun 2024 Binghui Peng, Aviad Rubinstein

We study the iteration complexity of decentralized learning of approximate correlated equilibria in incomplete information games.

On Limitations of the Transformer Architecture

no code implementations 13 Feb 2024 Binghui Peng, Srini Narayanan, Christos Papadimitriou

What are the root causes of hallucinations in large language models (LLMs)?

The sample complexity of multi-distribution learning

no code implementations 7 Dec 2023 Binghui Peng

Multi-distribution learning generalizes the classic PAC learning to handle data coming from multiple distributions.

PAC learning

Fast swap regret minimization and applications to approximate correlated equilibria

no code implementations 30 Oct 2023 Binghui Peng, Aviad Rubinstein

We give a simple and computationally efficient algorithm that, for any constant $\varepsilon>0$, obtains $\varepsilon T$-swap regret within only $T = \mathsf{polylog}(n)$ rounds; this is an exponential improvement compared to the super-linear number of rounds required by the state-of-the-art algorithm, and resolves the main open problem of [Blum and Mansour 2007].
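
To make the quantity concrete, here is a minimal sketch that only measures the swap regret of an already-played sequence against a sequence of loss vectors; it is not the paper's algorithm (which achieves low swap regret online), and the function name and array layout are illustrative assumptions.

    import numpy as np

    def swap_regret(plays, losses):
        """Swap regret of a sequence of distributions over n actions.

        plays:  T x n array, plays[t] is the distribution played on round t.
        losses: T x n array, losses[t][i] is the loss of action i on round t.
        The benchmark is the best swap function phi: [n] -> [n] applied to
        every round in hindsight.
        """
        plays, losses = np.asarray(plays, float), np.asarray(losses, float)
        incurred = np.sum(plays * losses)            # actual expected loss
        pair_loss = plays.T @ losses                 # pair_loss[i, j]: loss if all of i's weight went to j
        best_swapped = pair_loss.min(axis=1).sum()   # best target chosen per source action
        return incurred - best_swapped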

The complexity of non-stationary reinforcement learning

no code implementations 13 Jul 2023 Christos Papadimitriou, Binghui Peng

The problem of continual learning in the domain of reinforcement learning, often called non-stationary reinforcement learning, has been identified as an important challenge to the practical application of reinforcement learning.

Continual Learning, reinforcement-learning +1

Memory-Query Tradeoffs for Randomized Convex Optimization

no code implementations 21 Jun 2023 Xi Chen, Binghui Peng

We show that any randomized first-order algorithm which minimizes a $d$-dimensional, $1$-Lipschitz convex function over the unit ball must either use $\Omega(d^{2-\delta})$ bits of memory or make $\Omega(d^{1+\delta/6-o(1)})$ queries, for any constant $\delta\in (0, 1)$ and when the precision $\epsilon$ is quasipolynomially small in $d$.
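
For context, a minimal sketch of the kind of first-order method this tradeoff constrains: projected subgradient descent over the unit ball keeps only a couple of $d$-dimensional vectors (far below $\Omega(d^2)$ bits of memory) and pays with many oracle queries. The oracle interface, step size, and toy instance are assumptions for illustration.

    import numpy as np

    def projected_subgradient(subgrad, d, steps):
        """Minimize a 1-Lipschitz convex f over the unit ball in R^d,
        given only a subgradient oracle subgrad(x)."""
        x, avg = np.zeros(d), np.zeros(d)
        for t in range(1, steps + 1):
            g = subgrad(x)                     # one first-order query per step
            x = x - g / np.sqrt(t)
            norm = np.linalg.norm(x)
            if norm > 1.0:                     # project back onto the unit ball
                x = x / norm
            avg += (x - avg) / t               # return the average iterate
        return avg

    # Toy instance: f(x) = ||x - c||_2, which is 1-Lipschitz.
    c = np.full(16, 0.1)
    x_hat = projected_subgradient(lambda x: (x - c) / (np.linalg.norm(x - c) + 1e-12), 16, 2000)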

Near Optimal Memory-Regret Tradeoff for Online Learning

no code implementations 3 Mar 2023 Binghui Peng, Aviad Rubinstein

In the experts problem, on each of $T$ days, an agent needs to follow the advice of one of $n$ "experts".

Learning Theory
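
As a reference point for the memory-regret tradeoff, here is a minimal sketch of the classical full-memory baseline for the experts problem, multiplicative weights (Hedge): it keeps one weight per expert and therefore uses $\Theta(n)$ memory, the regime the paper aims to improve on. The learning rate and interface are illustrative assumptions.

    import numpy as np

    def hedge(loss_rounds, n, eta=0.1):
        """Multiplicative-weights baseline for the experts problem.
        loss_rounds yields a length-n loss vector in [0, 1] per day."""
        weights = np.ones(n)                       # Theta(n) memory
        total_loss = 0.0
        for losses in loss_rounds:
            losses = np.asarray(losses, float)
            probs = weights / weights.sum()        # follow experts in proportion to weight
            total_loss += probs @ losses           # expected loss incurred today
            weights *= np.exp(-eta * losses)       # downweight experts that did poorly
        return total_loss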

Online Prediction in Sub-linear Space

no code implementations 16 Jul 2022 Binghui Peng, Fred Zhang

We provide the first sub-linear space and sub-linear regret algorithm for online learning with expert advice (against an oblivious adversary), addressing an open question raised recently by Srinivas, Woodruff, Xu and Zhou (STOC 2022).

Open-Ended Question Answering

Memory Bounds for Continual Learning

no code implementations 22 Apr 2022 Xi Chen, Christos Papadimitriou, Binghui Peng

We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with the number of tasks $k$, strongly suggesting that the problem is intractable.

Continual Learning

Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions

no code implementations 27 Mar 2022 Binghui Peng, Andrej Risteski

When the features are linear, we design an efficient gradient-based algorithm $\mathsf{DPGD}$, that is guaranteed to perform well on the current environment, as well as avoid catastrophic forgetting.

Continual Learning

The Complexity of Dynamic Least-Squares Regression

1 code implementation 1 Jan 2022 Shunhua Jiang, Binghui Peng, Omri Weinstein

We settle the complexity of dynamic least-squares regression (LSR), where rows and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)})$ can be adaptively inserted and/or deleted, and the goal is to efficiently maintain an $\epsilon$-approximate solution to $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$.

regression
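
To pin down what "efficiently maintain" is competing against, here is a recompute-from-scratch sketch of the dynamic problem: every insertion or deletion is followed by a full least-squares solve. The class name and update interface are assumptions for illustration, not the paper's data structure.

    import numpy as np

    class NaiveDynamicLSR:
        """Baseline that re-solves min_x ||Ax - b||_2 after each update."""
        def __init__(self, d):
            self.d, self.rows, self.labels = d, [], []

        def insert(self, a, b):
            self.rows.append(np.asarray(a, float))
            self.labels.append(float(b))

        def delete(self, i):
            self.rows.pop(i)
            self.labels.pop(i)

        def solve(self):
            if not self.rows:
                return np.zeros(self.d)
            A, b = np.vstack(self.rows), np.asarray(self.labels)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)   # exact argmin, cost ~ n * d^2 per update
            return x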

On the Complexity of Dynamic Submodular Maximization

no code implementations 5 Nov 2021 Xi Chen, Binghui Peng

We study dynamic algorithms for the problem of maximizing a monotone submodular function over a stream of $n$ insertions and deletions.
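
The natural static baseline is the $(1-1/e)$ greedy algorithm under a cardinality constraint $k$ (the standard setting); rerunning it from scratch after every insertion or deletion is what a dynamic algorithm has to beat. The coverage function below is an assumed toy example.

    def greedy(f, ground_set, k):
        """Standard greedy for a monotone submodular f: frozenset -> float."""
        S = set()
        for _ in range(min(k, len(ground_set))):
            gains = {e: f(frozenset(S | {e})) - f(frozenset(S)) for e in ground_set - S}
            S.add(max(gains, key=gains.get))   # add the element of largest marginal gain
        return S

    # Toy coverage function over a small universe.
    cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
    f = lambda S: len(set().union(*[cover[e] for e in S])) if S else 0
    print(greedy(f, set(cover), k=2))          # the chosen pair covers all 5 universe elements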

Shuffle Private Stochastic Convex Optimization

no code implementations ICLR 2022 Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy.
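
A toy sketch of that pipeline, with randomized-response bits standing in for the paper's stochastic-convex-optimization messages (the flip probability and function names are assumptions): each user randomizes locally, the shuffler permutes the messages, and the analyst only sees and debiases the shuffled collection.

    import random

    def randomize_bit(bit, p_flip):
        """Local randomizer: flip the user's bit with probability p_flip."""
        return bit ^ 1 if random.random() < p_flip else bit

    def shuffle_and_estimate(user_bits, p_flip=0.25):
        messages = [randomize_bit(b, p_flip) for b in user_bits]
        random.shuffle(messages)                       # the trusted shuffler
        observed = sum(messages) / len(messages)       # analyst sees only the shuffled multiset
        return (observed - p_flip) / (1 - 2 * p_flip)  # debiased estimate of the true mean

    print(shuffle_and_estimate([1, 0, 1, 1, 0, 1, 1, 0] * 50))   # close to 0.625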

Self-Attention Networks Can Process Bounded Hierarchical Languages

1 code implementation ACL 2021 Shunyu Yao, Binghui Peng, Christos Papadimitriou, Karthik Narasimhan

Despite their impressive performance in NLP, self-attention networks were recently proved to be limited for processing formal languages with hierarchical structure, such as $\mathsf{Dyck}_k$, the language consisting of well-nested parentheses of $k$ types.

Hard Attention
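
For concreteness, a stack-based recognizer for $\mathsf{Dyck}_k$ with $k = 3$ bracket types; the particular bracket alphabet is an arbitrary choice, and the point is only to make the "well-nested parentheses of $k$ types" structure explicit.

    def is_dyck(word, pairs=(("(", ")"), ("[", "]"), ("{", "}"))):
        """Accept exactly the well-nested strings over k bracket types."""
        match = {o: c for o, c in pairs}
        closers = {c for _, c in pairs}
        stack = []
        for ch in word:
            if ch in match:
                stack.append(match[ch])            # remember the required closer
            elif ch in closers:
                if not stack or stack.pop() != ch:
                    return False                   # closed in the wrong order
            else:
                return False                       # symbol outside the alphabet
        return not stack                           # everything opened was closed

    print(is_dyck("([]{})"), is_dyck("([)]"))      # True False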

MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training

no code implementations ICLR 2021 Beidi Chen, Zichang Liu, Binghui Peng, Zhaozhuo Xu, Jonathan Lingjie Li, Tri Dao, Zhao Song, Anshumali Shrivastava, Christopher Ré

Recent advances by practitioners in the deep learning community have breathed new life into Locality Sensitive Hashing (LSH), using it to reduce memory and time bottlenecks in neural network (NN) training.

Efficient Neural Network, Language Modelling +2
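
A generic sketch of the LSH primitive involved, random-hyperplane (SimHash) bucketing for cheaply retrieving high-inner-product neurons; MONGOOSE's actual contribution, making the hash functions learnable and scheduling when to re-hash, is not reproduced here, and the class and parameter names are assumptions.

    import numpy as np

    class SimHashLSH:
        """Static random-hyperplane LSH table: nearby vectors tend to share buckets."""
        def __init__(self, dim, n_bits=8, seed=0):
            rng = np.random.default_rng(seed)
            self.planes = rng.standard_normal((n_bits, dim))
            self.buckets = {}

        def _key(self, x):
            return tuple((self.planes @ x > 0).astype(int))   # sign pattern = bucket id

        def insert(self, idx, x):
            self.buckets.setdefault(self._key(x), []).append(idx)

        def query(self, x):
            return self.buckets.get(self._key(x), [])         # candidate ids in the same bucket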

MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery

no code implementations 22 Oct 2020 Xiaoxiao Li, Yangsibo Huang, Binghui Peng, Zhao Song, Kai Li

To address the issue that deep neural networks (DNNs) are vulnerable to model inversion attacks, we design an objective function, which adjusts the separability of the hidden data representations, as a way to control the trade-off between data utility and vulnerability to inversion attacks.

Training (Overparametrized) Neural Networks in Near-Linear Time

no code implementations 20 Jun 2020 Jan van den Brand, Binghui Peng, Zhao Song, Omri Weinstein

The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort for developing faster $\mathit{second}$-$\mathit{order}$ optimization algorithms beyond SGD, without compromising the generalization error.

Dimensionality Reduction, regression

Adaptive Greedy versus Non-adaptive Greedy for Influence Maximization

no code implementations 19 Nov 2019 Wei Chen, Binghui Peng, Grant Schoenebeck, Biaoshuai Tao

On the other hand, we prove that in any submodular cascade, the adaptive greedy algorithm always outputs a $(1-1/e)$-approximation to the expected number of adoptions under the optimal non-adaptive seed choice.

Social and Information Networks

On Adaptivity Gaps of Influence Maximization under the Independent Cascade Model with Full Adoption Feedback

no code implementations 3 Jul 2019 Wei Chen, Binghui Peng

In this paper, we study the adaptivity gap of the influence maximization problem under the independent cascade model when full-adoption feedback is available.

Social and Information Networks

Adaptive Influence Maximization with Myopic Feedback

no code implementations NeurIPS 2019 Binghui Peng, Wei Chen

We study the adaptive influence maximization problem with myopic feedback under the independent cascade model: one selects k seed nodes from a social network one by one; each selected seed returns the immediate neighbors it activates as the feedback available for later selections, and the goal is to maximize the expected total number of activated nodes, referred to as the influence spread.

Social and Information Networks
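
A toy sketch of that selection loop; the graph encoding, the uniform edge probability p, and the score heuristic are all assumptions for illustration, and only the myopic feedback is modeled rather than the full cascade.

    import random

    def adaptive_myopic_seeding(graph, p, k, score):
        """Pick k seeds one by one; after each pick, observe only which
        immediate neighbors it activated and feed that into the next choice."""
        activated, seeds = set(), []
        for _ in range(k):
            candidates = [v for v in graph if v not in seeds]
            seed = max(candidates, key=lambda v: score(v, activated))
            seeds.append(seed)
            activated.add(seed)
            for u in graph[seed]:                        # myopic feedback: immediate neighbors only
                if u not in activated and random.random() < p:
                    activated.add(u)
        return seeds, activated

    # Assumed heuristic: prefer nodes with many not-yet-activated neighbors.
    toy = {1: [2, 3], 2: [3, 4], 3: [4], 4: []}
    score = lambda v, act: sum(1 for u in toy[v] if u not in act)
    print(adaptive_myopic_seeding(toy, p=0.5, k=2, score=score))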
