Search Results for author: Yimeng Zeng

Found 7 papers, 3 papers with code

Covering Multiple Objectives with a Small Set of Solutions Using Bayesian Optimization

no code implementations · 31 Jan 2025 · Natalie Maus, Kyurae Kim, Yimeng Zeng, Haydn Thomas Jones, Fangping Wan, Marcelo Der Torossian Torres, Cesar de la Fuente-Nunez, Jacob R. Gardner

In this work, we introduce a novel problem setting that departs from this paradigm: finding a smaller set of K solutions, where K < T, that collectively "covers" the T objectives.
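The coverage notion can be illustrated with a simple greedy selection over a precomputed score matrix. This is a max-coverage sketch only, not the paper's Bayesian optimization method; the score matrix and the sum-of-best-values objective below are illustrative assumptions.

```python
import numpy as np

def greedy_cover(F, K):
    """Greedily pick K of the candidate solutions (rows of F, one column
    per objective) to maximize the summed best-achieved value per
    objective, a max-coverage relaxation of 'every objective is covered
    by some chosen solution'."""
    n_solutions, n_objectives = F.shape
    chosen = []
    best = np.full(n_objectives, -np.inf)  # best value per objective so far
    for _ in range(K):
        gains = [np.maximum(best, F[i]).sum() for i in range(n_solutions)]
        i = int(np.argmax(gains))
        chosen.append(i)
        best = np.maximum(best, F[i])
    return chosen, best

# 4 candidate solutions scored on T = 3 objectives; K = 2 suffices here
# because one candidate is strong on two objectives at once.
F = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.9, 0.9, 0.0]])
chosen, best = greedy_cover(F, K=2)
```

Greedy first takes the candidate covering two objectives, then the one covering the remaining objective, so two solutions jointly cover all three.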

Bayesian Optimization · Drug Design

Improving Structural Diversity of Blackbox LLMs via Chain-of-Specification Prompting

no code implementations · 12 Aug 2024 · Halley Young, Yimeng Zeng, Jacob Gardner, Osbert Bastani

In addition, we propose a novel strategy called chain-of-specification (CoS) prompting for improving diversity by first having the LLM generate a specification encoding one instance of structural features, and then prompting the LLM to generate text that satisfies these features; notably, our strategy works with blackbox LLMs.
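The two-stage idea can be sketched as follows, assuming a hypothetical `llm(prompt) -> str` callable; the prompt templates are illustrative, not the paper's.

```python
def chain_of_specification(llm, task, idx):
    """Two-stage CoS sketch: (1) ask the blackbox model to write one
    specification of structural features, (2) ask it to generate text
    satisfying that specification. Prompt wording and the llm interface
    are assumptions, not the paper's exact templates."""
    spec = llm(
        f"Task: {task}\n"
        f"Write specification #{idx}: one combination of structural "
        "features (e.g. tone, form, length) a response could have."
    )
    return llm(
        f"Task: {task}\n"
        "Write a response satisfying every feature in this "
        f"specification:\n{spec}"
    )

# Demo with a stub standing in for a blackbox LLM API
def stub_llm(prompt):
    return f"<reply to: {prompt.splitlines()[1]}>"

out = chain_of_specification(stub_llm, "write a poem about rain", idx=1)
```

Varying `idx` (or any other seed text in the first prompt) yields different specifications, and hence structurally different outputs, without any access to model internals.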

Diversity

Zeroth-Order Fine-Tuning of LLMs with Extreme Sparsity

no code implementations · 5 Jun 2024 · Wentao Guo, Jikai Long, Yimeng Zeng, Zirui Liu, Xinyu Yang, Yide Ran, Jacob R. Gardner, Osbert Bastani, Christopher De Sa, Xiaodong Yu, Beidi Chen, Zhaozhuo Xu

Zeroth-order optimization (ZO) is a memory-efficient strategy for fine-tuning Large Language Models using only forward passes.
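The forward-pass-only idea can be sketched with a two-point (SPSA-style) estimator on a toy NumPy objective; this illustrates ZO in general, not the paper's fine-tuning setup or its sparsity mechanism.

```python
import numpy as np

def zo_grad(loss_fn, w, eps=1e-3, seed=0):
    """Two-point (SPSA-style) zeroth-order gradient estimate: two forward
    passes along a random direction z, no backward pass. Regenerating z
    from its seed means the perturbation never has to be stored, which is
    what keeps memory near inference level."""
    z = np.random.default_rng(seed).standard_normal(w.shape)
    return (loss_fn(w + eps * z) - loss_fn(w - eps * z)) / (2.0 * eps) * z

# Toy quadratic standing in for a fine-tuning loss (minimum at `target`)
target = np.array([1.0, 2.0])
loss = lambda w: float(np.sum((w - target) ** 2))

w = np.zeros(2)
for t in range(1000):
    w -= 0.02 * zo_grad(loss, w, seed=t)  # fresh direction each step
```

Each step costs two loss evaluations and one random vector; nothing resembling a backward graph is ever built.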

Quantization

Generative Adversarial Model-Based Optimization via Source Critic Regularization

1 code implementation · 9 Feb 2024 · Michael S. Yao, Yimeng Zeng, Hamsa Bastani, Jacob Gardner, James C. Gee, Osbert Bastani

Offline model-based optimization seeks to optimize against a learned surrogate model without querying the true oracle objective function during optimization.
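The setting can be sketched in a few lines: fit a surrogate to a fixed offline dataset, then optimize against the surrogate alone. The quadratic oracle below is a toy assumption, and the paper's source critic regularization is not shown.

```python
import numpy as np

# Offline dataset: the oracle produces a fixed dataset up front and is
# never queried again during optimization (stand-in for e.g. a lab assay).
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=200)
y = -(X - 1.0) ** 2  # hidden oracle, maximum at x = 1

# "Learned surrogate": least-squares fit of a*x^2 + b*x + c to the data
A = np.column_stack([X ** 2, X, np.ones_like(X)])
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]

# Optimize against the surrogate only, by gradient ascent
x = 0.0
for _ in range(200):
    x += 0.05 * (2.0 * a * x + b)  # derivative of the surrogate
```

The risk this paper targets is that a less faithful surrogate can be pushed into regions where its predictions are wrong; the source-critic term penalizes such out-of-distribution designs.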

Bayesian Optimization · Protein Design

Learning Performance-Improving Code Edits

2 code implementations · 15 Feb 2023 · Alexander Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob Gardner, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, Amir Yazdanbakhsh

Next, we propose a broad range of adaptation strategies for code optimization; for prompting, these include retrieval-based few-shot prompting and chain-of-thought, and for finetuning, these include performance-conditioned generation and synthetic data augmentation based on self-play.
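Retrieval-based few-shot prompting, one of the strategies listed, can be sketched as follows; the token-overlap retriever and the prompt template are simplifying assumptions, not the paper's pipeline.

```python
def build_fewshot_prompt(query_code, corpus, k=2):
    """Assemble a retrieval-based few-shot prompt: rank (slow, fast) pairs
    by token overlap with the query program and prepend the top k as
    examples. Token overlap stands in for a real retriever."""
    def overlap(a, b):
        return len(set(a.split()) & set(b.split()))

    ranked = sorted(corpus, key=lambda p: overlap(p["slow"], query_code),
                    reverse=True)
    parts = []
    for ex in ranked[:k]:
        parts.append(f"# slower version:\n{ex['slow']}\n"
                     f"# optimized version:\n{ex['fast']}\n\n")
    parts.append(f"# slower version:\n{query_code}\n# optimized version:\n")
    return "".join(parts)

corpus = [
    {"slow": "for i in range(n): s += a[i]", "fast": "s = sum(a)"},
    {"slow": "x = x * 2", "fast": "x <<= 1"},
]
prompt = build_fewshot_prompt("for i in range(n): total += b[i]", corpus)
```

The most similar slow/fast pair is placed first, and the prompt ends at the point where the model is expected to continue with an optimized version of the query.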

Code Generation · Code Repair · +2

Cyclical Kernel Adaptive Metropolis

1 code implementation · 29 Jun 2022 · Jianan Canal Li, Yimeng Zeng, Wentao Guo

We propose cKAM, cyclical Kernel Adaptive Metropolis, which incorporates a cyclical stepsize scheme to allow control over the trade-off between exploration and sampling.
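A minimal sketch of a cyclical stepsize inside plain random-walk Metropolis; the kernel-adaptive component of cKAM is omitted, and the target, schedule shape, and constants are assumptions.

```python
import math
import random

def cyclical_stepsize(t, period=100, s_max=2.0, s_min=0.05):
    """Cosine-cyclical proposal stepsize: each cycle starts with large
    steps (exploration) and anneals to small steps (local sampling)."""
    phase = (t % period) / period
    return s_min + 0.5 * (s_max - s_min) * (1.0 + math.cos(math.pi * phase))

def metropolis_cyclical(log_target, x0, n_steps=5000, seed=0):
    """Random-walk Metropolis whose Gaussian proposal scale follows the
    cyclical schedule above."""
    rng = random.Random(seed)
    x, samples = x0, []
    for t in range(n_steps):
        prop = x + rng.gauss(0.0, cyclical_stepsize(t))
        log_alpha = min(0.0, log_target(prop) - log_target(x))
        if rng.random() < math.exp(log_alpha):
            x = prop
        samples.append(x)
    return samples

# Sample a standard normal, starting far from the mode
samples = metropolis_cyclical(lambda x: -0.5 * x * x, x0=5.0)
```

The large-step phase of each cycle helps the chain traverse between modes, while the small-step phase produces well-mixed local samples.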
