Search Results for author: Denny Zhou

Found 63 papers, 31 papers with code

Network Pruning by Greedy Subnetwork Selection

no code implementations ICML 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

Theoretically, we show that the small networks pruned using our method achieve provably lower loss than small networks trained from scratch with the same size.

Network Pruning

Chain of Thought Empowers Transformers to Solve Inherently Serial Problems

no code implementations 20 Feb 2024 Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma

Given input length $n$, previous works have shown that constant-depth transformers with finite precision $\mathsf{poly}(n)$ embedding size can only solve problems in $\mathsf{TC}^0$ without CoT.

Chain-of-Thought Reasoning Without Prompting

no code implementations 15 Feb 2024 Xuezhi Wang, Denny Zhou

In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting.

Prompt Engineering

Transformers Can Achieve Length Generalization But Not Robustly

no code implementations 14 Feb 2024 Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, Denny Zhou

We show that the success of length generalization is intricately linked to the data format and the type of position encoding.

Position

Premise Order Matters in Reasoning with Large Language Models

no code implementations 14 Feb 2024 Xinyun Chen, Ryan A. Chi, Xuezhi Wang, Denny Zhou

We first examine the effect of premise ordering on deductive reasoning across a variety of LLMs, and our evaluation shows that permuting the premise order can cause a performance drop of over 30%.

GSM8K

Self-Discover: Large Language Models Self-Compose Reasoning Structures

2 code implementations 6 Feb 2024 Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, Denny Zhou, Swaroop Mishra, Huaixiu Steven Zheng

We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods.

Math

Universal Self-Consistency for Large Language Model Generation

no code implementations 29 Nov 2023 Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, Denny Zhou

Self-consistency with chain-of-thought prompting (CoT) has demonstrated remarkable performance gains on various challenging tasks, by utilizing multiple reasoning paths sampled from large language models (LLMs).

Code Generation Language Modelling +3
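
A minimal sketch of the idea, under the assumption that universal self-consistency replaces hand-written answer extraction and voting with a second model call that picks the most consistent of the sampled responses; the `llm` callable, prompt wording, and fallback parsing below are placeholders, not the paper's implementation:

```python
from typing import Callable, List

def universal_self_consistency(
    llm: Callable[[str], str],    # placeholder: any text-in/text-out LLM call
    question: str,
    num_samples: int = 8,
) -> str:
    """Sample several responses, then ask the model to pick the most consistent one."""
    # Step 1: sample multiple candidate responses (temperature > 0 assumed).
    candidates: List[str] = [llm(question) for _ in range(num_samples)]

    # Step 2: let the model judge which candidate is most consistent with the others;
    # this avoids hand-written answer extraction and majority voting.
    numbered = "\n\n".join(f"Response {i + 1}:\n{c}" for i, c in enumerate(candidates))
    choice = llm(
        f"Question: {question}\n\n{numbered}\n\n"
        "Evaluate these responses and select the most consistent one. "
        "Reply with the number of the selected response."
    )

    # Fall back to the first candidate if the reply contains no usable index.
    digits = "".join(ch for ch in choice if ch.isdigit())
    index = int(digits) - 1 if digits else 0
    return candidates[index] if 0 <= index < len(candidates) else candidates[0]
```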

Large Language Models can Learn Rules

no code implementations 10 Oct 2023 Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou, Jian Tang, Dale Schuurmans, Hanjun Dai

In the deduction stage, the LLM is then prompted to employ the learned rule library to perform reasoning to answer test questions.

Relational Reasoning
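
The deduction stage described above lends itself to a simple prompt-construction sketch; the `llm` callable, the rule formatting, and the wording are assumptions, and the induction stage that actually builds the rule library is omitted:

```python
from typing import Callable, List

def answer_with_rule_library(
    llm: Callable[[str], str],    # placeholder for any text-in/text-out LLM call
    rule_library: List[str],      # assumed to come from an earlier induction stage
    question: str,
) -> str:
    """Deduction-stage sketch: prompt the model to apply a learned rule library."""
    rules_block = "\n".join(f"- {rule}" for rule in rule_library)
    prompt = (
        "Use only the following rules when reasoning.\n"
        f"Rules:\n{rules_block}\n\n"
        f"Question: {question}\n"
        "Apply the relevant rules step by step, then state the final answer."
    )
    return llm(prompt)
```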

Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models

no code implementations 9 Oct 2023 Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V. Le, Denny Zhou

We present Step-Back Prompting, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details.
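
As a hedged sketch of what a two-pass step-back prompt might look like (the `llm` callable and the exact step-back wording are placeholders, not the prompts used in the paper):

```python
from typing import Callable

def step_back_prompting(llm: Callable[[str], str], question: str) -> str:
    """Two-pass sketch: abstract first, then reason with the abstraction as context."""
    # Pass 1: a "step-back" question surfaces the high-level concept or principle.
    abstraction = llm(
        f"Here is a question: {question}\n"
        "Before answering, state the general concept or principle it relies on."
    )
    # Pass 2: answer the original question, grounded in that abstraction.
    return llm(
        f"Relevant principle: {abstraction}\n\n"
        f"Using this principle, answer the original question: {question}"
    )
```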

FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation

1 code implementation 5 Oct 2023 Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, Thang Luong

Specifically, we introduce FreshQA, a novel dynamic QA benchmark encompassing a diverse range of question and answer types, including questions that require fast-changing world knowledge as well as questions with false premises that need to be debunked.

Hallucination World Knowledge

Large Language Models Cannot Self-Correct Reasoning Yet

1 code implementation 3 Oct 2023 Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou

Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities across various applications.

Text Generation

Large Language Models as Analogical Reasoners

no code implementations 3 Oct 2023 Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Denny Zhou

Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically needs labeled exemplars of the reasoning process.

Code Generation GSM8K +1

Simple synthetic data reduces sycophancy in large language models

1 code implementation 7 Aug 2023 Jerry Wei, Da Huang, Yifeng Lu, Denny Zhou, Quoc V. Le

Adding these data in a lightweight finetuning step can significantly reduce sycophantic behavior on held-out prompts.

Training Socially Aligned Language Models on Simulated Social Interactions

1 code implementation 26 May 2023 Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, Soroush Vosoughi

Social alignment in AI systems aims to ensure that these models behave according to established societal values.

Large Language Models as Tool Makers

1 code implementation 26 May 2023 Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, Denny Zhou

Our approach consists of two phases: 1) tool making: an LLM acts as the tool maker that crafts tools for a set of tasks.
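
A rough sketch of the two phases, with heavy caveats: the prompts and the `solve` entry point are invented for illustration, and executing model-generated code with `exec` is unsafe outside a sandbox; this is not the paper's pipeline:

```python
from typing import Callable

def make_tool(tool_maker: Callable[[str], str], task_description: str) -> str:
    """Phase 1 sketch: a stronger model writes a reusable Python utility for a task."""
    return tool_maker(
        "Write a self-contained Python function named `solve` that handles tasks like:\n"
        f"{task_description}\n"
        "Return only the code."
    )

def use_tool(tool_code: str, task_instance: str) -> object:
    """Phase 2 sketch: run the generated tool on a new task instance (illustrative only)."""
    namespace: dict = {}
    exec(tool_code, namespace)          # assumes the generated code defines `solve`
    return namespace["solve"](task_instance)
```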

A Pretrainer's Guide to Training Data: Measuring the Effects of Data Age, Domain Coverage, Quality, & Toxicity

no code implementations 22 May 2023 Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, Jason Wei, Kevin Robinson, David Mimno, Daphne Ippolito

Second, we explore the effect of quality and toxicity filters, showing a trade-off between performance on standard benchmarks and risk of toxic generations.

Not All Semantics are Created Equal: Contrastive Self-supervised Learning with Automatic Temperature Individualization

1 code implementation 19 May 2023 Zi-Hao Qiu, Quanqi Hu, Zhuoning Yuan, Denny Zhou, Lijun Zhang, Tianbao Yang

In this paper, we aim to optimize a contrastive loss with individualized temperatures in a principled and systematic manner for self-supervised learning.

Self-Supervised Learning

PaLM 2 Technical Report

1 code implementation 17 May 2023 Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, Yaguang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, ZiRui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, Yonghui Wu

Through extensive evaluations on English and multilingual language, and reasoning tasks, we demonstrate that PaLM 2 has significantly improved quality on downstream tasks across different model sizes, while simultaneously exhibiting faster and more efficient inference compared to PaLM.

Code Generation Common Sense Reasoning +6

Symbol tuning improves in-context learning in language models

no code implementations 15 May 2023 Jerry Wei, Le Hou, Andrew Lampinen, Xiangning Chen, Da Huang, Yi Tay, Xinyun Chen, Yifeng Lu, Denny Zhou, Tengyu Ma, Quoc V. Le

We present symbol tuning - finetuning language models on in-context input-label pairs where natural language labels (e.g., "positive/negative sentiment") are replaced with arbitrary symbols (e.g., "foo/bar").

In-Context Learning
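
The label-remapping step described above is easy to sketch as a data transformation; the function name and the use of "foo"/"bar" follow the example in the excerpt, but the exact preprocessing is an assumption:

```python
import random
from typing import Dict, List, Sequence, Tuple

def symbol_tune_examples(
    examples: List[Tuple[str, str]],            # (input text, natural-language label) pairs
    arbitrary_symbols: Sequence[str] = ("foo", "bar"),
) -> List[Dict[str, str]]:
    """Remap natural-language labels to arbitrary symbols so the model must rely on the
    in-context input-label mapping rather than label semantics."""
    labels = sorted({label for _, label in examples})
    # Assumes at least as many symbols as distinct labels.
    symbols = random.sample(list(arbitrary_symbols), k=len(labels))
    mapping = dict(zip(labels, symbols))
    return [{"input": text, "label": mapping[label]} for text, label in examples]
```

The remapped pairs would then be formatted as in-context prompts for the lightweight finetuning run the excerpt describes.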

Teaching Large Language Models to Self-Debug

no code implementations 11 Apr 2023 Xinyun Chen, Maxwell Lin, Nathanael Schärli, Denny Zhou

In particular, we demonstrate that Self-Debugging can teach the large language model to perform rubber duck debugging; i.e., without any human feedback on the code correctness or error messages, the model is able to identify its mistakes by investigating the execution results and explaining the generated code in natural language.

Code Generation Language Modelling +3
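
The rubber-duck debugging loop described above can be sketched as follows; the `llm` and `run_code` callables, the prompt wording, and the round limit are placeholders rather than the paper's setup:

```python
from typing import Callable, Tuple

def self_debug(
    llm: Callable[[str], str],                     # placeholder LLM call
    run_code: Callable[[str], Tuple[bool, str]],   # executes code, returns (passed, log)
    problem: str,
    max_rounds: int = 3,
) -> str:
    """Sketch of a self-debugging loop: generate, execute, explain, and revise."""
    code = llm(f"Write a Python solution for:\n{problem}")
    for _ in range(max_rounds):
        passed, log = run_code(code)
        if passed:
            break
        # No human feedback: the model inspects the execution result, explains its own
        # code ("rubber duck debugging"), and proposes a corrected version.
        code = llm(
            f"Problem:\n{problem}\n\nCurrent code:\n{code}\n\n"
            f"Execution result:\n{log}\n\n"
            "Explain what the code does line by line, identify the mistake, "
            "and return a corrected version of the code."
        )
    return code
```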

Larger language models do in-context learning differently

no code implementations 7 Mar 2023 Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, Tengyu Ma

We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task.

In-Context Learning

Large Language Models Can Be Easily Distracted by Irrelevant Context

1 code implementation 31 Jan 2023 Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou

We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included.

Arithmetic Reasoning Language Modelling +1

The Flan Collection: Designing Data and Methods for Effective Instruction Tuning

1 code implementation 31 Jan 2023 Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, Adam Roberts

We study the design decisions of publicly available instruction tuning methods, and break down the development of Flan 2022 (Chung et al., 2022).

What learning algorithm is in-context learning? Investigations with linear models

no code implementations 28 Nov 2022 Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, Denny Zhou

We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context.

In-Context Learning regression

TEMPERA: Test-Time Prompting via Reinforcement Learning

1 code implementation 21 Nov 2022 Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, Joseph E. Gonzalez

To achieve this, we design a novel action space that allows flexible editing of the initial prompts covering a wide set of commonly-used components like instructions, few-shot exemplars, and verbalizers.

Few-Shot Learning Natural Language Inference +5

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

1 code implementation 17 Oct 2022 Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei

BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models.

Language Modelling

Mind's Eye: Grounded Language Model Reasoning through Simulation

no code implementations 11 Oct 2022 Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai

By training solely on written text, current language models (LMs) miss the grounded experience of humans in the real world -- their failure to relate language to the physical world causes knowledge to be misrepresented and leads to obvious mistakes in their reasoning.

Language Modelling

Language Models are Multilingual Chain-of-Thought Reasoners

2 code implementations 6 Oct 2022 Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei

Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment.

GSM8K Math

Recitation-Augmented Language Models

1 code implementation 4 Oct 2022 Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, Denny Zhou

We propose a new paradigm to help Large Language Models (LLMs) generate more accurate factual knowledge without retrieving from an external corpus, called RECITation-augmented gEneration (RECITE).

Natural Questions Question Answering +2
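
A minimal sketch of the recite-then-answer idea, assuming two sequential LLM calls; the prompt wording is invented and the paper's few-shot prompting details are omitted:

```python
from typing import Callable

def recite_and_answer(llm: Callable[[str], str], question: str) -> str:
    """Sketch: recite a relevant passage from the model's own memory, then answer from it."""
    # Step 1: recitation -- no external retrieval; the passage comes from model parameters.
    recitation = llm(
        f"Recite a passage from your memory that is relevant to answering: {question}"
    )
    # Step 2: answer conditioned on the recited passage.
    return llm(f"Passage:\n{recitation}\n\nBased on the passage, answer: {question}")
```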

Rationale-Augmented Ensembles in Language Models

no code implementations 2 Jul 2022 Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Denny Zhou

Recent research has shown that rationales, or step-by-step chains of thought, can be used to improve performance in multi-step reasoning tasks.

In-Context Learning Prompt Engineering +3

UL2: Unifying Language Learning Paradigms

1 code implementation 10 May 2022 Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Jason Wei, Xuezhi Wang, Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Denny Zhou, Neil Houlsby, Donald Metzler

Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization.

Ranked #1 on Long-range modeling on SCROLLS (CNLI metric)

Arithmetic Reasoning Common Sense Reasoning +11

Self-Consistency Improves Chain of Thought Reasoning in Language Models

1 code implementation 21 Mar 2022 Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou

Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks.

Ranked #76 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning GSM8K +3
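
Stripped to its core, self-consistency samples several reasoning paths and keeps the most common final answer. A minimal sketch (the `llm` and `extract_answer` callables and the sample count are placeholders, not the paper's configuration):

```python
from collections import Counter
from typing import Callable, List

def self_consistency(
    llm: Callable[[str], str],                # samples one chain of thought per call (temperature > 0)
    extract_answer: Callable[[str], str],     # pulls the final answer out of a reasoning chain
    cot_prompt: str,
    num_samples: int = 20,
) -> str:
    """Sample diverse reasoning paths, then take a majority vote over final answers."""
    chains: List[str] = [llm(cot_prompt) for _ in range(num_samples)]
    answers = [extract_answer(chain) for chain in chains]
    # Marginalize over reasoning paths by voting on the final answers.
    return Counter(answers).most_common(1)[0][0]
```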

DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection

1 code implementation CVPR 2022 Yingwei Li, Adams Wei Yu, Tianjian Meng, Ben Caine, Jiquan Ngiam, Daiyi Peng, Junyang Shen, Bo Wu, Yifeng Lu, Denny Zhou, Quoc V. Le, Alan Yuille, Mingxing Tan

In this paper, we propose two novel techniques: InverseAug that inverses geometric-related augmentations, e.g., rotation, to enable accurate geometric alignment between lidar points and image pixels, and LearnableAlign that leverages cross-attention to dynamically capture the correlations between image and lidar features during fusion.

3D Object Detection Autonomous Driving +2

Provable Stochastic Optimization for Global Contrastive Learning: Small Batch Does Not Harm Performance

1 code implementation 24 Feb 2022 Zhuoning Yuan, Yuexin Wu, Zi-Hao Qiu, Xianzhi Du, Lijun Zhang, Denny Zhou, Tianbao Yang

In this paper, we study contrastive learning from an optimization perspective, aiming to analyze and address a fundamental issue of existing contrastive learning methods that either rely on a large batch size or a large dictionary of feature vectors.

Contrastive Learning Self-Supervised Learning +1

Auto-scaling Vision Transformers without Training

1 code implementation ICLR 2022 Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wang, Denny Zhou

The motivation comes from two pain spots: 1) the lack of efficient and principled methods for designing and scaling ViTs; 2) the tremendous computational cost of training ViT that is much heavier than its convolution counterpart.

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

13 code implementations 28 Jan 2022 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning.

Common Sense Reasoning GSM8K +2
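
In practice, chain-of-thought prompting amounts to prepending worked exemplars whose intermediate steps are written out. A small sketch (the exemplar follows the arithmetic style used in the paper; the `llm` callable is a placeholder):

```python
from typing import Callable

# One worked exemplar with its reasoning steps spelled out; the model is expected to
# imitate the step-by-step format before giving its final answer.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def chain_of_thought(llm: Callable[[str], str], question: str) -> str:
    """Few-shot chain-of-thought sketch: prepend worked reasoning, then ask the question."""
    return llm(f"{COT_EXEMPLAR}\nQ: {question}\nA:")
```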

A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation

3 code implementations 17 Dec 2021 Wuyang Chen, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou

In this paper, we comprehensively study three architecture design choices on ViT -- spatial reduction, doubled channels, and multiscale features -- and demonstrate that a vanilla ViT architecture can fulfill this goal without handcrafting multiscale features, maintaining the original ViT design philosophy.

Image Classification Instance Segmentation +6

SMORE: Knowledge Graph Completion and Multi-hop Reasoning in Massive Knowledge Graphs

1 code implementation 28 Oct 2021 Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Denny Zhou, Jure Leskovec, Dale Schuurmans

There are two important reasoning tasks on KGs: (1) single-hop knowledge graph completion, which involves predicting individual links in the KG; and (2) multi-hop reasoning, where the goal is to predict which KG entities satisfy a given logical query.

Scheduling

Speeding up Deep Model Training by Sharing Weights and Then Unsharing

no code implementations 8 Oct 2021 Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou

Our approach exploits the special structure of BERT that contains a stack of repeated modules (i.e., transformer encoders).

SpreadsheetCoder: Formula Prediction from Semi-structured Context

1 code implementation 26 Jun 2021 Xinyun Chen, Petros Maniatis, Rishabh Singh, Charles Sutton, Hanjun Dai, Max Lin, Denny Zhou

In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data.

Program Synthesis

Speeding up Deep Learning Training by Sharing Weights and Then Unsharing

no code implementations 1 Jan 2021 Shuo Yang, Le Hou, Xiaodan Song, Qiang Liu, Denny Zhou

It has been widely observed that increasing deep learning model sizes often leads to significant performance improvements on a variety of natural language processing and computer vision tasks.

Fast WordPiece Tokenization

1 code implementation EMNLP 2021 Xinying Song, Alex Salcianu, Yang Song, Dave Dopson, Denny Zhou

For general text, we further propose an algorithm that combines pre-tokenization (splitting the text into words) and our linear-time WordPiece method into a single pass.

Sentence
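
For context, the underlying WordPiece rule is greedy longest-match-first over a subword vocabulary. The sketch below is the basic quadratic MaxMatch on a single word, not the paper's linear-time algorithm or its single-pass pre-tokenization; the vocabulary in the example is made up:

```python
from typing import List, Set

def wordpiece_max_match(word: str, vocab: Set[str], unk: str = "[UNK]") -> List[str]:
    """Greedy longest-match-first WordPiece tokenization of one word (baseline version)."""
    tokens: List[str] = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        while start < end:
            candidate = word[start:end] if start == 0 else "##" + word[start:end]
            if candidate in vocab:
                piece = candidate       # longest vocabulary match starting at `start`
                break
            end -= 1
        if piece is None:
            return [unk]                # no match at this position: the word is unknown
        tokens.append(piece)
        start = end
    return tokens

# Example: with vocab {"un", "##aff", "##able"}, "unaffable" -> ["un", "##aff", "##able"].
```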

Compositional Generalization via Neural-Symbolic Stack Machines

no code implementations NeurIPS 2020 Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, Denny Zhou

Despite achieving tremendous success, existing deep learning models have exposed limitations in compositional generalization, the capability to learn compositional rules and apply them to unseen cases in a systematic manner.

Few-Shot Learning Machine Translation +1

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks

no code implementations ICML 2020 Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc Le, Qiang Liu, Dale Schuurmans

This is achieved by layerwise imitation, that is, forcing the thin network to mimic the intermediate outputs of the wide network from layer to layer.

Computational Efficiency Model Compression
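
The layer-to-layer mimicking described above can be written as a simple distillation loss. A sketch in PyTorch, assuming the wide and thin networks have the same number of blocks and matching activation shapes (any adapters that would reconcile widths are omitted):

```python
import torch
import torch.nn as nn

def layerwise_imitation_loss(
    wide: nn.ModuleList, thin: nn.ModuleList, x: torch.Tensor
) -> torch.Tensor:
    """Sketch of layerwise imitation: train the thin network to match the wide network's
    intermediate outputs block by block."""
    loss = x.new_zeros(())
    h_wide, h_thin = x, x
    for wide_block, thin_block in zip(wide, thin):
        h_wide = wide_block(h_wide).detach()   # teacher activations carry no gradient
        h_thin = thin_block(h_thin)
        loss = loss + nn.functional.mse_loss(h_thin, h_wide)
    return loss
```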

Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension

no code implementations ICLR 2020 Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, Quoc V. Le

Integrating distributed representations with symbolic operations is essential for reading comprehension requiring complex reasoning, such as counting, sorting and arithmetic, but most existing approaches are hard to scale to more domains or more complex reasoning.

Data Augmentation Math +2

Black-box Off-policy Estimation for Infinite-Horizon Reinforcement Learning

no code implementations ICLR 2020 Ali Mousavi, Lihong Li, Qiang Liu, Denny Zhou

Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible.

reinforcement-learning Reinforcement Learning (RL)

Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection

1 code implementation 3 Mar 2020 Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, Qiang Liu

This differs from the existing methods based on backward elimination, which remove redundant neurons from the large network.

Network Pruning
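
As a contrast with backward elimination, forward selection grows the subnetwork from empty, repeatedly adding whichever neuron lowers the loss the most. A naive sketch (the `subnetwork_loss` callable stands in for however one evaluates a candidate subnetwork; the paper's efficient selection procedure is not reproduced here):

```python
from typing import Callable, List, Set

def greedy_forward_selection(
    num_neurons: int,
    target_size: int,
    subnetwork_loss: Callable[[Set[int]], float],   # loss of a candidate neuron subset
) -> List[int]:
    """Grow a subnetwork from empty by greedily adding the most helpful neuron."""
    selected: Set[int] = set()
    while len(selected) < target_size:
        best_neuron, best_loss = None, float("inf")
        for neuron in range(num_neurons):
            if neuron in selected:
                continue
            loss = subnetwork_loss(selected | {neuron})
            if loss < best_loss:
                best_neuron, best_loss = neuron, loss
        if best_neuron is None:        # no neurons left to add
            break
        selected.add(best_neuron)
    return sorted(selected)
```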

Deep Physiological State Space Model for Clinical Forecasting

no code implementations 4 Dec 2019 Yuan Xue, Denny Zhou, Nan Du, Andrew Dai, Zhen Xu, Kun Zhang, Claire Cui

Clinical forecasting based on electronic medical records (EMR) can uncover the temporal correlations between patients' conditions and outcomes from sequences of longitudinal clinical measurements.

Extremely Small BERT Models from Mixed-Vocabulary Training

no code implementations EACL 2021 Sanqiang Zhao, Raghav Gupta, Yang Song, Denny Zhou

Pretrained language models like BERT have achieved good results on NLP tasks, but are impractical on resource-limited devices due to memory footprint.

Knowledge Distillation Language Modelling +2

Neural Logic Machines

2 code implementations ICLR 2019 Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, Denny Zhou

We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning.

Decision Making Inductive logic programming +1

Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference

no code implementations ICLR 2019 Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong Wang

In this paper, we present a novel softmax inference speedup method, Doubly Sparse Softmax (DS-Softmax), that leverages sparse mixture of sparse experts to efficiently retrieve top-k classes.

Image Classification Language Modelling +2

Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling

no code implementations NIPS Workshop CDNNRIA 2018 Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, Denny Zhou

Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.

Image Classification Language Modelling
