Search Results for author: Betty Li Hou

Found 3 papers, 1 paper with code

Foundational Moral Values for AI Alignment

no code implementations • 28 Nov 2023 • Betty Li Hou, Brian Patrick Green

Solving the AI alignment problem requires having clear, defensible values towards which AI systems can align.

Philosophy

GPQA: A Graduate-Level Google-Proof Q&A Benchmark

1 code implementation • 20 Nov 2023 • David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, Samuel R. Bowman

We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.

Multiple-choice
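
A minimal loading sketch for the GPQA dataset is given below, using the Hugging Face `datasets` library. The Hub repository name, config name, and field names are assumptions, not confirmed by this listing; the dataset is gated, so access may need to be requested and an authentication token supplied.

```python
# Minimal sketch for loading GPQA via the Hugging Face `datasets` library.
# The repository name ("Idavidrein/gpqa"), config name ("gpqa_main"), and
# record field names below are assumptions; the dataset is gated, so you may
# need to request access and be logged in via `huggingface-cli login`.
from datasets import load_dataset

# Load the assumed main config of GPQA (448 expert-written multiple-choice
# questions in biology, physics, and chemistry).
gpqa = load_dataset("Idavidrein/gpqa", "gpqa_main")

# Inspect one record; field names are assumed from the dataset's CSV layout.
example = gpqa["train"][0]
print(example["Question"])
print(example["Correct Answer"])
```
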

A Multi-Level Framework for the AI Alignment Problem

no code implementations • 10 Jan 2023 • Betty Li Hou, Brian Patrick Green

AI alignment considers how we can encode AI systems in a way that is compatible with human values.
