no code implementations • 5 Mar 2024 • Ashwin Nayak, Pulkit Sinha
In the classical case, there are examples of concept classes with VC dimension $d$ that have sample complexity $\Omega\!\left(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\right)$ for proper learning with error $\epsilon$, while the complexity for improper learning is $O\!\left(\frac{d}{\epsilon}\right)$.
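A compact restatement of this classical separation, as a sketch in standard PAC notation with the confidence parameter $\delta$ suppressed; the point is that properness can cost a multiplicative $\log\frac{1}{\epsilon}$ factor:

```latex
% Proper vs. improper classical PAC learning of a class of VC dimension d,
% to error eps (confidence parameter delta suppressed for readability):
m_{\mathrm{proper}}(\epsilon) = \Omega\!\left(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\right),
\qquad
m_{\mathrm{improper}}(\epsilon) = O\!\left(\frac{d}{\epsilon}\right).
% i.e., for some classes, properness costs a multiplicative log(1/eps) factor.
```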
no code implementations • 28 Nov 2023 • Maia Kotelanski, Robert Gallo, Ashwin Nayak, Thomas Savage
The methods evaluated were Intrinsic Confidence, Self-Consistency (SC) Agreement Frequency, and Chain-of-Thought (CoT) Response Length.
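As a rough illustration of one of these signals, the sketch below computes an SC agreement frequency as the fraction of sampled answers matching the modal answer; this helper is hypothetical, and the paper's exact scoring may differ:

```python
from collections import Counter

def sc_agreement_frequency(sampled_answers):
    # Fraction of sampled answers that agree with the modal (majority) answer.
    # Hypothetical helper; the paper's exact scoring rule may differ.
    _, modal_count = Counter(sampled_answers).most_common(1)[0]
    return modal_count / len(sampled_answers)

# Ten hypothetical answers sampled for the same clinical question:
samples = ["A", "A", "B", "A", "A", "C", "A", "B", "A", "A"]
print(sc_agreement_frequency(samples))  # 0.7 -- higher suggests more confidence
```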
no code implementations • 27 Aug 2023 • Scott L. Fleming, Alejandro Lozano, William J. Haberkorn, Jenelle A. Jindal, Eduardo P. Reis, Rahul Thapa, Louis Blankemeier, Julian Z. Genkins, Ethan Steinberg, Ashwin Nayak, Birju S. Patel, Chia-Chun Chiang, Alison Callahan, Zepeng Huo, Sergios Gatidis, Scott J. Adams, Oluseyi Fayanju, Shreya J. Shah, Thomas Savage, Ethan Goh, Akshay S. Chaudhari, Nima Aghaeepour, Christopher Sharp, Michael A. Pfeffer, Percy Liang, Jonathan H. Chen, Keith E. Morse, Emma P. Brunskill, Jason A. Fries, Nigam H. Shah
The ability of large language models (LLMs) to follow natural language instructions with human-level fluency suggests many opportunities in healthcare to reduce administrative burden and improve quality of care.
no code implementations • 13 Aug 2023 • Thomas Savage, Ashwin Nayak, Robert Gallo, Ekanath Rangan, Jonathan H Chen
One of the major barriers to using large language models (LLMs) in medicine is the perception that they use uninterpretable methods to make clinical decisions that are inherently different from the cognitive processes of clinicians.
no code implementations • 5 Jan 2023 • Shima Bab Hadiashar, Ashwin Nayak, Pulkit Sinha
In this paper, we derive optimal lower bounds for quantum sample complexity in both the PAC and agnostic models via an information-theoretic approach.
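For context, a hedged restatement of the optimal bounds in question, in standard notation (VC dimension $d$, error $\epsilon$, confidence $1-\delta$); the formulation below is the usual one in which the quantum bounds match the known classical optima up to constant factors, and is assumed here rather than quoted from the paper:

```latex
% Optimal sample complexity for a concept class of VC dimension d
% (standard formulation, assumed here; quantum matches classical up to constants):
m_{\mathrm{PAC}}(\epsilon,\delta)
  = \Theta\!\left(\frac{d + \log\frac{1}{\delta}}{\epsilon}\right),
\qquad
m_{\mathrm{agnostic}}(\epsilon,\delta)
  = \Theta\!\left(\frac{d + \log\frac{1}{\delta}}{\epsilon^{2}}\right).
```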
no code implementations • 29 Jul 2022 • Angus Lowe, Ashwin Nayak
This leads to stronger lower bounds when the learner uses measurements with a constant number of outcomes.
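As a rough numerical illustration of the single-copy, constant-outcome regime, the sketch below estimates a qubit's Bloch vector from two-outcome Pauli measurements, one copy per measurement. The $1/\sqrt{N}$ error decay it exhibits is illustrative only, not the paper's bound; the state, axes, and sample sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-copy, two-outcome measurements of a qubit: each copy is measured in
# one Pauli basis, yielding one bit; Bloch components are estimated from
# outcome frequencies.
theta, phi = 1.0, 0.5
bloch_true = np.array([np.sin(theta) * np.cos(phi),
                       np.sin(theta) * np.sin(phi),
                       np.cos(theta)])

for copies_per_axis in [10**2, 10**3, 10**4, 10**5]:
    est = np.empty(3)
    for k, r in enumerate(bloch_true):
        p_plus = (1 + r) / 2                      # Born rule for outcome +1
        plus = rng.binomial(copies_per_axis, p_plus)
        est[k] = 2 * plus / copies_per_axis - 1
    err = np.linalg.norm(est - bloch_true)
    print(f"{3 * copies_per_axis:>7d} copies -> Bloch-vector error {err:.4f}")
```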
no code implementations • NeurIPS 2018 • Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak
Even in the "non-realizable" setting, where there could be arbitrary noise in the measurement outcomes, we show how to output hypothesis states that do significantly worse than the best possible states on at most $O\!\left(\sqrt{Tn}\right)$ of the first $T$ measurements.
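A minimal numerical sketch of an online learner in this spirit, using a matrix-multiplicative-weights (RFTL-style) update on density matrices; the qubit count, loss, adversary, and learning rate below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

n_qubits = 2
dim = 2 ** n_qubits
T = 200
eta = np.sqrt(np.log(dim) / T)  # standard MMW learning-rate choice

def random_two_outcome_measurement(d):
    """Random POVM effect E with 0 <= E <= I (eigenvalues in [0, 1])."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2                  # random Hermitian matrix
    w, V = np.linalg.eigh(H)
    w = (w - w.min()) / (w.max() - w.min())   # rescale eigenvalues to [0, 1]
    return (V * w) @ V.conj().T

# Fixed "true" state the adversary's feedback is consistent with.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())

S = np.zeros((dim, dim), dtype=complex)      # running sum of loss gradients
total_loss = 0.0

for t in range(T):
    # Current hypothesis: matrix-exponentiated-gradient iterate.
    W = expm(-eta * S)
    omega = W / np.trace(W).real

    E = random_two_outcome_measurement(dim)
    pred = np.trace(E @ omega).real
    target = np.trace(E @ rho_true).real

    # L1 loss |Tr(E w) - target|; its gradient in w is sign(pred - target) * E.
    total_loss += abs(pred - target)
    S += np.sign(pred - target) * E

print(f"average per-round loss after {T} rounds: {total_loss / T:.4f}")
```

The exponentiated-gradient update keeps the hypothesis a valid density matrix (positive semidefinite, unit trace) at every round, which is why this family of updates is a natural fit for learning quantum states online.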