1 code implementation • 13 Feb 2024 • Kenneth Li, Tianle Liu, Naomi Bashkansky, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg
System-prompting is a standard tool for customizing language-model chatbots, enabling them to follow a specific instruction.
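As an illustration (a minimal sketch using the common chat-message convention, not code from the paper; the prompt text is a made-up placeholder), a system prompt is simply a privileged first message that sets the assistant's behavior:

# Minimal sketch of system-prompting in the widely used "messages" chat format.
# The instruction and question below are illustrative placeholders.
messages = [
    {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
    {"role": "user", "content": "What does a latent diffusion model do?"},
]
# A chat model conditioned on these messages is expected to keep following the
# system instruction while answering subsequent user turns.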
1 code implementation • 9 Jun 2023 • Yida Chen, Fernanda Viégas, Martin Wattenberg
Latent diffusion models (LDMs) exhibit an impressive ability to produce realistic images, yet the inner workings of these models remain mysterious.
1 code implementation • NeurIPS 2023 • Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg
This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark.
no code implementations • 4 May 2023 • Catherine Yeh, Yida Chen, Aoyu Wu, Cynthia Chen, Fernanda Viégas, Martin Wattenberg
Transformer models are revolutionizing machine learning, but their inner workings remain mysterious.
no code implementations • 4 May 2023 • Fernanda Viégas, Martin Wattenberg
We conjecture that, for many systems, the two most important models will be of the user and of the system itself.
1 code implementation • 24 Oct 2022 • Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear.
no code implementations • 14 Apr 2021 • Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg
We describe an "interpretability illusion" that arises when analyzing the BERT model.
2 code implementations • NeurIPS 2019 • Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Transformer architectures show significant promise for natural language processing.
2 code implementations • ICCV 2019 • Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry
Saliency methods can aid understanding of deep neural networks.
1 code implementation • 5 Sep 2018 • Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas, Martin Wattenberg
Recent success in deep learning has generated immense interest among practitioners and students, inspiring many to learn about this new technology.
2 code implementations • ICLR 2018 • Been Kim, Justin Gilmer, Martin Wattenberg, Fernanda Viégas
In particular, this framework enables non-machine-learning experts to express concepts of interest and test hypotheses using examples (e.g., a set of pictures that illustrate the concept).
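A minimal sketch of the underlying idea, under the assumption of a generic setup with scikit-learn (file names and the probe choice are illustrative, not the paper's code): collect a layer's activations for examples of the concept and for random examples, fit a linear classifier between the two sets, and use its normal vector as the concept direction.

import numpy as np
from sklearn.linear_model import LogisticRegression

# concept_acts: layer activations for images illustrating the concept (e.g., "striped")
# random_acts:  layer activations for random images; both have shape (n_examples, n_features)
concept_acts = np.load("concept_activations.npy")   # hypothetical file names
random_acts = np.load("random_activations.npy")

X = np.concatenate([concept_acts, random_acts])
y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])

clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])  # unit-norm concept direction

# The concept's influence on a class prediction can then be tested by taking
# directional derivatives of the class logit along this direction at that layer.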
20 code implementations • 12 Jun 2017 • Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
Explaining the output of a deep network remains a challenge.
4 code implementations • TACL 2017 • Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean
In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation are possible for neural translation.
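The mechanism is simple to illustrate: a single shared model is trained on many language pairs, with an artificial token prepended to each source sentence to indicate the desired target language. A minimal sketch of this input formatting (the token string and example sentence are illustrative):

# Multilingual NMT input formatting: prepend a target-language token to the source.
def make_input(source_sentence: str, target_lang: str) -> str:
    return f"<2{target_lang}> {source_sentence}"

# A model trained on, say, Portuguese->English and English->Spanish pairs can be
# asked for Portuguese->Spanish at test time, a direction never seen in training:
print(make_input("Olá, como vai você?", "es"))   # "<2es> Olá, como vai você?"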