no code implementations • 28 May 2024 • Madhura Sondharangalla, Dan Moldovan, Raja Ayyanar
A new control scheme, namely volt-PF control, is proposed here where the Q support is inherently a function of both the voltage and $P$ from DERs, which alleviates the above concerns while limiting the PF variation within a narrow range of 0.9 to 1.0.
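The volt-PF idea — reactive support Q that depends on both voltage and active power P, with the power factor bounded in [0.9, 1.0] — can be sketched as a clamped droop. The droop gain and the exact shape of the control law below are illustrative assumptions, not the paper's controller.

```python
import math

def volt_pf_q(v_pu, p_kw, v_ref=1.0, droop_gain=2.0, pf_min=0.9):
    """Hypothetical volt-PF droop: Q support scales with both the
    voltage deviation and the active power P, and is clamped so the
    power factor stays within [pf_min, 1.0]."""
    # Largest |Q| allowed at this P for the minimum power factor:
    # PF = P / sqrt(P^2 + Q^2) >= pf_min  <=>  |Q| <= P * tan(acos(pf_min)).
    q_max = p_kw * math.tan(math.acos(pf_min))
    # Voltage-droop component, scaled by P (illustrative control law).
    q = droop_gain * (v_ref - v_pu) * p_kw
    return max(-q_max, min(q_max, q))
```

At nominal voltage no Q is injected; as voltage sags or swells, Q grows with both the deviation and P until the PF bound clamps it.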
1 code implementation • 3 Jan 2023 • Evan Tey, Dan Moldovan, Michelle Kunimoto, Chelsea X. Huang, Avi Shporer, Tansu Daylan, Daniel Muthukrishna, Andrew Vanderburg, Anne Dattilo, George R. Ricker, S. Seager
Since 90% of our training data is from the Primary Mission, we also test our ability to generalize on held-out 1st Extended Mission data.
1 code implementation • 13 May 2021 • Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, Alexander D'Amour
Shortcut learning, in which models make use of easy-to-represent but unstable associations, is a major failure mode for robust machine learning.
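Shortcut learning can be simulated in a few lines. In this made-up setup, a "shortcut" feature is highly correlated with the label in training but uninformative after a distribution shift, while a "stable" feature keeps its (lower) accuracy throughout; the feature names, correlation levels, and data generator are all invented for illustration.

```python
import random
random.seed(0)

def make_data(n, shortcut_corr):
    """Binary task: the stable feature matches the label with prob 0.75;
    the shortcut feature matches it with prob `shortcut_corr` (toy setup)."""
    rows = []
    for _ in range(n):
        y = random.randint(0, 1)
        stable = y if random.random() < 0.75 else 1 - y
        shortcut = y if random.random() < shortcut_corr else 1 - y
        rows.append((stable, shortcut, y))
    return rows

def accuracy(rows, feature_index):
    """Accuracy of predicting the label directly from one feature."""
    return sum(row[feature_index] == row[2] for row in rows) / len(rows)

train = make_data(10_000, shortcut_corr=0.95)  # shortcut looks great
shifted = make_data(10_000, shortcut_corr=0.50)  # shortcut breaks
```

A model that latches onto the shortcut scores ~0.95 on training data but drops to chance after the shift, whereas the stable feature stays near 0.75 in both regimes.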
no code implementations • 6 Nov 2020 • Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains.
1 code implementation • CVPR 2021 • Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Matthias Minderer, Alexander D'Amour, Dan Moldovan, Sylvain Gelly, Neil Houlsby, Xiaohua Zhai, Mario Lucic
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
1 code implementation • LREC 2020 • Takshak Desai, Parag Pravin Dakle, Dan Moldovan
This paper describes an accurate framework for carrying out multi-lingual discourse segmentation with BERT (Devlin et al., 2019).
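Discourse segmentation with BERT is commonly framed as token-level boundary classification. The post-processing sketch below turns per-token boundary probabilities into segments; the function name and threshold are assumptions for illustration, and the paper's actual pipeline is not reproduced here.

```python
def segment(tokens, boundary_probs, threshold=0.5):
    """Turn per-token boundary probabilities (e.g. from a BERT token
    classifier) into discourse segments — a hypothetical post-processing
    step, not the paper's implementation."""
    segments, current = [], []
    for tok, p in zip(tokens, boundary_probs):
        # Start a new segment whenever the boundary score crosses the threshold.
        if current and p >= threshold:
            segments.append(current)
            current = []
        current.append(tok)
    if current:
        segments.append(current)
    return segments
```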
no code implementations • LREC 2020 • Parag Pravin Dakle, Takshak Desai, Dan Moldovan
This paper investigates the problem of entity resolution for email conversations and presents a seed annotated corpus of email threads labeled with entity coreference chains.
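Coreference chains like those in the annotated corpus can be assembled from pairwise links with union-find; the sketch below is an illustrative clustering step, not the paper's resolution model.

```python
def coref_chains(mentions, links):
    """Merge pairwise coreference links into chains via union-find
    (illustrative sketch; mention and link formats are assumptions)."""
    parent = {m: m for m in mentions}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)

    chains = {}
    for m in mentions:
        chains.setdefault(find(m), []).append(m)
    return sorted(chains.values())
```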
no code implementations • LREC 2020 • Linrui Zhang, Hsin-Lun Huang, Yang Yu, Dan Moldovan
As opposed to the traditional machine learning models which require considerable effort in designing task specific features, our model can be well adapted to the proposed tasks with a very limited amount of fine-tuning, which significantly reduces the manual effort in feature engineering.
no code implementations • 16 Oct 2018 • Dan Moldovan, James M Decker, Fei Wang, Andrew A Johnson, Brian K. Lee, Zachary Nado, D. Sculley, Tiark Rompf, Alexander B. Wiltschko
In machine learning, imperative style libraries like Autograd and PyTorch are easy to write, but suffer from high interpretive overhead and are not easily deployable in production or mobile settings.
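Source-to-source systems in this space (e.g. AutoGraph) must rewrite imperative Python control flow into graph constructs. The toy below merely counts the control-flow nodes such a compiler would need to handle — it illustrates walking the AST of imperative code, not the actual transformation.

```python
import ast
import textwrap

def count_control_flow(src):
    """Toy static analysis: count the if/while/for constructs in Python
    source that a graph compiler would have to rewrite (illustration only)."""
    tree = ast.parse(textwrap.dedent(src))
    return sum(isinstance(node, (ast.If, ast.While, ast.For))
               for node in ast.walk(tree))
```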
no code implementations • NeurIPS 2018 • Bart van Merriënboer, Dan Moldovan, Alexander B. Wiltschko
The need to efficiently calculate first- and higher-order derivatives of increasingly complex models expressed in Python has stressed or exceeded the capabilities of available tools.
no code implementations • COLING 2018 • Linrui Zhang, Dan Moldovan
This paper presents a neural net approach to determine Semantic Textual Similarity (STS) using attention-based bidirectional Long Short-Term Memory Networks (Bi-LSTM).
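The attention component of such a model reduces per-token hidden states to a single sentence vector via softmax-weighted pooling, after which sentence similarity can be scored with, e.g., cosine similarity. This pure-Python sketch shows only that aggregation step; the Bi-LSTM and the attention-score computation are omitted, and the function names are assumptions.

```python
import math

def attention_pool(states, scores):
    """Softmax-attention pooling over per-token hidden states — the
    aggregation step an attention-based Bi-LSTM might use (illustrative)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(states[0])
    return [sum(w * h[d] for w, h in zip(weights, states))
            for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two pooled sentence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)
```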
no code implementations • WS 2018 • Takshak Desai, Parag Dakle, Dan Moldovan
In this paper, we propose a technique for generating complex reading comprehension questions from discourse; such questions are more useful than factual ones derived from individual assertions.
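A toy template view of discourse-driven question generation: map a discourse relation and its second argument onto a question frame. The relation names and templates below are invented for illustration and are far simpler than the paper's technique.

```python
def make_question(relation, arg2):
    """Map a discourse relation to a question template (toy sketch;
    relation inventory and wording are hypothetical)."""
    templates = {
        "cause": "Why is it the case that {a2}?",
        "condition": "Under what condition does it hold that {a2}?",
        "temporal": "When does it happen that {a2}?",
    }
    return templates[relation].format(a2=arg2)
```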
no code implementations • 7 Nov 2017 • Bart van Merriënboer, Alexander B. Wiltschko, Dan Moldovan
Automatic differentiation (AD) is an essential primitive for machine learning programming systems.
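The AD primitive itself is small: forward mode can be implemented with dual numbers, where each value carries its derivative along for the ride. This is a minimal illustration of the primitive, not the Tangent implementation.

```python
class Dual:
    """Minimal forward-mode AD via dual numbers: `val` is the value,
    `eps` the derivative carried alongside it (illustrative sketch)."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'.
        return Dual(self.val * other.val,
                    self.eps * other.val + self.val * other.eps)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).eps
```

For example, d/dx (x² + 3x) at x = 2 evaluates to 7.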
no code implementations • LREC 2014 • Tatiana Erekhinskaya, Meghana Satpute, Dan Moldovan
This paper presents a method to create WordNet-like lexical resources for different languages.
no code implementations • LREC 2012 • Marta Tatu, Dan Moldovan
Explicitly conveyed knowledge represents only a portion of the information communicated by a text snippet.
no code implementations • LREC 2012 • Dan Moldovan, Eduardo Blanco
Polaris is a supervised semantic parser that, given text, extracts semantic relations.
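A semantic relation extractor maps text spans to typed triples. The toy below does this with hand-written patterns; it is a stand-in sketch for what a parser like Polaris outputs, not the supervised model itself, and the relation names are invented.

```python
import re

def extract_relations(text):
    """Tiny pattern-based relation extractor — a toy stand-in for a
    supervised semantic parser (patterns and relation names are
    hypothetical illustrations)."""
    patterns = [
        (r"(\w+) is a (\w+)", "ISA"),
        (r"(\w+) owns (\w+)", "POSSESSION"),
    ]
    triples = []
    for pat, rel in patterns:
        for a, b in re.findall(pat, text):
            triples.append((a, rel, b))
    return triples
```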