no code implementations • ACL 2022 • Iz Beltagy, Arman Cohan, Robert Logan IV, Sewon Min, Sameer Singh
The ability to efficiently learn from little-to-no data is critical to applying NLP to tasks where data collection is costly or otherwise difficult.
no code implementations • EMNLP (ACL) 2021 • Kai-Wei Chang, He He, Robin Jia, Sameer Singh
In particular, we will review recent studies on analyzing the weakness of NLP systems when facing adversarial inputs and data with a distribution shift.
no code implementations • EMNLP (BlackboxNLP) 2021 • Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd, Sameer Singh
Adversarial attacks curated against NLP models are increasingly becoming practical threats.
no code implementations • EMNLP (NLP-COVID19) 2020 • Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh
The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter.
1 code implementation • WOSP 2020 • Yoshitomo Matsubara, Sameer Singh
Our models are accurate; we identify at least one of authors, affiliations, and nationalities of held-out papers with 40.3%, 47.9%, and 86.0% accuracy respectively, from the top-10 guesses of our models.
no code implementations • EMNLP (WNUT) 2020 • Bahareh Harandizadeh, Sameer Singh
Further, there is a lack of a large, linked corpus of tweets to aid researchers, along with a lack of a gold dataset for evaluating the accuracy of entity linking.
no code implementations • 16 Mar 2023 • Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Tulio Ribeiro
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
no code implementations • 28 Jan 2023 • Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hannaneh Hajishirzi, Sameer Singh, Roy Fox
Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world, which makes learning complex tasks with sparse rewards difficult.
no code implementations • 20 Dec 2022 • Dheeru Dua, Emma Strubell, Sameer Singh, Pat Verga
Recent advances in open-domain question answering (ODQA) have demonstrated impressive accuracy on standard Wikipedia style benchmarks.
no code implementations • 8 Dec 2022 • Dheeru Dua, Shivanshu Gupta, Sameer Singh, Matt Gardner
The intermediate supervision is typically manually written, which can be expensive to collect.
1 code implementation • 21 Oct 2022 • Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
In addition to the primary tasks of detecting and labeling attacks, TCAB can also be used for attack localization, attack target labeling, and attack characterization.
1 code implementation • 19 Oct 2022 • Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy
We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative.
no code implementations • 9 Oct 2022 • Preethi Seshadri, Pouya Pezeshkpour, Sameer Singh
Recently, there has been an increase in efforts to understand how large language models (LLMs) propagate and amplify social biases.
no code implementations • 12 Sep 2022 • Zhouhang Xie, Sameer Singh, Julian McAuley, Bodhisattwa Prasad Majumder
Recent models can generate fluent and grammatical synthetic reviews while accurately predicting user ratings.
1 code implementation • 8 Jul 2022 • Dylan Slack, Satyapriya Krishna, Himabindu Lakkaraju, Sameer Singh
In real-world evaluations with humans, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems for explainability in a disease prediction task, and 85% of ML professionals agreed TalkToModel was easier to use for computing explanations.
1 code implementation • 25 May 2022 • Kolby Nottingham, Alekhya Pyla, Sameer Singh, Roy Fox
We show that our method correctly learns to execute queries to maximize reward in a reinforcement learning setting.
1 code implementation • ACL 2022 • Sanjay Subramanian, William Merrill, Trevor Darrell, Matt Gardner, Sameer Singh, Anna Rohrbach
Training a referring expression comprehension (ReC) model for a new visual domain requires collecting referring expressions, and potentially corresponding bounding boxes, for images in the domain.
1 code implementation • 16 Mar 2022 • Shivanshu Gupta, Sameer Singh, Matt Gardner
A growing body of research has demonstrated the inability of NLP models to generalize compositionally and has tried to alleviate it through specialized architectures, training schemes, and data augmentation, among other approaches.
no code implementations • 15 Feb 2022 • Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, Sameer Singh
Pretrained Language Models (LMs) have demonstrated the ability to perform numerical reasoning by extrapolating from a few examples in few-shot settings.
1 code implementation • 3 Feb 2022 • Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, Sameer Singh
Overall, we hope our work serves as a starting place for researchers and engineers to design interactive explainability systems.
no code implementations • 21 Jan 2022 • Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd
The landscape of adversarial attacks against text classifiers continues to grow, with new attacks developed every year and many of them available in standard toolkits, such as TextAttack and OpenAttack.
2 code implementations • 7 Jan 2022 • Yoshitomo Matsubara, Davide Callegaro, Sameer Singh, Marco Levorato, Francesco Restuccia
We show that BottleFit decreases power consumption and latency respectively by up to 49% and 89% with respect to (w.r.t.)
no code implementations • NAACL 2022 • Robert L. Logan IV, Alexandre Passos, Sameer Singh, Ming-Wei Chang
Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent.
1 code implementation • NAACL 2022 • Daniel Khashabi, Shane Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, Yejin Choi
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning.
1 code implementation • EMNLP 2021 • Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
To understand how models use these sources together, we formalize the problem of knowledge conflicts, where the contextual information contradicts the learned information.
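A minimal sketch of the substitution operation at the core of this framework, with made-up strings: the answer entity in the retrieved context is replaced so that the context contradicts what the model may have memorized.

```python
# Minimal sketch of an entity substitution that creates a knowledge conflict:
# the answer entity in the retrieved context is swapped so the context now
# contradicts what the model may have memorized. Strings are illustrative.
context = "The Eiffel Tower is located in Paris."
question = "Where is the Eiffel Tower located?"
original_answer, substitute = "Paris", "Rome"

conflicting_context = context.replace(original_answer, substitute)
print(conflicting_context)   # a context-faithful reader should now answer "Rome"
```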
no code implementations • 5 Sep 2021 • Kolby Nottingham, Litian Liang, Daeyun Shin, Charless C. Fowlkes, Roy Fox, Sameer Singh
Natural language instruction following tasks serve as a valuable test-bed for grounded language and robotics research.
1 code implementation • ACL 2021 • Robert L Logan IV, Andrew McCallum, Sameer Singh, Dan Bikel
We investigate: how to best encode mentions, which clustering algorithms are most effective for grouping mentions, how models transfer to different domains, and how bounding the number of mentions tracked during inference impacts performance.
1 code implementation • ACL 2021 • Nitish Gupta, Sameer Singh, Matt Gardner
The predominant challenge in weakly supervised semantic parsing is that of spurious programs that evaluate to correct answers for the wrong reasons.
no code implementations • Findings (ACL) 2022 • Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, Byron C. Wallace
In this paper, we evaluate the use of different attribution methods for aiding the identification of training data artifacts.
2 code implementations • Findings (ACL) 2022 • Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel
Prompting language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning.
no code implementations • 23 Jun 2021 • Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions.
1 code implementation • ACL 2021 • Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, Sameer Singh
These experiments on AmbER sets show their utility as an evaluation tool and highlight the weaknesses of popular retrieval systems.
no code implementations • NeurIPS 2021 • Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh
In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated.
no code implementations • EMNLP 2021 • Dheeru Dua, Cicero Nogueira dos santos, Patrick Ng, Ben Athiwaratkun, Bing Xiang, Matt Gardner, Sameer Singh
Compositional reasoning tasks like multi-hop question answering require making latent decisions to get the final answer, given a question.
no code implementations • EMNLP 2021 • Dheeru Dua, Pradeep Dasigi, Sameer Singh, Matt Gardner
When training most modern reading comprehension models, all the questions associated with a context are treated as being independent from each other.
no code implementations • EMNLP 2021 • Matt Gardner, William Merrill, Jesse Dodge, Matthew E. Peters, Alexis Ross, Sameer Singh, Noah A. Smith
In this work we argue that for complex language understanding tasks, all simple feature correlations are spurious, and we formalize this notion into a class of problems which we call competency problems.
1 code implementation • NAACL 2021 • Pouya Pezeshkpour, Sarthak Jain, Byron C. Wallace, Sameer Singh
Instance attribution methods constitute one means of accomplishing these goals by retrieving training instances that (may have) led to a particular prediction.
no code implementations • EMNLP 2021 • Nitish Gupta, Sameer Singh, Matt Gardner, Dan Roth
Such an objective does not require external supervision for the values of the latent output, or even the end task, yet provides an additional training signal to that provided by individual training examples themselves.
3 code implementations • 19 Feb 2021 • Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh
We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art.
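As a rough sketch (with hypothetical label probabilities), the contextual calibration idea can be illustrated in a few lines: estimate the model's bias on a content-free input such as "N/A" and undo it with an affine correction.

```python
import numpy as np

def calibrate(p_content_free, p_test):
    """Contextual calibration sketch: rescale label probabilities so that a
    content-free input (e.g., "N/A") would receive a uniform prediction."""
    W = np.diag(1.0 / p_content_free)       # per-label correction weights
    corrected = W @ p_test                  # affine correction (bias term omitted)
    return corrected / corrected.sum()      # renormalize to a distribution

# Hypothetical few-shot label probabilities from an LM
p_cf = np.array([0.70, 0.30])   # P(label | content-free "N/A" input): biased toward label 0
p_x  = np.array([0.60, 0.40])   # P(label | actual test input)

print(calibrate(p_cf, p_x))     # after correction, label 1 is preferred
```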
2 code implementations • 20 Nov 2020 • Yoshitomo Matsubara, Davide Callegaro, Sabur Baidya, Marco Levorato, Sameer Singh
In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.
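A minimal sketch of the in-network compression idea, with illustrative layer sizes rather than the architecture used in the paper: a narrow bottleneck placed in the early layers produces a small tensor that the mobile device would transmit to the edge server.

```python
# Sketch of in-network compression for split computing: the early layers end in
# a narrow bottleneck whose small output is the tensor actually transmitted to
# the edge server, where the remaining layers run. Layer sizes are illustrative.
import torch
from torch import nn

head = nn.Sequential(            # runs on the mobile device
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, stride=2, padding=1),          # 2-channel bottleneck
)
tail = nn.Sequential(            # runs on the edge server
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

x = torch.randn(1, 3, 64, 64)
compressed = head(x)                                   # tensor sent over the network
print("transmitted floats:", compressed.numel(), "vs input:", x.numel())
print("logits shape:", tail(compressed).shape)
```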
no code implementations • EMNLP 2020 • Eric Wallace, Matt Gardner, Sameer Singh
Although neural NLP models are highly expressive and empirically successful, they also systematically fail in counterintuitive ways and are opaque in their decision-making process.
3 code implementations • EMNLP 2020 • Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining.
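AutoPrompt finds such prompts automatically with a gradient-guided search; the sketch below only shows the underlying cloze-style query of a masked LM, using a hand-written prompt and the Hugging Face transformers fill-mask pipeline (it downloads bert-base-uncased on first run).

```python
# Illustrative cloze-style query of a masked LM; AutoPrompt automates the
# search for the prompt tokens, whereas this prompt is written by hand.
# Requires the `transformers` package and downloads bert-base-uncased.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for candidate in unmasker("The Eiffel Tower is located in [MASK]."):
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```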
no code implementations • NAACL 2021 • Eric Wallace, Tony Z. Zhao, Shi Feng, Sameer Singh
In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh
Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, their faithfulness.
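A minimal sketch of one such method, gradient-times-input saliency, on a toy classifier with random weights (the scores are only illustrative of the mechanics).

```python
# Sketch of gradient-based saliency for a toy text classifier: each token's
# importance is the dot product of its embedding with the gradient of the
# predicted score ("gradient x input"). Weights are random, so the numbers
# only illustrate the mechanics.
import torch

torch.manual_seed(0)
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3}
embed = torch.nn.Embedding(len(vocab), 8)
classifier = torch.nn.Linear(8, 2)

words = ["the", "movie", "was", "great"]
tokens = torch.tensor([vocab[w] for w in words])
vectors = embed(tokens)                       # (seq_len, 8)
vectors.retain_grad()                         # keep gradients for this non-leaf tensor
logits = classifier(vectors.mean(dim=0))      # mean-pool then classify
logits[1].backward()                          # gradient of the "positive" score

saliency = (vectors.grad * vectors).sum(dim=-1)   # gradient x input per token
for word, s in zip(words, saliency.tolist()):
    print(f"{word:>6}: {s:+.3f}")
```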
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, Ben Bogin, Madeleine van Zuylen, Sravanthi Parasa, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi
To address challenges in figure retrieval and figure-to-text alignment, we introduce MedICaT, a dataset of medical images in context.
1 code implementation • EMNLP 2020 • Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers.
no code implementations • 1 Oct 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
1 code implementation • NeurIPS 2021 • Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty.
no code implementations • ACL 2020 • Dheeru Dua, Sameer Singh, Matt Gardner
Complex compositional reading comprehension datasets require performing latent sequential decisions that are learned via supervision from the final answer.
no code implementations • ACL 2020 • Ananth Gottumukkala, Dheeru Dua, Sameer Singh, Matt Gardner
Building general reading comprehension systems, capable of solving multiple datasets at the same time, is a recent aspirational goal in the research community.
no code implementations • ACL 2020 • Robert L. Logan IV, Matt Gardner, Sameer Singh
In addition, we elucidate subtle differences in how importance sampling is applied in these works that can have substantial effects on the final estimates, as well as provide theoretical results which reinforce the validity of this technique.
no code implementations • 4 Jun 2020 • Zhengli Zhao, Zizhao Zhang, Ting Chen, Sameer Singh, Han Zhang
We provide new state-of-the-art results for conditional generation on CIFAR-10 with both consistency loss and contrastive loss as additional regularizations.
4 code implementations • ACL 2020 • Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh
Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors.
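As an illustration of the behavioral-testing idea (not the CheckList library's API), the sketch below runs an invariance test: predictions should not change when only a person's name is swapped, and a deliberately name-biased toy model fails it.

```python
# Minimal sketch of a CheckList-style invariance test: a sentiment prediction
# should not change when only a person's name is swapped. The toy model
# (deliberately name-biased) and the template are illustrative only.

def toy_sentiment(text: str) -> str:
    if "Aisha" in text:                    # simulated name bias the test should catch
        return "negative"
    return "positive" if "great" in text.lower() else "negative"

template = "{name} thought the movie was great."
names = ["Maria", "John", "Wei", "Aisha"]

predictions = {n: toy_sentiment(template.format(name=n)) for n in names}
failures = [n for n, p in predictions.items() if p != predictions[names[0]]]
print("invariance test:", "passed" if not failures else f"failed on {failures}")
```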
1 code implementation • ACL 2020 • Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner
Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture.
1 code implementation • ICLR 2020 • Piyush Gupta, Nikaash Puri, Sukriti Verma, Dhruv Kayastha, Shripad Deshmukh, Balaji Krishnamurthy, Sameer Singh
We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.
no code implementations • AKBC 2020 • Pouya Pezeshkpour, Yifan Tian, Sameer Singh
To address these issues, we gather a semi-complete KG, referred to as YAGO3-TC, using a random subgraph from the test and validation data of YAGO3-10, which enables us to accurately measure triple classification accuracy on this data.
no code implementations • 11 Feb 2020 • Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang
Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator.
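A minimal sketch of such a consistency cost, with toy stand-ins for the discriminator and the augmentation: the discriminator is penalized for changing its output when a real image is augmented.

```python
# Sketch of a consistency regularizer on the discriminator: penalize the
# discriminator for changing its output when the real image is augmented.
# The discriminator and augmentation here are toy stand-ins.
import torch
from torch import nn

disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))

def augment(x):                       # illustrative augmentation: horizontal flip
    return torch.flip(x, dims=[-1])

real = torch.randn(8, 3, 32, 32)
d_real, d_aug = disc(real), disc(augment(real))
consistency_loss = ((d_real - d_aug) ** 2).mean()   # added to the usual GAN loss
print(float(consistency_loss))
```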
no code implementations • 29 Dec 2019 • Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner
Many diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.
2 code implementations • 23 Dec 2019 • Nikaash Puri, Sukriti Verma, Piyush Gupta, Dhruv Kayastha, Shripad Deshmukh, Balaji Krishnamurthy, Sameer Singh
We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches.
2 code implementations • ICLR 2020 • Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, Matt Gardner
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations.
2 code implementations • 6 Nov 2019 • Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju
Our approach can be used to scaffold any biased classifier in such a way that its predictions on the input data distribution still remain biased, but the post hoc explanations of the scaffolded classifier look innocuous.
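A rough sketch of the scaffolding idea on synthetic data: an out-of-distribution detector routes the explainer's perturbation-style samples to an innocuous model, while in-distribution inputs still hit the biased one.

```python
# Sketch of the adversarial scaffolding idea: an out-of-distribution detector
# routes the explainer's perturbed samples to an innocuous rule while real
# inputs still go to the biased rule. Data and models here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
real = rng.normal(0, 1, size=(500, 5))                 # in-distribution points
perturbed = real + rng.normal(0, 3, size=real.shape)   # explainer-style perturbations

# Train a detector to tell real inputs from perturbation-style inputs.
detector = RandomForestClassifier(random_state=0).fit(
    np.vstack([real, perturbed]),
    np.array([0] * len(real) + [1] * len(perturbed)),  # 0 = real, 1 = perturbed
)

def scaffolded_predict(x):
    biased = int(x[0] > 0)            # "biased" rule uses a sensitive feature
    innocuous = int(x[1] > 0)         # innocuous rule uses an unrelated feature
    is_perturbed = detector.predict(x.reshape(1, -1))[0] == 1
    return innocuous if is_perturbed else biased

print(scaffolded_predict(real[0]), scaffolded_predict(perturbed[0]))
```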
3 code implementations • 5 Nov 2019 • Forough Arabshahi, Zhichu Lu, Pranay Mundra, Sameer Singh, Animashree Anandkumar
We study compositional generalization, viz., the problem of zero-shot generalization to novel compositions of concepts in a domain.
no code implementations • WS 2019 • Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner
Our study suggests that while current metrics may be suitable for existing QA datasets, they limit the complexity of QA datasets that can be created.
no code implementations • WS 2019 • Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner
Many diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.
no code implementations • 2 Oct 2019 • Zhengli Zhao, Nicolas Papernot, Sameer Singh, Neoklis Polyzotis, Augustus Odena
Broad adoption of machine learning techniques has increased privacy concerns for models trained on sensitive data such as medical records.
2 code implementations • 1 Oct 2019 • Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato, Sameer Singh
Offloading the execution of complex Deep Neural Networks (DNNs) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce capture-to-output delay.
1 code implementation • IJCNLP 2019 • Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subramanian, Matt Gardner, Sameer Singh
Neural NLP models are increasingly accurate but are imperfect and opaque: they break in counterintuitive ways and leave end users puzzled at their behavior.
1 code implementation • IJCNLP 2019 • Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, Matt Gardner
The ability to understand and work with numbers (numeracy) is critical for many complex reasoning tasks.
1 code implementation • IJCNLP 2019 • Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, Noah A. Smith
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those entities.
Ranked #11 on Entity Linking on AIDA-CoNLL
1 code implementation • IJCNLP 2019 • Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh
We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset.
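The triggers themselves are found with a gradient-guided search over tokens; the sketch below only shows the evaluation step, prepending a fixed trigger (echoing an example reported for sentiment analysis) to every input of a toy classifier and measuring how often the prediction flips.

```python
# Sketch of how a universal trigger is evaluated: the same token sequence is
# prepended to every input and we measure how often the prediction flips.
# The keyword classifier is a stand-in; the paper finds triggers with a
# gradient-guided search over the vocabulary.

def toy_classifier(text: str) -> str:
    return "negative" if "zoning tapping" in text else "positive"

trigger = "zoning tapping fiennes"
inputs = ["I loved this film.", "A wonderful, heartfelt story.", "Great acting."]

flipped = sum(toy_classifier(f"{trigger} {x}") != toy_classifier(x) for x in inputs)
print(f"attack success rate: {flipped / len(inputs):.0%}")
```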
1 code implementation • ACL 2019 • Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh
Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge.
1 code implementation • ACL 2019 • Marco Tulio Ribeiro, Carlos Guestrin, Sameer Singh
Although current evaluation of question-answering systems treats predictions in isolation, we need to consider the relationship between predictions to measure true understanding.
1 code implementation • 17 Jun 2019 • Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh
Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge.
1 code implementation • ACL 2019 • Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer
Multi-hop reading comprehension (RC) questions are challenging because they require reading and reasoning over multiple paragraphs.
no code implementations • NAACL 2019 • Ananya, Nitya Parthasarthi, Sameer Singh
Language is gendered if the context surrounding a mention is suggestive of a particular binary gender for that mention.
no code implementations • NAACL 2019 • William Yang Wang, Sameer Singh, Jiwei Li
Adversarial learning is a game-theoretic learning paradigm, which has achieved huge successes in the field of Computer Vision recently.
1 code implementation • NAACL 2019 • Pouya Pezeshkpour, Yifan Tian, Sameer Singh
We use these techniques to evaluate the robustness of link prediction models (by measuring sensitivity to additional facts), study interpretability through the facts most responsible for predictions (by identifying the most influential neighbors), and detect incorrect facts in the knowledge base.
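A toy leave-one-out sketch of the "most influential neighbor" idea: the stand-in model below builds an entity's representation directly from its neighboring facts and scores triples DistMult-style, so influence can be computed exactly by deleting one fact at a time; real link predictors need the approximations developed in the paper. All entities, relations, and vectors are made up.

```python
# Toy leave-one-out influence for a link prediction target: rank a subject's
# neighboring facts by how much the target triple's score drops when each
# fact is removed. The model and data are deliberately trivial.
import numpy as np

rng = np.random.default_rng(0)
dim = 4
emb = {name: rng.standard_normal(dim)
       for name in ["france", "europe", "capital_of", "located_in"]}

neighbors = [("capital_of", "france"), ("located_in", "europe")]  # facts about "paris"

def entity_embedding(facts):
    return np.mean([emb[r] * emb[o] for r, o in facts], axis=0)

def score(subj_vec, rel, obj):              # DistMult-style scoring
    return float(np.sum(subj_vec * emb[rel] * emb[obj]))

target = ("located_in", "europe")
full = score(entity_embedding(neighbors), *target)
for i, fact in enumerate(neighbors):
    rest = neighbors[:i] + neighbors[i + 1:]
    drop = full - score(entity_embedding(rest), *target)
    print(fact, f"influence = {drop:+.3f}")
```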
no code implementations • NAACL 2019 • Jun Seok Kang, Robert L. Logan IV, Zewei Chu, Yang Chen, Dheeru Dua, Kevin Gimpel, Sameer Singh, Niranjan Balasubramanian
Given a sentence about a target entity, the task is to automatically generate a post-modifier phrase that provides contextually relevant information about the entity.
3 code implementations • NAACL 2019 • Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, Matt Gardner
We introduce a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs.
Ranked #11 on Question Answering on DROP Test
2 code implementations • EMNLP 2018 • Pouya Pezeshkpour, Liyan Chen, Sameer Singh
In this paper, we propose multimodal knowledge base embeddings (MKBE) that use different neural encoders for this variety of observed data, and combine them with existing relational models to learn embeddings of the entities and multimodal data.
no code implementations • EMNLP 2018 • Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, Sebastian Riedel
This task requires both the interpretation of rules and the application of background knowledge.
1 code implementation • ACL 2018 • Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically.
1 code implementation • 1 May 2018 • Pouya Pezeshkpour, Carlos Guestrin, Sameer Singh
Matrix factorization is a well-studied task in machine learning for compactly representing large, noisy data.
1 code implementation • ICLR 2018 • Forough Arabshahi, Sameer Singh, Animashree Anandkumar
This is because they mostly rely either on black-box function evaluations that do not capture the structure of the program, or on detailed execution traces that are expensive to obtain, and hence the training data has poor coverage of the domain under consideration.
1 code implementation • 29 Nov 2017 • Robert L. Logan IV, Samuel Humeau, Sameer Singh
The broad goal of information extraction is to derive structured information from unstructured data.
1 code implementation • ICLR 2018 • Zhengli Zhao, Dheeru Dua, Sameer Singh
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed.
no code implementations • EMNLP 2017 • Nitish Gupta, Sameer Singh, Dan Roth
For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge.
no code implementations • 25 Jul 2017 • Parisa Kordjamshidi, Sameer Singh, Daniel Khashabi, Christos Christodoulopoulos, Mark Summons, Saurabh Sinha, Dan Roth
In particular, we provide an initial prototype for a relational and graph traversal query language where queries are directly used as relational features for structured machine learning models.
1 code implementation • COLING 2016 • Parisa Kordjamshidi, Daniel Khashabi, Christos Christodoulopoulos, Bhargav Mangipudi, Sameer Singh, Dan Roth
We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP).
no code implementations • 22 Nov 2016 • Sameer Singh, Marco Tulio Ribeiro, Carlos Guestrin
Recent work in model-agnostic explanations of black-box machine learning has demonstrated that interpretability of complex models does not have to come at the cost of accuracy or model flexibility.
no code implementations • 17 Nov 2016 • Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior.
no code implementations • 16 Jun 2016 • Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Understanding why machine learning models behave the way they do empowers both system designers and end-users in many ways: in model selection, feature engineering, in order to trust and act upon the predictions, and in more intuitive user interfaces.
23 code implementations • 16 Feb 2016 • Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Despite widespread adoption, machine learning models remain mostly black boxes.
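A from-scratch sketch of the local-surrogate idea behind this line of work (not the lime package's API): perturb a single instance, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as the explanation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x, n_samples=2000, scale=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate to black_box around instance x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))    # local perturbations
    y = black_box(Z)                                                # black-box outputs
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / scale)   # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                          # per-feature explanation

# Toy black box: only features 0 and 2 matter.
black_box = lambda Z: 3.0 * Z[:, 0] - 2.0 * Z[:, 2]
print(explain_locally(black_box, np.array([1.0, 1.0, 1.0])).round(2))
```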
no code implementations • ACL 2016 • Hannah Rashkin, Sameer Singh, Yejin Choi
Through a particular choice of a predicate (e.g., "x violated y"), a writer can subtly connote a range of implied sentiments and presupposed facts about the entities x and y: (1) writer's perspective: projecting x as an "antagonist" and y as a "victim", (2) entities' perspective: y probably dislikes x, (3) effect: something bad happened to y, (4) value: y is something valuable, and (5) mental state: y is distressed by the event.
no code implementations • 23 Apr 2015 • Nitish Gupta, Sameer Singh
Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations.
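For reference, a minimal SGD matrix-factorization sketch on a made-up ratings matrix: observed entries are approximated as dot products of user and item factors, and the learned factors fill in the held-out cells.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.05, epochs=500, seed=0):
    """Plain SGD matrix factorization: approximate observed entries of R
    (where mask == 1) as dot products of user and item factors."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((R.shape[0], k))
    V = 0.1 * rng.standard_normal((R.shape[1], k))
    for _ in range(epochs):
        for i, j in zip(*np.nonzero(mask)):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * err * V[j]          # gradient step on user factors
            V[j] += lr * err * U[i]          # gradient step on item factors
    return U, V

R = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 1.0], [1.0, 1.0, 5.0]])   # made-up ratings
mask = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1]])                  # 1 = observed
U, V = factorize(R, mask)
print((U @ V.T).round(1))   # reconstructed ratings, including the held-out cells
```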
no code implementations • TACL 2015 • Xiao Ling, Sameer Singh, Daniel S. Weld
Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference.
no code implementations • 14 Nov 2013 • Sameer Singh, Sebastian Riedel, Andrew McCallum
Belief Propagation has been widely used for marginal inference, however it is slow on problems with large-domain variables and high-order factors.
no code implementations • NeurIPS 2009 • Andrew Mccallum, Karl Schultz, Sameer Singh
Discriminatively trained undirected graphical models have had wide empirical success, and there has been increasing interest in toolkits that ease their application to complex relational data.
no code implementations • NeurIPS 2009 • Khashayar Rohanimanesh, Sameer Singh, Andrew McCallum, Michael J. Black
Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems.