no code implementations • ACL 2022 • Vivek Gupta, Shuo Zhang, Alakananda Vempala, Yujie He, Temma Choji, Vivek Srikumar
On the downstream tabular inference task, using only the automatically extracted evidence as the premise, our approach outperforms prior benchmarks.
1 code implementation • EMNLP 2021 • Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar
We present a method for exploring regions around individual points in a contextualized vector space (particularly, BERT space), as a way to investigate how these regions correspond to word senses.
no code implementations • EMNLP (LAW, DMR) 2021 • Ghazaleh Kazeminejad, Martha Palmer, Tao Li, Vivek Srikumar
Tracking entity states is a natural language processing task assumed to require human annotation.
no code implementations • LREC (LAW) 2022 • Yang Janet Liu, Jena D. Hwang, Nathan Schneider, Vivek Srikumar
The SNACS framework provides a network of semantic labels called supersenses for annotating adpositional semantics in corpora.
no code implementations • NAACL (CLPsych) 2022 • Maitrey Mehta, Derek Caperton, Katherine Axford, Lauren Weitzman, David Atkins, Vivek Srikumar, Zac Imel
Yet, NLP work in this area has been restricted to evaluating a single approach to treatment, when prior research indicates that therapists use a wide variety of interventions with their clients, often in the same session.
no code implementations • COLING (LAW) 2020 • Jena D. Hwang, Nathan Schneider, Vivek Srikumar
We reevaluate an existing adpositional annotation scheme with respect to two thorny semantic domains: accompaniment and purpose.
no code implementations • 23 Dec 2024 • Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
Recent direct preference alignment algorithms (DPA), such as DPO, have shown great promise in aligning large language models to human preferences.
no code implementations • 18 Dec 2024 • Zhichao Xu, Jinghua Yan, Ashim Gupta, Vivek Srikumar
Transformers dominate NLP and IR, but their inference inefficiencies and challenges in extrapolating to longer contexts have sparked interest in alternative model architectures.
1 code implementation • 6 Jul 2024 • Zhichao Xu, Ashim Gupta, Tao Li, Oliver Bentham, Vivek Srikumar
To this end, we investigate the impact of model compression along four dimensions: (1) degeneration harm, i.e., bias and toxicity in generation; (2) representational harm, i.e., biases in discriminative tasks; (3) dialect bias; and (4) language modeling and downstream task performance.
no code implementations • 17 Jun 2024 • Ashim Gupta, Sina Mahdipour Saravani, P. Sadayappan, Vivek Srikumar
The increasing size of transformer-based models in NLP makes the question of compressing them important.
no code implementations • 18 Feb 2024 • Zhichao Xu, Daniel Cohen, Bei Wang, Vivek Srikumar
Inspired by the idea of learning from label proportions, we propose two principles for in-context example ordering guided by the model's probability predictions.
1 code implementation • 12 Jan 2024 • Maitrey Mehta, Valentina Pyatkin, Vivek Srikumar
Can the promise of the prompt-based paradigm be extended to such structured outputs?
1 code implementation • 16 Nov 2023 • Ashim Gupta, Rishanth Rajendhran, Nathan Stringham, Vivek Srikumar, Ana Marasović
Do larger and more performant models resolve NLP's longstanding robustness issues?
no code implementations • 16 Nov 2023 • Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith
Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the hypothesis in NLI) and the label; consequently, models trained on only that part can outperform chance.
no code implementations • 14 Nov 2023 • Vivek Gupta, Pranshu Kandoi, Mahek Bhavesh Vora, Shuo Zhang, Yujie He, Ridho Reinanda, Vivek Srikumar
Given these results, our dataset has the potential to serve as a challenging benchmark to improve the temporal reasoning capabilities of NLP models.
no code implementations • 30 Jun 2023 • Vivek Srikumar, Dan Roth
At the end, we will see two worked examples to illustrate the use of these recipes.
no code implementations • 25 May 2023 • Ashim Gupta, Carter Wood Blum, Temma Choji, Yingjie Fei, Shalin Shah, Alakananda Vempala, Vivek Srikumar
For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4%, with a smaller decrease in task accuracy (0.5% vs. 2.5%).
1 code implementation • 24 May 2023 • Tao Li, Ghazaleh Kazeminejad, Susan W. Brown, Martha Palmer, Vivek Srikumar
In this paper, we eliminate this issue with a framework that jointly models VerbNet and PropBank labels as one sequence.
no code implementations • 17 Apr 2023 • Zhenduo Wang, Zhichao Xu, Qingyao Ai, Vivek Srikumar
Our goal is to supplement existing work with an insightful hand-analysis of the challenges left unsolved by the baseline, and to propose our solutions.
2 code implementations • 20 Dec 2022 • Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula
Context is everything, even in commonsense moral reasoning.
1 code implementation • 2 Dec 2022 • Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi
We propose AGRO -- Adversarial Group discovery for Distributionally Robust Optimization -- an end-to-end approach that jointly identifies error-prone groups and improves accuracy on them.
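AGRO builds on distributionally robust optimization over groups. For reference, here is a minimal sketch of the group DRO objective it extends (Sagawa et al., 2020), assuming group labels are already available; AGRO's contribution is discovering such groups adversarially, which this sketch does not implement.

```python
import torch

def group_dro_step(per_example_loss, group_ids, q, eta=0.01):
    """One step of the group DRO objective: upweight high-loss groups
    via exponentiated-gradient updates on the group weights q."""
    group_losses = torch.zeros_like(q)
    for g in range(q.numel()):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = per_example_loss[mask].mean()
    q = q * torch.exp(eta * group_losses.detach())  # favor worst groups
    q = q / q.sum()                                 # renormalize
    return (q * group_losses).sum(), q              # robust loss, new q

# Usage: initialize q = torch.ones(num_groups) / num_groups and carry
# the updated q across batches.
```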
1 code implementation • 2 Sep 2022 • Wenya Wang, Vivek Srikumar, Hanna Hajishirzi, Noah A. Smith
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge that helps improve performance.
4 code implementations • 9 Jun 2022 • Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. 
Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. 
Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, ZiRui Wang, Ziyi Wu
BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models.
1 code implementation • 29 Sep 2021 • Ricardo Bigolin Lanfredi, Mingyuan Zhang, William F. Auffermann, Jessica Chan, Phuong-Anh T. Duong, Vivek Srikumar, Trafton Drew, Joyce D. Schroeder, Tolga Tasdizen
Furthermore, a small subset of the data contains readings from all radiologists, allowing for the calculation of inter-rater scores.
1 code implementation • 23 Sep 2021 • Taelin Karidi, Yichu Zhou, Nathan Schneider, Omri Abend, Vivek Srikumar
We present a method for exploring regions around individual points in a contextualized vector space (particularly, BERT space), as a way to investigate how these regions correspond to word senses.
no code implementations • 2 Aug 2021 • Vivek Gupta, Riyaz A. Bhat, Atreya Ghosal, Manish Shrivastava, Maneesh Singh, Vivek Srikumar
Our experiments demonstrate that a RoBERTa-based model, representative of the current state-of-the-art, fails at reasoning on the following counts: it (a) ignores relevant parts of the evidence, (b) is over-sensitive to annotation artifacts, and (c) relies on the knowledge encoded in the pre-trained language model rather than the evidence presented in its tabular inputs.
1 code implementation • 28 Jul 2021 • Mattia Medina Grespan, Ashim Gupta, Vivek Srikumar
Symbolic knowledge can provide crucial inductive bias for training neural models, especially in low data regimes.
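One common way to inject such symbolic knowledge is to relax a logical rule into a differentiable penalty added to the training loss. Below is a minimal sketch using the Reichenbach relaxation of implication, one of the relaxations studied in this line of work; the function names and the weighting scheme are illustrative, not the paper's exact formulation.

```python
import torch

def implication_penalty(p_a, p_b, eps=1e-6):
    """Relax the rule A -> B as truth = 1 - p(A) * (1 - p(B))
    (Reichenbach implication) and penalize its negative log-truth."""
    truth = 1.0 - p_a * (1.0 - p_b)
    return -torch.log(truth.clamp_min(eps)).mean()

# Combined objective (lambda_logic is a hyperparameter):
# total_loss = task_loss + lambda_logic * implication_penalty(p_a, p_b)
```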
1 code implementation • ACL 2022 • Yichu Zhou, Vivek Srikumar
Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to be universally successful.
1 code implementation • ACL 2021 • Ashim Gupta, Vivek Srikumar
In this work, we introduce X-FACT: the largest publicly available multilingual dataset for factual verification of naturally existing real-world claims.
3 code implementations • 26 May 2021 • Debjyoti Paul, Jie Cao, Feifei Li, Vivek Srikumar
To address this workload characterization problem, we propose our query plan encoders that learn essential features and their correlations from query plans.
1 code implementation • NAACL 2021 • Yichu Zhou, Vivek Srikumar
Understanding how linguistic structures are encoded in contextualized embeddings could help explain their impressive performance across NLP tasks.
1 code implementation • NAACL 2021 • J. Neeraja, Vivek Gupta, Vivek Srikumar
Reasoning about tabular information presents unique challenges to modern NLP approaches which largely rely on pre-trained contextualized embeddings of text.
1 code implementation • 6 Apr 2021 • Archit Rathore, Sunipa Dev, Jeff M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei zhang, Bei Wang
To aid this, we present Visualization of Embedding Representations for deBiasing system ("VERB"), an open-source web-based visualization tool that helps users gain a technical understanding and visual intuition of the inner workings of debiasing techniques, with a focus on their geometric properties.
1 code implementation • 10 Jan 2021 • Ashim Gupta, Giorgi Kvernadze, Vivek Srikumar
In this paper, we study the response of large models from the BERT family to incoherent inputs that should confuse any model that claims to understand natural language.
1 code implementation • 2 Dec 2020 • Jakob Prange, Nathan Schneider, Vivek Srikumar
Our best tagger is capable of recovering a sizeable fraction of the long-tail supertags and even generates CCG categories that have never been seen in training, while approximating the prior state of the art in overall tag accuracy with fewer parameters.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Tao Li, Tushar Khot, Daniel Khashabi, Ashish Sabharwal, Vivek Srikumar
Our broad study reveals that (1) all these models, with and without fine-tuning, have notable stereotyping biases in these classes; (2) larger models often have higher bias; and (3) the effect of fine-tuning on bias varies strongly with the dataset and the model size.
no code implementations • 2 Sep 2020 • Yichu Zhou, Omri Koshorek, Vivek Srikumar, Jonathan Berant
Discourse parsing is largely dominated by greedy parsers with manually-designed features, while global parsing is rare due to its computational expense.
1 code implementation • EMNLP 2021 • Sunipa Dev, Tao Li, Jeff M. Phillips, Vivek Srikumar
Language representations are known to carry stereotypical biases and, as a result, lead to biased predictions in downstream tasks.
1 code implementation • ACL 2020 • Xingyuan Pan, Maitrey Mehta, Vivek Srikumar
Various natural language processing tasks are structured prediction problems where outputs are constructed with multiple interdependent decisions.
no code implementations • ACL 2020 • Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, Vivek Srikumar
In this paper, we observe that semi-structured tabulated text is ubiquitous; understanding it requires not only comprehending the meaning of text fragments, but also the implicit relationships between them.
1 code implementation • ACL 2020 • Tao Li, Parth Anand Jawale, Martha Palmer, Vivek Srikumar
We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints.
no code implementations • CONLL 2019 • Jie Cao, Yi Zhang, Adel Youssef, Vivek Srikumar
This paper describes the system submission of our team Amazon to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).
no code implementations • CONLL 2019 • Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, Jonathan Berant
We conclude that the current applicability of LTAL for improving data efficiency in learning semantic meaning representations is limited.
1 code implementation • IJCNLP 2019 • Tao Li, Vivek Gupta, Maitrey Mehta, Vivek Srikumar
While neural models show remarkable accuracy on individual predictions, their internal beliefs can be inconsistent across examples.
2 code implementations • 25 Aug 2019 • Sunipa Dev, Tao Li, Jeff Phillips, Vivek Srikumar
Word embeddings carry stereotypical connotations from the text they are trained on, which can lead to invalid inferences in downstream models that rely on them.
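A standard mitigation in this line of work is linear projection: remove each embedding's component along an identified bias direction. A minimal sketch follows, assuming the direction comes from a seed word pair; the helper emb_of is hypothetical, and the actual method's details may differ.

```python
import numpy as np

def project_out(vectors, bias_direction):
    """Remove the component of each row of `vectors` along the
    (normalized) bias direction, leaving the rest untouched."""
    v = bias_direction / np.linalg.norm(bias_direction)
    return vectors - np.outer(vectors @ v, v)

# e.g., debiased = project_out(emb_matrix, emb_of("he") - emb_of("she"))
```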
1 code implementation • WS 2019 • Adi Shalev, Jena D. Hwang, Nathan Schneider, Vivek Srikumar, Omri Abend, Ari Rappoport
Research on adpositions and possessives in multiple languages has led to a small inventory of general-purpose meaning classes that disambiguate tokens.
1 code implementation • ACL 2019 • Jie Cao, Michael Tanana, Zac E. Imel, Eric Poitras, David C. Atkins, Vivek Srikumar
Specifically, we address the problem of providing real-time guidance to therapists with a dialogue observer that (1) categorizes therapist and client MI behavioral codes and (2) forecasts codes for upcoming utterances to help guide the conversation and potentially alert the therapist.
1 code implementation • ACL 2019 • Tao Li, Vivek Srikumar
Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset.
no code implementations • SEMEVAL 2019 • Yichu Zhou, Vivek Srikumar
We define a new modeling framework for training word embeddings that captures this intuition.
no code implementations • 27 May 2019 • Annie Cherkaev, Waiming Tai, Jeff Phillips, Vivek Srikumar
There is a mismatch between the standard theoretical analyses of statistical machine learning and how learning is used in practice.
no code implementations • EMNLP 2018 • Shusen Liu, Tao Li, Zhimin Li, Vivek Srikumar, Valerio Pascucci, Peer-Timo Bremer
Neural network models have gained unprecedented popularity in natural language processing due to their state-of-the-art performance and the flexible end-to-end training scheme.
no code implementations • ICML 2018 • Xingyuan Pan, Vivek Srikumar
Predicting structured outputs can be computationally onerous due to the combinatorially large output spaces.
1 code implementation • ACL 2018 • Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Jakob Prange, Austin Blodgett, Sarah R. Moeller, Aviram Stern, Adi Bitan, Omri Abend
Semantic relations are often signaled with prepositional or possessive marking, but extreme polysemy bedevils their analysis and automatic interpretation.
1 code implementation • LREC 2018 • Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, Dan Roth
no code implementations • 10 Mar 2018 • Anirban Nag, Ali Shafiee, Rajeev Balasubramonian, Vivek Srikumar, Naveen Muralimanohar
Many recent works have designed accelerators for Convolutional Neural Networks (CNNs).
no code implementations • SEMEVAL 2017 • Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O{'}Gorman, Vivek Srikumar, Nathan Schneider
We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000-word corpus of English.
no code implementations • ACL 2017 • Vivek Srikumar
Though feature extraction is a necessary first step in statistical NLP, it is often seen as a mere preprocessing step.
4 code implementations • 7 Apr 2017 • Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Sarah R. Moeller, Omri Abend, Adi Shalev, Austin Blodgett, Jakob Prange
This document offers a detailed linguistic description of SNACS (Semantic Network of Adposition and Case Supersenses; Schneider et al., 2018), an inventory of 52 semantic labels ("supersenses") that characterize the use of adpositions and case markers at a somewhat coarse level of granularity, as demonstrated in the STREUSLE corpus (https://github.com/nert-nlp/streusle/; version 4.5 tracks guidelines version 2.6).
no code implementations • EACL 2017 • Dan Roth, Vivek Srikumar
We will cover a range of topics, from the theoretical foundations of learning and inference with ILP models, to practical modeling guides, to software packages and applications. The goal of this tutorial is to introduce the computational framework to the broader ACL community, motivate it as a generic framework for learning and inference in global NLP decision problems, present some of the key theoretical and practical issues involved, and survey some of the existing applications of it as a way to promote further development of the framework and additional applications.
no code implementations • 10 Mar 2017 • Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Vivek Srikumar, Nathan Schneider
We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000-word corpus of English.
no code implementations • 8 May 2016 • Nathan Schneider, Jena D. Hwang, Vivek Srikumar, Meredith Green, Kathryn Conger, Tim O'Gorman, Martha Palmer
We present the first corpus annotated with preposition supersenses, unlexicalized categories for semantic functions that can be marked by English prepositions (Schneider et al., 2015).
no code implementations • LREC 2016 • Mark Sammons, Christos Christodoulopoulos, Parisa Kordjamshidi, Daniel Khashabi, Vivek Srikumar, Dan Roth
We present EDISON, a Java library of feature generation functions used in a suite of state-of-the-art NLP tools, based on a set of generic NLP data structures.
no code implementations • 18 Nov 2015 • Xingyuan Pan, Vivek Srikumar
However, unlike threshold and sigmoid networks, ReLU networks are less explored from the perspective of their expressiveness.
no code implementations • 23 Sep 2015 • Kai-Wei Chang, Shyam Upadhyay, Ming-Wei Chang, Vivek Srikumar, Dan Roth
IllinoisSL is a Java library for learning structured prediction models.
no code implementations • NeurIPS 2014 • Vivek Srikumar, Christopher D. Manning
In recent years, distributed representations of inputs have led to performance gains in many applications by allowing statistical information to be shared across inputs.
no code implementations • 24 May 2013 • Vivek Srikumar, Dan Roth
We describe an inventory of semantic relations that are expressed by prepositions.
no code implementations • TACL 2013 • Vivek Srikumar, Dan Roth
This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments.
no code implementations • LREC 2012 • James Clarke, Vivek Srikumar, Mark Sammons, Dan Roth
Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem.