no code implementations • ICML 2020 • Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph Gonzalez
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference.
no code implementations • NAACL (TeachingNLP) 2021 • David Gaddy, Daniel Fried, Nikita Kitaev, Mitchell Stern, Rodolfo Corona, John DeNero, Dan Klein
We present a set of assignments for a graduate-level NLP course.
1 code implementation • ACL 2022 • Nikita Kitaev, Thomas Lu, Dan Klein
We present an incremental syntactic representation that consists of assigning a single discrete label to each word in a sentence, where the label is predicted using strictly incremental processing of a prefix of the sentence, and the sequence of labels for a sentence fully determines a parse tree.
no code implementations • 25 Nov 2024 • Charlie Snell, Eric Wallace, Dan Klein, Sergey Levine
In this work, we first pose the task of emergence prediction: given access to current LLMs that have random few-shot accuracy on a task, can we predict whether future models (GPT-N+1) will have non-trivial accuracy on that task?
1 code implementation • 13 Sep 2024 • Ruiqi Zhong, Heng Wang, Dan Klein, Jacob Steinhardt
To make sense of massive data, we often fit simplified models and then interpret the parameters; for example, we cluster the text embeddings and then interpret the mean parameters of each cluster.
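The fit-then-interpret loop can be sketched in a few lines: run a minimal k-means over toy 2-D stand-ins for text embeddings, then read off each cluster's mean as the parameter to interpret. The data and initialization below are illustrative, not the paper's setup.

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Minimal Lloyd's algorithm: returns (cluster means, labels)."""
    centers = X[:: max(1, len(X) // k)][:k]  # deterministic spread-out init
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Toy stand-in for text embeddings: two well-separated groups.
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.0, 0.0],
              [5.0, 5.1], [5.1, 5.0], [5.0, 5.0]])
means, labels = kmeans(X, k=2)
```

Interpreting the fitted model then amounts to inspecting `means`, one vector per cluster.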
1 code implementation • 27 Jun 2024 • Austen Liao, Nicholas Tomlin, Dan Klein
Crucially, the objective in DoND can be modified to produce a fully cooperative game, a strictly competitive one, or anything in between.
no code implementations • 13 Jun 2024 • Eve Fleisig, Genevieve Smith, Madeline Bossi, Ishita Rustagi, Xavier Yin, Dan Klein
We find that the models default to "standard" varieties of English; based on evaluation by native speakers, we also find that model responses to non-"standard" varieties consistently exhibit a range of issues: stereotyping (19% worse than for "standard" varieties), demeaning content (25% worse), lack of comprehension (9% worse), and condescending responses (15% worse).
1 code implementation • 6 Jun 2024 • Kayo Yin, Terry Regier, Dan Klein
Communicative efficiency is a key topic in linguistics and cognitive psychology, with many studies demonstrating how the pressure to communicate with minimal effort guides the form of natural language.
no code implementations • 9 May 2024 • Eve Fleisig, Su Lin Blodgett, Dan Klein, Zeerak Talat
Longstanding data labeling practices in machine learning involve collecting and aggregating labels from multiple annotators.
no code implementations • 6 May 2024 • Sanjay Subramanian, Evonne Ng, Lea Müller, Dan Klein, Shiry Ginosar, Trevor Darrell
We present a zero-shot pose optimization method that enforces accurate physical contact constraints when estimating the 3D pose of humans.
1 code implementation • 9 Apr 2024 • Yizhou Chi, Kevin Yang, Dan Klein
We present THOUGHTSCULPT, a general reasoning and search method for tasks with outputs that can be decomposed into components.
1 code implementation • 19 Feb 2024 • Alexander Wan, Eric Wallace, Dan Klein
Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as "is aspartame linked to cancer".
1 code implementation • 13 Feb 2024 • Daniel Nahmias, Gal Engelberg, Dan Klein, Asaf Shabtai
Spear-phishing attacks present a significant security challenge, with large language models (LLMs) escalating the threat by generating convincing emails and facilitating target reconnaissance.
2 code implementations • 12 Nov 2023 • Chancharik Mitra, Abrar Anwar, Rodolfo Corona, Dan Klein, Trevor Darrell, Jesse Thomason
When connecting objects and their language referents in an embodied 3D environment, it is important to note that: (1) an object can be better characterized by leveraging comparative information between itself and other objects, and (2) an object's appearance can vary with camera position.
1 code implementation • 8 Nov 2023 • Yichen Wang, Kevin Yang, Xiaoming Liu, Dan Klein
Existing LLM-based systems for writing long-form stories or story outlines frequently suffer from unnatural pacing, whether glossing over important events or over-elaborating on insignificant details, resulting in a jarring experience for the reader.
no code implementations • 6 Nov 2023 • Olivia Huang, Eve Fleisig, Dan Klein
Current practices regarding data collection for natural language processing on Amazon Mechanical Turk (MTurk) often rely on a combination of studies on data quality and heuristics shared among NLP researchers.
no code implementations • ICCV 2023 • Evonne Ng, Sanjay Subramanian, Dan Klein, Angjoo Kanazawa, Trevor Darrell, Shiry Ginosar
We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words.
no code implementations • 31 Jul 2023 • Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan
While current agents can learn to execute simple language instructions, we aim to build agents that leverage diverse language -- language like "this button turns on the TV" or "I put the bowls away" -- that conveys general knowledge, describes the state of the world, provides interactive feedback, and more.
2 code implementations • 24 Jul 2023 • Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian
We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback.
1 code implementation • 6 Jul 2023 • Jonathan Pei, Kevin Yang, Dan Klein
We propose Prefix-Adaptive Decoding (PREADD), a flexible method for controlled text generation.
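PREADD works at the logit level rather than through prompting alone; the core combination step can be sketched as below, over a hypothetical three-token vocabulary (a toy illustration of the logit-offsetting idea, not the full method):

```python
import numpy as np

def preadd_step(base_logits, prefixed_logits, alpha):
    """Offset the base model's next-token logits by the shift a control
    prefix induces; alpha > 1 amplifies the prefix's effect, alpha < 0
    flips it. Returns a normalized distribution."""
    logits = base_logits + alpha * (prefixed_logits - base_logits)
    e = np.exp(logits - logits.max())
    return e / e.sum()

base = np.array([2.0, 1.0, 0.0])        # hypothetical 3-token vocabulary
prefixed = np.array([0.0, 1.0, 2.0])    # a prefix that pushes toward token 2
probs = preadd_step(base, prefixed, alpha=2.0)
```

With `alpha=0` the step reduces to the base model; larger `alpha` steers harder toward (or, if negative, away from) the prefix's preferences.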
1 code implementation • 8 Jun 2023 • Sanjay Subramanian, Medhini Narasimhan, Kushal Khangaonkar, Kevin Yang, Arsha Nagrani, Cordelia Schmid, Andy Zeng, Trevor Darrell, Dan Klein
We present a framework that formulates visual question answering as modular code generation.
1 code implementation • 1 Jun 2023 • Catherine Chen, Zejiang Shen, Dan Klein, Gabriel Stanovsky, Doug Downey, Kyle Lo
Recent work has shown that infusing layout features into language models (LMs) improves processing of visually-rich documents such as scientific papers.
no code implementations • 24 May 2023 • Kevin Lin, Kyle Lo, Joseph E. Gonzalez, Dan Klein
When re-finding items, users who forget or are uncertain about identifying details often rely on creative strategies for expressing their information needs -- complex queries that describe content elements (e.g., book characters or events), information beyond the document text (e.g., descriptions of book covers), or personal context (e.g., when they read a book).
no code implementations • 24 May 2023 • Vyoma Raman, Eve Fleisig, Dan Klein
We also find text and demographic outliers to be particularly susceptible to errors in the classification of severe toxicity and identity attacks.
2 code implementations • 24 May 2023 • Vivek Verma, Eve Fleisig, Nicholas Tomlin, Dan Klein
In conjunction with our model, we release three new datasets of human- and AI-generated text as detection benchmarks in the domains of student essays, creative writing, and news articles.
no code implementations • 20 May 2023 • Vivek Verma, Nicholas Tomlin, Dan Klein
The uniform information density (UID) hypothesis states that humans tend to distribute information roughly evenly across an utterance or discourse.
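One common way to operationalize UID is as the variance of per-token surprisal across an utterance; a minimal sketch with hypothetical token probabilities (not necessarily the measure used in the paper):

```python
import math

def uid_variance(token_probs):
    """Variance of per-token surprisal (-log2 p); lower variance means the
    utterance spreads information more uniformly."""
    s = [-math.log2(p) for p in token_probs]
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

even = [0.25, 0.25, 0.25, 0.25]   # hypothetical per-token probabilities
bursty = [0.9, 0.9, 0.01, 0.9]    # same length, information concentrated
```

Under this measure the `even` utterance is perfectly uniform (variance 0), while `bursty` packs most of its information into one surprising token.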
no code implementations • 11 May 2023 • Eve Fleisig, Rediet Abebe, Dan Klein
Thus, a crucial problem in hate speech detection is determining whether a statement is offensive to the demographic group that it targets, when that group may constitute a small fraction of the annotator pool.
1 code implementation • 1 May 2023 • Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
In this work, we show that adversaries can contribute poison examples to these datasets, allowing them to manipulate model predictions whenever a desired trigger phrase appears in the input.
1 code implementation • 20 Dec 2022 • Kevin Yang, Dan Klein, Nanyun Peng, Yuandong Tian
In human evaluations of automatically generated stories, DOC substantially outperforms a strong Re3 baseline (Yang et al., 2022) on plot coherence (22.5% absolute gain), outline relevance (28.2%), and interestingness (20.7%).
no code implementations • 20 Dec 2022 • Boyi Li, Rodolfo Corona, Karttikeya Mangalam, Catherine Chen, Daniel Flaherty, Serge Belongie, Kilian Q. Weinberger, Jitendra Malik, Trevor Darrell, Dan Klein
Are multimodal inputs necessary for grammar induction?
1 code implementation • 7 Dec 2022 • Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt
Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect.
no code implementations • 16 Nov 2022 • Andre He, Nicholas Tomlin, Dan Klein
We evaluate our performance on the task of reconstructing Latin from a dataset of cognates across five Romance languages, achieving a notable reduction in edit distance from the target word forms compared to previous methods.
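The edit distance used for this kind of evaluation is standard Levenshtein distance; a compact dynamic-programming implementation, applied to a hypothetical reconstruction-vs-target pair:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

# Hypothetical reconstruction vs. attested Latin target:
print(edit_distance("pater", "patre"))  # → 2
```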
1 code implementation • 13 Oct 2022 • Kevin Yang, Yuandong Tian, Nanyun Peng, Dan Klein
We consider the problem of automatically generating longer stories of over two thousand words.
no code implementations • 30 Sep 2022 • Charlie Snell, Dan Klein, Ruiqi Zhong
We show that context distillation is a general method to train language models, and it can effectively internalize 3 types of training signals.
1 code implementation • 16 Sep 2022 • Hao Fang, Anusha Balakrishnan, Harsh Jhamtani, John Bufe, Jean Crawford, Jayant Krishnamurthy, Adam Pauls, Jason Eisner, Jacob Andreas, Dan Klein
Satisfying these constraints simultaneously is difficult for the two predominant paradigms in language generation: neural language modeling and rule-based generation.
1 code implementation • 25 May 2022 • Ruiqi Zhong, Charlie Snell, Dan Klein, Jason Eisner
We introduce APEL, a framework in which non-programmers select among candidate programs generated by a seed semantic parser (e.g., Codex).
1 code implementation • ACL 2022 • Eric Wallace, Nicholas Tomlin, Albert Xu, Kevin Yang, Eshaan Pathak, Matthew Ginsberg, Dan Klein
We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles.
2 code implementations • ACL 2022 • Rodolfo Corona, Shizhan Zhu, Dan Klein, Trevor Darrell
Natural language applied to natural 2D images describes a fundamentally 3D world.
1 code implementation • ACL 2022 • Nicholas Tomlin, Andre He, Dan Klein
We present a new dataset containing 10K human-annotated games of Go and show how these natural language annotations can be used as a tool for model interpretability.
1 code implementation • ACL 2022 • Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan
In classic instruction following, language like "I'd like the JetBlue flight" maps to actions (e.g., selecting that flight).
1 code implementation • 28 Jan 2022 • Ruiqi Zhong, Charlie Snell, Dan Klein, Jacob Steinhardt
We then re-rank the descriptions by checking how often they hold on a larger set of samples with a learned verifier.
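The describe-then-verify loop can be sketched as: score each candidate description by the fraction of samples on which a verifier says it holds, then sort best-first. The rule-based `holds` below is only a stand-in for the paper's learned verifier, and all strings are hypothetical:

```python
def rerank(descriptions, samples, holds):
    """Score each candidate description by the fraction of samples on
    which the verifier says it holds, then sort best-first."""
    frac = lambda d: sum(holds(d, s) for s in samples) / len(samples)
    return sorted(descriptions, key=frac, reverse=True)

# Hypothetical samples and a rule-based stand-in for the learned verifier:
samples = ["great movie", "loved it", "fantastic", "boring plot"]
positive_words = {"great", "loved", "fantastic"}

def holds(description, sample):
    if description == "is positive":
        return any(w in sample for w in positive_words)
    return False  # the stand-in verifier rejects other descriptions

ranked = rerank(["mentions food", "is positive"], samples, holds)
```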
1 code implementation • EMNLP 2021 • Daniel Fried, Justin T. Chiu, Dan Klein
We present a grounded neural dialogue model that successfully collaborates with people in a partially-observable reference game.
no code implementations • ACL 2021 • Emmanouil Antonios Platanios, Adam Pauls, Subhro Roy, Yuchen Zhang, Alexander Kyte, Alan Guo, Sam Thomson, Jayant Krishnamurthy, Jason Wolfe, Jacob Andreas, Dan Klein
Conversational semantic parsers map user utterances to executable programs given dialogue histories composed of previous utterances, programs, and system responses.
2 code implementations • NeurIPS 2021 • Kevin Yang, Tianjun Zhang, Chris Cummins, Brandon Cui, Benoit Steiner, Linnan Wang, Joseph E. Gonzalez, Dan Klein, Yuandong Tian
Path planning, the problem of efficiently discovering high-reward trajectories, often requires optimizing a high-dimensional and multimodal reward function.
1 code implementation • ACL 2021 • David Gaddy, Dan Klein
In this paper, we present an improved model for voicing silent speech, where audio is synthesized from facial electromyography (EMG) signals.
1 code implementation • Findings (ACL) 2021 • Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt
We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that our BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%.
1 code implementation • EMNLP 2021 • Richard Shin, Christopher H. Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, Benjamin Van Durme
We explore the use of large pretrained language models as few-shot semantic parsers.
1 code implementation • NAACL 2021 • Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, Dan Klein
Language models (LMs) must be both safe and equitable to be responsibly deployed in practice.
3 code implementations • NAACL 2021 • Kevin Yang, Dan Klein
We propose Future Discriminators for Generation (FUDGE), a flexible and modular method for controlled text generation.
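FUDGE's decoding step reweights the base LM's next-token probabilities by a discriminator's estimate that the desired attribute will eventually hold; a toy sketch of one step, with hypothetical numbers:

```python
import math

def fudge_step(lm_logprobs, attr_prob, weight=1.0):
    """One FUDGE-style step: add the discriminator's log-probability that
    the attribute will hold to each token's LM log-probability, then
    renormalize into a distribution."""
    scores = {tok: lp + weight * math.log(attr_prob[tok])
              for tok, lp in lm_logprobs.items()}
    log_z = math.log(sum(math.exp(s) for s in scores.values()))
    return {tok: math.exp(s - log_z) for tok, s in scores.items()}

# Hypothetical numbers: the base LM slightly prefers "sad", but the
# discriminator expects "happy" to keep the continuation positive.
lm = {"happy": math.log(0.4), "sad": math.log(0.6)}
disc = {"happy": 0.9, "sad": 0.1}
probs = fudge_step(lm, disc)
```

Because the discriminator's preference outweighs the LM's, the combined distribution now favors "happy".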
1 code implementation • Findings (EMNLP) 2021 • Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein
However, the next word prediction training objective is still misaligned with the target zero-shot learning objective.
1 code implementation • 13 Mar 2021 • Charlie Snell, Ruiqi Zhong, Dan Klein, Jacob Steinhardt
Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness.
5 code implementations • 19 Feb 2021 • Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh
We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art.
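A simple mitigation explored in this line of work is contextual calibration: estimate the prompt's bias on a content-free input (e.g. "N/A") and divide it out of the test-time prediction. A sketch with hypothetical label probabilities:

```python
import numpy as np

def calibrate(p_content_free, p_test):
    """Divide out the bias the prompt induces on a content-free input,
    then renormalize (diagonal-calibration sketch)."""
    q = np.asarray(p_test) / np.asarray(p_content_free)
    return q / q.sum()

# Hypothetical label probabilities: the prompt favors label 0 even on a
# content-free input, so a borderline raw prediction flips after calibration.
p_cf = [0.7, 0.3]     # model output on the content-free input "N/A"
p_raw = [0.6, 0.4]    # model output on a real test example
p_cal = calibrate(p_cf, p_raw)
```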
no code implementations • NAACL 2021 • Catherine Chen, Kevin Lin, Dan Klein
The tree reconciliation module treats the task as a graph optimization problem and outputs the maximum spanning tree of this graph.
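The maximum-spanning-tree step can be sketched with Kruskal's algorithm run over edges sorted by descending score (toy graph; the edge scores are hypothetical):

```python
def max_spanning_tree(n, edges):
    """Kruskal's algorithm with edges sorted by descending weight: greedily
    keep the heaviest edge that does not form a cycle (union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical edge scores over 4 nodes, as (score, u, v):
edges = [(5, 0, 1), (1, 0, 2), (4, 1, 2), (3, 2, 3)]
tree = max_spanning_tree(4, edges)  # 3 edges, total score 12
```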
no code implementations • NAACL 2021 • Rodolfo Corona, Daniel Fried, Coline Devin, Dan Klein, Trevor Darrell
In our approach, subgoal modules each carry out natural language instructions for a specific subgoal type.
no code implementations • EMNLP 2020 • Steven Cao, Nikita Kitaev, Dan Klein
We propose a method for unsupervised parsing based on the linguistic notion of a constituency test.
3 code implementations • EMNLP 2020 • Ruiqi Zhong, Tao Yu, Dan Klein
We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models.
1 code implementation • EMNLP 2020 • David Gaddy, Dan Klein
In this paper, we consider the task of digitally voicing silent speech, where silently mouthed words are converted to audible speech based on electromyography (EMG) sensor measurements that capture muscle impulses.
1 code implementation • EMNLP 2020 • Kevin Yang, Violet Yao, John DeNero, Dan Klein
We propose an efficient batching strategy for variable-length decoding on GPU architectures.
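A generic way to batch variable-length sequences is to sort by length so each batch pads to a similar size; the sketch below (not necessarily the paper's exact strategy) compares padding waste against arrival-order batching:

```python
def bucket_batches(lengths, batch_size):
    """Length-bucketing: sort sequence indices by length so sequences in
    the same batch pad to similar sizes."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

def padding_waste(lengths, batches):
    """Padded slots minus real tokens, summed over all batches."""
    return sum(max(lengths[i] for i in b) * len(b) - sum(lengths[i] for i in b)
               for b in batches)

lengths = [3, 17, 4, 18, 5, 16]          # hypothetical sequence lengths
naive = [[0, 1, 2], [3, 4, 5]]           # arrival-order batching
bucketed = bucket_batches(lengths, 3)    # [[0, 2, 4], [5, 1, 3]]
```

On these lengths the naive batches pad 42 wasted slots versus 6 for the bucketed ones.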
1 code implementation • 24 Sep 2020 • Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
no code implementations • 16 Aug 2020 • Charlie Snell, Ruiqi Zhong, Jacob Steinhardt, Dan Klein
If we ablate attention by fixing it to uniform, the output relevance still correlates with the attention of a normally trained model; but if we instead ablate output relevance, attention cannot be learned.
no code implementations • 2 Jul 2020 • Ruiqi Zhong, Tao Yu, Dan Klein
We propose test suite accuracy to approximate semantic accuracy for Text-to-SQL models, where a predicted query is semantically correct if its denotation is the same as the gold for every possible database.
1 code implementation • ACL 2020 • Ruiqi Zhong, Mitchell Stern, Dan Klein
We propose a method for program generation based on semantic scaffolds, lightweight structures representing the high-level semantic and syntactic composition of a program.
2 code implementations • 26 Feb 2020 • Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference.
1 code implementation • ICLR 2020 • Steven Cao, Nikita Kitaev, Dan Klein
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT.
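A classic baseline for aligning two embedding spaces is orthogonal Procrustes: find the rotation W minimizing ||XW - Y||_F via an SVD. This is standard alignment machinery, not the paper's exact procedure:

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: the rotation W minimizing ||X @ W - Y||_F,
    computed from the SVD of X.T @ Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))             # "source-space" vectors
theta = np.pi / 3                        # build a known 4-D rotation
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta), np.cos(theta)]]
Y = X @ R                                # the same vectors, rotated
W = procrustes(X, Y)                     # recovers R (up to float error)
```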
1 code implementation • IJCNLP 2019 • Nikita Srivatsan, Jonathan T. Barron, Dan Klein, Taylor Berg-Kirkpatrick
We propose a deep factorization model for typographic analysis that disentangles content from style.
1 code implementation • ACL 2019 • David Gaddy, Dan Klein
We consider the problem of learning to map from natural language instructions to state transitions (actions) in a data-efficient manner.
1 code implementation • ACL 2019 • Daniel Fried, Nikita Kitaev, Dan Klein
Neural parsers obtain state-of-the-art results on benchmark treebanks for constituency parsing -- but to what degree do they generalize to other domains?
no code implementations • ACL 2019 • Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko
The actual grounding can connect language to the environment through multiple modalities, e.g. "stop at the door" might ground into visual objects, while "turn right" might rely only on the geometric structure of a route.
2 code implementations • ACL 2020 • Nikita Kitaev, Dan Klein
We present a constituency parsing algorithm that, like a supertagger, works by assigning labels to each word in a sentence.
Ranked #14 on Constituency Parsing on Penn Treebank
2 code implementations • NAACL 2019 • Sheng Shen, Daniel Fried, Jacob Andreas, Dan Klein
We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.
Ranked #1 on Data-to-Text Generation on E2E NLG Challenge
4 code implementations • ACL 2019 • Nikita Kitaev, Steven Cao, Dan Klein
We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.
Ranked #6 on Constituency Parsing on CTB5
no code implementations • ACL 2018 • Daniel Fried, Dan Klein
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system.
1 code implementation • NeurIPS 2018 • Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell
We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction.
5 code implementations • ACL 2018 • Nikita Kitaev, Dan Klein
We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser.
Ranked #9 on Constituency Parsing on CTB5
1 code implementation • NAACL 2018 • David Gaddy, Mitchell Stern, Dan Klein
A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and feature-rich lexicons becoming less central while recurrent neural network representations rise in popularity.
1 code implementation • NAACL 2018 • Daniel Fried, Jacob Andreas, Dan Klein
We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.
1 code implementation • NAACL 2018 • Jacob Andreas, Dan Klein, Sergey Levine
The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world.
1 code implementation • EMNLP 2017 • Nikita Kitaev, Dan Klein
We present a model for locating regions in space based on natural language descriptions.
no code implementations • EMNLP 2017 • Mitchell Stern, Daniel Fried, Dan Klein
Generative neural models have recently achieved state-of-the-art results for constituency parsing.
3 code implementations • EMNLP 2017 • Jacob Andreas, Dan Klein
We investigate the compositional structure of message vectors computed by a deep network trained on a communication game.
1 code implementation • 13 Jul 2017 • Jonathan K. Kummerfeld, Dan Klein
General treebank analyses are graph structured, but parsers are typically restricted to tree structures for efficiency and modeling reasons.
Ranked #2 on Missing Elements on Penn Treebank
no code implementations • ACL 2017 • Daniel Fried, Mitchell Stern, Dan Klein
Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results.
Ranked #16 on Constituency Parsing on Penn Treebank
no code implementations • ACL 2017 • Mitchell Stern, Jacob Andreas, Dan Klein
In this work, we present a minimal neural model for constituency parsing based on independent scoring of labels and spans.
no code implementations • ACL 2017 • Maxim Rabinovich, Dan Klein
As entity type systems become richer and more fine-grained, we expect the number of types assigned to a given entity to increase.
1 code implementation • ACL 2017 • Maxim Rabinovich, Mitchell Stern, Dan Klein
Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs.
Ranked #3 on Semantic Parsing on ATIS
1 code implementation • ACL 2017 • Jacob Andreas, Anca Dragan, Dan Klein
Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel.
no code implementations • TACL 2017 • Jonathan K. Kummerfeld, Dan Klein
General treebank analyses are graph structured, but parsers are typically restricted to tree structures for efficiency and modeling reasons.
2 code implementations • ICML 2017 • Jacob Andreas, Dan Klein, Sergey Levine
We describe a framework for multitask deep reinforcement learning guided by policy sketches.
1 code implementation • NAACL 2016 • Matthew Francis-Landau, Greg Durrett, Dan Klein
A key challenge in entity linking is making effective use of contextual information to disambiguate mentions that might refer to different entities in different contexts.
1 code implementation • EMNLP 2016 • Jacob Andreas, Dan Klein
We present a model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics.
no code implementations • ACL 2016 • Greg Durrett, Taylor Berg-Kirkpatrick, Dan Klein
We present a discriminative model for single-document summarization that integrally combines compression and anaphoricity constraints.
3 code implementations • NAACL 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
We describe a question answering model that applies to both images and structured knowledge bases.
2 code implementations • CVPR 2016 • Jacob Andreas, Marcus Rohrbach, Trevor Darrell, Dan Klein
Visual question answering is fundamentally compositional in nature -- a question like "where is the dog?"
Ranked #6 on Visual Question Answering (VQA) on VQA v1 test-std
1 code implementation • EMNLP 2015 • Jacob Andreas, Dan Klein
This paper describes an alignment-based model for interpreting natural language instructions in context.
no code implementations • IJCNLP 2015 • Greg Durrett, Dan Klein
This paper describes a parsing model that combines the exact dynamic programming of CRF parsing with the rich nonlinear featurization of neural net approaches.
no code implementations • NeurIPS 2015 • Jacob Andreas, Maxim Rabinovich, Dan Klein, Michael I. Jordan
Calculation of the log-normalizer is a major computational obstacle in applications of log-linear models with large output spaces.
no code implementations • NeurIPS 2014 • Taylor Berg-Kirkpatrick, Jacob Andreas, Dan Klein
We present a new probabilistic model for transcribing piano music from audio to a symbolic form.
no code implementations • TACL 2014 • Greg Durrett, Dan Klein
We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities).
Ranked #28 on Named Entity Recognition (NER) on Ontonotes v5 (English)
no code implementations • NeurIPS 2009 • Alexandre Bouchard-Côté, Slav Petrov, Dan Klein
Pruning can massively accelerate the computation of feature expectations in large models.
1 code implementation • 1 Aug 2009 • Percy Liang, Michael Jordan, Dan Klein
A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state.
no code implementations • NeurIPS 2008 • Alexandre Bouchard-Côté, Dan Klein, Michael I. Jordan
Accurate and efficient inference in evolutionary trees is a central problem in computational biology.
no code implementations • NeurIPS 2007 • Alexandre Bouchard-Côté, Percy S. Liang, Dan Klein, Thomas L. Griffiths
We present a probabilistic approach to language change in which word forms are represented by phoneme sequences that undergo stochastic edits along the branches of a phylogenetic tree.
no code implementations • 1 Jul 2004 • Dan Klein, Christopher Manning
We present a generative model for the unsupervised learning of dependency structures.
Ranked #6 on Unsupervised Dependency Parsing on Penn Treebank