Search Results for author: Kuzman Ganchev

Found 17 papers, 6 papers with code

Text-Blueprint: An Interactive Platform for Plan-based Conditional Generation

no code implementations 28 Apr 2023 Fantine Huot, Joshua Maynez, Shashi Narayan, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Anders Sandholm, Dipanjan Das, Mirella Lapata

While conditional generation models can now generate natural language well enough to create fluent text, it is still difficult to control the generation process, leading to irrelevant, repetitive, and hallucinated content.

Query-focused Summarization Text Generation

Towards Computationally Verifiable Semantic Grounding for Language Models

no code implementations 16 Nov 2022 Chris Alberti, Kuzman Ganchev, Michael Collins, Sebastian Gehrmann, Ciprian Chelba

Compared to a baseline that generates text using greedy search, we demonstrate two techniques that improve the fluency and semantic accuracy of the generated text; the first samples multiple candidate text sequences, from which the semantic parser chooses.

Language Modelling
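The sample-and-rerank idea described above is straightforward to sketch: draw several candidates from the generator, parse each back into a meaning representation (MR), and keep the candidate whose parse best matches the input. A minimal sketch in Python; the names (generate_candidates, parse_to_mr) and the F1-based matching score are illustrative placeholders, not the paper's API.

```python
def mr_match_score(predicted_mr: set, reference_mr: set) -> float:
    """F1 between predicted and reference MR facts (one simple matching choice)."""
    if not predicted_mr and not reference_mr:
        return 1.0
    tp = len(predicted_mr & reference_mr)
    precision = tp / len(predicted_mr) if predicted_mr else 0.0
    recall = tp / len(reference_mr) if reference_mr else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def grounded_generate(reference_mr, generate_candidates, parse_to_mr, n=8):
    """Sample n candidate texts and return the one whose parse matches best."""
    candidates = generate_candidates(reference_mr, num_samples=n)
    return max(candidates, key=lambda t: mr_match_score(parse_to_mr(t), reference_mr))

# Toy usage with stand-in components:
texts = ["Ada was born in London.", "Ada was born in Paris."]
best = grounded_generate(
    reference_mr={("Ada", "born_in", "London")},
    generate_candidates=lambda mr, num_samples: texts[:num_samples],
    parse_to_mr=lambda t: {("Ada", "born_in", "London")} if "London" in t else set(),
)
print(best)  # "Ada was born in London."
```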

QAmeleon: Multilingual QA with Only 5 Examples

1 code implementation 15 Nov 2022 Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, Mirella Lapata

The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA).

Few-Shot Learning Question Answering
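QAmeleon adapts a pretrained language model with as few as five gold QA examples per language and then uses it to synthesize QA training data for a downstream model. The sketch below shows only a few-shot data-synthesis loop with a stand-in model object; the paper itself relies on parameter-efficient prompt tuning rather than plain prompting, so treat this as an assumption-laden illustration.

```python
def format_prompt(examples, passage):
    """Few-shot prompt: gold (passage, question, answer) triples plus the target passage."""
    shots = "\n\n".join(
        f"Passage: {p}\nQuestion: {q}\nAnswer: {a}" for p, q, a in examples
    )
    return f"{shots}\n\nPassage: {passage}\nQuestion:"

def synthesize_qa(plm, gold_examples, unlabeled_passages):
    """Generate one synthetic QA pair per unlabeled passage."""
    synthetic = []
    for passage in unlabeled_passages:
        completion = plm.generate(format_prompt(gold_examples, passage))  # placeholder API
        if "\nAnswer:" in completion:
            question, answer = completion.split("\nAnswer:", 1)
            synthetic.append((passage, question.strip(), answer.strip()))
    return synthetic

class StubPLM:
    """Stand-in so the sketch runs end to end; a real (prompt-tuned) LM goes here."""
    def generate(self, prompt):
        return " What city is mentioned?\nAnswer: Sofia"

pairs = synthesize_qa(StubPLM(),
                      gold_examples=[("P1", "Q1?", "A1")] * 5,
                      unlabeled_passages=["Sofia is the capital of Bulgaria."])
print(pairs)
```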

Conditional Generation with a Question-Answering Blueprint

1 code implementation 1 Jul 2022 Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, Mirella Lapata

The ability to convey relevant and faithful information is critical for many tasks in conditional generation and yet remains elusive for neural seq-to-seq models whose outputs often reveal hallucinations and fail to correctly cover important details.

Question Answering Question Generation +1
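The blueprint method makes the plan explicit: first predict an intermediate blueprint of question-answer pairs, then generate the output text conditioned on both the input and that blueprint. A minimal two-stage sketch, assuming two trained seq-to-seq models exposing a generate method; the [PLAN] separator and the model interface are hypothetical, not from the paper.

```python
def blueprint_generate(plan_model, text_model, source_document: str) -> str:
    # Stage 1: predict the blueprint, e.g. "Q: ... A: ... | Q: ... A: ..."
    blueprint = plan_model.generate(source_document)
    # Stage 2: generate the final text conditioned on source + blueprint,
    # so each output sentence should be traceable to a planned QA pair.
    return text_model.generate(f"{source_document} [PLAN] {blueprint}")
```

Splitting planning from realization is what gives the controllability: editing the blueprint (dropping or adding a QA pair) changes the output without retraining either stage.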

Feature-Rich Named Entity Recognition for Bulgarian Using Conditional Random Fields

no code implementations26 Sep 2021 Georgi Georgiev, Preslav Nakov, Kuzman Ganchev, Petya Osenova, Kiril Ivanov Simov

The paper presents a feature-rich approach to the automatic recognition and categorization of named entities (persons, organizations, locations, and miscellaneous) in news text for Bulgarian.

Named Entity Recognition +2
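A feature-rich CRF tagger of this kind can be reproduced in a few lines with the third-party sklearn-crfsuite package: each token becomes a dictionary of surface features, and the CRF learns transitions between BIO labels. The feature set below (case, affixes, neighboring words) is a small illustrative subset; the paper's Bulgarian system adds gazetteer, morphological, and domain-specific features on top.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Surface features for token i of a tokenized sentence."""
    word = sent[i]
    feats = {
        "word.lower": word.lower(),
        "word.istitle": word.istitle(),
        "word.isdigit": word.isdigit(),
        "prefix3": word[:3],
        "suffix3": word[-3:],
    }
    feats["prev.lower"] = sent[i - 1].lower() if i > 0 else "<BOS>"
    feats["next.lower"] = sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"
    return feats

# Toy training set: one tokenized sentence with BIO labels.
sent = ["Ivan", "lives", "in", "Sofia"]
X = [[token_features(sent, i) for i in range(len(sent))]]
y = [["B-PER", "O", "O", "B-LOC"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```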

State-of-the-art Chinese Word Segmentation with Bi-LSTMs

1 code implementation EMNLP 2018 Ji Ma, Kuzman Ganchev, David Weiss

A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation.

Chinese Word Segmentation
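The standard neural recipe here casts segmentation as character tagging with B/M/E/S labels (begin, middle, end, single-character word) and reads word boundaries off the predicted tags. A minimal bidirectional-LSTM tagger in PyTorch; the paper's models add character-bigram embeddings, stacking, and careful tuning on top of this skeleton.

```python
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # scores for B/M/E/S

    def forward(self, char_ids):          # (batch, seq_len) character ids
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)                # (batch, seq_len, num_tags)

model = BiLSTMSegmenter(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 10)))  # two 10-character sentences
print(logits.shape)  # torch.Size([2, 10, 4])
```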

Globally Normalized Transition-Based Neural Networks

1 code implementation ACL 2016 Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, Michael Collins

Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models.

Dependency Parsing Part-Of-Speech Tagging +2
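Global normalization means scoring a whole sequence of transition decisions and normalizing once over complete sequences, rather than applying a local softmax at each step (which causes the label bias problem). A toy sketch with torch, simplified so that per-step scores are shared across beam elements; in the actual model, each step's scores depend on the parser state reached so far.

```python
import torch

def sequence_scores(step_logits, action_seqs):
    """step_logits: (steps, num_actions); action_seqs: (beam, steps).
    Unnormalized score of each action sequence = sum of its per-step scores."""
    steps = torch.arange(step_logits.size(0))
    return step_logits[steps, action_seqs].sum(dim=1)  # (beam,)

def global_nll(step_logits, beam, gold_index):
    """CRF-style loss: -log softmax over sequence-level scores in the beam."""
    scores = sequence_scores(step_logits, beam)
    return -torch.log_softmax(scores, dim=0)[gold_index]

step_logits = torch.randn(5, 3, requires_grad=True)    # 5 steps, 3 actions
beam = torch.tensor([[0, 1, 2, 1, 0],                  # gold action sequence
                     [1, 1, 0, 2, 2]])                 # a competing sequence
loss = global_nll(step_logits, beam, gold_index=0)
loss.backward()
```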

Context-Dependent Fine-Grained Entity Type Tagging

4 code implementations 3 Dec 2014 Dan Gillick, Nevena Lazic, Kuzman Ganchev, Jesse Kirchner, David Huynh

We propose the task of context-dependent fine type tagging, where the set of acceptable labels for a mention is restricted to only those deducible from the local context (e.g., sentence or document).

Entity Typing
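Concretely, context-dependent typing filters a mention's knowledge-base types down to those the local context supports: a politician giving a speech should not also be labeled an author just because the KB lists both types. The cue-word heuristic below is a hypothetical stand-in for the paper's label-filtering procedure, shown only to make the restriction concrete.

```python
def supported_by_context(type_label: str, context: str, cue_words: dict) -> bool:
    """Keep a type only if one of its cue words occurs in the local context."""
    return any(cue in context.lower() for cue in cue_words.get(type_label, []))

def filter_types(kb_types, context, cue_words):
    kept = {t for t in kb_types if supported_by_context(t, context, cue_words)}
    return kept or set(kb_types)  # fall back to the full KB set if nothing matches

cues = {"/person/politician": ["senator", "president", "campaign"],
        "/person/author": ["novel", "wrote", "book"]}
print(filter_types({"/person/politician", "/person/author"},
                   "The president gave a speech on Tuesday.", cues))
# {'/person/politician'}
```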

Controlling Complexity in Part-of-Speech Induction

no code implementations 16 Jan 2014 João V. Graça, Kuzman Ganchev, Luisa Coheur, Fernando Pereira, Ben Taskar

We consider the problem of fully unsupervised learning of grammatical (part-of-speech) categories from unlabeled text.

Inductive Bias

Posterior vs Parameter Sparsity in Latent Variable Models

no code implementations NeurIPS 2009 Kuzman Ganchev, Ben Taskar, Fernando Pereira, João Graça

We apply this new method to learn first-order HMMs for unsupervised part-of-speech (POS) tagging, and show that HMMs learned this way consistently and significantly outperform both EM-trained HMMs and HMMs with a sparsity-inducing Dirichlet prior trained by variational EM.

Part-Of-Speech Tagging POS +1
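The posterior-sparsity regularizer is an L1/L-infinity norm on the tag posteriors: for each (word type, tag) pair, take the maximum posterior over all occurrences of that word, then sum these maxima, so the objective is small when each word type concentrates its mass on few tags. A sketch of just that penalty in numpy; wiring it into the EM E-step as a projection is the paper's contribution and is not shown here.

```python
import numpy as np

def l1_linf_penalty(posteriors, word_ids, vocab_size):
    """posteriors: (num_tokens, num_tags) tag posteriors from the E-step;
    word_ids: word type of each token. Returns the L1/Linf sparsity penalty."""
    num_tags = posteriors.shape[1]
    maxima = np.zeros((vocab_size, num_tags))
    for i, w in enumerate(word_ids):
        maxima[w] = np.maximum(maxima[w], posteriors[i])
    return maxima.sum()  # small when each word type sticks to few tags

# Two occurrences of word 0 agreeing on a tag cost less than disagreeing:
agree = np.array([[0.1, 0.9], [0.1, 0.9]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
print(l1_linf_penalty(agree, [0, 0], vocab_size=1))     # -> 1.0
print(l1_linf_penalty(disagree, [0, 0], vocab_size=1))  # -> 1.8
```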
