Timeline Summarisation (TLS) aims to generate a concise, time-ordered list of events described in sources such as news articles.
This paper proposes a TLS system that interactively learns from user feedback via reinforcement learning and generates timelines tailored to the user's interests.
no code implementations • 31 Aug 2022 • Marcos Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro H. Martins, André F. T. Martins, Jessica Zosa Forde, Peter Milder, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz
Recent work in natural language processing (NLP) has yielded appealing results from scaling model parameters and training data; however, using only scale to improve performance means that resource consumption also grows.
Peer review is the primary means of quality control in academia; as the outcome of the peer-review process, program and area chairs make acceptance decisions for each paper based on the review reports and scores they have received.
Disagreement between coders is ubiquitous in virtually all datasets annotated with human judgements in both natural language processing and computer vision.
A growing body of recent work argues that following convention and training with adjudicated labels ignores any uncertainty the labellers had in their classifications, resulting in models with poorer generalisation capabilities.
We therefore adapt the DirectRanker to provide a new deep model for ranking creative language with limited data.
Most humour processing systems to date make at best discrete, coarse-grained distinctions between the comical and the conventional, yet such notions are better conceptualised as a broad spectrum.
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
Neural models for response generation produce responses that are semantically plausible but not necessarily factually consistent with facts describing the speaker's persona.
As previous solutions based on Gaussian processes do not scale to large numbers of users, items or pairwise labels, we propose a stochastic variational inference approach that limits computational and memory costs.
For many NLP applications, such as question answering and summarisation, the goal is to select the best solution from a large space of candidates to meet a particular user's needs.
The inability to quantify key aspects of creative language is a frequent obstacle to natural language understanding.
Such applications depend on classifying the situation across a region of interest, which can be depicted as a spatial "heatmap".
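As a rough illustration of what such a spatial "heatmap" might look like computationally, the sketch below aggregates point reports into grid-cell counts. The report format and cell-based binning are hypothetical simplifications, not the probabilistic model from the paper.

```python
from collections import Counter

def build_heatmap(reports, cell_size):
    """Aggregate point reports (x, y) into counts per grid cell.

    Each report is binned into the cell containing it, yielding a
    sparse count map over the region of interest.
    """
    counts = Counter()
    for x, y in reports:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

heatmap = build_heatmap([(0.5, 0.5), (0.7, 0.2), (3.1, 0.4)], cell_size=1.0)
# cell (0, 0) holds two reports, cell (3, 0) holds one
```

A real system would replace raw counts with a smoothed or model-based estimate per cell, but the binning step is the common starting point.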
Visual modifications to text are often used to obfuscate offensive comments in social media (e.g., "!d10t") or as a writing style ("1337" in "leet speak"), among other scenarios.
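To make the kind of substitution concrete, here is a minimal sketch that maps visually similar characters back to plain letters. The substitution table is a small hypothetical example, not the mapping used by any particular system, and such mappings are inherently ambiguous (e.g., "1" can stand for "l" or "i").

```python
# Hypothetical table of common visual character substitutions.
LEET_MAP = {"1": "l", "3": "e", "0": "o", "7": "t", "!": "i", "@": "a", "$": "s"}

def normalise_leet(text: str) -> str:
    """Replace visually similar characters with their likely letter forms."""
    return "".join(LEET_MAP.get(ch, ch) for ch in text)

print(normalise_leet("1337"))  # -> "leet"
```

A dictionary lookup like this is only a heuristic; robust de-obfuscation needs context to resolve ambiguous substitutions.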
Current methods for sequence tagging, a core task in NLP, are data-hungry, which motivates the use of crowdsourcing as a cheap way to obtain labelled data.
We introduce a scalable Bayesian preference learning method for identifying convincing arguments in the absence of gold-standard ratings or rankings.
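The paper's method uses Gaussian process preference learning; as a hedged illustration of the underlying idea, a pairwise preference likelihood of the Bradley-Terry form can be sketched from latent "convincingness" scores. The scores and function names here are illustrative, not the paper's exact model.

```python
import math

def preference_probability(f_a: float, f_b: float) -> float:
    """Probability that argument a is preferred over argument b,
    given latent convincingness scores f_a and f_b, under a
    Bradley-Terry (logistic) pairwise comparison model."""
    return 1.0 / (1.0 + math.exp(-(f_a - f_b)))

# Equal scores give a 50/50 preference; a higher score raises the probability.
p = preference_probability(1.2, 0.4)
```

Fitting the latent scores from observed pairwise labels (here, via a Gaussian process prior over them) is what turns this likelihood into a ranking method.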
1 code implementation • 21 Apr 2015 • Philip J. Marshall, Aprajita Verma, Anupreeta More, Christopher P. Davis, Surhud More, Amit Kapadia, Michael Parrish, Chris Snyder, Julianne Wilcox, Elisabeth Baeten, Christine Macmillan, Claude Cornen, Michael Baumer, Edwin Simpson, Chris J. Lintott, David Miller, Edward Paget, Robert Simpson, Arfon M. Smith, Rafael Küng, Prasenjit Saha, Thomas E. Collett, Matthias Tecza
We describe Space Warps, a novel gravitational lens discovery service that yields samples of high purity and completeness through crowd-sourced visual inspection.
Instrumentation and Methods for Astrophysics • Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies