no code implementations • COLING 2016 • Fan Zhang, Diane Litman, Katherine Forbes-Riley
Penn Discourse Treebank (PDTB)-style annotation focuses on labeling local discourse relations between text spans and typically ignores larger discourse contexts.
1 code implementation • 3 Dec 2016 • Muthu Kumar Chandrasekaran, Carrie Demmans Epp, Min-Yen Kan, Diane Litman
We tackle the prediction of instructor intervention in student posts from discussion forums in Massive Open Online Courses (MOOCs).
no code implementations • 28 Feb 2017 • Fan Zhang, Diane Litman
This paper proposes an approach that identifies the revision location and the revision type jointly to solve the issue of error propagation.
no code implementations • ACL 2017 • Fan Zhang, Homa B. Hashemi, Rebecca Hwa, Diane Litman
This paper presents ArgRewrite, a corpus of between-draft revisions of argumentative essays.
no code implementations • NAACL 2016 • Wencan Luo, Fei Liu, Zitao Liu, Diane Litman
Student course feedback is generated daily in both classrooms and online course discussion forums.
no code implementations • COLING 2016 • Wencan Luo, Fei Liu, Diane Litman
Teaching large classes remains a great challenge, primarily because it is difficult to attend to all the student needs in a timely manner.
no code implementations • WS 2018 • Zahra Rahimi, Diane Litman
This paper proposes a new weighting method for extending a dyad-level measure of convergence to multi-party dialogues by considering group dynamics instead of simply averaging.
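The core idea — replacing a simple average of dyad scores with a weighting informed by group dynamics — can be sketched as follows. The Jaccard-free toy scorer, the use of interaction counts as weights, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
def dyad_convergence(diff_early, diff_late):
    """Toy dyad score: positive when a feature gap shrinks over the dialogue."""
    return diff_early - diff_late

def group_convergence(pair_stats):
    """pair_stats maps a speaker pair to (diff_early, diff_late, n_exchanges).

    A plain average treats every pair equally; here each dyad score is
    weighted by how often the pair actually exchanged turns, so dominant
    pairs shape the group-level measure more than silent ones.
    """
    total_weight = sum(n for _, _, n in pair_stats.values())
    if total_weight == 0:
        return 0.0
    return sum(
        n * dyad_convergence(early, late)
        for early, late, n in pair_stats.values()
    ) / total_weight

# Example: pair (A, B) talks a lot and converges; (A, C) barely interacts
# and diverges, so it contributes little to the group score.
stats = {
    ("A", "B"): (0.8, 0.2, 9),
    ("A", "C"): (0.1, 0.4, 1),
}
print(group_convergence(stats))  # → 0.51
```

Under an unweighted average the diverging pair would pull the score down much harder; the interaction-count weights keep the measure faithful to who actually talked to whom.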
no code implementations • 25 Jul 2018 • Wencan Luo, Fei Liu, Zitao Liu, Diane Litman
Summarizing content contributed by individuals can be challenging, because people make different lexical choices even when describing the same events.
no code implementations • 6 Aug 2019 • Haoran Zhang, Ahmed Magooda, Diane Litman, Richard Correnti, Elaine Wang, Lindsay Clare Matsumura, Emily Howe, Rafael Quintana
Writing a good essay typically involves students revising an initial paper draft after receiving feedback.
no code implementations • ACL 2017 • Haoran Zhang, Diane Litman
Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students' writing quality.
1 code implementation • WS 2018 • Haoran Zhang, Diane Litman
This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring.
no code implementations • 2 Sep 2019 • Mingzhi Yu, Emer Gilmartin, Diane Litman
Research on human spoken language has shown that speech plays an important role in identifying speaker personality traits.
no code implementations • 2 Sep 2019 • Mingzhi Yu, Diane Litman, Susannah Paletz
Multi-party linguistic entrainment refers to the phenomenon that speakers tend to speak more similarly during conversation.
no code implementations • 3 Sep 2019 • Tazin Afrin, Diane Litman
We present a method for identifying editor roles from students' revision behaviors during argumentative writing.
no code implementations • WS 2017 • Luca Lugini, Diane Litman
High quality classroom discussion is important to student development, enhancing abilities to express claims, reason about other students' claims, and retain information for longer periods of time.
no code implementations • WS 2018 • Tazin Afrin, Diane Litman
Studies of writing revisions rarely focus on revision quality.
no code implementations • WS 2018 • Luca Lugini, Diane Litman, Amanda Godley, Christopher Olshefski
Classroom discussions in English Language Arts have a positive effect on students' reading, writing and reasoning skills.
no code implementations • WS 2018 • Luca Lugini, Diane Litman
This paper focuses on argument component classification for transcribed spoken classroom discussions, with the goal of automatically classifying student utterances into claims, evidence, and warrants.
no code implementations • 9 Feb 2020 • Ahmed Magooda, Diane Litman
Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores than those from a model trained on only student reflection data or only newspaper data.
no code implementations • LREC 2020 • Christopher Olshefski, Luca Lugini, Ravneet Singh, Diane Litman, Amanda Godley
Although Natural Language Processing (NLP) research on argument mining has advanced considerably in recent years, most studies draw on corpora of asynchronous and written texts, often produced by individuals.
no code implementations • ACL 2020 • Haoran Zhang, Diane Litman
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision.
no code implementations • COLING 2020 • Luca Lugini, Diane Litman
Argument mining systems often consider contextual information, i.e., information outside of an argumentative discourse unit, when trained to accomplish tasks such as argument component identification, classification, and relation extraction.
no code implementations • COLING 2020 • Luca Lugini, Christopher Olshefski, Ravneet Singh, Diane Litman, Amanda Godley
Teaching collaborative argumentation is an advanced skill that many K-12 teachers struggle to develop.
no code implementations • 27 May 2021 • Mingzhi Yu, Diane Litman
Retrieval-based dialogue systems select the best response from many candidates.
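The candidate-selection step can be sketched in a few lines. The bag-of-words Jaccard scorer below is an illustrative stand-in for the learned matching models such systems actually use; all names are hypothetical.

```python
def score(context, candidate):
    """Toy matcher: word-overlap (Jaccard) between context and candidate."""
    a, b = set(context.lower().split()), set(candidate.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def select_response(context, candidates):
    """Return the candidate response that best matches the dialogue context."""
    return max(candidates, key=lambda c: score(context, c))

context = "where is the train station"
candidates = [
    "i like pizza",
    "the train station is two blocks north",
    "goodbye",
]
print(select_response(context, candidates))
# → "the train station is two blocks north"
```

In practice the scorer is a trained neural matcher rather than word overlap, but the select-the-argmax-over-candidates structure is the same.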
no code implementations • WS 2020 • Tazin Afrin, Elaine Wang, Diane Litman, Lindsay C. Matsumura, Richard Correnti
Automated writing evaluation systems can improve students' writing insofar as students attend to the feedback provided and revise their essay drafts in ways aligned with such feedback.
no code implementations • 4 Sep 2021 • Mingzhi Yu, Diane Litman, Shuang Ma, Jian Wu
Then we use the model to measure similarity in a corpus-based entrainment analysis.
no code implementations • Findings (EMNLP) 2021 • Ahmed Magooda, Diane Litman
This paper explores three simple data manipulation techniques (synthesis, augmentation, curriculum) for improving abstractive summarization models without the need for any additional data.
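Of the three techniques, the curriculum idea is the simplest to sketch: present training examples in order of increasing difficulty. Using source length as the difficulty proxy is an assumption for illustration; the paper's actual criteria may differ.

```python
def curriculum_order(examples, difficulty=len):
    """Return training examples sorted easiest-first (here: shortest-first)."""
    return sorted(examples, key=difficulty)

docs = [
    "a mid-length reflection here",
    "short",
    "a considerably longer student reflection to summarize",
]
for doc in curriculum_order(docs):
    print(doc)  # shortest reflection is seen first, longest last
```

A training loop would then feed batches in this order (possibly re-sorting or relaxing the ordering over epochs), rather than shuffling uniformly from the start.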
no code implementations • 17 Sep 2021 • Ahmed Magooda, Mohamed Elaraby, Diane Litman
In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning.
no code implementations • 3 Jun 2022 • Omid Kashefi, Tazin Afrin, Meghan Dale, Christopher Olshefski, Amanda Godley, Diane Litman, Rebecca Hwa
The variety of revision unit scopes and purpose granularity levels in ArgRewrite, along with new types of metadata, make it a useful resource for research and applications involving revision analysis.
1 code implementation • COLING 2022 • Mohamed Elaraby, Diane Litman
A challenging task when generating summaries of legal documents is the ability to address their argumentative nature.
1 code implementation • ArgMining (ACL) 2022 • Zhexiong Liu, Meiqi Guo, Yue Dai, Diane Litman
The growing interest in developing corpora of persuasive texts has promoted applications in automated systems, e.g., debating and essay scoring systems; however, there is little prior work mining image persuasiveness from an argumentative perspective.
no code implementations • SIGDIAL (ACL) 2022 • Yuya Asano, Diane Litman, Mingzhi Yu, Nikki Lobczowski, Timothy Nokes-Malach, Adriana Kovashka, Erin Walker
Speakers build rapport in the process of aligning conversational behaviors with each other.
1 code implementation • 6 Nov 2022 • Yang Zhong, Diane Litman
Though many algorithms can be used to automatically summarize legal case decisions, most fail to incorporate domain knowledge about how important sentences in a legal decision relate to a representation of its document structure.
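One way such structural knowledge can enter an extractive pipeline is by reweighting a sentence's base salience by the rhetorical section it comes from. The section names, weights, and scoring function below are illustrative assumptions about legal opinions, not the paper's actual method.

```python
# Hypothetical section weights: sentences from the issue statement and the
# court's reasoning matter more than background narrative.
SECTION_WEIGHT = {"issue": 1.5, "reasoning": 1.2, "background": 0.6}

def rank_sentences(sentences):
    """sentences: list of (text, section, base_score) tuples; best-first."""
    return sorted(
        sentences,
        key=lambda s: s[2] * SECTION_WEIGHT.get(s[1], 1.0),
        reverse=True,
    )

sents = [
    ("The parties met in 2010.", "background", 0.9),
    ("The issue is whether the contract is void.", "issue", 0.7),
    ("The court reasons that consent was absent.", "reasoning", 0.6),
]
print(rank_sentences(sents)[0][0])
# → "The issue is whether the contract is void."
```

Without the structural weights, the high-salience background sentence would have ranked first; the section prior pushes legally central sentences to the top instead.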
1 code implementation • 10 Feb 2023 • Tazin Afrin, Diane Litman
We develop models to classify desirable evidence and desirable reasoning revisions in student argumentative writing.
1 code implementation • 1 Jun 2023 • Mohamed Elaraby, Yang Zhong, Diane Litman
We propose a simple approach for the abstractive summarization of long legal opinions that considers the argument structure of the document.
1 code implementation • 1 Jun 2023 • Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, Richard Correnti
The ability to revise in response to feedback is critical to students' writing success.
no code implementations • 11 Jun 2023 • Yuya Asano, Diane Litman, Mingzhi Yu, Nikki Lobczowski, Timothy Nokes-Malach, Adriana Kovashka, Erin Walker
While speech-enabled teachable agents have some advantages over typing-based ones, they are vulnerable to errors stemming from misrecognition by automatic speech recognition (ASR).
no code implementations • 21 Jun 2023 • Nhat Tran, Benjamin Pierce, Diane Litman, Richard Correnti, Lindsay Clare Matsumura
Rigorous and interactive class discussions that support students to engage in high-level thinking and reasoning are essential to learning and are a central component of most teaching interventions.
no code implementations • 13 Sep 2023 • Tazin Afrin, Diane Litman
We develop models to classify desirable reasoning revisions in argumentative writing.
1 code implementation • 29 Sep 2023 • Yang Zhong, Diane Litman
We propose an approach for the structure controllable summarization of long legal opinions that considers the argument structure of the document.
1 code implementation • 15 Oct 2023 • Zhexiong Liu, Mohamed Elaraby, Yang Zhong, Diane Litman
This paper presents an overview of the ImageArg shared task, the first multimodal Argument Mining shared task co-located with the 10th Workshop on Argument Mining at EMNLP 2023.
1 code implementation • 27 Mar 2024 • Yang Zhong, Mohamed Elaraby, Diane Litman, Ahmed Ashraf Butt, Muhsin Menekse
This paper introduces ReflectSumm, a novel summarization dataset specifically designed for summarizing students' reflective writing.
no code implementations • 1 Apr 2024 • Casey Kennington, Malihe Alikhani, Heather Pon-Barry, Katherine Atwell, Yonatan Bisk, Daniel Fried, Felix Gervits, Zhao Han, Mert Inan, Michael Johnston, Raj Korpan, Diane Litman, Matthew Marge, Cynthia Matuszek, Ross Mead, Shiwali Mohan, Raymond Mooney, Natalie Parde, Jivko Sinapov, Angela Stewart, Matthew Stone, Stefanie Tellex, Tom Williams
The ability to interact with machines using natural human language is becoming not just commonplace, but expected.
no code implementations • EMNLP (ArgMining) 2021 • Nhat Tran, Diane Litman
We utilize multi-task learning to improve argument mining in persuasive online discussions, in which both micro-level and macro-level argumentation must be taken into consideration.
no code implementations • EMNLP (ArgMining) 2021 • Mohamed Elaraby, Diane Litman
We provide a thorough investigation on how to utilize pseudo labels effectively in the self-training scheme.
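The pseudo-label self-training loop can be sketched as: train on labeled data, label the unlabeled pool, and keep only confident predictions as new training data. The threshold-on-margin filter and the nearest-centroid "model" below are stand-in assumptions, not the paper's classifier.

```python
def train(examples):
    """Toy model: per-class mean of 1-D feature values."""
    centroids = {}
    for x, y in examples:
        centroids.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in centroids.items()}

def predict(model, x):
    """Return (label, confidence); confidence is the margin between the
    two nearest class centroids, so points near the boundary score low."""
    label = min(model, key=lambda y: abs(x - model[y]))
    dists = sorted(abs(x - c) for c in model.values())
    margin = dists[1] - dists[0] if len(dists) > 1 else 1.0
    return label, margin

def self_train(labeled, unlabeled, threshold=0.5):
    model = train(labeled)
    pseudo = []
    for x in unlabeled:
        y, conf = predict(model, x)
        if conf >= threshold:  # keep only confident pseudo labels
            pseudo.append((x, y))
    return train(labeled + pseudo), pseudo

labeled = [(0.0, "neg"), (1.0, "neg"), (9.0, "pos"), (10.0, "pos")]
model, pseudo = self_train(labeled, [0.2, 5.1, 9.8])
print(pseudo)  # → [(0.2, 'neg'), (9.8, 'pos')]
```

The ambiguous midpoint example (5.1) is filtered out by the confidence threshold, which is the crux of using pseudo labels effectively: noisy, low-confidence labels hurt more than they help.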
no code implementations • SIGDIAL (ACL) 2022 • Nhat Tran, Diane Litman
To build a goal-oriented dialogue system that can generate responses given a knowledge base, identifying the relevant pieces of information to be grounded in is vital.
no code implementations • EACL (BEA) 2021 • Haoran Zhang, Diane Litman
However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES model.