no code implementations • 4 Apr 2018 • Paul Azunre, Craig Corcoran, David Sullivan, Garrett Honke, Rebecca Ruppel, Sandeep Verma, Jonathon Morgan
This paper describes an abstractive summarization method for tabular data which employs a knowledge base semantic embedding to generate the summary.
no code implementations • 24 May 2019 • Numa Dhamani, Paul Azunre, Jeffrey L. Gleason, Craig Corcoran, Garrett Honke, Steve Kramer, Jonathon Morgan
We apply an ensemble pipeline composed of a character-level convolutional neural network (CNN) and a long short-term memory (LSTM) as a general tool for addressing a range of disinformation problems.
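The abstract describes an ensemble of a character-level CNN and an LSTM. A minimal, hypothetical sketch of the ensembling step is below: the two trained models are stood in for by toy scoring functions (not the paper's actual models), and their class probabilities are averaged before thresholding.

```python
# Hedged sketch of probability-averaging ensembling. The two scoring
# functions below are toy stand-ins for the trained character-level CNN
# and LSTM described in the paper; only the combination logic is real.

def char_cnn_score(text):
    # Stand-in for the character-level CNN: returns a probability-like
    # score in [0, 1] (toy heuristic, NOT the paper's model).
    return min(1.0, text.count("!") / 5.0)

def lstm_score(text):
    # Stand-in for the LSTM sequence model.
    return min(1.0, len(text) / 200.0)

def ensemble_predict(text, threshold=0.5):
    """Average the two model probabilities and apply a decision threshold."""
    p = 0.5 * (char_cnn_score(text) + lstm_score(text))
    return p, p >= threshold
```

Averaging calibrated probabilities is one common way to combine heterogeneous models; the paper may use a different combination rule.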
no code implementations • ICLR 2021 • Garrett Honke, Irina Higgins, Nina Thigpen, Vladimir Miskovic, Katie Link, Sunny Duan, Pramod Gupta, Julia Klawohn, Greg Hajcak
Despite extensive standardization, diagnostic interviews for mental health disorders involve substantial subjective judgment.
no code implementations • 14 Nov 2020 • Ayse S. Cakmak, Nina Thigpen, Garrett Honke, Erick Perez Alday, Ali Bahrami Rad, Rebecca Adaimi, Chia Jung Chang, Qiao Li, Pramod Gupta, Thomas Neylan, Samuel A. McLean, Gari D. Clifford
The results indicate that the VAE model is a promising approach for actigraphy data analysis for mental health outcomes in long-term studies.
1 code implementation • 24 Jan 2019 • Paul Azunre, Craig Corcoran, Numa Dhamani, Jeffrey Gleason, Garrett Honke, David Sullivan, Rebecca Ruppel, Sandeep Verma, Jonathon Morgan
Simulated data containing a set of base classes is first used to learn an initial set of weights.
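The two-stage recipe described here (learn initial weights from simulated data over the base classes, then continue training on real data) can be illustrated with a deliberately tiny sketch. Plain one-feature logistic regression stands in for the actual model, and all data and names are illustrative assumptions.

```python
import math

# Hedged sketch of the described transfer-learning recipe: pretrain on
# simulated base-class data, then fine-tune on real data starting from
# the pretrained weights. Logistic regression is a stand-in model.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.1, epochs=200):
    """One-feature logistic regression fit via gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Step 1: learn an initial set of weights from simulated examples.
simulated = [(x / 10.0, 1 if x > 5 else 0) for x in range(11)]
w0, b0 = train(simulated)

# Step 2: fine-tune on scarce real data, initialized from those weights.
real = [(0.2, 0), (0.9, 1)]
w1, b1 = train(real, w=w0, b=b0, epochs=50)
```

The point of the warm start is that the fine-tuning stage needs far fewer real examples than training from scratch would.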
1 code implementation • 2 Apr 2024 • Joel Niklaus, Lucia Zheng, Arya D. McCarthy, Christopher Hahn, Brian M. Rosen, Peter Henderson, Daniel E. Ho, Garrett Honke, Percy Liang, Christopher Manning
In this work, we curate LawInstruct, a large legal instruction dataset covering 17 jurisdictions, 24 languages, and a total of 12M examples.
1 code implementation • 25 Apr 2022 • Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, Heng Ji
We present MolT5, a self-supervised learning framework for pretraining models on a vast amount of unlabeled natural language text and molecule strings.
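The self-supervised objective referenced here is T5-style span corruption, applied uniformly to natural language and to molecule (SMILES) strings. A minimal sketch of that corruption step follows; the sentinel-token name is illustrative.

```python
# Hedged sketch of T5-style span corruption: mask a contiguous span of
# the input with a sentinel token, and train the model to reconstruct
# the masked span. Works identically on words and on SMILES characters.

def corrupt(tokens, span_start, span_len, sentinel="<extra_id_0>"):
    """Mask one contiguous span; return the (input, target) pair."""
    inp = tokens[:span_start] + [sentinel] + tokens[span_start + span_len:]
    tgt = [sentinel] + tokens[span_start:span_start + span_len]
    return inp, tgt

text = "the molecule is an aromatic ring".split()
inp, tgt = corrupt(text, 2, 2)
# inp -> ['the', 'molecule', '<extra_id_0>', 'aromatic', 'ring']
# tgt -> ['<extra_id_0>', 'is', 'an']

# The same objective applied to a SMILES string (benzene), tokenized
# here naively character by character:
smiles = list("c1ccccc1")
inp2, tgt2 = corrupt(smiles, 1, 4)
```

Because the objective is representation-agnostic, one pretrained model can later be fine-tuned in either direction: text-to-molecule or molecule-to-text.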
Ranked #4 on Text-based de novo Molecule Generation on ChEBI-20