no code implementations • ACL (RepL4NLP) 2021 • Kevin Huang, Peng Qi, Guangtao Wang, Tengyu Ma, Jing Huang
In this paper, we propose a novel framework, E2GRE (Entity and Evidence Guided Relation Extraction), that jointly extracts relations and the underlying evidence sentences by using a large pretrained language model (LM) as the input encoder.
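As a rough illustration of the joint setup described above, the sketch below wires a relation classifier and an evidence scorer onto LM encoder outputs; the module names, the bilinear evidence scorer, and all shapes are illustrative assumptions, not the E2GRE implementation.

```python
# Hypothetical joint relation + evidence head over pretrained-LM hidden states.
import torch
import torch.nn as nn

class JointRelationEvidenceHead(nn.Module):
    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        # Relation classifier over concatenated head/tail entity embeddings.
        self.rel_classifier = nn.Linear(2 * hidden_size, num_relations)
        # Evidence scorer over per-sentence embeddings, conditioned on the pair.
        self.evid_scorer = nn.Bilinear(2 * hidden_size, hidden_size, 1)

    def forward(self, head_emb, tail_emb, sent_embs):
        # head_emb, tail_emb: (batch, hidden); sent_embs: (batch, n_sents, hidden)
        pair = torch.cat([head_emb, tail_emb], dim=-1)
        rel_logits = self.rel_classifier(pair)
        pair_exp = pair.unsqueeze(1).expand(-1, sent_embs.size(1), -1).contiguous()
        evid_logits = self.evid_scorer(pair_exp, sent_embs).squeeze(-1)
        return rel_logits, evid_logits

# Toy usage with random tensors standing in for encoder outputs.
head, tail = torch.randn(4, 768), torch.randn(4, 768)
sents = torch.randn(4, 10, 768)
rel_logits, evid_logits = JointRelationEvidenceHead(768, 97)(head, tail, sents)
```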
no code implementations • EMNLP 2021 • Tim O’Gorman, Zach Jensen, Sheshera Mysore, Kevin Huang, Rubayyat Mahbub, Elsa Olivetti, Andrew McCallum
Material science synthesis procedures are a promising domain for scientific NLP, as proper modeling of these recipes could provide insight into new ways of creating materials.
1 code implementation • 23 Sep 2024 • Hong Nguyen, Sean Foley, Kevin Huang, Xuan Shi, Tiantian Feng, Shrikanth Narayanan
Understanding speech production both visually and kinematically can inform second language learning system designs, as well as the creation of speaking characters in video games and animations.
1 code implementation • 12 Jun 2024 • Anfeng Xu, Kevin Huang, Tiantian Feng, Lue Shen, Helen Tager-Flusberg, Shrikanth Narayanan
Speech foundation models, trained on vast datasets, have opened unique opportunities in addressing challenging low-resource speech understanding, such as child speech.
1 code implementation • 13 Oct 2023 • Kevin Huang, Rwik Rana, Alexander Spitzer, Guanya Shi, Byron Boots
Precise arbitrary trajectory tracking for quadrotors is challenging due to unknown nonlinear dynamics, trajectory infeasibility, and actuation limits.
1 code implementation • 6 Oct 2023 • Jacob Sacks, Rwik Rana, Kevin Huang, Alex Spitzer, Guanya Shi, Byron Boots
A major challenge in robotics is to design robust policies which enable complex and agile behaviors in the real world.
no code implementations • 3 Oct 2023 • Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan
Building on the foundation of an audio-only child-adult speaker classification pipeline, we propose incorporating visual cues through active speaker detection and visual processing models.
1 code implementation • 11 Apr 2023 • Jeremy Bernstein, Chris Mingard, Kevin Huang, Navid Azizan, Yisong Yue
Automatic gradient descent trains both fully-connected and convolutional networks out-of-the-box and at ImageNet scale.
1 code implementation • 2 Jun 2022 • Hyeonsu B. Kang, Sheshera Mysore, Kevin Huang, Haw-Shiuan Chang, Thorben Prein, Andrew McCallum, Aniket Kittur, Elsa Olivetti
Exposure to ideas in domains outside a scientist's own may benefit her in reformulating existing research problems in novel ways and discovering new application domains for existing solution ideas.
1 code implementation • 14 Dec 2021 • Kevin Huang, Sahin Lale, Ugo Rosolia, Yuanyuan Shi, Anima Anandkumar
It then uses the top trajectories as initialization for gradient descent and applies gradient updates to each of these trajectories to find the optimal action sequence.
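A minimal sketch of this sample-then-refine planning step, assuming a toy differentiable dynamics and cost model in place of the learned ones used in the paper:

```python
# Sample random action sequences, keep the top-k by cost, then refine each
# elite sequence with a few gradient-descent steps on the rollout cost.
import torch

def rollout_cost(actions, x0):
    # Toy linear dynamics x_{t+1} = x_t + a_t with a quadratic tracking cost.
    x, cost = x0, 0.0
    for a in actions:
        x = x + a
        cost = cost + (x ** 2).sum() + 0.01 * (a ** 2).sum()
    return cost

def plan(x0, horizon=10, n_samples=256, top_k=8, grad_steps=20, lr=0.05):
    samples = torch.randn(n_samples, horizon, x0.shape[0])
    costs = torch.stack([rollout_cost(s, x0) for s in samples])
    elites = samples[costs.topk(top_k, largest=False).indices].clone().requires_grad_(True)
    opt = torch.optim.Adam([elites], lr=lr)
    for _ in range(grad_steps):
        opt.zero_grad()
        loss = sum(rollout_cost(e, x0) for e in elites)
        loss.backward()
        opt.step()
    with torch.no_grad():
        final = torch.stack([rollout_cost(e, x0) for e in elites])
    return elites[final.argmin()].detach()

best_actions = plan(torch.tensor([1.0, -0.5]))
```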
no code implementations • 24 Nov 2021 • Shlomo Dubnov, Kevin Huang, Cheng-i Wang
The framework is based on a Music Information Dynamics model, a Variable Markov Oracle (VMO), and is extended with variational representation learning of audio.
no code implementations • NAACL 2021 • Lingxiao Wang, Kevin Huang, Tengyu Ma, Quanquan Gu, Jing Huang
The core of our algorithm is to introduce a novel variance reduction term to the gradient estimation when performing the task adaptation.
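One way such a variance-reduction term can look, assuming an SVRG-style control variate built around the pre-adaptation parameters (the paper's exact term may differ):

```python
# Variance-reduced adaptation step: the minibatch gradient at the current
# parameters is recentered by gradients taken at the initial parameters.
import torch

def vr_adapt_step(params, params_init, loss_fn, minibatch, full_batch, lr=0.01):
    # Stochastic gradient at the current parameters on the minibatch.
    g = torch.autograd.grad(loss_fn(params, minibatch), params)
    # Control variate: minibatch gradient at the initial parameters...
    g_init_mb = torch.autograd.grad(loss_fn(params_init, minibatch), params_init)
    # ...recentered by the full-batch gradient at the initial parameters.
    g_init_full = torch.autograd.grad(loss_fn(params_init, full_batch), params_init)
    return [p - lr * (gi - gm + gf)
            for p, gi, gm, gf in zip(params, g, g_init_mb, g_init_full)]

# Toy usage: linear regression with explicit parameter tensors.
def loss_fn(params, batch):
    w, = params
    x, y = batch
    return ((x @ w - y) ** 2).mean()

x, y = torch.randn(100, 3), torch.randn(100)
w0 = torch.randn(3, requires_grad=True)
w = w0.clone().detach().requires_grad_(True)
w_new = vr_adapt_step([w], [w0], loss_fn, (x[:10], y[:10]), (x, y))
```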
no code implementations • 13 Apr 2021 • Pablo Moscato, Hugh Craig, Gabriel Egan, Mohammad Nazmul Haque, Kevin Huang, Julia Sloan, Jon Corrales de Oliveira
In this study, we took a set of Shakespeare-era plays (181 plays from the period 1585–1610), added the best-guess dates for them from a standard reference work as metadata, and calculated a set of probabilities of individual words in these samples.
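A hypothetical sketch of the word-probability step, with tokenization and data layout chosen purely for illustration:

```python
# For each play (kept alongside its best-guess date), compute each word's
# relative frequency within that play.
from collections import Counter
import re

def word_probabilities(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# plays: list of (title, best_guess_date, raw_text) tuples built from the corpus.
plays = [("Hamlet", 1600, "To be, or not to be, that is the question ...")]
features = [(title, date, word_probabilities(text)) for title, date, text in plays]
```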
no code implementations • 4 Dec 2020 • Ji Eun Kim, Cory Henson, Kevin Huang, Tuan A. Tran, Wan-Yi Lin
We show that our knowledge graph approach can reduce the sign search space by 98.9%.
no code implementations • 27 Nov 2020 • Pablo Moscato, Mohammad Nazmul Haque, Kevin Huang, Julia Sloan, Jon C. de Oliveira
We refer to $S$ as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances $\mathbf{x}$.
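One plausible reading of this setup, assuming a standard regularized empirical-risk formulation (the excerpt does not state the paper's exact objective):

$$
\hat{f} \;\in\; \arg\min_{g \in \mathcal{F}} \; \frac{1}{|S|} \sum_{(\mathbf{x}_i, y_i) \in S} \ell\big(g(\mathbf{x}_i), y_i\big) \;+\; \lambda\, C(g),
$$

where $S = \{(\mathbf{x}_i, y_i)\}$ is the training set, $\mathcal{F}$ is a class of candidate low-complexity expressions, $\ell$ is a pointwise error such as squared loss, and $C(g)$ penalizes expression complexity; $\mathcal{F}$, $\ell$, $\lambda$, and $C$ are illustrative symbols rather than the paper's notation.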
1 code implementation • 21 Oct 2020 • Wenxuan Zhou, Kevin Huang, Tengyu Ma, Jing Huang
In this paper, we propose two novel techniques, adaptive thresholding and localized context pooling, to solve the multi-label and multi-entity problems (a minimal sketch of the thresholding idea follows this entry).
Ranked #6 on Relation Extraction on ReDocRED
Document-level Relation Extraction • Multi-Label Classification • +2
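A minimal sketch of the adaptive-thresholding idea from the entry above, assuming per-entity-pair relation logits are already computed; the threshold-class construction below follows the general idea, not necessarily the authors' exact formulation.

```python
# A learnable threshold (TH) class separates positive from negative relations:
# gold relations should outrank TH, and TH should outrank all other relations.
import torch
import torch.nn.functional as F

def adaptive_threshold_loss(logits, labels):
    # logits: (batch, 1 + num_relations); column 0 is the TH class.
    # labels: (batch, num_relations) multi-hot gold relations.
    th_mask = torch.zeros_like(logits)
    th_mask[:, 0] = 1.0
    pos_mask = torch.cat([torch.zeros_like(labels[:, :1]), labels], dim=1)
    # Positive part: each gold relation should outrank the TH class.
    pos_logits = logits.masked_fill((pos_mask + th_mask) < 1, -1e30)
    loss_pos = -(F.log_softmax(pos_logits, dim=-1) * pos_mask).sum(dim=1)
    # Negative part: the TH class should outrank every non-gold relation.
    neg_logits = logits.masked_fill(pos_mask > 0, -1e30)
    loss_neg = -F.log_softmax(neg_logits, dim=-1)[:, 0]
    return (loss_pos + loss_neg).mean()

def predict(logits):
    # At inference, output every relation whose logit exceeds the TH logit.
    return (logits[:, 1:] > logits[:, :1]).float()

# Toy usage with random logits and sparse multi-hot labels.
logits = torch.randn(4, 1 + 96)
labels = (torch.rand(4, 96) > 0.95).float()
loss = adaptive_threshold_loss(logits, labels)
```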
no code implementations • 27 Aug 2020 • Kevin Huang, Guangtao Wang, Tengyu Ma, Jing Huang
Document-level relation extraction is a challenging task which requires reasoning over multiple sentences in order to predict relations in a document.
Ranked #14 on Relation Extraction on DocRED
no code implementations • ICLR 2020 • Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, Quanquan Gu
Recent Transformer-based models such as Transformer-XL and BERT have achieved huge success on various natural language processing tasks.
1 code implementation • 1 Nov 2019 • Ming Tu, Kevin Huang, Guangtao Wang, Jing Huang, Xiaodong He, Bo-Wen Zhou
Interpretable multi-hop reading comprehension (RC) over multiple documents is a challenging problem because it demands reasoning over multiple information sources and explaining the answer prediction by providing supporting evidence.
no code implementations • CoNLL 2019 • Kevin Huang, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou
We test the relation module on the SQuAD 2.0 dataset using both the BiDAF and BERT models as baseline readers.
no code implementations • 23 Oct 2019 • Kevin Huang, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou
In this paper, we aim to improve an MRC model's ability to determine whether a question has an answer in a given context (e.g., the recently proposed SQuAD 2.0 task).
no code implementations • WS 2019 • Sheshera Mysore, Zach Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, Elsa Olivetti
Materials science literature contains millions of materials synthesis procedures described in unstructured natural language text.
1 code implementation • 31 Dec 2018 • Edward Kim, Zach Jensen, Alexander van Grootel, Kevin Huang, Matthew Staib, Sheshera Mysore, Haw-Shiuan Chang, Emma Strubell, Andrew McCallum, Stefanie Jegelka, Elsa Olivetti
Leveraging new data sources is a key step in accelerating the pace of materials design and discovery.
no code implementations • 24 Aug 2018 • Joseph Chuang, Eric Tsai, Kevin Huang, Jay Fetter
DenseNets have been shown to be a competitive model among recent convolutional network architectures.
no code implementations • 18 Nov 2017 • Sheshera Mysore, Edward Kim, Emma Strubell, Ao Liu, Haw-Shiuan Chang, Srikrishna Kompella, Kevin Huang, Andrew McCallum, Elsa Olivetti
In this work, we present a system for automatically extracting structured representations of synthesis procedures from the texts of materials science journal articles that describe explicit, experimental syntheses of inorganic compounds.