no code implementations • EMNLP (LAW, DMR) 2021 • Daniel Chen, Martha Palmer, Meagan Vigus
The annotation tool combines syntactic and semantic cues to assign aspects on a sentence-by-sentence basis, following a sequence of rules that each output a UMR aspect.
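A minimal sketch of what such a rule cascade could look like, assuming hypothetical cue features and rule ordering (the cues and defaults below are illustrative, not the tool's actual rules; only the aspect labels follow the UMR inventory):

```python
# Illustrative rule cascade that assigns one UMR aspect label per sentence.
# The cue features and their ordering are hypothetical, not the tool's rules.

def assign_aspect(cues: dict) -> str:
    """Apply rules in sequence; the first rule that fires outputs an aspect."""
    if cues.get("is_stative_verb"):          # lexical/syntactic cue for states
        return "state"
    if cues.get("has_habitual_adverbial"):   # e.g., "usually", "every day"
        return "habitual"
    if cues.get("is_progressive"):           # ongoing event, endpoint not reached
        return "activity"
    if cues.get("has_incompletion_marker"):  # attempted but not completed
        return "endeavor"
    return "performance"                     # fallback: completed event

# Example: "She was running in the park."
print(assign_aspect({"is_progressive": True}))  # -> "activity"
```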
no code implementations • COLING 2022 • Daniel Chen, Alexis Palmer
For the task of classifying verbs in context as dynamic or stative, current models approach human performance, but only for particular data sets.
no code implementations • LREC 2022 • Daniel Chen, Mans Hulden
Adpositions and case markers contain a high degree of polysemy and participate in unique semantic role configurations.
no code implementations • 9 Jun 2024 • Daniel Chen, Anton Schlegel, Jeffrey A. Nanzer
We demonstrate an imageless method of concealed contraband detection using a real-time 75 GHz rotationally dynamic antenna array.
no code implementations • 22 Jun 2023 • Mark Nguyen, Peter Beidler, Joseph Tsai, August Anderson, Daniel Chen, Paul Kinahan, John Kang
Investigators, funders, and the public desire knowledge of topics and trends in publicly funded research, but current efforts at manual categorization are limited in scale and understanding.
no code implementations • 2 Jan 2023 • Daniel Chen, Alexander G. Strang, Andrew W. Eckford, Peter J. Thomas
Many natural and engineered systems can be modeled as discrete state Markov processes.
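As a generic illustration of the setting (not an example from the paper), a discrete-state Markov process is fully specified by a transition matrix and can be simulated directly:

```python
# Generic three-state discrete-time Markov chain, simulated from its
# transition matrix. Illustrative only; not drawn from the paper.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],   # row i holds the transition probabilities out of state i
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

rng = np.random.default_rng(0)
state = 0
trajectory = [state]
for _ in range(1000):
    state = rng.choice(3, p=P[state])
    trajectory.append(state)

# Empirical state occupancy approaches the chain's stationary distribution.
print(np.bincount(trajectory, minlength=3) / len(trajectory))
```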
no code implementations • 16 Oct 2020 • Daniel Chen, Yekun Xu, Betis Baheri, Chuan Bi, Ying Mao, Qiang Quan, Shuai Xu
In this work, we developed an algorithm for principal component regression that runs in time polylogarithmic in the number of data points, an exponential speed-up over the state-of-the-art algorithm, under the mild assumption that the input is given in a data structure that supports a norm-based sampling procedure.
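To make the sampling assumption concrete, here is a minimal sketch of norm-based (length-squared) row sampling from a data matrix, the kind of access such data structures are assumed to provide. This illustrates only the assumption, not the paper's polylogarithmic-time regression algorithm:

```python
# Minimal sketch of norm-based (length-squared) row sampling from a matrix.
# Illustrates the sampling-access assumption only.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 20))        # data matrix: rows are samples

row_norms_sq = np.sum(A * A, axis=1)       # squared Euclidean norm of each row
probs = row_norms_sq / row_norms_sq.sum()  # sample row i with prob ||A_i||^2 / ||A||_F^2

sampled_rows = rng.choice(A.shape[0], size=50, p=probs)
sketch = A[sampled_rows]                   # small subsample used downstream
print(sketch.shape)
```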
no code implementations • WS 2020 • Sarah Beemer, Zak Boston, April Bukoski, Daniel Chen, Princess Dickens, Andrew Gerlach, Torin Hopkins, Parth Jawale, Chris Koski, Akanksha Malhotra, Piyush Mishra, Saliha Muradoglu, Lan Sang, Tyler Short, Sagarika Shreevastava, Elizabeth Spaulding, Testumichi Umada, Beilei Xiang, Changbing Yang, Mans Hulden
Sequence-to-sequence models have proven to be highly successful in learning morphological inflection from examples, as the series of SIGMORPHON/CoNLL shared tasks has shown.
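For readers unfamiliar with the task, the shared-task setup pairs a lemma and a morphological tag bundle with an inflected form; a seq2seq model reads the lemma characters plus the tags and generates the target form character by character. The triples below are made-up illustrations in that style:

```python
# Illustrative inflection triples in the SIGMORPHON/UniMorph style:
# (lemma, tag bundle, inflected form). The examples are made up.
examples = [
    ("walk",  "V;PST",        "walked"),
    ("run",   "V;V.PTCP;PRS", "running"),
    ("mouse", "N;PL",         "mice"),
]

# Source = lemma characters + tags; target = characters of the inflected form.
for lemma, tags, form in examples:
    source = list(lemma) + tags.split(";")
    target = list(form)
    print(source, "->", target)
```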
no code implementations • 13 Sep 2018 • Daniel Chen, Weijie Zhong
An agent acquires information dynamically until her belief about a binary state reaches an upper or lower threshold.
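As a generic illustration of this kind of stopping rule (a simplified simulation, not the paper's model), the agent can be pictured as updating a Bayesian posterior on a binary state from noisy signals until the belief crosses an upper or lower threshold:

```python
# Simplified illustration: belief about a binary state updated from noisy
# binary signals until it hits an upper or lower threshold. The signal
# structure and thresholds are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
true_state = 1                 # hidden binary state
q = 0.7                        # a signal matches the true state with probability q
belief = 0.5                   # prior P(state = 1)
upper, lower = 0.95, 0.05

steps = 0
while lower < belief < upper:
    signal = rng.random() < (q if true_state == 1 else 1 - q)  # noisy observation
    like1 = q if signal else 1 - q          # P(signal | state = 1)
    like0 = 1 - q if signal else q          # P(signal | state = 0)
    belief = belief * like1 / (belief * like1 + (1 - belief) * like0)
    steps += 1

print(f"stopped after {steps} signals with belief {belief:.3f}")
```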