no code implementations • 2 Dec 2024 • Colby Fronk, Linda Petzold
Stiff ordinary differential equations (ODEs) are common in many science and engineering fields, but standard neural ODE approaches struggle to accurately learn these stiff systems, posing a significant barrier to widespread adoption of neural ODEs.
no code implementations • 8 Oct 2024 • Colby Fronk, Linda Petzold
Stiff systems of ordinary differential equations (ODEs) are pervasive in many science and engineering fields, yet standard neural ODE approaches struggle to learn them.
1 code implementation • 2 May 2024 • Zhiyu Zoey Chen, Jing Ma, Xinlu Zhang, Nan Hao, An Yan, Armineh Nourbakhsh, Xianjun Yang, Julian McAuley, Linda Petzold, William Yang Wang
In the fast-evolving domain of artificial intelligence, large language models (LLMs) such as GPT-3 and GPT-4 are revolutionizing the landscapes of finance, healthcare, and law: domains characterized by their reliance on professional expertise, challenging data acquisition, high stakes, and stringent regulatory compliance.
1 code implementation • 2 Jan 2024 • Xianjun Yang, Stephen D. Wilson, Linda Petzold
This paper presents the development of a specialized chatbot for materials science, leveraging the Llama-2 language model and continued pre-training on an expansive corpus of materials science research articles from the S2ORC dataset.
1 code implementation • 24 Oct 2023 • Xianjun Yang, Liangming Pan, Xuandong Zhao, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng
The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT have led to an increase in synthetic content generation with implications across a variety of sectors, including media, cybersecurity, public discourse, and education.
1 code implementation • 8 Oct 2023 • Xianjun Yang, Kexun Zhang, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng
We then modify the previous zero-shot text detection method, DetectGPT (Mitchell et al., 2023), by utilizing a surrogate white-box model to estimate the probability of the rightmost tokens, allowing us to identify code snippets generated by language models.
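The DetectGPT-style statistic behind this kind of zero-shot detection can be sketched briefly: score a candidate text with a surrogate model, then subtract the average score of perturbed rewrites, since machine-generated text tends to sit near a local maximum of the model's log-probability. The sketch below uses toy stand-ins for the surrogate model and the perturbation function, which are assumptions for illustration only.

```python
import random

def detection_score(log_prob, text, perturb, n_perturbations=10):
    """DetectGPT-style curvature statistic: the surrogate log-probability of
    the candidate text minus the mean log-probability of perturbed rewrites.
    A higher score suggests machine-generated text."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# Toy stand-ins (NOT the paper's surrogate model or perturber):
# "log-probability" is negative mean token length, and a perturbation
# swaps one random token for a filler word.
def toy_log_prob(text):
    words = text.split()
    return -sum(len(w) for w in words) / len(words)

def toy_perturb(text):
    words = text.split()
    words[random.randrange(len(words))] = "the"
    return " ".join(words)

score = detection_score(toy_log_prob, "def add(a, b): return a + b", toy_perturb)
```

With real models, `log_prob` would sum token log-probabilities from the surrogate white-box model and `perturb` would be a mask-filling rewriter; only the scoring arithmetic is shown here.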
no code implementations • 4 Oct 2023 • Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, Dahua Lin
This study serves as a clarion call for a collective effort to overhaul and fortify the safety of open-source LLMs against malicious attackers.
no code implementations • 17 Aug 2023 • Colby Fronk, Jaewoong Yun, Prashant Singh, Linda Petzold
Symbolic regression with polynomial neural networks and symbolic regression with polynomial neural ordinary differential equations (ODEs) are two recent and powerful approaches for equation recovery in many science and engineering problems.
1 code implementation • 27 May 2023 • Xianjun Yang, Wei Cheng, Yue Wu, Linda Petzold, William Yang Wang, Haifeng Chen
However, this progress also presents a significant challenge in detecting the origin of a given text, and current research on detection methods lags behind the rapid evolution of LLMs.
no code implementations • 10 May 2023 • Yuqing Wang, Yun Zhao, Linda Petzold
The Segment Anything Model (SAM) is a foundation model for general image segmentation.
1 code implementation • 9 Apr 2023 • Yuqing Wang, Yun Zhao, Linda Petzold
In this study, we conduct a comprehensive evaluation of state-of-the-art LLMs, namely GPT-3.5, GPT-4, and Bard, within the realm of clinical language understanding tasks.
1 code implementation • 6 Mar 2023 • Xianjun Yang, Wei Cheng, Xujiang Zhao, Wenchao Yu, Linda Petzold, Haifeng Chen
Experimental results underscore the significant performance improvement achieved by dynamic prompt tuning across a wide range of tasks, including NLP tasks, vision recognition tasks, and vision-language tasks.
1 code implementation • 11 Feb 2023 • Xianjun Yang, Stephen Wilson, Linda Petzold
In this paper, we present a novel approach to knowledge extraction and retrieval using Natural Language Processing (NLP) techniques for material science.
1 code implementation • 19 Dec 2022 • Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu
Specifically, zero/few-shot and fine-tuning results show that the model pre-trained on our corpus demonstrates a strong aspect or query-focused generation ability compared with the backbone model.
1 code implementation • 22 Oct 2022 • Xianjun Yang, Ya Zhuo, Julia Zuo, Xinlu Zhang, Stephen Wilson, Linda Petzold
Extraction of scientific action graphs from materials synthesis procedures is important for reproducible research, machine automation, and material prediction.
1 code implementation • 18 Oct 2022 • Xinlu Zhang, Shiyang Li, Zhiyu Chen, Xifeng Yan, Linda Petzold
Our method first addresses irregularity in each single modality by (1) modeling irregular time series by dynamically incorporating hand-crafted imputation embeddings into learned interpolation embeddings via a gating mechanism, and (2) casting a series of clinical note representations as multivariate irregular time series and tackling irregularity via a time attention mechanism.
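The gating step in (1) can be illustrated with a minimal sketch: a sigmoid gate decides, per embedding dimension, how much of the hand-crafted imputation embedding to blend with the learned interpolation embedding. Shapes, parameter names, and the exact gate form here are assumptions, not the paper's implementation.

```python
import numpy as np

def gated_fusion(imputation_emb, interpolation_emb, W_g, b_g):
    """Blend hand-crafted imputation embeddings with learned interpolation
    embeddings via an elementwise sigmoid gate.
    Assumed shapes: both embeddings (T, d); W_g (2*d, d); b_g (d,)."""
    z = np.concatenate([imputation_emb, interpolation_emb], axis=-1)
    gate = 1.0 / (1.0 + np.exp(-(z @ W_g + b_g)))  # values in (0, 1)
    return gate * imputation_emb + (1.0 - gate) * interpolation_emb
```

In training, `W_g` and `b_g` would be learned jointly with the rest of the network, letting the model fall back on hand-crafted imputation where the learned interpolation is unreliable.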
no code implementations • 28 Sep 2022 • Haotian Xia, Rhys Tracy, Yun Zhao, Erwan Fraisse, Yuan-Fang Wang, Linda Petzold
The second goal is to introduce a volleyball descriptive language to fully describe the rally processes in the games and apply the language to our dataset.
1 code implementation • 6 Sep 2022 • Xianjun Yang, Yujie Lu, Linda Petzold
To fill this gap, we present FewDocAE, a Few-Shot Document-Level Event Argument Extraction benchmark, based on the existing document-level event extraction dataset.
no code implementations • 9 Aug 2022 • Colby Fronk, Linda Petzold
Neural networks can serve as universal function approximators, but they are not interpretable and do not generalize well outside their training region.
no code implementations • 26 Jun 2022 • Yuqing Wang, Yun Zhao, Linda Petzold
As critically ill patients frequently develop anemia or coagulopathy, transfusion of blood products is a common intervention in the Intensive Care Unit (ICU).
no code implementations • 28 Mar 2022 • Yuqing Wang, Yun Zhao, Linda Petzold
Most current multivariate time series (MTS) classification algorithms focus on improving the predictive accuracy.
no code implementations • 28 Mar 2022 • Yuqing Wang, Yun Zhao, Rachael Callcut, Linda Petzold
In this paper, we propose a multimodal Transformer model for early sepsis prediction, using the physiological time series data and clinical notes for each patient within $36$ hours of ICU admission.
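The multimodal prediction step can be sketched in simplified form: pool each modality's per-step embeddings, concatenate them, and apply a linear head with a sigmoid to obtain a risk score. This late-fusion stand-in omits the paper's actual Transformer encoders; the head and shapes are assumptions for illustration.

```python
import numpy as np

def sepsis_risk(ts_emb, note_emb, W, b):
    """Mean-pool each modality's embeddings, concatenate, and apply a
    linear head with a sigmoid to get a risk score in (0, 1).
    ts_emb: (T, d) time-series embeddings; note_emb: (N, d) note
    embeddings; W: (2*d,); b: scalar. All assumed for illustration."""
    pooled = np.concatenate([ts_emb.mean(axis=0), note_emb.mean(axis=0)])
    return 1.0 / (1.0 + np.exp(-(pooled @ W + b)))
```

In the actual model, the pooled vectors would come from Transformer encoders over the first 36 hours of ICU data rather than from a simple mean.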
no code implementations • 1 Oct 2021 • Yun Zhao, Yuqing Wang, Junfeng Liu, Haotian Xia, Zhenni Xu, Qinghang Hong, Zhiyang Zhou, Linda Petzold
In this paper, we perform quantitative analysis of COVID-19 forecasting of confirmed cases and deaths across different regions in the United States with different forecasting horizons, and evaluate the relative impacts of the following three dimensions on the predictive performance (improvement and variation) through different evaluation metrics: model selection, hyperparameter tuning, and the length of time series required for training.
no code implementations • 22 Jun 2021 • Xinlu Zhang, Yun Zhao, Rachael Callcut, Linda Petzold
Multiple organ failure (MOF) is a severe syndrome with a high mortality rate among Intensive Care Unit (ICU) patients.
no code implementations • 19 Mar 2021 • Yun Zhao, Qinghang Hong, Xinlu Zhang, Yu Deng, Yuqing Wang, Linda Petzold
However, there is a lack of deep learning methods that can model the relationship between measurements, clinical notes and mortality outcomes.
no code implementations • 19 Mar 2021 • Yuqing Wang, Yun Zhao, Rachael Callcut, Linda Petzold
However, blindly pursuing complex classifiers is unwise as it also brings the risk of greater performance variation.
no code implementations • 12 Feb 2021 • Fredrik Wrede, Robin Eriksson, Richard Jiang, Linda Petzold, Stefan Engblom, Andreas Hellander, Prashant Singh
State-of-the-art neural network-based methods for learning summary statistics have delivered promising results for simulation-based likelihood-free parameter inference.
no code implementations • 28 Dec 2020 • James Bird, Kellan Colburn, Linda Petzold, Philip Lubin
Machine learning, and eventually true artificial intelligence techniques, are extremely important advancements in astrophysics and astronomy.
no code implementations • 22 Sep 2020 • Yun Zhao, Franklin Ly, Qinghang Hong, Zhuowei Cheng, Tyler Santander, Henry T. Yang, Paul K. Hansma, Linda Petzold
Chronic pain is defined as pain that lasts or recurs for more than 3 to 6 months, often long after the injury or illness that initially caused the pain has healed.
no code implementations • 10 Feb 2020 • James Bird, Linda Petzold, Philip Lubin, Julia Deacon
The StarLight program conceptualizes fast interstellar travel via small wafer satellites (wafersats) that are propelled by directed energy.
no code implementations • 5 Jun 2019 • Yun Zhao, Elmer Guzman, Morgane Audouard, Zhuowei Cheng, Paul K. Hansma, Kenneth S. Kosik, Linda Petzold
In this paper, we address the problem of classifying in vitro MEA recordings of mouse and human neuronal cultures from different genotypes, where there is no easy way to directly utilize raw sequences as inputs to train an end-to-end classification model.
1 code implementation • 28 May 2019 • Ben Bales, Arya Pourzanjani, Aki Vehtari, Linda Petzold
We present a selection criterion for the Euclidean metric adapted during warmup in a Hamiltonian Monte Carlo sampler that makes it possible for a sampler to automatically pick the metric based on the model and the availability of warmup draws.
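The general idea can be illustrated with a simplified stand-in criterion (not the paper's actual selection rule): estimate both a diagonal and a dense Euclidean metric from the warmup draws, and prefer the dense covariance only when the draws are strongly correlated. The condition-number test and threshold below are assumptions for illustration.

```python
import numpy as np

def select_metric(warmup_draws, threshold=10.0):
    """Pick a diagonal or dense Euclidean metric from HMC warmup draws.
    Heuristic stand-in for the paper's criterion: use the dense sample
    covariance when the correlation matrix is badly conditioned (strong
    cross-parameter correlation), else the cheaper diagonal variances.
    warmup_draws: (n_draws, n_params)."""
    cov = np.cov(warmup_draws, rowvar=False)
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)
    if np.linalg.cond(corr) > threshold:
        return "dense", cov
    return "diagonal", np.diag(np.diag(cov))
```

The appeal of automating this choice is that a dense metric pays off only for correlated posteriors, while a diagonal metric keeps leapfrog steps cheap on near-independent ones.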