1 code implementation • 19 Aug 2024 • Ran Liu, Ming Liu, Min Yu, Jianguo Jiang, Gang Li, Dan Zhang, Jingyuan Li, Xiang Meng, Weiqing Huang
Pre-trained language models are increasingly being used in multi-document summarization tasks.
no code implementations • 25 Jul 2024 • Jingping Nie, Ran Liu, Behrooz Mahasseni, Erdrin Azemi, Vikramjit Mitra
Acoustic signals are crucial for health monitoring; heart sounds in particular provide essential data such as heart rate and can reveal cardiac anomalies such as murmurs.
1 code implementation • 22 Apr 2024 • Ming Liu, Ran Liu, Ye Zhu, Hua Wang, Youyang Qu, Rongsheng Li, Yongpan Sheng, Wray Buntine
ChatGPT has changed the AI community, and evaluating its performance has become an active line of research.
no code implementations • 18 Feb 2024 • Chiraag Kaushik, Ran Liu, Chi-Heng Lin, Amrit Khera, Matthew Y Jin, Wenrui Ma, Vidya Muthukumar, Eva L Dyer
Classification models are expected to perform equally well for different classes, yet in practice, there are often large gaps in their performance.
1 code implementation • 12 Sep 2023 • Ran Liu, Ellen L. Zippi, Hadi Pouransari, Chris Sandino, Jingping Nie, Hanlin Goh, Erdrin Azemi, Ali Moin
To achieve effective pretraining in the presence of potential distributional shifts, we propose a frequency-aware masked autoencoder ($\texttt{bio}$FAME) that learns to parameterize the representation of biosignals in the frequency space.
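As a rough illustration of masking in frequency space (this is not the bioFAME architecture, only the masking step it builds on), one corrupted view of a biosignal can be produced by zeroing random rFFT bins:

```python
import numpy as np

def frequency_masked_view(signal, mask_ratio=0.5, seed=0):
    """Mask a random subset of frequency bins of a 1-D biosignal.

    Illustrative sketch only: bioFAME learns frequency-space
    representations inside a masked autoencoder; here we just show
    masking applied to an rFFT spectrum.
    """
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(signal)
    n_bins = spectrum.shape[0]
    n_masked = int(mask_ratio * n_bins)
    masked_idx = rng.choice(n_bins, size=n_masked, replace=False)
    masked = spectrum.copy()
    masked[masked_idx] = 0.0                      # zero out the masked bins
    corrupted = np.fft.irfft(masked, n=signal.shape[0])
    return corrupted, masked_idx
```

A reconstruction objective would then ask a model to recover the original spectrum from such a corrupted view.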
1 code implementation • 28 Aug 2023 • Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan Ramnath, Zsolt Kira, Eva L. Dyer
To address this challenge, we propose a novel approach for distribution-aware latent augmentation that leverages the relationships across samples to guide the augmentation procedure.
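The idea of guiding augmentation by relationships across samples can be sketched very loosely as interpolating each latent vector toward its nearest in-batch neighbor; the `neighbor_mixup` helper below is an invented stand-in, not the paper's method:

```python
import numpy as np

def neighbor_mixup(latents, alpha=0.2, seed=0):
    """Augment each latent by mixing it with its nearest neighbor
    in the batch. A loose sketch of relationship-guided latent
    augmentation; the published approach is more involved.
    """
    rng = np.random.default_rng(seed)
    # Pairwise squared distances between latent vectors.
    d = ((latents[:, None, :] - latents[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                  # index of each row's nearest neighbor
    lam = rng.beta(alpha, alpha, size=(latents.shape[0], 1))
    lam = np.maximum(lam, 1 - lam)         # keep the mix close to the anchor
    return lam * latents + (1 - lam) * latents[nn]
```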
1 code implementation • 17 Aug 2023 • Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L. Dyer
Message passing neural networks have shown a lot of success on graph-structured data.
Ranked #1 on Node Classification on AMZ Comp
no code implementations • 9 Aug 2023 • Ran Liu, Charles Nicholas
Machine learning (ML)-based malware detection systems are becoming increasingly important as malware threats increase and get more sophisticated.
no code implementations • 3 May 2023 • Ran Liu, Maksim Eren, Charles Nicholas
With the increasing number and sophistication of malware attacks, malware detection systems based on machine learning (ML) grow in importance.
1 code implementation • 1 Jan 2023 • Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer
To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image.
no code implementations • 1 Dec 2022 • Zann Koh, Yuren Zhou, Billy Pik Lik Lau, Ran Liu, Keng Hua Chong, Chau Yuen
We propose a new mobility metric, Daily Characteristic Distance, and use it together with Origin-Destination matrix features to generate features for each user.
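For illustration, an Origin-Destination matrix of the kind used for features here simply counts trips between zones; the Daily Characteristic Distance metric itself is defined in the paper and is not reproduced in this generic sketch:

```python
import numpy as np

def origin_destination_matrix(trips, n_zones):
    """Build an Origin-Destination matrix: entry (i, j) counts trips
    that start in zone i and end in zone j. Minimal illustration of
    the OD features mentioned above.
    """
    od = np.zeros((n_zones, n_zones), dtype=int)
    for origin, dest in trips:
        od[origin, dest] += 1
    return od
```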
1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer
By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
1 code implementation • Conference On Robot Learning (CoRL) 2021 • Andrew Hundt, Aditya Murali, Priyanka Hubli, Ran Liu, Nakul Gopalan, Matthew Gombolay, Gregory D. Hager
Based upon this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real robot tasks in novel problem domains without task-specific training.
no code implementations • 5 Nov 2021 • Ran Liu, Daniel N. Aloi
In this paper, a low-cost, compact dual-band ceramic GNSS patch antenna is presented, from design through a fabricated sample.
1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
no code implementations • 19 Jul 2021 • Zhiqiang Cao, Ran Liu, Chau Yuen, Achala Athukorala, Benny Kai Kiat Ng, Muraleetharan Mathanraj, U-Xuan Tan
We propose an approach to estimate the relative pose between a group of robots by equipping each robot with multiple UWB ranging nodes.
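The ranging-to-position step underlying such UWB systems can be sketched as linear least-squares trilateration. This is a generic textbook formulation, not the paper's relative-pose estimator, which additionally recovers orientation between robots:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from range measurements to known anchor
    nodes via linear least squares: subtracting the first range
    equation from the others removes the quadratic term in the
    unknown position, leaving a linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (a0**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```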
no code implementations • 9 Jul 2021 • Ran Liu, Joseph L. Greenstein, James C. Fackler, Jules Bergmann, Melania M. Bembea, Raimond L. Winslow
Guideline-based treatment for sepsis and septic shock is difficult because sepsis encompasses a disparate range of life-threatening organ dysfunctions whose pathophysiology is not fully understood.
no code implementations • 6 Jun 2021 • Ran Liu
Traditional supervised learning methods are hitting a bottleneck because of their dependence on expensive manually labeled data and weaknesses such as limited generalization ability and vulnerability to adversarial attacks.
no code implementations • 4 May 2021 • Sumudu Hasala Marakkalage, Billy Pik Lik Lau, Yuren Zhou, Ran Liu, Chau Yuen, Wei Quin Yow, Keng Hua Chong
We propose a system architecture to scan surrounding WiFi APs, and perform unsupervised learning to demonstrate that three major insights can be identified: indoor POIs within a building, neighbourhood activity, and the micro-mobility of users.
1 code implementation • 19 Feb 2021 • Mehdi Azabou, Mohammad Gheshlaghi Azar, Ran Liu, Chi-Heng Lin, Erik C. Johnson, Kiran Bhaskaran-Nair, Max Dabagia, Bernardo Avila-Pires, Lindsey Kitchell, Keith B. Hengen, William Gray-Roncal, Michal Valko, Eva L. Dyer
State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed "views" of a sample.
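The core objective can be sketched as a negative cosine similarity between embeddings of two augmented views of the same samples. This minimal NumPy version omits the predictor networks and the across-sample view mining that the paper adds on top:

```python
import numpy as np

def view_similarity_loss(z1, z2):
    """Negative mean cosine similarity between two batches of
    embeddings from different transformed views of the same samples.
    Bare-bones sketch of the SSL similarity objective; a value of -1
    means the two views' embeddings are perfectly aligned.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=1))
```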
1 code implementation • 9 Feb 2020 • Han B. Kim, Hieu Nguyen, Qingchu Jin, Sharmila Tamby, Tatiana Gelaf Romer, Eric Sung, Ran Liu, Joseph Greenstein, Jose I. Suarez, Christian Storm, Raimond Winslow, Robert D. Stevens
Combined EHR-PTS24 models had higher discrimination (area under the receiver operating characteristic curve [AUC]) than models which used either EHR or PTS24 alone, for the prediction of survival (AUC 0.85, 0.80 and 0.68, respectively) and neurological outcome (0.87, 0.83 and 0.78).
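The AUC values quoted above are standard rank-based statistics. As a generic sketch (not the paper's modeling pipeline), AUC can be computed as the probability that a random positive case is scored above a random negative one:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs in which the positive
    case outranks the negative one; ties get half credit.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```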