Search Results for author: Mo Li

Found 7 papers, 4 papers with code

Advancing Multi-Modal Sensing Through Expandable Modality Alignment

no code implementations • 25 Jul 2024 • Shenghong Dai, Shiqi Jiang, Yifan Yang, Ting Cao, Mo Li, Suman Banerjee, Lili Qiu

To tackle this challenge, we introduce the Babel framework, encompassing the neural network architecture, data preparation and processing, and training strategies.

Human Activity Recognition

NeedleBench: Can LLMs Do Retrieval and Reasoning in 1 Million Context Window?

2 code implementations • 16 Jul 2024 • Mo Li, Songyang Zhang, Yunxin Liu, Kai Chen

In evaluating the long-context capabilities of large language models (LLMs), identifying content relevant to a user's query from original long documents is a crucial prerequisite for any LLM to answer questions based on long text.

4k, 8k +2
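The retrieval capability described above is commonly probed with a "needle in a haystack" setup: a key fact is buried at a chosen depth inside long filler text, and the model must find it to answer a query. The helper below is an illustrative sketch of that setup only; the function name, filler text, and depth convention are assumptions, not NeedleBench's actual harness.

```python
def build_haystack(filler: str, needle: str, target_tokens: int, depth: float) -> str:
    """Repeat filler text to ~target_tokens words and insert the needle
    sentence at the given relative depth (0.0 = start, 1.0 = end)."""
    base = filler.split()
    words = (base * (target_tokens // max(len(base), 1) + 1))[:target_tokens]
    pos = int(len(words) * depth)
    return " ".join(words[:pos] + [needle] + words[pos:])

context = build_haystack(
    filler="The quick brown fox jumps over the lazy dog.",
    needle="The secret passcode is 7421.",
    target_tokens=1000,
    depth=0.5,
)
# An evaluation harness would then ask "What is the secret passcode?"
# and check whether the model's answer contains "7421".
assert "7421" in context
```

Sweeping `target_tokens` and `depth` over a grid is what produces the familiar retrieval-accuracy heatmaps for long-context evaluation.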

Penetrative AI: Making LLMs Comprehend the Physical World

no code implementations • 14 Oct 2023 • Huatao Xu, Liying Han, Qirui Yang, Mo Li, Mani Srivastava

Recent developments in Large Language Models (LLMs) have demonstrated their remarkable capabilities across a range of tasks.

Common Sense Reasoning, World Knowledge

PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile Cloud Inference

1 code implementation • 12 Nov 2022 • Linshan Jiang, Qun Song, Rui Tan, Mo Li

This paper presents the design of a system called PriMask, in which the mobile device uses a secret small-scale neural network called MaskNet to mask the data before transmission.

Human Activity Recognition
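The snippet above describes the device passing raw data through a secret small network before anything leaves the device. As a rough illustration of that idea only (the `TinyMaskNet` name, layer sizes, and random weights are assumptions, not the paper's MaskNet design), an on-device masking network might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMaskNet:
    """A two-layer MLP standing in for the secret small-scale MaskNet."""
    def __init__(self, dim: int, hidden: int = 16):
        self.w1 = rng.normal(scale=0.1, size=(dim, hidden))
        self.w2 = rng.normal(scale=0.1, size=(hidden, dim))

    def mask(self, x: np.ndarray) -> np.ndarray:
        h = np.tanh(x @ self.w1)   # nonlinear hidden layer
        return h @ self.w2         # masked representation, same shape as x

x = rng.normal(size=(1, 8))            # raw sensor sample on the device
masked = TinyMaskNet(dim=8).mask(x)    # only `masked` is transmitted to the cloud
```

The point of the design is that the cloud-side inference model only ever sees the masked representation, so the raw data never leaves the device.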

Enhancing Deep Learning Performance of Massive MIMO CSI Feedback

1 code implementation • 24 Aug 2022 • Sijie Ji, Mo Li

In this paper, we propose a jigsaw puzzles aided training strategy (JPTS) to enhance the deep learning-based Massive MIMO CSI feedback approaches by maximizing mutual information between the original CSI and the compressed CSI.
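The snippet does not give the paper's actual objective, but a common way to maximize mutual information between two representations (here, original and compressed CSI embeddings) is a contrastive InfoNCE lower bound. The sketch below is that generic estimator, offered as an illustrative assumption rather than the JPTS loss itself.

```python
import numpy as np

def info_nce(orig: np.ndarray, comp: np.ndarray, temp: float = 0.1) -> float:
    """InfoNCE loss over a batch: matching (orig_i, comp_i) pairs are
    positives; all other pairs in the batch act as negatives."""
    orig = orig / np.linalg.norm(orig, axis=1, keepdims=True)
    comp = comp / np.linalg.norm(comp, axis=1, keepdims=True)
    logits = orig @ comp.T / temp                   # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))      # -log p(correct pairing)

rng = np.random.default_rng(0)
orig = rng.normal(size=(4, 32))                 # embeddings of original CSI (illustrative)
comp = orig + 0.1 * rng.normal(size=(4, 32))    # embeddings of compressed CSI
loss = info_nce(orig, comp)
```

Minimizing this loss pushes each compressed embedding toward its own original and away from the rest of the batch, which tightens a lower bound on the mutual information between the two.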

CLNet: Complex Input Lightweight Neural Network designed for Massive MIMO CSI Feedback

1 code implementation • 15 Feb 2021 • Sijie Ji, Mo Li

Numerous deep learning approaches for massive MIMO CSI feedback have demonstrated their efficiency and potential.

Compressive Sensing, Data Compression +1

On-the-fly Closed-loop Autonomous Materials Discovery via Bayesian Active Learning

no code implementations • 11 Jun 2020 • A. Gilad Kusne, Heshan Yu, Changming Wu, Huairuo Zhang, Jason Hattrick-Simpers, Brian DeCost, Suchismita Sarker, Corey Oses, Cormac Toher, Stefano Curtarolo, Albert V. Davydov, Ritesh Agarwal, Leonid A. Bendersky, Mo Li, Apurva Mehta, Ichiro Takeuchi

Active learning, the field of machine learning (ML) dedicated to optimal experiment design, has played a part in science as far back as the 18th century, when Laplace used it to guide his discovery of celestial mechanics [1].

Active Learning, BIG-bench Machine Learning
