Search Results for author: Wenda Li

Found 17 papers, 7 papers with code

Multilingual Mathematical Autoformalization

1 code implementation • 7 Nov 2023 • Albert Q. Jiang, Wenda Li, Mateja Jamnik

In this work, we create $\texttt{MMA}$, a large, flexible, multilingual, and multi-domain dataset of informal-formal pairs, by using a language model to translate in the reverse direction, that is, from formal mathematical statements into corresponding informal ones.

Few-Shot Learning · Language Acquisition +1
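The reverse-translation recipe described above can be outlined in a few lines. This is a minimal sketch only: the `informalize` stub below is hypothetical and stands in for the language-model call that MMA actually performs on formal statements.

```python
def informalize(formal_statement: str) -> str:
    """Hypothetical stub for the language-model call: in MMA, a language
    model translates a formal statement into a natural-language paraphrase."""
    return f"Show that: {formal_statement}"

def build_parallel_corpus(formal_statements: list[str]) -> list[tuple[str, str]]:
    """Back-translate each formal statement to build (informal, formal) pairs."""
    return [(informalize(stmt), stmt) for stmt in formal_statements]

pairs = build_parallel_corpus(["theorem add_comm : a + b = b + a"])
```

Because the formal side of each pair is ground truth and only the informal side is generated, the resulting dataset can be used to train autoformalization models in the forward (informal-to-formal) direction.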

Message-passing selection: Towards interpretable GNNs for graph classification

no code implementations • 3 Jun 2023 • Wenda Li, KaiXuan Chen, Shunyu Liu, Wenjie Huang, Haofei Zhang, Yingjie Tian, Yun Su, Mingli Song

In this paper, we strive to develop an interpretable inference paradigm for GNNs, termed MSInterpreter, which can serve as a plug-and-play scheme readily applicable to various GNN baselines.

Graph Classification

Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving

1 code implementation • 25 May 2023 • Xueliang Zhao, Wenda Li, Lingpeng Kong

Large language models (LLMs) present an intriguing avenue of exploration in the domain of formal theorem proving.

Ranked #3 on Automated Theorem Proving on miniF2F-test (Pass@100 metric)

Automated Theorem Proving

Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs

3 code implementations • 21 Oct 2022 • Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée Lacroix, Yuhuai Wu, Guillaume Lample

In this work, we introduce Draft, Sketch, and Prove (DSP), a method that maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems.

Ranked #3 on Automated Theorem Proving on miniF2F-valid (Pass@100 metric)

Automated Theorem Proving · Language Modelling
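The three stages named in the abstract can be outlined as a simple pipeline. Every function here is a hypothetical placeholder: in the actual DSP system, an LLM produces the informal draft and the formal sketch, and an automated prover searches for proofs of the remaining sub-problems.

```python
def draft(theorem: str) -> str:
    """Stage 1 (placeholder): write an informal proof of the statement."""
    return f"informal proof of {theorem}"

def sketch(informal_proof: str) -> dict:
    """Stage 2 (placeholder): map the informal proof to a formal proof
    sketch whose open sub-problems ("holes") remain to be closed."""
    return {"outline": informal_proof, "holes": ["subgoal_1", "subgoal_2"]}

def prove(formal_sketch: dict) -> bool:
    """Stage 3 (placeholder): an automated prover attempts each hole;
    the overall proof succeeds only if every sub-problem is closed."""
    closed = {hole: True for hole in formal_sketch["holes"]}  # pretend success
    return all(closed.values())

success = prove(sketch(draft("a + b = b + a")))
```

The design point the abstract emphasises is the decomposition itself: the sketch narrows the automated prover's search to easier sub-problems rather than the whole theorem at once.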

Autoformalization with Large Language Models

no code implementations • 25 May 2022 • Yuhuai Wu, Albert Q. Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, Christian Szegedy

Autoformalization is the process of automatically translating from natural language mathematics to formal specifications and proofs.

Ranked #1 on Automated Theorem Proving on miniF2F-test (using extra training data)

Automated Theorem Proving · Program Synthesis

Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers

no code implementations • 22 May 2022 • Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygóźdź, Piotr Miłoś, Yuhuai Wu, Mateja Jamnik

Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems that neither language models nor automated theorem provers are able to solve on their own.

Automated Theorem Proving

MDPose: Human Skeletal Motion Reconstruction Using WiFi Micro-Doppler Signatures

no code implementations • 11 Jan 2022 • Chong Tang, Wenda Li, Shelly Vishwakarma, Fangzhan Shi, Simon Julier, Kevin Chetty

It provides an effective solution for tracking human activities by reconstructing a skeleton model with 17 key points, which makes conventional RF sensing outputs easier to interpret.

Denoising · RF-based Pose Estimation

OPERAnet: A Multimodal Activity Recognition Dataset Acquired from Radio Frequency and Vision-based Sensors

1 code implementation • 8 Oct 2021 • Mohammud J. Bocus, Wenda Li, Shelly Vishwakarma, Roget Kou, Chong Tang, Karl Woodbridge, Ian Craddock, Ryan McConville, Raul Santos-Rodriguez, Kevin Chetty, Robert Piechocki

This dataset can be exploited to advance WiFi and vision-based HAR, for example, using pattern recognition, skeletal representation, deep learning algorithms or other novel approaches to accurately recognize human activities.

Human Activity Recognition · Multimodal Activity Recognition

Neural Style Transfer Enhanced Training Support For Human Activity Recognition

no code implementations • 27 Jul 2021 • Shelly Vishwakarma, Wenda Li, Chong Tang, Karl Woodbridge, Raviraj Adve, Kevin Chetty

Further, we benchmark the data augmentation performance of the style transferred signatures with three other synthetic datasets -- clean simulated spectrograms (no environmental effects), simulated data with added AWGN noise, and simulated data with GAN generated noise.

Data Augmentation · Human Activity Recognition +1

FMNet: Latent Feature-wise Mapping Network for Cleaning up Noisy Micro-Doppler Spectrogram

no code implementations • 9 Jul 2021 • Chong Tang, Wenda Li, Shelly Vishwakarma, Fangzhan Shi, Simon Julier, Kevin Chetty

We also propose a novel scheme that trains a classifier on simulated data alone and predicts on new measured samples after cleaning them up with the FMNet.

Unsupervised Doppler Radar-Based Activity Recognition for e-Healthcare

no code implementations • 18 Mar 2021 • Yordanka Karayaneva, Sara Sharifzadeh, Wenda Li, Yanguo Jing, Bo Tan

This study proposes two unsupervised feature extraction methods for the purpose of human activity monitoring using Doppler streams.

Activity Recognition · Texture Classification

SimHumalator: An Open Source WiFi Based Passive Radar Human Simulator For Activity Recognition

no code implementations • 2 Mar 2021 • Shelly Vishwakarma, Wenda Li, Chong Tang, Karl Woodbridge, Raviraj Adve, Kevin Chetty

We integrate WiFi transmission signals with the human animation data to generate micro-Doppler features that incorporate the diversity of human motion characteristics and the sensor parameters.

Activity Recognition · Classification +1

Learning from Natural Noise to Denoise Micro-Doppler Spectrogram

no code implementations • 13 Feb 2021 • Chong Tang, Wenda Li, Shelly Vishwakarma, Karl Woodbridge, Simon Julier, Kevin Chetty

However, noisy time-frequency spectrograms can significantly affect the performance of the classifier and must be tackled using appropriate denoising algorithms.

Denoising · Generative Adversarial Network

LIME: Learning Inductive Bias for Primitives of Mathematical Reasoning

1 code implementation • 15 Jan 2021 • Yuhuai Wu, Markus Rabe, Wenda Li, Jimmy Ba, Roger Grosse, Christian Szegedy

While designing inductive bias in neural architectures has been widely studied, we hypothesize that transformer networks are flexible enough to learn inductive bias from suitable generic tasks.

Inductive Bias · Mathematical Reasoning

IsarStep: a Benchmark for High-level Mathematical Reasoning

2 code implementations • ICLR 2021 • Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson

In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models.

Mathematical Proofs · Mathematical Reasoning +1
