Search Results for author: Zhenwen Liang

Found 14 papers, 7 papers with code

Data-Driven State Estimation for Light-Emitting Diode Underwater Optical Communication

no code implementations • 30 Dec 2021 Yingquan Li, Zhenwen Liang, Ibrahima N'Doye, Xiangliang Zhang, Mohamed-Slim Alouini, Taous-Meriem Laleg-Kirati

Light-Emitting Diode (LED) based underwater optical wireless communications (UOWCs), a technology with low latency and high data rates, have attracted significant attention for underwater robots.

Generalizing Math Word Problem Solvers via Solution Diversification

1 code implementation • 1 Dec 2022 Zhenwen Liang, Jipeng Zhang, Lei Wang, Yan Wang, Jie Shao, Xiangliang Zhang

In this paper, we design a new training framework for an MWP solver by introducing a solution buffer and a solution discriminator.

Math

Analogical Math Word Problems Solving with Enhanced Problem-Solution Association

1 code implementation • 1 Dec 2022 Zhenwen Liang, Jipeng Zhang, Xiangliang Zhang

In this paper, we propose to build a novel MWP solver by leveraging analogical MWPs, thereby advancing the solver's generalization ability across different kinds of MWPs.

Math Question Answering

Let GPT be a Math Tutor: Teaching Math Word Problem Solvers with Customized Exercise Generation

no code implementations • 22 May 2023 Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, Ashwin Kalyan

In this paper, we present a novel approach for distilling math word problem solving capabilities from large language models (LLMs) into smaller, more efficient student models.

Knowledge Tracing Math +1

Improving Language Models via Plug-and-Play Retrieval Feedback

no code implementations • 23 May 2023 Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal

ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner.

Retrieval
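
The abstract above describes a three-stage loop: generate an initial answer, retrieve supporting documents, then refine the answer with the retrieved evidence in context. A minimal sketch of that flow, assuming a callable `llm` and a toy word-overlap retriever (all function names and prompts here are hypothetical placeholders, not the paper's actual API):

```python
def generate_initial_output(llm, query):
    # Stage 1: the LLM produces a first draft from the query alone.
    return llm(f"Question: {query}\nAnswer:")

def retrieve(corpus, query, k=3):
    # Stage 2: return the k documents sharing the most words with the
    # query -- a stand-in for a real retrieval model (e.g. BM25 or dense).
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def refine(llm, query, initial, docs):
    # Stage 3: fold the retrieved evidence into the in-context prompt
    # and ask the LLM to revise its draft.
    evidence = "\n".join(docs)
    prompt = (f"Evidence:\n{evidence}\n\n"
              f"Question: {query}\n"
              f"Draft answer: {initial}\n"
              f"Revised answer:")
    return llm(prompt)

def refeed_pipeline(llm, corpus, query):
    initial = generate_initial_output(llm, query)
    docs = retrieve(corpus, query)
    return refine(llm, query, initial, docs)
```

The retrieval step is deliberately pluggable: swapping the word-overlap scorer for a stronger retriever changes nothing downstream.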

What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks

1 code implementation NeurIPS 2023 Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang

In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain.

In-Context Learning

Manipulating Predictions over Discrete Inputs in Machine Teaching

no code implementations • 31 Jan 2024 Xiaodong Wu, Yufei Han, Hayssam Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang

Machine teaching often involves the creation of an optimal (typically minimal) dataset to help a model (referred to as the 'student') achieve specific goals given by a teacher.

Combinatorial Optimization

Defending Jailbreak Prompts via In-Context Adversarial Game

no code implementations • 20 Feb 2024 Yujun Zhou, Yufei Han, Haomin Zhuang, Taicheng Guo, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang

Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications.

ArMATH: a Dataset for Solving Arabic Math Word Problems

1 code implementation • LREC 2022 Reem Alghamdi, Zhenwen Liang, Xiangliang Zhang

In addition, a transfer learning model is built to let the high-resource Chinese MWP solver promote the performance of the low-resource Arabic MWP solver.

Math Transfer Learning
