1 code implementation • 17 Dec 2024 • Zihao Lin, Zichao Wang, Yuanting Pan, Varun Manjunatha, Ryan Rossi, Angela Lau, Lifu Huang, Tong Sun
Suggested questions (SQs) provide an effective initial interface for users to engage with their documents in AI-powered reading applications.
no code implementations • 21 Oct 2024 • Zhehao Zhang, Ryan Rossi, Tong Yu, Franck Dernoncourt, Ruiyi Zhang, Jiuxiang Gu, Sungchul Kim, Xiang Chen, Zichao Wang, Nedim Lipka
In this paper, we present VipAct, an agent framework that enhances VLMs by integrating multi-agent collaboration and vision expert models, enabling more precise visual understanding and comprehensive reasoning.
no code implementations • 19 Jun 2024 • Naiming Liu, Zichao Wang, Richard Baraniuk
Despite rapid advancements in large language models (LLMs), question generation (QG) remains a challenging problem due to its complicated process, open-ended nature, and the diverse settings in which it occurs.
no code implementations • 13 Jun 2024 • Yufan Zhou, Ruiyi Zhang, Kaizhi Zheng, Nanxuan Zhao, Jiuxiang Gu, Zichao Wang, Xin Eric Wang, Tong Sun
Our dataset is 5 times the size of the previous largest dataset, yet its cost is tens of thousands of GPU hours lower.
no code implementations • 5 May 2024 • Zhendong Chu, Zichao Wang, Ruiyi Zhang, Yangfeng Ji, Hongning Wang, Tong Sun
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks.
1 code implementation • 23 Oct 2023 • Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun
Safety alignment of Large Language Models (LLMs) can be compromised with manual jailbreak attacks and (automatic) adversarial attacks.
no code implementations • 3 Oct 2023 • Naiming Liu, Shashank Sonkar, Zichao Wang, Simon Woodhead, Richard G. Baraniuk
We propose novel evaluations for mathematical reasoning capabilities of Large Language Models (LLMs) based on mathematical misconceptions.
1 code implementation • 7 Jul 2023 • Zichao Wang, Richard Baraniuk
We study the new problem of automatic question generation (QG) from multi-modal sources containing images and texts, significantly expanding the scope of existing work, most of which focuses exclusively on QG from textual sources.
1 code implementation • 15 Jun 2023 • Nischal Ashok Kumar, Nigel Fernandez, Zichao Wang, Andrew Lan
Reading comprehension is a crucial skill in many aspects of education, including language learning, cognitive development, and fostering early literacy skills in children.
no code implementations • 1 Jun 2023 • Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi Feng, Andrew Lan
We propose a step-by-step planning approach for intermediate solution generation, which strategically plans the generation of the next solution step based on the MWP and the previous solution steps.
no code implementations • 19 Dec 2022 • Shashank Sonkar, Zichao Wang, Richard G. Baraniuk
MANER re-purposes the <mask> token for NER prediction.
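The core idea is probing a pre-trained masked language model through its &lt;mask&gt; token to obtain entity labels. A minimal sketch of that idea, with a stubbed-out fill-mask predictor standing in for the real LM — the probe template, lexicon, and label mapping here are illustrative assumptions, not the paper's exact design:

```python
# Toy sketch: repurpose a masked-LM's <mask> token for NER.
# `fill_mask` is a stand-in for a real masked language model.

def fill_mask(probe: str) -> str:
    """Stub predictor: returns the most likely filler word for <mask>."""
    lexicon = {"Paris": "city", "Alice": "person", "ran": "verb"}
    token = probe.split(" is a ")[0]
    return lexicon.get(token, "thing")

# Map LM filler words to coarse NER tags (illustrative mapping).
FILLER_TO_TAG = {"city": "LOC", "person": "PER"}

def tag_tokens(tokens):
    tags = []
    for tok in tokens:
        # Probe the LM with a cloze-style template around <mask>.
        filler = fill_mask(f"{tok} is a <mask>")
        tags.append(FILLER_TO_TAG.get(filler, "O"))
    return tags

print(tag_tokens(["Alice", "ran", "Paris"]))  # ['PER', 'O', 'LOC']
```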
1 code implementation • 1 Nov 2022 • Lorenzo Luzi, Daniel LeJeune, Ali Siahkoohi, Sina AlEMohammad, Vishwanath Saragadam, Hossein Babaei, Naiming Liu, Zichao Wang, Richard G. Baraniuk
We study the interpolation capabilities of implicit neural representations (INRs) of images.
2 code implementations • 23 Aug 2022 • Zichao Wang, Weili Nie, Zhuoran Qiao, Chaowei Xiao, Richard Baraniuk, Anima Anandkumar
On various tasks, ranging from simple design criteria to a challenging real-world scenario of designing lead compounds that bind to the SARS-CoV-2 main protease, we demonstrate that our approach extrapolates well beyond the retrieval database and achieves better performance and wider applicability than previous methods.
no code implementations • 17 Aug 2022 • Wenbo Gong, Digory Smith, Zichao Wang, Craig Barton, Simon Woodhead, Nick Pawlowski, Joel Jennings, Cheng Zhang
In this competition, participants will address two fundamental causal challenges in machine learning in the context of education using time-series data.
1 code implementation • 19 May 2022 • Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, Andrew Lan
Our approach, in-context BERT fine-tuning, produces a single shared scoring model for all items with a carefully designed input structure that provides contextual information on each item.
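A hypothetical illustration of such a shared-model input structure: pack each item's context alongside the student response into one BERT-style sequence, so a single scorer handles every item. The separator scheme and field names below are assumptions for illustration, not the paper's exact format:

```python
def build_scoring_input(item_prompt: str, rubric: str, answer: str) -> str:
    """Pack per-item context and the student answer into one sequence,
    BERT-style, so a single model can score responses to any item."""
    sep = " [SEP] "
    return "[CLS] " + item_prompt + sep + rubric + sep + answer

example = build_scoring_input(
    "Explain why the sky is blue.",
    "Full credit mentions Rayleigh scattering.",
    "Light scatters off air molecules.",
)
print(example)
```

Because the item context travels inside the input rather than inside the weights, the same fine-tuned model generalizes across items.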
1 code implementation • 21 Feb 2022 • Naiming Liu, Zichao Wang, Richard G. Baraniuk, Andrew Lan
In education applications, knowledge tracing refers to the problem of estimating students' time-varying concept/skill mastery level from their past responses to questions and predicting their future performance.
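For intuition about the knowledge-tracing setup, a classic baseline is Bayesian knowledge tracing (BKT), which maintains a mastery probability and updates it after each observed response. This is a standard textbook illustration of the problem, not the model proposed in the paper; the parameter values are made up:

```python
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian knowledge tracing step: Bayes' rule on the observed
    response, then a chance the student learns the skill this step."""
    if correct:
        num = p_mastery * (1 - slip)
        den = num + (1 - p_mastery) * guess
    else:
        num = p_mastery * slip
        den = num + (1 - p_mastery) * (1 - guess)
    posterior = num / den
    return posterior + (1 - posterior) * learn

# Track mastery across a short sequence of responses.
p = 0.3
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
print(round(p, 3))  # time-varying mastery estimate after four responses
```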
no code implementations • 29 Sep 2021 • Zichao Wang, Weili Nie, Zhenwei Dai, Richard Baraniuk
Many existing approaches either require extensive training/fine-tuning of the LM for each single attribute under control or are slow to generate text.
no code implementations • EMNLP 2021 • Zichao Wang, Andrew S. Lan, Richard G. Baraniuk
We study the problem of generating arithmetic math word problems (MWPs) given a math equation that specifies the mathematical computation and a context that specifies the problem scenario.
no code implementations • 25 Apr 2021 • Mengxue Zhang, Zichao Wang, Richard Baraniuk, Andrew Lan
Feedback on student answers and even during intermediate steps in their solutions to open-ended questions is an important element in math education.
2 code implementations • 9 Dec 2020 • Sina AlEMohammad, Randall Balestriero, Zichao Wang, Richard Baraniuk
Kernels derived from deep neural networks (DNNs) in the infinite-width regime provide not only high performance in a range of machine learning tasks but also new theoretical insights into DNN training dynamics and generalization.
1 code implementation • 27 Oct 2020 • Sina AlEMohammad, Hossein Babaei, Randall Balestriero, Matt Y. Cheung, Ahmed Imtiaz Humayun, Daniel LeJeune, Naiming Liu, Lorenzo Luzi, Jasper Tan, Zichao Wang, Richard G. Baraniuk
High dimensionality poses many challenges to the use of data, from visualization and interpretation, to prediction and storage for historical preservation.
no code implementations • 23 Jul 2020 • Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang
In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.
no code implementations • ICLR 2021 • Sina AlEMohammad, Zichao Wang, Randall Balestriero, Richard Baraniuk
The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization.
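As a toy illustration of the NTK object itself (unrelated to this paper's specific construction): for a model f(x; θ), the empirical NTK is the inner product of parameter gradients, K(x, x') = ∇θf(x) · ∇θf(x'). For a linear model f(x) = w·x + b the gradient with respect to (w, b) is (x, 1), so K(x, x') = x·x' + 1:

```python
def empirical_ntk_linear(x1: float, x2: float) -> float:
    """Empirical NTK of f(x) = w*x + b: the gradient w.r.t. (w, b)
    is (x, 1), so the kernel is the inner product of two such gradients."""
    grad1 = (x1, 1.0)
    grad2 = (x2, 1.0)
    return sum(g1 * g2 for g1, g2 in zip(grad1, grad2))

print(empirical_ntk_linear(2.0, 3.0))  # 2*3 + 1 = 7.0
```

In the infinite-width limit this kernel stays fixed during training, which is what makes the NTK analysis of learning dynamics tractable.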
no code implementations • 12 Jun 2020 • Weili Nie, Zichao Wang, Ankit B. Patel, Richard G. Baraniuk
Learning interpretable and disentangled representations is a crucial yet challenging task in representation learning.
no code implementations • 27 May 2020 • Zichao Wang, Yi Gu, Andrew Lan, Richard Baraniuk
We propose VarFA, a variational inference factor analysis framework that extends existing factor analysis models for educational data mining to efficiently output uncertainty estimation in the model's estimated factors.
no code implementations • 12 Mar 2020 • Zichao Wang, Sebastian Tschiatschek, Simon Woodhead, José Miguel Hernández-Lobato, Simon Peyton Jones, Richard G. Baraniuk, Cheng Zhang
Online education platforms enable teachers to share a large number of educational resources such as questions to form exercises and quizzes for students.
no code implementations • ICLR 2019 • Zichao Wang, Randall Balestriero, Richard Baraniuk
Second, we show that the affine parameter of an RNN corresponds to an input-specific template, from which we can interpret an RNN as performing a simple template matching (matched filtering) given the input.
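Matched filtering here means correlating the input with a template and reading off the response, which peaks when the input matches the template. A minimal numeric sketch of that interpretation; the vectors are made up for illustration and are not derived from any particular RNN:

```python
def matched_filter_response(template, signal):
    """Matched filtering: the response is the inner product of the
    template with the input; it is largest when the input matches."""
    return sum(t * s for t, s in zip(template, signal))

template = [1.0, -1.0, 2.0]
matching = [1.0, -1.0, 2.0]      # input aligned with the template
mismatched = [0.0, 1.0, -1.0]    # input pointing away from it

print(matched_filter_response(template, matching))    # 1 + 1 + 4 = 6.0
print(matched_filter_response(template, mismatched))  # 0 - 1 - 2 = -3.0
```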