1 code implementation • 18 May 2023 • Wanyong Feng, Aritra Ghosh, Stephen Sireci, Andrew S. Lan
Computerized adaptive testing (CAT) is a form of personalized testing that accurately measures students' knowledge levels while reducing test length.
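The loop below is a minimal, generic CAT sketch, not the selection algorithm proposed in this paper: it assumes a 2PL item response theory model and picks the unadministered question with maximum Fisher information at the current ability estimate. The names `item_bank`, `theta_hat`, and `administered` are illustrative.

```python
import numpy as np

def irt_prob(theta, a, b):
    # 2PL IRT: probability of a correct response given ability theta,
    # discrimination a, and difficulty b
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = irt_prob(theta, a, b)
    return (a ** 2) * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    # choose the unadministered item that is most informative
    # at the current ability estimate
    best, best_info = None, -np.inf
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = fisher_information(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best
```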
1 code implementation • 11 May 2023 • Nischal Ashok Kumar, Wanyong Feng, Jaewook Lee, Hunter McNichols, Aritra Ghosh, Andrew Lan
In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationships among different skills from real-world student response data.
1 code implementation • 20 Dec 2022 • Chuan Tian, C. Megan Urry, Aritra Ghosh, Ryan Ofman, Tonima Tasnim Ananna, Connor Auge, Nico Cappelluti, Meredith C. Powell, David B. Sanders, Kevin Schawinski, Dominic Stark, Grant R. Tremblay
The classification precision of our models has a noticeable dependency on host galaxy radius and magnitude.
1 code implementation • 19 May 2022 • Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, Andrew Lan
Our approach, in-context BERT fine-tuning, produces a single shared scoring model for all items with a carefully-designed input structure to provide contextual information on each item.
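As a rough illustration of the shared-scorer idea, the snippet below packs hypothetical item context (question text plus scored example answers) and a student response into a single input for a BERT sequence classifier via Hugging Face `transformers`. The input template, context fields, and label space used in the paper may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# hypothetical item context and response; real inputs come from the scoring dataset
question = "Explain why the sky is blue."
scored_examples = "Example answer (score 2): Rayleigh scattering of sunlight ..."
response = "Because blue light scatters more in the atmosphere."

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # e.g., score levels 0-2

# one shared model: the item context rides along with every response to be scored
context = f"{question} {scored_examples}"
inputs = tokenizer(context, response, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_score = logits.argmax(dim=-1).item()
```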
no code implementations • 8 Dec 2021 • Aritra Ghosh, Saayan Mitra, Andrew Lan
In sequential recommender system applications, it is important to develop models that can capture users' evolving interest over time to successfully recommend future items that they are likely to interact with.
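A minimal sketch of the general idea, assuming a GRU-based sequential model rather than this paper's architecture: item embeddings of a user's interaction history are summarized by a recurrent state, which then scores candidate next items.

```python
import torch
import torch.nn as nn

class GRURecommender(nn.Module):
    """A minimal sequential recommender: a GRU over item embeddings summarizes
    a user's interaction history and scores all items for the next step."""
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.score = nn.Linear(dim, num_items)

    def forward(self, item_seq):
        # item_seq: (batch, seq_len) item ids, ordered oldest to newest
        h, _ = self.gru(self.item_emb(item_seq))
        return self.score(h[:, -1])  # logits over candidate next items
```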
2 code implementations • 17 Aug 2021 • Aritra Ghosh, Andrew Lan
Computerized adaptive testing (CAT) refers to a form of testing that is personalized to every student/test taker.
2 code implementations • 19 Apr 2021 • Aritra Ghosh, Andrew Lan
Consequently, several recently proposed methods, such as Meta-Weight-Net (MW-Net), use a small number of unbiased, clean samples to learn a weighting function that downweights samples that are likely to have corrupted labels under the meta-learning framework.
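A hedged sketch of the re-weighting step: a small weighting network maps each sample's loss to a weight in [0, 1], and the classifier is trained on the weighted losses. The meta step that MW-Net uses to update the weighting network, backpropagating a clean-set loss through a one-step virtual update of the classifier, is omitted here for brevity.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Maps a per-sample loss value to a weight in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, losses):
        return self.net(losses.unsqueeze(1)).squeeze(1)

def weighted_training_step(model, weight_net, x, y, optimizer):
    # per-sample losses on the (possibly noisy) training batch
    losses = nn.functional.cross_entropy(model(x), y, reduction="none")
    # down-weight samples the weighting network deems likely mislabeled
    weights = weight_net(losses.detach())
    loss = (weights * losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```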
1 code implementation • 19 Apr 2021 • Aritra Ghosh, Andrew Lan
One common class of methods that can mitigate the impact of label noise is supervised robust methods: one can simply replace the CCE loss with a loss that is robust to label noise, or re-weight training samples, down-weighting those with higher loss values.
Ranked #28 on Image Classification on Clothing1M
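As one concrete example of the robust-loss alternative mentioned above (an illustration, not the method proposed in this paper), the snippet below swaps the categorical cross-entropy for a mean-absolute-error loss on the predicted class probabilities, a symmetric loss known to be tolerant to label noise.

```python
import torch
import torch.nn.functional as F

def mae_loss(logits, targets, num_classes):
    """Mean absolute error between predicted class probabilities and
    one-hot targets; a symmetric, noise-robust alternative to CCE."""
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).float()
    return (probs - one_hot).abs().sum(dim=1).mean()
```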
2 code implementations • 19 Apr 2021 • Aritra Ghosh, Jay Raspat, Andrew Lan
Knowledge tracing refers to a family of methods that estimate each student's knowledge component/skill mastery level from their past responses to questions.
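For context, a minimal deep-knowledge-tracing-style model (not this paper's method): an LSTM reads one-hot encodings of (skill, correctness) pairs and outputs, at every step, the predicted probability of a correct response on each skill.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal deep knowledge tracing: an LSTM over (skill, correctness)
    one-hot inputs predicts per-skill correctness probabilities."""
    def __init__(self, num_skills, hidden=64):
        super().__init__()
        self.num_skills = num_skills
        self.lstm = nn.LSTM(2 * num_skills, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_skills)

    def forward(self, skills, correct):
        # skills, correct: (batch, seq_len) integer tensors, correct in {0, 1}
        idx = skills + self.num_skills * correct
        x = nn.functional.one_hot(idx, 2 * self.num_skills).float()
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))  # (batch, seq_len, num_skills)
```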
1 code implementation • 11 Dec 2020 • Aritra Ghosh, Andrew S. Lan
This paper details our solutions to Tasks 1 and 2 of the NeurIPS 2020 Education Challenge. Knowledge tracing, a family of methods that estimate each student's mastery levels on skills/knowledge components from their past responses to assessment questions, is useful for progress monitoring, personalization, and helping teachers deliver targeted feedback that improves students' learning outcomes.
1 code implementation • 24 Jul 2020 • Aritra Ghosh, Neil Heffernan, Andrew S. Lan
We also conduct several case studies and show that AKT exhibits excellent interpretability and thus has potential for automated feedback and personalization in real-world educational settings.
1 code implementation • 25 Jun 2020 • Aritra Ghosh, C. Megan Urry, Zhengdong Wang, Kevin Schawinski, Dennis Turp, Meredith C. Powell
This inferred difference in quenching mechanism is in agreement with previous studies that used other morphology classification techniques on much smaller samples at $z\sim0$ and $z\sim1$.
Astrophysics of Galaxies; Instrumentation and Methods for Astrophysics
no code implementations • 31 Mar 2020 • Aritra Ghosh, Saayan Mitra, Somdeb Sarkhel, Viswanathan Swaminathan
Earlier works on optimal bidding strategy apply model-based batch reinforcement learning methods, which cannot generalize to unknown budget and time constraints.
no code implementations • 18 Jan 2020 • Aritra Ghosh, Saayan Mitra, Somdeb Sarkhel, Jason Xie, Gang Wu, Viswanathan Swaminathan
The highest bidding advertiser wins but pays only the second-highest bid (known as the winning price).
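The second-price rule itself is simple; a toy sketch (auction dynamics and winning-price censoring in the paper's setting are more involved):

```python
def second_price_auction(bids):
    """Second-price (Vickrey) auction: the highest bidder wins but pays
    only the second-highest bid, i.e., the winning price."""
    assert len(bids) >= 2
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = ranked[0]
    winning_price = bids[ranked[1]]
    return winner, winning_price

# e.g., bids of 3.0, 5.5, and 4.2: bidder 1 wins and pays 4.2
print(second_price_auction([3.0, 5.5, 4.2]))
```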
1 code implementation • 27 Dec 2017 • Aritra Ghosh, Himanshu Kumar, P. S. Sastry
For binary classification, there exist theoretical results on loss functions that are robust to label noise.
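One well-known sufficient condition in this line of work is symmetry of the loss; stated loosely for a $k$-class problem (binary as a special case):

```latex
% Symmetry condition (sufficient for tolerance to symmetric label noise):
% a loss L is symmetric if, for some constant C,
\sum_{j=1}^{k} L\bigl(f(\mathbf{x}), j\bigr) = C
\qquad \text{for all } \mathbf{x} \text{ and all } f .
% The mean absolute error satisfies this with C = 2(k-1),
% whereas the categorical cross-entropy does not.
```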
no code implementations • 20 May 2016 • Aritra Ghosh, Naresh Manwani, P. S. Sastry
In most practical classifier learning problems, the training data suffers from label noise.
no code implementations • 14 Mar 2014 • Aritra Ghosh, Naresh Manwani, P. S. Sastry
Through extensive empirical studies, we show that risk minimization under the $0-1$ loss, the sigmoid loss, and the ramp loss is much more robust to label noise than the SVM algorithm.
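For reference, common parameterizations of these margin-based losses are sketched below (the exact forms used in the paper may differ slightly); the sigmoid and ramp losses here satisfy the symmetry property $L(t) + L(-t) = 1$ on the margin $t = y f(\mathbf{x})$, which underlies their noise tolerance.

```python
import numpy as np

def zero_one_loss(margin):
    # 0-1 loss on the margin t = y * f(x)
    return (np.asarray(margin) <= 0).astype(float)

def sigmoid_loss(margin, beta=1.0):
    # smooth surrogate; satisfies L(t) + L(-t) = 1
    return 1.0 / (1.0 + np.exp(beta * np.asarray(margin)))

def ramp_loss(margin):
    # a noise-tolerant ramp; also satisfies L(t) + L(-t) = 1
    return 0.5 * np.clip(1.0 - np.asarray(margin), 0.0, 2.0)
```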