no code implementations • 21 Jun 2024 • Mucong Ding, Souradip Chakraborty, Vibhu Agrawal, Zora Che, Alec Koppel, Mengdi Wang, Amrit Bedi, Furong Huang
Reinforcement Learning from Human Feedback (RLHF) is a key method for aligning large language models (LLMs) with human preferences.
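As a rough illustration of the reward-modeling step that RLHF pipelines typically rely on, the sketch below implements the standard Bradley-Terry preference loss in PyTorch. The batch of chosen/rejected rewards is a hypothetical placeholder, not something taken from this paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry negative log-likelihood: the human-preferred (chosen)
    response should receive a higher scalar reward than the rejected one."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Hypothetical usage: scalar rewards produced by a reward model
# for a batch of (chosen, rejected) response pairs.
reward_chosen = torch.randn(8, requires_grad=True)
reward_rejected = torch.randn(8, requires_grad=True)
loss = preference_loss(reward_chosen, reward_rejected)
loss.backward()
```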
no code implementations • 21 Jun 2024 • Mucong Ding, Tahseen Rabbani, Bang An, Evan Z Wang, Furong Huang
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification.
no code implementations • 27 May 2024 • Mucong Ding, Yinhan He, Jundong Li, Furong Huang
However, owing to the interdependence of graph nodes, coreset selection, which selects subsets of the data examples, has not been successfully applied to speeding up GNN training on large graphs and therefore warrants special treatment.
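To illustrate why node interdependence complicates coreset selection, the following sketch (a toy NetworkX graph with an arbitrarily chosen node subset, not the paper's method) shows that restricting training to a selected set of nodes implicitly drops every edge crossing the coreset boundary.

```python
import networkx as nx

# Hypothetical illustration: a naive coreset is just a set of selected node
# indices; training a GNN on them also requires the induced subgraph, and
# edges to unselected neighbors are lost.
G = nx.karate_club_graph()
coreset_nodes = [0, 1, 2, 33]                # e.g. chosen by any scoring rule
subgraph = G.subgraph(coreset_nodes).copy()  # induced subgraph used for training

lost_edges = sum(1 for u, v in G.edges()
                 if (u in coreset_nodes) != (v in coreset_nodes))
print(f"edges crossing the coreset boundary (dropped): {lost_edges}")
```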
no code implementations • 27 May 2024 • Mucong Ding, Yuancheng Xu, Tahseen Rabbani, Xiaoyu Liu, Brian Gravelle, Teresa Ranadive, Tai-Ching Tuan, Furong Huang
We aim to generate a synthetic validation dataset such that models with different hyperparameters are ranked comparably by validation performance on the condensed and original datasets.
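One standard way to quantify whether two sets of validation scores rank hyperparameter configurations comparably is Spearman rank correlation. The sketch below uses SciPy and hypothetical scores; it is an illustration of the evaluation idea, not code from the paper.

```python
from scipy.stats import spearmanr

# Hypothetical validation accuracies of the same hyperparameter configurations,
# evaluated on the original and on the condensed validation set.
scores_original  = [0.71, 0.84, 0.79, 0.90, 0.66]
scores_condensed = [0.58, 0.73, 0.69, 0.80, 0.52]

rho, pval = spearmanr(scores_original, scores_condensed)
print(f"Spearman rank correlation of the two rankings: {rho:.3f}")
```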
1 code implementation • 16 Jan 2024 • Bang An, Mucong Ding, Tahseen Rabbani, Aakriti Agrawal, Yuancheng Xu, ChengHao Deng, Sicheng Zhu, Abdirisak Mohamed, Yuxin Wen, Tom Goldstein, Furong Huang
Our evaluation examines two pivotal dimensions: the degree of image quality degradation and the efficacy of watermark detection after attacks.
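A minimal sketch of how these two dimensions could be measured, assuming a placeholder watermark detector and toy images; PSNR serves here as a simple stand-in for image-quality degradation and is not necessarily the metric used in the paper.

```python
import numpy as np

def psnr(original: np.ndarray, attacked: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio as a simple proxy for quality degradation."""
    mse = np.mean((original.astype(np.float64) - attacked.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def detection_rate(detector, attacked_images) -> float:
    """Fraction of attacked images on which the watermark is still detected."""
    return sum(bool(detector(img)) for img in attacked_images) / len(attacked_images)

# Hypothetical usage with a stand-in detector and random "images".
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(4)]
noisy = [np.clip(i + rng.normal(0, 5, i.shape), 0, 255).astype(np.uint8) for i in imgs]
print(psnr(imgs[0], noisy[0]))
print(detection_rate(lambda img: True, noisy))  # placeholder detector
```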
1 code implementation • 26 Jun 2022 • Bang An, Zora Che, Mucong Ding, Furong Huang
In many real-world applications, however, such an assumption is violated: previously trained fair models are often deployed in a different environment, where their fairness has been observed to collapse.
1 code implementation • NeurIPS 2021 • Mucong Ding, Kezhi Kong, Jingling Li, Chen Zhu, John P Dickerson, Furong Huang, Tom Goldstein
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
Ranked #12 on Node Classification on Reddit
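The entry above mentions quantized node representations; below is a minimal sketch of nearest-codebook vector quantization of node features, so that message passing can work with a small set of code vectors rather than all neighbors. The codebook size and dimensions are hypothetical, and this is not the authors' exact construction of the low-rank graph convolution.

```python
import torch

def vector_quantize(h: torch.Tensor, codebook: torch.Tensor):
    """Assign each node representation to its nearest codebook vector.

    h:        [num_nodes, dim] node representations
    codebook: [num_codes, dim] code vectors
    Returns the quantized representations and the code indices.
    """
    # Pairwise squared distances between node features and code vectors.
    dists = ((h.unsqueeze(1) - codebook.unsqueeze(0)) ** 2).sum(dim=-1)
    codes = dists.argmin(dim=1)        # nearest code per node
    return codebook[codes], codes

# Hypothetical usage: 1000 nodes, 64-dim features, 32 codes.
h = torch.randn(1000, 64)
codebook = torch.randn(32, 64)
h_quantized, codes = vector_quantize(h, codebook)
```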
no code implementations • 29 Sep 2021 • Mucong Ding, Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Micah Goldblum, David Wipf, Furong Huang, Tom Goldstein
We observe that in most cases, we need both a suitable domain generalization algorithm and a strong GNN backbone model to optimize out-of-distribution test performance.
no code implementations • 12 Apr 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
no code implementations • ICLR 2021 • Yogesh Balaji, Mohammadmahdi Sajedi, Neha Mukund Kalibhat, Mucong Ding, Dominik Stöger, Mahdi Soltanolkotabi, Soheil Feizi
In this work, we present a comprehensive analysis of the importance of model over-parameterization in GANs both theoretically and empirically.
3 code implementations • CVPR 2022 • Kezhi Kong, Guohao Li, Mucong Ding, Zuxuan Wu, Chen Zhu, Bernard Ghanem, Gavin Taylor, Tom Goldstein
Data augmentation helps neural networks generalize better by enlarging the training set, but it remains an open question how to effectively augment graph data to enhance the performance of Graph Neural Networks (GNNs).
Ranked #1 on Graph Property Prediction on ogbg-ppa
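As a generic example of graph data augmentation (not the technique proposed in this paper), the sketch below randomly drops edges and masks node features of a toy graph stored in the usual edge-index format; probabilities and shapes are arbitrary.

```python
import torch

def drop_edges(edge_index: torch.Tensor, drop_prob: float = 0.1) -> torch.Tensor:
    """Randomly remove a fraction of edges (edge_index is a [2, num_edges] tensor)."""
    keep = torch.rand(edge_index.size(1)) >= drop_prob
    return edge_index[:, keep]

def mask_features(x: torch.Tensor, mask_prob: float = 0.1) -> torch.Tensor:
    """Zero out a random subset of node-feature entries."""
    mask = (torch.rand_like(x) >= mask_prob).float()
    return x * mask

# Hypothetical usage on a toy graph with 5 nodes and 4 edges.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
x_aug, edges_aug = mask_features(x), drop_edges(edge_index)
```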
no code implementations • 2 Mar 2020 • Mucong Ding, Constantinos Daskalakis, Soheil Feizi
GANs, however, are designed in a model-free fashion where no additional information about the underlying distribution is available.
no code implementations • 12 Dec 2018 • Mucong Ding, Kai Yang, Dit-yan Yeung, Ting-Chuen Pong
A major challenge in building such models is designing handcrafted features that are effective for the prediction task at hand.
no code implementations • 12 Dec 2018 • Mucong Ding, Yanbang Wang, Erik Hemberg, Una-May O'Reilly
It consists of two alternative transfer methods based on representation learning with auto-encoders: a passive approach using transductive principal component analysis and an active approach that uses a correlation alignment loss term.
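The "correlation alignment loss term" is commonly formulated as in Deep CORAL, i.e. the squared Frobenius distance between source and target feature covariances. The sketch below assumes that formulation and hypothetical batch shapes, and may differ from the paper's exact loss.

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Correlation alignment: squared Frobenius distance between the feature
    covariance matrices of source and target batches (each of shape [n, d])."""
    d = source.size(1)

    def covariance(feats: torch.Tensor) -> torch.Tensor:
        centered = feats - feats.mean(dim=0, keepdim=True)
        return centered.t() @ centered / (feats.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff * diff).sum() / (4 * d * d)

# Hypothetical usage: 128-sample batches of 32-dim auto-encoder representations.
loss = coral_loss(torch.randn(128, 32), torch.randn(128, 32))
```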
no code implementations • 12 Dec 2018 • Mucong Ding, Kwok Yip Szeto
We design a method to optimize the global mean first-passage time (GMFPT) of multiple random walkers searching in complex networks for a general target, without specifying the property of the target node.
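A Monte Carlo sketch of estimating the global mean first-passage time with multiple random walkers on a toy NetworkX graph is shown below; the graph model, walker count, and trial budget are arbitrary illustrative choices, not the paper's setup or its optimization method.

```python
import random
import networkx as nx

def first_passage_time(G: nx.Graph, target, num_walkers: int = 3, max_steps: int = 10_000) -> int:
    """Steps until the first of `num_walkers` independent random walkers,
    started at uniformly random nodes, hits `target`."""
    walkers = [random.choice(list(G.nodes())) for _ in range(num_walkers)]
    for step in range(max_steps):
        if target in walkers:
            return step
        walkers = [random.choice(list(G.neighbors(w))) for w in walkers]
    return max_steps  # crude cap if the target was not reached

def global_mfpt(G: nx.Graph, num_walkers: int = 3, trials_per_target: int = 50) -> float:
    """Monte Carlo estimate of the global MFPT: average first-passage time
    over all target nodes and random walker initializations."""
    times = [first_passage_time(G, t, num_walkers)
             for t in G.nodes() for _ in range(trials_per_target)]
    return sum(times) / len(times)

# Hypothetical usage on a small scale-free network.
G = nx.barabasi_albert_graph(50, 3, seed=0)
print(global_mfpt(G))
```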