no code implementations • 20 May 2025 • Agam Goyal, Vedant Rathi, William Yeh, Yian Wang, Yuen Chen, Hari Sundaram
Large language models (LLMs) are now ubiquitous in user-facing applications, yet they still generate undesirable toxic outputs, including profanity, vulgarity, and derogatory remarks.
no code implementations • 30 Apr 2025 • Aditya Karan, Nicholas Vincent, Karrie Karahalios, Hari Sundaram
We find that the unintentional interactions between collectives can be quite significant; a collective acting in isolation may be able to achieve its objective (e.g., improve classification outcomes for themselves or promote a particular item), but when a second collective acts simultaneously, the efficacy of the first group drops by as much as 75%.
no code implementations • 2 Mar 2025 • Ali Ebrahimpour-Boroojeny, Hari Sundaram, Varun Chandrasekaran
Adversarial examples naturally belong to the distribution imposed by the model on the input space; fine-tuning the model on the adversarial examples closest to the corresponding forget samples (a) localizes the changes to the decision boundary of the model around each forget sample and (b) avoids drastic changes to the global behavior of the model, thereby preserving the model's accuracy on test samples.
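For intuition on "the adversarial examples closest to the corresponding forget samples": in the special case of a linear classifier, the closest adversarial example to a point is its projection just across the decision boundary, which has a closed form. The sketch below uses that linear closed form purely as an illustration; the paper concerns deep models, where finding such points requires an adversarial attack:

```python
import numpy as np

def nearest_adversarial(w, b, x, margin=1e-3):
    """For the linear classifier sign(w @ x + b), return the point
    closest to x whose predicted label is flipped: x projected just
    past the decision boundary (illustrative linear special case)."""
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    dist = (w @ x + b) / np.dot(w, w)   # signed distance, scaled by w
    return x - (1 + margin) * dist * w  # step slightly past the boundary

# a point at distance 2 from the boundary w = [1, 0], b = 0
adv = nearest_adversarial([1.0, 0.0], 0.0, [2.0, 0.0])
# sign flips from +1 to -1, and adv stays close to the original point
```

Fine-tuning on such boundary-adjacent points is what localizes the change around each forget sample, per the abstract.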
no code implementations • 31 Jan 2025 • Yunzhe Li, Junting Wang, Hari Sundaram, Zhining Liu
By mitigating domain bias and enhancing the transferability of sequential patterns, our method provides a scalable and robust approach for achieving more effective zero-shot recommendations across domains.
no code implementations • 30 Oct 2024 • Vinay Koshy, Frederick Choi, Yi-shyuan Chiang, Hari Sundaram, Eshwar Chandrasekharan, Karrie Karahalios
The problem is not the root disagreements themselves.
no code implementations • 7 Oct 2024 • Ali Ebrahimpour-Boroojeny, Hari Sundaram, Varun Chandrasekaran
We show that although a lower Lipschitz constant increases the robustness of a single model, it is less beneficial when training robust ensembles, because it also increases the transferability rate of adversarial examples across models in the ensemble.
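The transferability rate the abstract refers to can be made concrete with a toy numpy setup: craft adversarial examples against one model and measure how often they also flip the predictions of the other models. The linear models and FGSM-style perturbation below are assumptions for illustration, not the paper's experimental setup:

```python
import numpy as np

def fgsm(w, b, x, y, eps):
    """One-step attack on a linear model: the margin y * (w @ x + b)
    has gradient y * w w.r.t. x, so step against its sign."""
    return x - eps * np.sign(y * w)

def transfer_rate(models, eps, X, y):
    """Craft adversarial examples on the first model, then report the
    fraction that also fool each remaining model in the ensemble."""
    w0, b0 = models[0]
    adv = np.array([fgsm(w0, b0, x, yi, eps) for x, yi in zip(X, y)])
    rates = []
    for w, b in models[1:]:
        preds = np.sign(adv @ w + b)
        rates.append(float(np.mean(preds != y)))
    return rates

# two similar linear models; examples crafted on the first transfer fully
models = [(np.array([1.0, 0.0]), 0.0), (np.array([1.0, 0.1]), 0.0)]
rates = transfer_rate(models, 1.0, np.array([[0.5, 0.0]]), np.array([1.0]))
# rates == [1.0]
```

The ensemble-level tension in the abstract is that smoothing every member (lower Lipschitz constant) tends to push this rate up.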
1 code implementation • 25 Feb 2024 • Ali Ebrahimpour Boroojeny, Matus Telgarsky, Hari Sundaram
We show the effectiveness of automatic differentiation in efficiently and correctly computing and controlling the spectrum of implicitly linear operators, a rich family of layer types including all standard convolutional and dense layers.
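The spectrum computation described here can be illustrated with ordinary power iteration. In the minimal numpy sketch below, `apply_AT` stands in for the vector-Jacobian product that automatic differentiation would supply for an implicitly linear layer such as a convolution; the explicit-matrix setup is an illustrative assumption, not the paper's code:

```python
import numpy as np

def spectral_norm(apply_A, apply_AT, dim, iters=100, seed=0):
    """Estimate the largest singular value of a linear operator by
    power iteration on A^T A. For implicitly linear layers, apply_AT
    is exactly what autodiff provides as a vector-Jacobian product."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = apply_A(v)        # forward application of the operator
        v = apply_AT(u)       # VJP: A^T u, via autodiff in general
        v /= np.linalg.norm(v)
    return np.linalg.norm(apply_A(v))

A = np.array([[3.0, 0.0], [0.0, 1.0]])
sigma = spectral_norm(lambda v: A @ v, lambda u: A.T @ u, dim=2)
# sigma ≈ 3.0, the top singular value of A
```

Controlling the spectrum then amounts to rescaling the operator by the estimated top singular value.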
1 code implementation • 22 Feb 2024 • Samraj Moorjani, Adit Krishnan, Hari Sundaram
As large-scale language models become the standard for text generation, there is a greater need to tailor the generations to be more or less concise, targeted, and informative, depending on the audience/application.
1 code implementation • 3 Jan 2024 • Junting Wang, Praneet Rathi, Hari Sundaram
In addition, with a simple post-hoc interpolation, PrepRec can improve the performance of existing sequential recommenders on average by 13.8% in Recall@10 and 29.5% in NDCG@10.
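A post-hoc score interpolation of the kind mentioned can look like the sketch below: normalize the two models' item scores to a common scale, then blend them. The `alpha` weight and the min-max normalization are assumptions for illustration, not PrepRec's actual scheme:

```python
import numpy as np

def interpolate_scores(base_scores, prep_scores, alpha=0.75):
    """Blend item scores from an existing sequential recommender with
    scores from a second model (here, a hypothetical PrepRec-style
    scorer). alpha weights the base recommender."""
    def norm(s):
        # min-max normalize so the two score scales are comparable
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return alpha * norm(base_scores) + (1 - alpha) * norm(prep_scores)

blended = interpolate_scores([2.0, 1.0, 0.0], [0.0, 1.0, 2.0])
top_item = int(np.argmax(blended))
# blended == [0.75, 0.5, 0.25]; item 0 ranks first
```

Because the blend happens on final scores, it requires no retraining of the existing recommender.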
1 code implementation • 14 Nov 2023 • Weixiang Yan, Haitian Liu, Yunkun Wang, Yunzhe Li, Qian Chen, Wen Wang, Tingyu Lin, Weishan Zhao, Li Zhu, Hari Sundaram, Shuiguang Deng
Finally, we systematically evaluate and analyze eight mainstream LLMs and demonstrate the superior breadth and challenges of CodeScope for evaluating LLMs on code understanding and generation tasks compared to other benchmarks.
no code implementations • 3 Sep 2023 • Junting Wang, Adit Krishnan, Hari Sundaram, Yunzhe Li
Thus, we use the statistical characteristics of the user-item interaction matrix to identify dataset-independent representations for users and items.
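One dataset-independent statistic of the interaction matrix is an item's popularity percentile: unlike raw counts, percentile ranks are comparable across datasets of different sizes. The sketch below is a guessed illustration of such a statistic, not the paper's actual representation:

```python
import numpy as np

def popularity_percentiles(interactions):
    """Map each item to its popularity percentile within the dataset.
    interactions: users x items binary matrix. Ties in the counts are
    broken arbitrarily by the double argsort."""
    counts = np.asarray(interactions).sum(axis=0)  # per-item interaction count
    ranks = counts.argsort().argsort()             # rank of each item's count
    return ranks / max(len(counts) - 1, 1)         # scale ranks to [0, 1]

interactions = [[1, 1, 0],
                [1, 0, 0],
                [1, 1, 1]]
pcts = popularity_percentiles(interactions)
# pcts == [1.0, 0.5, 0.0]: item 0 is the most popular
```

Features of this form can be computed on any user-item matrix without learning dataset-specific embeddings.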
no code implementations • 24 May 2023 • Nishant Balepur, Jie Huang, Samraj Moorjani, Hari Sundaram, Kevin Chen-Chuan Chang
When answering complex questions, large language models (LLMs) may produce answers that do not satisfy all criteria of the question.
no code implementations • 23 May 2023 • Yunzhe Li, Qian Chen, Weixiang Yan, Wen Wang, Qinglin Zhang, Hari Sundaram
Furthermore, we identify an issue of imbalanced utilization of the outline information in the precise outline-conditioned generation, which is ubiquitously observed across fine-tuned models and zero-shot inference models.
no code implementations • 2 Feb 2023 • Ziang Xiao, Tiffany Wenting Li, Karrie Karahalios, Hari Sundaram
By comparing the chatbot with form-based interaction, we found the chatbot improved consent form reading, promoted participants' feelings of agency, and closed the power gap between the participant and the researcher.
1 code implementation • 24 Jan 2023 • Samraj Moorjani, Adit Krishnan, Hari Sundaram, Ewa Maslowska, Aravind Sankar
While existing approaches demonstrate textual style transfer with large volumes of parallel or non-parallel data, we argue that grounding style on audience-independent external factors is innately limiting for two reasons.
no code implementations • 23 May 2022 • Yubin Ge, Ziang Xiao, Jana Diesner, Heng Ji, Karrie Karahalios, Hari Sundaram
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge in the context of conversational surveys.
1 code implementation • 11 Sep 2020 • Aravind Sankar, Junting Wang, Adit Krishnan, Hari Sundaram
We present InfoMotif, a new semi-supervised, motif-regularized, learning framework over graphs.
1 code implementation • 5 Jun 2020 • Aravind Sankar, Yanhong Wu, Yuhang Wu, Wei zhang, Hao Yang, Hari Sundaram
We study the problem of making item recommendations to ephemeral groups, which comprise users with limited or no historical activities together.
no code implementations • 21 May 2020 • Adit Krishnan, Mahashweta Das, Mangesh Bendre, Hao Yang, Hari Sundaram
The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems.
1 code implementation • 7 Mar 2020 • Yuxin Xiao, Adit Krishnan, Hari Sundaram
Different strategies give rise to different social payoffs; the best-performing individuals exhibit stability in their preference over the discovered strategies, which indicates the emergence of strategic behavior; and this stability of strategy preference is correlated with high payoffs.
Social and Information Networks • Physics and Society
no code implementations • 16 Nov 2019 • Kanika Narang, Chaoqi Yang, Adit Krishnan, Junting Wang, Hari Sundaram, Carolyn Sutter
We develop a novel induced relational graph convolutional network (IR-GCN) framework to address the question.
no code implementations • 11 May 2019 • Suhansanu Kumar, Heting Gao, Changyu Wang, Hari Sundaram, Kevin Chen-Chuan Chang
When the property of the target entities is not directly queryable via the API, we refer to the property as 'hidden' and the population as a hidden population.
1 code implementation • 29 Dec 2017 • Harshay Shah, Suhansanu Kumar, Hari Sundaram
Despite the knowledge that individuals use limited resources to form connections to similar others, we lack an understanding of how local and resource-constrained mechanisms explain the emergence of rich structural properties found in real-world networks.
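A local, resource-constrained attachment mechanism can be sketched as a toy growth model: each arriving node spends a fixed budget of links, connecting to a random existing node and to a neighbor found by a short random walk from it, using only local information. This is a guessed illustration of the class of mechanisms the abstract describes, not the paper's exact model:

```python
import random

def local_attachment_graph(n, walk_len=2, seed=0):
    """Grow an undirected graph where each new node forms at most two
    edges: one to a random anchor, one to a node reached by a short
    random walk from the anchor (purely local exploration)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}          # seed graph: a single edge
    for new in range(2, n):
        anchor = rng.randrange(new)  # uniformly random existing node
        cur = anchor
        for _ in range(walk_len):    # local random walk from the anchor
            cur = rng.choice(sorted(adj[cur]))
        adj[new] = {anchor}
        adj[anchor].add(new)
        if cur != anchor:            # second link, if the walk moved away
            adj[new].add(cur)
            adj[cur].add(new)
    return adj

g = local_attachment_graph(50)
```

Walk-based second links implicitly favor high-degree neighbors, which is one way local rules reproduce global structural properties such as heavy-tailed degree distributions.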
Social and Information Networks
no code implementations • 30 Nov 2017 • Adit Krishnan, Ashish Sharma, Hari Sundaram
Modern social platforms are characterized by the presence of rich user-behavior data associated with the publication, sharing and consumption of textual content.