1 code implementation • ICML 2020 • Hai Phan, My T. Thai, Han Hu, Ruoming Jin, Tong Sun, Dejing Dou
In this paper, we aim to develop a scalable algorithm to preserve differential privacy (DP) in adversarial learning for deep neural networks (DNNs), with certified robustness to adversarial examples.
no code implementations • 7 Feb 2025 • Hailong Jiang, Jianfeng Zhu, Yao Wan, Bo Fang, Hongyu Zhang, Ruoming Jin, Qiang Guan
Intermediate Representations (IRs) are essential in compiler design and program analysis, yet their comprehension by Large Language Models (LLMs) remains underexplored.
no code implementations • 23 Jan 2025 • Jianfeng Zhu, Ruoming Jin, Hailong Jiang, Yulan Wang, Xinyu Zhang, Karin G. Coifman
This study applies Large Language Models (LLMs) to analyze adolescents' social media posts, uncovering emotional patterns (e.g., sadness, guilt, fear, joy) and contextual factors (e.g., family, peers, school) related to substance use.
no code implementations • 13 Jan 2025 • Jianfeng Zhu, Ruoming Jin, Karin G. Coifman
Large Language Models (LLMs) are demonstrating remarkable human-like capabilities across diverse domains, including psychological assessment.
no code implementations • 20 Nov 2024 • Xiang Li, Gagan Agrawal, Ruoming Jin, Rajiv Ramnath
We consider the problem of constructing embeddings of large attributed graphs and supporting multiple downstream learning tasks.
no code implementations • 18 Nov 2024 • Xiang Li, Gagan Agrawal, Rajiv Ramnath, Ruoming Jin
This points to the need for federated learning for graph-level representations, a topic that has not been explored much, especially in an unsupervised setting.
no code implementations • 30 Sep 2024 • Ji Liu, Jiaxiang Ren, Ruoming Jin, Zijie Zhang, Yang Zhou, Patrick Valduriez, Dejing Dou
First, we propose a fisher information-based method to adaptively sample data within each device to improve the effectiveness of the FL fine-tuning process.
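The idea of Fisher-information-based sampling can be sketched as follows; this is only an illustration, not the paper's method. It assumes a logistic model and uses the common approximation of per-sample Fisher information by the squared gradient norm of the log-likelihood, so that more informative samples on a device are selected more often. All function names here are hypothetical.

```python
import numpy as np

def fisher_scores(X, y, w):
    # Per-sample Fisher information proxy: squared gradient norm of the
    # log-likelihood under a logistic model (a common approximation).
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
    grads = (p - y)[:, None] * X       # per-sample gradients
    return np.sum(grads ** 2, axis=1)  # squared L2 norms

def sample_by_fisher(X, y, w, k, rng=np.random.default_rng(0)):
    # Sample k local data points with probability proportional to
    # their Fisher scores, favoring informative examples.
    scores = fisher_scores(X, y, w)
    probs = scores / scores.sum()
    return rng.choice(len(X), size=k, replace=False, p=probs)
```

In a federated setting each device would compute such scores on its own data before a fine-tuning round; the actual adaptive scheme in the paper is more involved.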
no code implementations • 11 Sep 2024 • Khiem Ton, Nhi Nguyen, Mahmoud Nazzal, Abdallah Khreishah, Cristian Borcea, NhatHai Phan, Ruoming Jin, Issa Khalil, Yelong Shen
This paper introduces SGCode, a flexible prompt-optimizing system to generate secure code with large language models (LLMs).
no code implementations • 18 Dec 2023 • Ji Liu, Tianshi Che, Yang Zhou, Ruoming Jin, Huaiyu Dai, Dejing Dou, Patrick Valduriez
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
no code implementations • 13 Dec 2023 • Dong Li, Ruoming Jin, Bin Ren
Inspired by the success of contrastive learning, we systematically examine recommendation losses, including listwise (softmax), pairwise (BPR), and pointwise (MSE and CCL) losses.
no code implementations • 13 Dec 2023 • Ruoming Jin, Dong Li
In this paper, we perform a systematic examination of recommendation losses, including listwise (softmax), pairwise (BPR), and pointwise (mean-squared error, MSE, and Cosine Contrastive Loss, CCL) losses, through the lens of contrastive learning.
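For readers unfamiliar with these loss families, a minimal sketch of three of them (listwise softmax, pairwise BPR, and pointwise MSE) for a single user is given below; CCL and the contrastive-learning connection analyzed in the paper are not reproduced here, and the function names are illustrative only.

```python
import numpy as np

def softmax_listwise(scores, pos_idx):
    # Listwise (softmax) loss: cross-entropy of the positive item
    # against all candidate items' scores.
    z = scores - scores.max()              # stabilize the exponentials
    logp = z - np.log(np.exp(z).sum())
    return -logp[pos_idx]

def bpr(pos_score, neg_score):
    # Pairwise BPR loss: -log sigmoid(s_pos - s_neg), pushing the
    # positive item above the sampled negative.
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))

def mse(pred, target):
    # Pointwise MSE loss on a single predicted rating/interaction.
    return (pred - target) ** 2
```

Each loss consumes the same user-item scores but imposes a different ranking structure, which is what makes a unified contrastive-learning view of them interesting.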
no code implementations • 10 Mar 2023 • Xiaopeng Jiang, Thinh On, NhatHai Phan, Hessamaldin Mohammadi, Vijaya Datta Mayyuri, An Chen, Ruoming Jin, Cristian Borcea
However, currently there is no mobile sensing DL system that simultaneously achieves good model accuracy while adapting to user mobility behavior, scales well as the number of users increases, and protects user data privacy.
no code implementations • 20 Dec 2022 • Dong Li, Yelong Shen, Ruoming Jin, Yi Mao, Kuan Wang, Weizhu Chen
Pre-trained language models have achieved promising success in code retrieval tasks, where a natural language documentation query is given to find the most relevant existing code snippet.
no code implementations • 28 Nov 2022 • Dong Li, Ruoming Jin, Zhenming Liu, Bin Ren, Jing Gao, Zhi Liu
Since Rendle and Krichene argued that commonly used sampling-based evaluation metrics are "inconsistent" with respect to the global metrics (even in expectation), there have been a few studies on the sampling-based recommender system evaluation.
1 code implementation • 26 Jul 2022 • Phung Lai, Han Hu, NhatHai Phan, Ruoming Jin, My T. Thai, An M. Chen
In this paper, we show that the process of continually learning new tasks and memorizing previous tasks introduces unknown privacy risks and challenges to bound the privacy loss.
no code implementations • 31 Dec 2021 • Xiang Li, Dong Li, Ruoming Jin, Gagan Agrawal, Rajiv Ramnath
Though other methods (particularly those based on Laplacian Smoothing) have reported better accuracy, a fundamental limitation of all the work is a lack of scalability.
no code implementations • NeurIPS 2021 • Zeru Zhang, Jiayin Jin, Zijie Zhang, Yang Zhou, Xin Zhao, Jiaxiang Ren, Ji Liu, Lingfei Wu, Ruoming Jin, Dejing Dou
Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks.
no code implementations • 17 Nov 2021 • Xiaopeng Jiang, Han Hu, Vijaya Datta Mayyuri, An Chen, Devu M. Shila, Adriaan Larmuseau, Ruoming Jin, Cristian Borcea, NhatHai Phan
This article presents the design, implementation, and evaluation of FLSys, a mobile-cloud federated learning (FL) system, which can be a key component for an open ecosystem of FL models and apps.
no code implementations • 29 Sep 2021 • Dong Li, Zhenming Liu, Ruoming Jin, Zhi Liu, Jing Gao, Bin Ren
Recently, a wide range of recommendation algorithms inspired by deep learning techniques have emerged as the performance leaders on several standard recommendation benchmarks.
no code implementations • 20 Jun 2021 • Dong Li, Ruoming Jin, Jing Gao, Zhi Liu
Recently, Rendle has warned that the use of sampling-based top-$k$ metrics might not suffice.
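The discrepancy Rendle warned about can be demonstrated with a toy experiment: a relevant item that misses the global top-$k$ can still rank highly when compared against only a small sample of negatives. This sketch is illustrative and not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k, n_sampled = 1000, 10, 100
scores = rng.normal(size=n_items)      # model scores for one user
pos = int(np.argsort(scores)[-50])     # a relevant item ranked 50th globally

# Global Recall@10: rank the positive against ALL items.
global_rank = int((scores > scores[pos]).sum())   # 49 items score higher
global_hit = global_rank < k                      # misses the global top-10

# Sampled Recall@10: rank the positive against 100 random negatives only.
negs = rng.choice(np.delete(np.arange(n_items), pos), n_sampled, replace=False)
sampled_rank = int((scores[negs] > scores[pos]).sum())
sampled_hit = sampled_rank < k   # often a "hit" despite the global miss
```

Since only about 1 in 10 of the higher-scoring items lands in the sample on average, the sampled metric systematically flatters items outside the true top-$k$.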
no code implementations • 27 May 2021 • Ruoming Jin, Dong Li, Jing Gao, Zhi Liu, Li Chen, Yang Zhou
Through the derivation and analysis of the closed-form solutions for two basic regression and matrix factorization approaches, we find that these two approaches are indeed inherently related, but also diverge in how they "scale down" the singular values of the original user-item interaction matrix.
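The singular-value "scale-down" effect is easy to see for ridge regression, which is one well-known instance of this phenomenon (the paper's exact models may differ): the closed-form solution shrinks each singular value $\sigma$ of the interaction matrix by the factor $\sigma^2/(\sigma^2+\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((6, 4))    # toy user-item interaction matrix
lam = 2.0

# Ridge-style item-item weights: W = (R^T R + lam*I)^{-1} R^T R
W = np.linalg.solve(R.T @ R + lam * np.eye(4), R.T @ R)

# SVD view: the prediction R @ W keeps R's singular vectors and scales
# each singular value sigma by the shrinkage factor sigma^2 / (sigma^2 + lam).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = (U * (s * s**2 / (s**2 + lam))) @ Vt

assert np.allclose(R @ W, R_hat)
```

Low-rank matrix factorization, by contrast, keeps the top singular values untouched and zeroes out the rest, which is one concrete way the two families "diverge in how they scale down" the spectrum.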
no code implementations • 2 Mar 2021 • Ruoming Jin, Dong Li, Benjamin Mudrak, Jing Gao, Zhi Liu
Previously proposed approaches are either rather uninformative (merely linking sampling to metric evaluation) or work only for simple metrics, such as Recall/Precision (Krichene and Rendle 2020; Li et al. 2020).
no code implementations • NeurIPS 2020 • Zijie Zhang, Zeru Zhang, Yang Zhou, Yelong Shen, Ruoming Jin, Dejing Dou
Despite achieving remarkable performance, deep graph learning models, such as node classification and network embedding, are vulnerable to small adversarial perturbations.
no code implementations • 25 Sep 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
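For context, the classical (homogeneous) Gaussian mechanism that HGM generalizes can be sketched as below; this is the standard textbook baseline, not the heterogeneous mechanism proposed in the paper, which redistributes noise magnitudes across coordinates.

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta,
                       rng=np.random.default_rng(0)):
    # Classical (eps, delta)-DP Gaussian mechanism: add N(0, sigma^2)
    # noise calibrated to the L2 sensitivity of the query.
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))
```

HGM's contribution is precisely that the single `sigma` here becomes a vector of per-coordinate noise scales, which the paper shows can be tuned to trade privacy budget against robustness to adversarial examples.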
no code implementations • 23 Mar 2019 • NhatHai Phan, My T. Thai, Ruoming Jin, Han Hu, Dejing Dou
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples.
Cryptography and Security
no code implementations • 11 Apr 2015 • Yelong Shen, Ruoming Jin, Jianshu Chen, Xiaodong He, Jianfeng Gao, Li Deng
Co-occurrence data is a common and important information source in many areas, such as word co-occurrence in sentences, friend co-occurrence in social networks, and product co-occurrence in commercial transaction data; such data contains rich correlation and clustering information about the items.
4 code implementations • 18 Feb 2011 • Ruoming Jin, Victor E. Lee, Hui Hong
For the first problem, we justify several axiomatic properties necessary for a role similarity measure or metric: range, maximal similarity, automorphic equivalence, transitive similarity, and the triangle inequality.
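Some of the listed axioms can be checked numerically for a candidate similarity matrix; the sketch below covers range, maximal self-similarity, and the triangle inequality on the induced distance $d(u,v) = 1 - S[u,v]$. Automorphic equivalence and transitive similarity need the graph structure itself, so they are omitted here, and the function name is illustrative.

```python
import numpy as np

def check_role_similarity_axioms(S, tol=1e-9):
    # Numerically check three axioms for a role similarity matrix S:
    # (1) range: all entries lie in [0, 1];
    # (2) maximal similarity: each node is most similar to itself;
    # (3) triangle inequality on the distance d(u, v) = 1 - S[u, v].
    n = S.shape[0]
    in_range = np.all((S >= -tol) & (S <= 1 + tol))
    maximal = np.all(np.diag(S) >= S.max(axis=1) - tol)
    D = 1.0 - S
    triangle = all(
        D[i, j] <= D[i, k] + D[k, j] + tol
        for i in range(n) for j in range(n) for k in range(n)
    )
    return bool(in_range and maximal and triangle)
```

A measure satisfying all of these (plus the graph-dependent axioms) is what the paper calls an admissible role similarity metric.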
Social and Information Networks • Physics and Society • H.2.8