no code implementations • 30 Mar 2024 • Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan Zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, Di Wang
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models.
no code implementations • 30 Mar 2024 • Muhammad Asif Ali, ZhengPing Li, Shu Yang, Keyuan Cheng, Yang Cao, Tianhao Huang, Lijie Hu, Lu Yu, Di Wang
Large language models (LLMs) have shown exceptional abilities across a wide range of natural language processing tasks.
no code implementations • 30 Mar 2024 • Shu Yang, Jiayuan Su, Han Jiang, Mengdi Li, Keyuan Cheng, Muhammad Asif Ali, Lijie Hu, Di Wang
With the rise of large language models (LLMs), ensuring that they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, has become crucial.
no code implementations • 17 Feb 2024 • Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Lijie Hu, Di Wang
Adapting large language models (LLMs) to new domains/tasks and enabling them to be efficient lifelong learners is a pivotal challenge.
no code implementations • 17 Feb 2024 • Shu Yang, Muhammad Asif Ali, Lu Yu, Lijie Hu, Di Wang
The increasing significance of large models and their multi-modal variants in societal information processing has ignited debates on social safety and ethics.
no code implementations • 29 Nov 2023 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
However, ViTs suffer from issues with explanation faithfulness, as their focal points are fragile to adversarial attacks and can be easily changed with even slight perturbations on the input image.
no code implementations • 29 Nov 2023 • Jia Li, Lijie Hu, Jingfeng Zhang, Tianhang Zheng, Hua Zhang, Di Wang
In this paper, we address the limitations of existing text-to-image diffusion models in generating demographically fair results when given human-related descriptions.
no code implementations • 22 Jan 2023 • Lijie Hu, Ivan Habernal, Lei Shen, Di Wang
In this paper, we provide the first systematic review of recent advances in DP deep learning models in NLP.
no code implementations • 23 Nov 2022 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
Results show that SEAT is more stable against different perturbations and randomness while also preserving the explainability of attention, indicating that it provides a more faithful explanation.
no code implementations • 31 Jul 2021 • Jinyan Su, Lijie Hu, Di Wang
Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output could achieve an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $(\epsilon, \delta)$-DP when $\theta\geq 2$, where $n$ is the sample size and $d$ is the dimension of the space.
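As a sanity check on the bound above (an observation not stated in the abstract snippet itself): for the boundary case $\theta = 2$, the exponent $\frac{\theta}{\theta-1}$ equals $2$, and the bound specializes to the familiar rate for DP-SCO under quadratic growth (e.g. strongly convex losses):

```latex
\tilde{O}\!\left(\left(\frac{1}{\sqrt{n}}
  + \frac{\sqrt{d\log\frac{1}{\delta}}}{n\epsilon}\right)^{2}\right)
= \tilde{O}\!\left(\frac{1}{n}
  + \frac{d\log\frac{1}{\delta}}{n^{2}\epsilon^{2}}\right),
```

where the cross term from squaring is absorbed into the $\tilde{O}$, since it is dominated by the maximum of the two squared terms. As $\theta \to \infty$ the exponent tends to $1$, recovering the standard rate for general convex Lipschitz losses.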
no code implementations • 23 Jul 2021 • Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang
To better understand the challenges arising from irregular data distribution, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space.
no code implementations • 22 Oct 2020 • Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu
To address this issue, we propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
no code implementations • 1 Oct 2019 • Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu
In the second part of the paper, we extend our idea to the problem of estimating non-linear regressions and show similar results as in GLMs for both multivariate Gaussian and sub-Gaussian cases.