1 code implementation • ICML 2020 • Xu Chu, Yang Lin, Xiting Wang, Xin Gao, Qi Tong, Hailong Yu, Yasha Wang
Distance metric learning (DML) aims to learn a representation space equipped with a metric such that examples from the same class are closer to each other than examples from different classes with respect to that metric.
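A minimal sketch of the objective described above, using a generic triplet loss rather than the paper's specific formulation; the margin value is an illustrative choice:

```python
# Generic triplet-loss sketch of the DML objective: same-class pairs should end
# up closer than cross-class pairs by a margin. Not the paper's specific method.
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """anchor/positive share a class; negative comes from a different class."""
    d_pos = F.pairwise_distance(anchor, positive)  # same-class distance
    d_neg = F.pairwise_distance(anchor, negative)  # cross-class distance
    return F.relu(d_pos - d_neg + margin).mean()   # push d_pos + margin below d_neg
```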
1 code implementation • 29 Aug 2024 • Qian Cao, Xu Chen, Ruihua Song, Xiting Wang, Xinting Huang, Yuchen Ren
Image captioning, which generates natural language descriptions of the visual information in an image, is a crucial task in vision-language research.
no code implementations • 6 Jun 2024 • Jinghan Zhang, Xiting Wang, Yiqiao Jin, Changyu Chen, Xinhao Zhang, Kunpeng Liu
The reward model for Reinforcement Learning from Human Feedback (RLHF) has proven effective in fine-tuning Large Language Models (LLMs).
1 code implementation • 4 Jun 2024 • Jinghan Zhang, Xiting Wang, Weijieying Ren, Lu Jiang, Dongjie Wang, Kunpeng Liu
To address these limitations, we introduce the Retrieval Augmented Thought Tree (RATT), a novel thought structure that considers both overall logical soundness and factual correctness at each step of the thinking process.
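A hypothetical sketch of the tree-expansion idea, in which each candidate thought is scored for both logical soundness and factual correctness before the best candidates are kept; the scoring helpers are illustrative stubs, not the paper's API:

```python
# Illustrative-only sketch: score candidate thoughts on logic and facts, then
# keep the highest-scoring ones as children. Both scorers are placeholder stubs.
from dataclasses import dataclass, field

@dataclass
class ThoughtNode:
    text: str
    score: float = 0.0
    children: list = field(default_factory=list)

def logic_score(thought: str) -> float:    # stub: e.g., an LLM self-evaluation
    return 0.5

def factual_score(thought: str) -> float:  # stub: e.g., agreement with retrieved evidence
    return 0.5

def expand(node: ThoughtNode, candidates: list[str], beam: int = 2) -> None:
    scored = [ThoughtNode(c, logic_score(c) + factual_score(c)) for c in candidates]
    node.children = sorted(scored, key=lambda n: n.score, reverse=True)[:beam]
```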
1 code implementation • 29 Apr 2024 • Meng Li, Haoran Jin, Ruixuan Huang, Zhihao Xu, Defu Lian, Zijia Lin, Di Zhang, Xiting Wang
Based on this, we quantify faithfulness via the difference in the output upon perturbation.
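One way to read this quantification, sketched under the assumption that the explanation marks a set of important input features and that the model returns a scalar output; this is not necessarily the paper's exact definition:

```python
# Assumed form of the faithfulness measure: perturb the features an explanation
# marks as important and record how much the model output changes.
import numpy as np

def faithfulness(model, x, important_idx, noise=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x_pert = x.copy()
    x_pert[important_idx] += rng.normal(0.0, noise, size=len(important_idx))
    return float(np.abs(model(x) - model(x_pert)))  # output difference under perturbation
```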
no code implementations • 18 Apr 2024 • Zhihao Xu, Ruixuan Huang, Changyu Chen, Shuai Wang, Xiting Wang
Despite careful safety alignment, current large language models (LLMs) remain vulnerable to various attacks.
1 code implementation • 4 Mar 2024 • Changyu Chen, Xiting Wang, Ting-En Lin, Ang Lv, Yuchuan Wu, Xin Gao, Ji-Rong Wen, Rui Yan, Yongbin Li
Furthermore, it is complementary to existing methods.
no code implementations • 16 Nov 2023 • Jing Yao, Wei Xu, Jianxun Lian, Xiting Wang, Xiaoyuan Yi, Xing Xie
In this paper, we propose DOKE, a general paradigm that augments LLMs with DOmain-specific KnowledgE to enhance their performance in practical applications.
no code implementations • 15 Nov 2023 • Jing Yao, Xiaoyuan Yi, Xiting Wang, Yifan Gong, Xing Xie
The rapid advancement of Large Language Models (LLMs) has attracted much attention to value alignment for their responsible development.
no code implementations • 26 Oct 2023 • Xiaoyuan Yi, Jing Yao, Xiting Wang, Xing Xie
Big models have greatly advanced AI's ability to understand, generate, and manipulate information and content, enabling numerous applications.
no code implementations • 25 Oct 2023 • Xiting Wang, Liming Jiang, Jose Hernandez-Orallo, David Stillwell, Luning Sun, Fang Luo, Xing Xie
Comprehensive and accurate evaluation of general-purpose AI systems such as large language models enables effective mitigation of their risks and a deeper understanding of their capabilities.
no code implementations • 23 Aug 2023 • Jing Yao, Xiaoyuan Yi, Xiting Wang, Jindong Wang, Xing Xie
Big models, exemplified by Large Language Models (LLMs), are typically pre-trained on massive data and composed of an enormous number of parameters; they not only achieve significantly improved performance across diverse tasks but also exhibit emergent capabilities absent in smaller models.
1 code implementation • 16 Jun 2023 • Changyu Chen, Xiting Wang, Yiqiao Jin, Victor Ye Dong, Li Dong, Jie Cao, Yi Liu, Rui Yan
In reinforcement learning (RL), there are two major settings for interacting with the environment: online and offline.
1 code implementation • 27 Apr 2023 • Yuntao Du, Jianxun Lian, Jing Yao, Xiting Wang, Mingqi Wu, Lu Chen, Yunjun Gao, Xing Xie
In recent decades, there have been significant advancements in latent embedding-based CF methods for improved accuracy, such as matrix factorization, neural collaborative filtering, and LightGCN.
1 code implementation • 15 Mar 2023 • Sungwon Han, Seungeon Lee, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xiting Wang, Xing Xie, Meeyoung Cha
Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications.
1 code implementation • 21 Dec 2022 • Dongmin Hyun, Xiting Wang, Chanyoung Park, Xing Xie, Hwanjo Yu
We formulate the unsupervised summarization based on the Markov decision process with rewards representing the summary quality.
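A schematic view of that formulation, under the assumption that the state is the partial summary, the action is whether to include the next sentence, and the terminal reward scores summary quality; the reward function below is a toy stub:

```python
# Assumed MDP structure for unsupervised summarization: state = partial summary,
# action = include/skip the next sentence, reward = summary quality (toy stub).
def summary_quality(summary: list[str]) -> float:
    return -abs(len(summary) - 3)                 # toy reward: prefer ~3 sentences

def rollout(sentences: list[str], policy) -> tuple[list[str], float]:
    state: list[str] = []                         # the partial summary is the state
    for sent in sentences:
        if policy(state, sent):                   # action: include or skip this sentence
            state.append(sent)
    return state, summary_quality(state)          # terminal reward scores the summary
```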
1 code implementation • 16 Dec 2022 • Yuxi Feng, Xiaoyuan Yi, Xiting Wang, Laks V. S. Lakshmanan, Xing Xie
Augmented only by self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space and suffer from a constrained generalization boundary.
no code implementations • 24 Nov 2022 • Yiqiao Jin, Xiting Wang, Yaru Hao, Yizhou Sun, Xing Xie
In this paper, we move towards combining large parametric models with non-parametric prototypical networks.
1 code implementation • 13 Oct 2022 • Seungeon Lee, Xiting Wang, Sungwon Han, Xiaoyuan Yi, Xing Xie, Meeyoung Cha
We present SELOR, a framework for integrating self-explaining capabilities into a given deep model to achieve both high prediction performance and human precision.
no code implementations • 19 Jun 2022 • Zhen Li, Xiting Wang, Weikai Yang, Jing Wu, Zhengyan Zhang, Zhiyuan Liu, Maosong Sun, Hui Zhang, Shixia Liu
The rapid development of deep natural language processing (NLP) models for text classification has led to an urgent need for a unified understanding of these models proposed individually.
1 code implementation • 10 Apr 2022 • Tao Qi, Fangzhao Wu, Chuhan Wu, Peijie Sun, Le Wu, Xiting Wang, Yongfeng Huang, Xing Xie
To learn provider-fair representations from biased data, we employ provider-biased representations that inherit the provider bias from the data.
no code implementations • 23 Jan 2022 • Chao Feng, Defu Lian, Xiting Wang, Zheng Liu, Xing Xie, Enhong Chen
Instead of searching for the query's nearest neighbor, we search the proximity graph for the item with the maximum inner product with the query.
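A minimal greedy-routing sketch of that idea, assuming a simple neighbor-list graph and dense item vectors; this is a simplification, not the paper's retrieval algorithm:

```python
# Greedy walk on a proximity graph that maximizes inner product with the query
# instead of minimizing distance. Assumed simplification of the routing step.
import numpy as np

def greedy_mips(graph: dict[int, list[int]], vectors: np.ndarray,
                query: np.ndarray, start: int) -> int:
    current, best = start, float(vectors[start] @ query)
    improved = True
    while improved:
        improved = False
        for nb in graph[current]:                 # move to the neighbor whose item
            score = float(vectors[nb] @ query)    # has a larger inner product with the query
            if score > best:
                current, best, improved = nb, score, True
    return current                                # local optimum of the inner product
```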
1 code implementation • 13 Sep 2021 • Yiqiao Jin, Xiting Wang, Ruichao Yang, Yizhou Sun, Wei Wang, Hao Liao, Xing Xie
The detection of fake news often requires sophisticated reasoning skills, such as logically combining information by considering subtle word-level clues.
1 code implementation • ACL 2021 • Xiang Ao, Xiting Wang, Ling Luo, Ying Qiao, Qing He, Xing Xie
To build up a benchmark for this problem, we publicize a large-scale dataset named PENS (PErsonalized News headlineS).
1 code implementation • 18 Feb 2021 • Le Wu, Lei Chen, Pengyang Shao, Richang Hong, Xiting Wang, Meng Wang
For each user, this transformation is achieved through adversarial learning on a user-centric graph, which obfuscates each sensitive feature in both the filtered user embedding and the user's subgraph structures.
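A much-simplified sketch of the general adversarial-obfuscation idea (a plain filter plus an attribute discriminator, not the paper's graph-based, per-user formulation); the dimensions and linear modules are illustrative assumptions:

```python
# Simplified adversarial filtering: a discriminator tries to recover the
# sensitive attribute from the filtered embedding, and the filter is trained
# to fool it. Dimensions and linear modules are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F

filter_net = nn.Linear(64, 64)      # maps a raw user embedding to a filtered one
discriminator = nn.Linear(64, 2)    # predicts a binary sensitive attribute

def adversarial_losses(user_emb, sensitive_label):
    filtered = filter_net(user_emb)
    logits = discriminator(filtered)
    d_loss = F.cross_entropy(logits, sensitive_label)  # discriminator learns the attribute
    f_loss = -d_loss                                   # filter maximizes the discriminator's loss
    return d_loss, f_loss
```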
no code implementations • 21 Sep 2020 • Weikai Yang, Xiting Wang, Jie Lu, Wenwen Dou, Shixia Liu
The novelty of our approach includes 1) automatically constructing constraints for hierarchical clustering using knowledge (knowledge-driven) and intrinsic data distribution (data-driven), and 2) enabling the interactive steering of clustering through a visual interface (user-driven).
no code implementations • 30 Jun 2020 • Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie
In this paper, we propose a fairness-aware news recommendation approach with decomposed adversarial learning and orthogonality regularization, which can alleviate unfairness in news recommendation brought by the biases of sensitive user attributes.
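A hedged sketch of the orthogonality-regularization part: the bias-free user embedding is penalized for being correlated with the bias-aware one, so the sensitive information is decomposed away. This is an assumed simplification, not the paper's exact loss:

```python
# Assumed simplification: push the bias-free embedding to be orthogonal to the
# bias-aware (attribute-dependent) embedding via a cosine-similarity penalty.
import torch
import torch.nn.functional as F

def orthogonality_penalty(bias_free: torch.Tensor, bias_aware: torch.Tensor) -> torch.Tensor:
    return F.cosine_similarity(bias_free, bias_aware, dim=-1).abs().mean()
```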
no code implementations • 23 Dec 2019 • Chi Xu, Hao Feng, Guoxin Yu, Min Yang, Xiting Wang, Xiang Ao
In this paper, we aim to improve ATSA by discovering the potential aspect terms corresponding to the predicted sentiment polarity when the aspect terms of a test sentence are unknown.
2 code implementations • 20 Apr 2019 • Le Wu, Peijie Sun, Yanjie Fu, Richang Hong, Xiting Wang, Meng Wang
The key idea of our proposed model is that we design a layer-wise influence propagation structure to model how users' latent embeddings evolve as the social diffusion process continues.
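A schematic layer-wise propagation step, assuming a row-normalized social adjacency matrix and dense user embeddings; this illustrates the diffusion idea rather than the paper's exact architecture:

```python
# Assumed sketch of layer-wise influence propagation: at each layer a user's
# embedding accumulates the averaged embeddings of social neighbors, modeling
# how preferences diffuse through the network.
import numpy as np

def propagate(embeddings: np.ndarray, adjacency: np.ndarray, layers: int = 2) -> np.ndarray:
    deg = adjacency.sum(axis=1, keepdims=True) + 1e-8   # avoid division by zero
    norm_adj = adjacency / deg                          # each user averages over friends
    h = embeddings
    for _ in range(layers):
        h = h + norm_adj @ h                            # friends' influence, layer by layer
    return h
```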
no code implementations • 7 Nov 2018 • Le Wu, Peijie Sun, Richang Hong, Yanjie Fu, Xiting Wang, Meng Wang
Building on a classical CF model, the key idea of our proposed model is to borrow the strengths of GCNs to capture how users' preferences are influenced by the social diffusion process in social networks.
no code implementations • 4 Feb 2017 • Shixia Liu, Xiting Wang, Mengchen Liu, Jun Zhu
Interactive model analysis, the process of understanding, diagnosing, and refining a machine learning model with the help of interactive visualization, is very important for users to efficiently solve real-world artificial intelligence and data mining problems.
no code implementations • 13 Dec 2015 • Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, Yangqiu Song
To facilitate users in analyzing the flow, we present a method for modeling flow behaviors that aims to identify the lead-lag relationships between word clusters of different user groups.
1 code implementation • 13 Dec 2015 • Shixia Liu, Jialun Yin, Xiting Wang, Weiwei Cui, Kelei Cao, Jian Pei
To this end, we learn a set of streaming tree cuts from topic trees based on user-selected focus nodes.