no code implementations • 18 Mar 2025 • Dipin Khati, Yijin Liu, David N. Palacio, Yixuan Zhang, Denys Poshyvanyk
To bring clarity to the current research status and identify opportunities for future work, we conducted a comprehensive review of 88 papers: a systematic literature review of 18 papers focused on LLMs in SE, complemented by an analysis of 70 papers from broader trust literature.
no code implementations • 14 Mar 2025 • Yixuan Zhang, Qing Chang, Yuxi Wang, Guang Chen, Zhaoxiang Zhang, Junran Peng
Speech-driven 3D facial animation seeks to produce lifelike facial expressions that are synchronized with the speech content and its emotional nuances, finding applications in various multimedia fields.
1 code implementation • 18 Feb 2025 • Zenan Zhai, Hao Li, Xudong Han, Zhenxuan Zhang, Yixuan Zhang, Timothy Baldwin, Haonan Li
Recent advances in large language models (LLMs) have shown that they can answer questions requiring complex reasoning.
no code implementations • 11 Feb 2025 • Quyu Kong, Yixuan Zhang, Yang Liu, Panrong Tong, Enqi Liu, Feng Zhou
Temporal Point Processes (TPPs) have been widely used for event sequence modeling, but they often struggle to incorporate rich textual event descriptions effectively.
no code implementations • 29 Jan 2025 • Franklin Alvarez, Yixuan Zhang, Daniel Kipping, Waldo Nogueira
A computational model that uses a three-dimensional (3D) representation of the peripheral auditory system of CI users was developed to predict categorical loudness from the simulated peripheral neural activity.
no code implementations • 24 Jan 2025 • Feng Zhou, Quyu Kong, Yixuan Zhang
Temporal point processes (TPPs) are stochastic process models used to characterize event sequences occurring in continuous time.
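For reference, a TPP is typically specified through its conditional intensity function; the standard definition below is general background, not specific to this paper.

```latex
% Conditional intensity: instantaneous event rate at time t given the history H_t of past events.
\lambda^{*}(t) \;=\; \lim_{\Delta t \to 0}
\frac{\mathbb{P}\!\left(\text{one event in } [t, t+\Delta t) \mid \mathcal{H}_t\right)}{\Delta t}
```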
no code implementations • 25 Dec 2024 • Ruiqi Liu, Xingyu Liu, Xiaohao Xu, Yixuan Zhang, Yongxin Ge, Lubin Weng
Group Re-identification (G-ReID) faces greater complexity than individual Re-identification (ReID) due to challenges like mutual occlusion, dynamic member interactions, and evolving group structures.
no code implementations • 15 Dec 2024 • Yixuan Zhang, Zhidong Li, Yang Wang, Fang Chen, Xuhui Fan, Feng Zhou
Machine learning algorithms often struggle to eliminate inherent data biases, particularly those arising from unreliable labels, which poses a significant challenge in ensuring fairness.
1 code implementation • 14 Dec 2024 • Junliang Lyu, Yixuan Zhang, Xiaoling Lu, Feng Zhou
This work addresses a key limitation in current federated learning approaches, which predominantly focus on homogeneous tasks, neglecting the task diversity on local devices.
no code implementations • 27 Nov 2024 • Yixuan Zhang, Hui Yang, Chuanchen Luo, Junran Peng, Yuxi Wang, Zhaoxiang Zhang
Generating realistic 3D human-object interactions (HOIs) from text descriptions is an active research topic with potential applications in virtual and augmented reality, robotics, and animation.
no code implementations • 10 Nov 2024 • Yuewen Sun, Lingjing Kong, Guangyi Chen, Loka Li, Gongxu Luo, Zijian Li, Yixuan Zhang, Yujia Zheng, Mengyue Yang, Petar Stojanov, Eran Segal, Eric P. Xing, Kun Zhang
Theoretically, we consider a nonparametric latent distribution (cf. parametric assumptions in previous work) that allows for causal relationships across potentially different modalities.
1 code implementation • 4 Oct 2024 • Zicheng Sun, Yixuan Zhang, Zenan Ling, Xuhui Fan, Feng Zhou
Existing permanental processes often impose constraints on kernel types or stationarity, limiting the model's expressiveness.
no code implementations • 27 May 2024 • Dongyan Huo, Yixuan Zhang, Yudong Chen, Qiaomin Xie
By leveraging the smoothness and recurrence properties of the SA updates, we develop a fine-grained analysis of the correlation between the SA iterates $\theta_k$ and Markovian data $x_k$.
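As background, here is a minimal sketch of constant-stepsize stochastic approximation driven by Markovian data; the update map, transition matrix, and stepsize are illustrative placeholders, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain driving the data x_k (illustrative transition matrix).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# State-dependent affine update maps f(theta, x) = A[x] @ theta + b[x] (placeholders).
A = {0: np.array([[-1.0, 0.2], [0.0, -0.5]]),
     1: np.array([[-0.7, 0.0], [0.1, -1.2]])}
b = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

alpha = 0.05          # constant stepsize
theta = np.zeros(2)   # SA iterate theta_k
x = 0                 # Markovian data x_k

for k in range(10_000):
    x = rng.choice(2, p=P[x])                    # advance the underlying Markov chain
    theta = theta + alpha * (A[x] @ theta + b[x])  # theta_{k+1} = theta_k + alpha * f(theta_k, x_{k+1})

print(theta)  # with a constant stepsize the iterates fluctuate around a stepsize-dependent limit
```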
no code implementations • 10 Apr 2024 • Murong Yue, Wenhan Lyu, Wijdane Mifdal, Jennifer Suh, Yixuan Zhang, Ziyu Yao
Mathematical modeling (MM) is considered a fundamental skill for students in STEM disciplines.
no code implementations • 9 Apr 2024 • Yixuan Zhang, Dongyan Huo, Yudong Chen, Qiaomin Xie
Motivated by Q-learning, we study nonsmooth contractive stochastic approximation (SA) with constant stepsize.
1 code implementation • 31 Mar 2024 • Lizhi Lin, Honglin Mu, Zenan Zhai, Minghan Wang, Yuxia Wang, Renxi Wang, Junjie Gao, Yixuan Zhang, Wanxiang Che, Timothy Baldwin, Xudong Han, Haonan Li
Generative models are rapidly gaining popularity and being integrated into everyday applications, raising concerns over their safe use as various vulnerabilities are exposed.
no code implementations • 1 Mar 2024 • Yixuan Zhang, Feng Zhou
Fine-tuning pre-trained models is a widely employed technique in numerous real-world applications.
1 code implementation • 19 Feb 2024 • Loka Li, Zhenhao Chen, Guangyi Chen, Yixuan Zhang, Yusheng Su, Eric Xing, Kun Zhang
We have experimentally observed that LLMs possess the capability to understand the "confidence" in their own responses.
1 code implementation • 18 Feb 2024 • Renxi Wang, Haonan Li, Xudong Han, Yixuan Zhang, Timothy Baldwin
However, LLMs are optimized for language generation instead of tool use during training or alignment, limiting their effectiveness as agents.
1 code implementation • 5 Feb 2024 • Zenan Ling, Longbo Li, Zhanbo Feng, Yixuan Zhang, Feng Zhou, Robert C. Qiu, Zhenyu Liao
Deep equilibrium models (DEQs), as typical implicit neural networks, have demonstrated remarkable success on various tasks.
no code implementations • 25 Jan 2024 • Yixuan Zhang, Qiaomin Xie
By connecting the constant stepsize Q-learning to a time-homogeneous Markov chain, we show the distributional convergence of the iterates in Wasserstein distance and establish its exponential convergence rate.
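As an illustration of the object being analyzed, a minimal tabular Q-learning loop with a constant stepsize; under a fixed behavior policy the pair $(Q_k, s_k)$ evolves as a time-homogeneous Markov chain. The MDP, policy, and parameters below are placeholders, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 2
gamma, alpha = 0.9, 0.1          # discount factor, constant stepsize

# Illustrative random MDP (transition probabilities and rewards are placeholders).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state distribution
R = rng.uniform(0, 1, size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))
s = 0
for k in range(50_000):
    a = rng.integers(n_actions)                  # behavior policy: uniform exploration
    s_next = rng.choice(n_states, p=P[s, a])
    # Constant-stepsize Q-learning update.
    td_target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    s = s_next
```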
1 code implementation • 10 Jan 2024 • Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao liu, Heng Ji, Hongyi Wang, huan zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, Joaquin Vanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, Yue Zhao
This paper introduces TrustLLM, a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions.
no code implementations • 2023 • Yaqin Ye, Yue Xiao, Yuxuan Zhou, Shengwen Li, Yuanfei Zang, Yixuan Zhang
Second, we designed a dynamic graph adjustment module to update the adjacency matrix used in each training step.
no code implementations • 18 Dec 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
Through extensive experiments involving language and multi-modal models on semantic understanding, logical reasoning, and generation tasks, we demonstrate that both textual and visual EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
no code implementations • 14 Dec 2023 • Yixuan Zhang, Boyu Li, Zenan Ling, Feng Zhou
In this paper, we demonstrate that despite only having access to the biased labels, it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning.
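A simplified sketch of a confident-learning-style filter of the kind referenced here: the per-class threshold rule is the standard confident-learning heuristic, while the fairness-specific selection in the paper is not reproduced.

```python
import numpy as np

def confident_learning_filter(pred_probs: np.ndarray, noisy_labels: np.ndarray) -> np.ndarray:
    """Return a boolean mask of examples kept (likely correctly labeled).

    pred_probs:   (n, K) out-of-sample predicted class probabilities.
    noisy_labels: (n,) observed, possibly biased labels.
    """
    n, K = pred_probs.shape
    # Per-class threshold: average confidence the model assigns to class j
    # on examples that are labeled j.
    thresholds = np.array([
        pred_probs[noisy_labels == j, j].mean() if np.any(noisy_labels == j) else 1.0
        for j in range(K)
    ])
    # Flag an example as suspect if some class passes its threshold
    # but the observed label's class does not (simplified criterion).
    suspect = np.zeros(n, dtype=bool)
    for i in range(n):
        above = np.where(pred_probs[i] >= thresholds)[0]
        if len(above) > 0 and noisy_labels[i] not in above:
            suspect[i] = True
    return ~suspect
```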
1 code implementation • 5 Dec 2023 • Yixuan Zhang, Heming Wang, DeLiang Wang
Accurately detecting voiced intervals in speech signals is a critical step in pitch tracking and has numerous applications.
1 code implementation • 14 Nov 2023 • Zhilin Zhao, Longbing Cao, Yixuan Zhang, Kun-Yu Lin, Wei-Shi Zheng
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available, given a standard network.
1 code implementation • 26 Oct 2023 • Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie
We hope that the framework and environment can be a promising testbed to study competition that fosters understanding of society.
1 code implementation • 14 Oct 2023 • Yixuan Zhang, Haonan Li
To bridge this gap, we present ACLUE, an evaluation benchmark designed to assess the capability of language models in comprehending ancient Chinese.
1 code implementation • 10 Oct 2023 • Yuan Li, Yixuan Zhang, Lichao Sun
We propose a novel framework that equips collaborative generative agents with human-like reasoning abilities and specialized skills.
no code implementations • 27 Sep 2023 • Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu
In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process.
no code implementations • 27 Sep 2023 • Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu
Acoustic howling suppression (AHS) is a critical challenge in audio communication systems.
no code implementations • 16 Sep 2023 • Heming Wang, Meng Yu, Hao Zhang, Chunlei Zhang, Zhongweiyang Xu, Muqiao Yang, Yixuan Zhang, Dong Yu
Enhancing speech signal quality in adverse acoustic environments is a persistent challenge in speech processing.
no code implementations • 14 Jul 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts.
1 code implementation • 15 Jun 2023 • Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging.
no code implementations • 11 Feb 2023 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
In recent years, by utilizing optimization techniques to formulate the propagation of deep models, a variety of so-called Optimization-Derived Learning (ODL) approaches have been proposed to address diverse learning and vision tasks.
no code implementations • 29 Jan 2023 • Yixuan Zhang, Meng Yu, Hao Zhang, Dong Yu, DeLiang Wang
The robustness of the Kalman filter to double-talk and its rapid convergence make it a popular approach for addressing acoustic echo cancellation (AEC) challenges.
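For background, the generic Kalman filter recursion in standard notation; the AEC-specific state-space model used in the paper is not reproduced here.

```latex
% Predict step: state estimate \hat{x}, covariance P, transition F, process noise Q.
\hat{x}_{k|k-1} = F\,\hat{x}_{k-1|k-1}, \qquad
P_{k|k-1} = F\,P_{k-1|k-1}\,F^{\top} + Q
% Update step: observation z_k, observation matrix H, observation noise R, Kalman gain K_k.
K_k = P_{k|k-1} H^{\top}\left( H P_{k|k-1} H^{\top} + R \right)^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left( z_k - H \hat{x}_{k|k-1} \right), \qquad
P_{k|k} = \left( I - K_k H \right) P_{k|k-1}
```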
no code implementations • 1 Aug 2022 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
In other words, fair pre-processing methods ignore the discrimination encoded in the labels, either during the learning procedure or during the evaluation stage.
no code implementations • 16 Jun 2022 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
Recently, Optimization-Derived Learning (ODL) has attracted attention from learning and vision areas, which designs learning models from the perspective of optimization.
no code implementations • 28 Oct 2021 • Yixuan Zhang, Zhuo Chen, Jian Wu, Takuya Yoshioka, Peidong Wang, Zhong Meng, Jinyu Li
In this paper, we propose to apply recurrent selective attention network (RSAN) to CSS, which generates a variable number of output channels based on active speaker counting.
1 code implementation • 11 Oct 2021 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
We also extend BVFSM to address BLO with additional functional constraints.
no code implementations • 7 Jul 2021 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
Therefore, we propose a Bias-Tolerant FAir Regularized Loss (B-FARL), which tries to regain the benefits using data affected by label bias and selection bias.
no code implementations • 9 Jun 2021 • Feng Zhou, Quyu Kong, Yixuan Zhang, Cheng Feng, Jun Zhu
Hawkes processes are a class of point processes that can model self- and mutual-exciting phenomena.
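For reference, the univariate Hawkes conditional intensity with an exponential kernel is $\lambda(t) = \mu + \sum_{t_i < t} \alpha \beta e^{-\beta (t - t_i)}$; below is a small simulation sketch using Ogata's thinning algorithm, with illustrative parameter values.

```python
import numpy as np

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=100.0, seed=0):
    """Simulate a univariate Hawkes process on [0, T] by Ogata's thinning.

    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i)).
    Stationarity requires alpha < 1 (branching ratio).
    """
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    intensity = lambda s: mu + sum(alpha * beta * np.exp(-beta * (s - ti)) for ti in events)
    while t < T:
        lam_bar = intensity(t)                     # valid upper bound: intensity decays between events
        t += rng.exponential(1.0 / lam_bar)        # candidate inter-arrival time
        if t < T and rng.uniform() <= intensity(t) / lam_bar:
            events.append(t)                       # accept candidate (thinning step)
    return np.array(events)
```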
no code implementations • ICLR 2021 • Feng Zhou, Yixuan Zhang, Jun Zhu
The Hawkes process provides an effective statistical framework for analyzing the time-dependent interaction of neuronal spiking activities.
no code implementations • 3 Mar 2020 • Aditeya Pandey, Yixuan Zhang, John A. Guerra-Gomez, Andrea G. Parker, Michelle A. Borkin
In the task abstraction phase of the visualization design process, including in "design studies", a practitioner maps the observed domain goals to generalizable abstract tasks using visualization theory in order to better understand and address the users' needs.
no code implementations • CVPR 2020 • Wei Xiong, Yutong He, Yixuan Zhang, Wenhan Luo, Lin Ma, Jiebo Luo
In this paper, we aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image, which can thereby benefit the subsequent fine-grained image recognition and few-shot learning tasks.
no code implementations • CVPR 2020 • Haoye Dong, Xiaodan Liang, Yixuan Zhang, Xujie Zhang, Zhenyu Xie, Bowen Wu, Ziqi Zhang, Xiaohui Shen, Jian Yin
Interactive fashion image manipulation, which enables users to edit images with sketches and color strokes, is an interesting research problem with great application value.
1 code implementation • 27 Apr 2019 • Zhengyuan Yang, Yixuan Zhang, Jiebo Luo
The framework consists of a facial attention module and a hierarchical segment temporal module.
1 code implementation • 20 Jan 2018 • Zhengyuan Yang, Yixuan Zhang, Jerry Yu, Junjie Cai, Jiebo Luo
In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner.
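A minimal sketch of such a two-head multi-task setup; the backbone, layer sizes, and loss weights below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDrivingNet(nn.Module):
    """Illustrative shared-backbone network with separate steering and speed heads."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(            # shared visual feature extractor
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steer_head = nn.Linear(32, 1)        # steering-angle regression
        self.speed_head = nn.Linear(32, 1)        # speed-control regression

    def forward(self, x):
        feats = self.backbone(x)
        return self.steer_head(feats), self.speed_head(feats)

model = MultiTaskDrivingNet()
images = torch.randn(4, 3, 128, 128)              # dummy image batch
steer_gt, speed_gt = torch.randn(4, 1), torch.randn(4, 1)
steer_pred, speed_pred = model(images)
# Joint objective: weighted sum of the two regression losses (weights are placeholders).
loss = nn.functional.mse_loss(steer_pred, steer_gt) + 0.5 * nn.functional.mse_loss(speed_pred, speed_gt)
loss.backward()
```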
1 code implementation • 20 Jan 2018 • Zhongping Zhang, Yixuan Zhang, Zheng Zhou, Jiebo Luo
In this paper, we substantiate that Fast SCNN can detect drastic changes in chroma and saturation.