1 code implementation • 14 Nov 2023 • Zhilin Zhao, Longbing Cao, Yixuan Zhang
This paper introduces out-of-distribution (OOD) knowledge distillation, a pioneering learning framework that, given a standard network, is applicable whether or not in-distribution (ID) training data is available.
no code implementations • 26 Oct 2023 • Qinlin Zhao, Jindong Wang, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, Xing Xie
Large language models (LLMs) have been widely used as agents to complete different tasks, such as personal assistance or event planning.
no code implementations • 14 Oct 2023 • Yixuan Zhang, Haonan Li
To bridge this gap, we present ACLUE, an evaluation benchmark designed to assess the capability of language models in comprehending ancient Chinese.
no code implementations • 10 Oct 2023 • Yuan Li, Yixuan Zhang, Lichao Sun
We propose a novel framework that equips collaborative generative agents with human-like reasoning abilities and specialized skills.
no code implementations • 27 Sep 2023 • Hao Zhang, Yixuan Zhang, Meng Yu, Dong Yu
In this paper, we introduce a novel training framework designed to comprehensively address the acoustic howling issue by examining its fundamental formation process.
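For context on the formation process mentioned above (an illustrative toy model, not the paper's training framework): acoustic howling arises when the loudspeaker signal leaks back into the microphone and is re-amplified, so that once the closed-loop gain exceeds unity the recirculating signal keeps growing. A minimal sketch of this loop, with an assumed delay and gain, is:

```python
import numpy as np

def simulate_feedback_loop(x, loop_gain, delay):
    """Toy model of howling formation: the loudspeaker output is fed back
    into the microphone after `delay` samples and re-amplified by
    `loop_gain`. With loop_gain >= 1 the recirculating signal grows."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        feedback = y[n - delay] if n >= delay else 0.0
        y[n] = loop_gain * (x[n] + feedback)
    return y

rng = np.random.default_rng(0)
x = np.zeros(16000)
x[:1000] = 0.1 * rng.standard_normal(1000)   # short excitation burst, then silence

stable = simulate_feedback_loop(x, loop_gain=0.8, delay=400)
howling = simulate_feedback_loop(x, loop_gain=1.2, delay=400)
print(f"peak |y|, stable loop:  {np.max(np.abs(stable)):.3f}")
print(f"peak |y|, howling loop: {np.max(np.abs(howling)):.3e}")
```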
no code implementations • 27 Sep 2023 • Yixuan Zhang, Hao Zhang, Meng Yu, Dong Yu
Acoustic howling suppression (AHS) is a critical challenge in audio communication systems.
no code implementations • 16 Sep 2023 • Heming Wang, Meng Yu, Hao Zhang, Chunlei Zhang, Zhongweiyang Xu, Muqiao Yang, Yixuan Zhang, Dong Yu
Enhancing speech signal quality in adverse acoustic environments is a persistent challenge in speech processing.
no code implementations • 14 Jul 2023 • Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie
In addition to the deterministic tasks that can be automatically evaluated with existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks under both vanilla and emotional prompts.
1 code implementation • 15 Jun 2023 • Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging.
no code implementations • 11 Feb 2023 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
In recent years, a variety of so-called Optimization-Derived Learning (ODL) approaches, which use optimization techniques to formulate the propagation of deep models, have been proposed to address diverse learning and vision tasks.
no code implementations • 29 Jan 2023 • Yixuan Zhang, Meng Yu, Hao Zhang, Dong Yu, DeLiang Wang
The Kalman filter is widely used for addressing acoustic echo cancellation (AEC) problems due to its robustness to double-talk and fast convergence.
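As a generic sketch of the approach named above (not the specific method proposed in this paper): Kalman-filter-based AEC treats the echo path as the state, the recent far-end samples as the observation vector, and the microphone sample as the scalar observation. The random-walk state model and the noise variances below are assumptions.

```python
import numpy as np

def kalman_aec(far_end, mic, taps=64, q=1e-4, r=1e-2):
    """Minimal time-domain Kalman-filter echo canceller.
    State: echo-path weights w (random-walk model with process noise q).
    Observation: mic[n] = x_n^T w + near-end speech/noise (variance r)."""
    w = np.zeros(taps)          # estimated echo path
    P = np.eye(taps)            # state error covariance
    Q = q * np.eye(taps)        # process noise covariance
    out = np.zeros(len(mic))    # echo-cancelled signal
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]   # most recent far-end samples
        P = P + Q                       # prediction step (random walk)
        e = mic[n] - x @ w              # innovation: mic minus predicted echo
        s = x @ P @ x + r               # innovation variance (scalar)
        k = (P @ x) / s                 # Kalman gain
        w = w + k * e                   # state update
        P = P - np.outer(k, x) @ P      # covariance update: (I - k x^T) P
        out[n] = e
    return out, w
```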
no code implementations • 1 Aug 2022 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
In other words, fair pre-processing methods ignore the discrimination encoded in the labels during either the learning procedure or the evaluation stage.
no code implementations • 16 Jun 2022 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
Recently, Optimization-Derived Learning (ODL), which designs learning models from the perspective of optimization, has attracted attention from the learning and vision communities.
no code implementations • 28 Oct 2021 • Yixuan Zhang, Zhuo Chen, Jian Wu, Takuya Yoshioka, Peidong Wang, Zhong Meng, Jinyu Li
In this paper, we propose to apply the recurrent selective attention network (RSAN) to continuous speech separation (CSS), which generates a variable number of output channels based on active speaker counting.
1 code implementation • 11 Oct 2021 • Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
We also extend BVFSM to address bi-level optimization (BLO) with additional functional constraints.
no code implementations • 7 Jul 2021 • Yixuan Zhang, Feng Zhou, Zhidong Li, Yang Wang, Fang Chen
Therefore, we propose a Bias-Tolerant FAir Regularized Loss (B-FARL), which tries to regain the benefits using data affected by label bias and selection bias.
no code implementations • 9 Jun 2021 • Feng Zhou, Quyu Kong, Yixuan Zhang, Cheng Feng, Jun Zhu
Hawkes processes are a class of point processes able to model self- and mutual-exciting phenomena.
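For reference, the standard multivariate Hawkes conditional intensity (generic notation; the exponential kernel below is a common choice, not necessarily the parameterization used in this paper) is

```latex
% Intensity of dimension i: baseline \mu_i plus excitation from all past events t_{j,k}.
\lambda_i(t) = \mu_i + \sum_{j=1}^{D} \sum_{t_{j,k} < t} \phi_{ij}(t - t_{j,k}),
\qquad \phi_{ij}(\tau) = \alpha_{ij}\,\beta\, e^{-\beta \tau}.
```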
no code implementations • ICLR 2021 • Feng Zhou, Yixuan Zhang, Jun Zhu
The Hawkes process provides an effective statistical framework for analyzing the time-dependent interaction of neuronal spiking activities.
no code implementations • 3 Mar 2020 • Aditeya Pandey, Yixuan Zhang, John A. Guerra-Gomez, Andrea G. Parker, Michelle A. Borkin
In the task abstraction phase of the visualization design process, including in "design studies", a practitioner maps the observed domain goals to generalizable abstract tasks using visualization theory in order to better understand and address the users' needs.
no code implementations • CVPR 2020 • Wei Xiong, Yutong He, Yixuan Zhang, Wenhan Luo, Lin Ma, Jiebo Luo
In this paper, we aim to transform an image of a fine-grained category to synthesize new images that preserve the identity of the input image, thereby benefiting subsequent fine-grained image recognition and few-shot learning tasks.
no code implementations • CVPR 2020 • Haoye Dong, Xiaodan Liang, Yixuan Zhang, Xujie Zhang, Zhenyu Xie, Bowen Wu, Ziqi Zhang, Xiaohui Shen, Jian Yin
Interactive fashion image manipulation, which enables users to edit images with sketches and color strokes, is an interesting research problem with great application value.
1 code implementation • 27 Apr 2019 • Zhengyuan Yang, Yixuan Zhang, Jiebo Luo
The framework consists of a facial attention module and a hierarchical segment temporal module.
1 code implementation • 20 Jan 2018 • Zhongping Zhang, Yixuan Zhang, Zheng Zhou, Jiebo Luo
In this paper, we substantiate that Fast SCNN can detect drastic changes in chroma and saturation.
1 code implementation • 20 Jan 2018 • Zhengyuan Yang, Yixuan Zhang, Jerry Yu, Junjie Cai, Jiebo Luo
In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner.
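As an illustration of the multi-task pattern described above (a hypothetical sketch, not the paper's architecture): a shared convolutional backbone feeds two regression heads, one for steering angle and one for speed, trained with a weighted sum of per-task losses. All layer sizes, the input resolution, and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskDrivingNet(nn.Module):
    """Shared CNN backbone with two heads: steering angle and speed."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.steering_head = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1))
        self.speed_head = nn.Sequential(nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, images):
        features = self.backbone(images)             # shared representation
        return self.steering_head(features), self.speed_head(features)

model = MultiTaskDrivingNet()
images = torch.randn(8, 3, 66, 200)                  # batch of dashcam frames (illustrative size)
steer_pred, speed_pred = model(images)
loss = nn.functional.mse_loss(steer_pred, torch.randn(8, 1)) \
     + 0.5 * nn.functional.mse_loss(speed_pred, torch.rand(8, 1))   # weighted multi-task loss
loss.backward()
```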