Search Results for author: Xinghua Qu

Found 14 papers, 5 papers with code

DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation

2 code implementations • 22 Jun 2022 • Zhu Sun, Hui Fang, Jie Yang, Xinghua Qu, Hongyang Liu, Di Yu, Yew-Soon Ong, Jie Zhang

Recently, one critical issue has loomed large in the field of recommender systems: the lack of effective benchmarks for rigorous evaluation, which leads to unreproducible evaluation and unfair comparison.

Benchmarking · Recommendation Systems

Unsupervised Video Domain Adaptation for Action Recognition: A Disentanglement Perspective

1 code implementation • NeurIPS 2023 • Pengfei Wei, Lingdong Kong, Xinghua Qu, Yi Ren, Zhiqiang Xu, Jing Jiang, Xiang Yin

Specifically, we consider the generation of cross-domain videos from two sets of latent factors, one encoding the static information and another encoding the dynamic information.

Action Recognition · Disentanglement +1

Towards Building Voice-based Conversational Recommender Systems: Datasets, Potential Solutions, and Prospects

1 code implementation • 14 Jun 2023 • Xinghua Qu, Hongyang Liu, Zhu Sun, Xiang Yin, Yew Soon Ong, Lu Lu, Zejun Ma

Conversational recommender systems (CRSs) have become a crucial emerging research topic in the field of RSs, thanks to their natural advantages of explicitly acquiring user preferences via interactive conversations and revealing the reasons behind recommendations.

Recommendation Systems

Large Language Models for Intent-Driven Session Recommendations

1 code implementation • 7 Dec 2023 • Zhu Sun, Hongyang Liu, Xinghua Qu, Kaidong Feng, Yan Wang, Yew-Soon Ong

Intent-aware session recommendation (ISR) is pivotal in discerning user intents within sessions for precise predictions.

Importance Prioritized Policy Distillation

1 code implementation • KDD 2022 • Xinghua Qu, Yew-Soon Ong, Abhishek Gupta, Pengfei Wei, Zhu Sun, Zejun Ma

Given such an issue, we denote the \emph{frame importance} as its contribution to the expected reward on a particular frame, and hypothesize that adapting such frame importance could benefit the performance of the distilled student policy.

Atari Games · Decision Making +1
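The frame-importance idea above can be sketched as an importance-weighted distillation loss, where each frame's KL divergence between teacher and student action distributions is scaled by a per-frame importance score. This is a minimal illustrative sketch, not the paper's actual code; all function and variable names here are assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def weighted_distill_loss(teacher_logits, student_logits, frame_importance):
    """Per-frame KL(teacher || student) over action distributions,
    each frame scaled by its normalized importance score."""
    t = softmax(teacher_logits)                      # shape: (frames, actions)
    s = softmax(student_logits)
    kl = (t * (np.log(t + 1e-8) - np.log(s + 1e-8))).sum(axis=-1)
    w = frame_importance / frame_importance.sum()    # prioritize important frames
    return float((w * kl).sum())
```

With uniform importance this reduces to plain policy distillation; skewed weights shift the student's capacity toward the frames that contribute most to the expected reward.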

Subdomain Adaptation with Manifolds Discrepancy Alignment

no code implementations • 6 May 2020 • Pengfei Wei, Yiping Ke, Xinghua Qu, Tze-Yun Leong

Specifically, we propose to use low-dimensional manifold to represent subdomain, and align the local data distribution discrepancy in each manifold across domains.

Subdomain Adaptation · Transfer Learning

Adversary Agnostic Robust Deep Reinforcement Learning

no code implementations • 14 Aug 2020 • Xinghua Qu, Yew-Soon Ong, Abhishek Gupta, Zhu Sun

Motivated by this finding, we propose a new policy distillation loss with two terms: 1) a prescription gap maximization loss aiming at simultaneously maximizing the likelihood of the action selected by the teacher policy and the entropy over the remaining actions; 2) a corresponding Jacobian regularization loss that minimizes the magnitude of the gradient with respect to the input state.

Adversarial Robustness · Atari Games +2
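The two loss terms described above can be sketched as follows: the prescription-gap term rewards a high probability on the teacher's action plus high entropy over the remaining actions, and the Jacobian term penalizes the gradient of the logits with respect to the input state (approximated here with finite differences rather than autodiff). All names and shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def prescription_gap_loss(student_logits, teacher_action):
    """Negative of: log-likelihood of the teacher's action plus the entropy
    of the renormalized distribution over the remaining actions."""
    p = softmax(student_logits)
    rest = np.delete(p, teacher_action)
    rest = rest / rest.sum()
    entropy = -(rest * np.log(rest + 1e-8)).sum()
    return -(np.log(p[teacher_action] + 1e-8) + entropy)

def jacobian_penalty(policy_fn, state, eps=1e-4):
    """Squared Frobenius norm of d(logits)/d(state), estimated with central
    finite differences; an autodiff framework would compute this exactly."""
    rows = []
    for i in range(state.size):
        d = np.zeros_like(state)
        d[i] = eps
        rows.append((policy_fn(state + d) - policy_fn(state - d)) / (2 * eps))
    return float((np.stack(rows) ** 2).sum())
```

Minimizing the sum of the two terms pushes the student to imitate the teacher while keeping its outputs flat with respect to input perturbations, which is the stated route to adversary-agnostic robustness.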

An Improved Transfer Model: Randomized Transferable Machine

no code implementations • 27 Nov 2020 • Pengfei Wei, Xinghua Qu, Yew Soon Ong, Zejun Ma

Existing studies usually assume that the learned new feature representation is \emph{domain-invariant}, and thus train a transfer model $\mathcal{M}$ on the source domain.

Transfer Learning

Synthesising Audio Adversarial Examples for Automatic Speech Recognition

no code implementations • 29 Sep 2021 • Xinghua Qu, Pengfei Wei, Mingyong Gao, Zhu Sun, Yew-Soon Ong, Zejun Ma

Adversarial examples in automatic speech recognition (ASR) sound natural to humans yet are capable of fooling well-trained ASR models into transcribing incorrectly.

Audio Synthesis · Automatic Speech Recognition +2

Language Adaptive Cross-lingual Speech Representation Learning with Sparse Sharing Sub-networks

no code implementations • 9 Mar 2022 • Yizhou Lu, Mingkun Huang, Xinghua Qu, Pengfei Wei, Zejun Ma

It makes room for language-specific modeling by pruning out unimportant parameters for each language, without requiring any manually designed language-specific component.

Representation Learning · speech-recognition +1

Large Language Models as Evolutionary Optimizers

no code implementations • 29 Oct 2023 • Shengcai Liu, Caishun Chen, Xinghua Qu, Ke Tang, Yew-Soon Ong

Specifically, in each generation of the evolutionary search, LMEA instructs the LLM to select parent solutions from the current population and to perform crossover and mutation to generate offspring solutions.

Combinatorial Optimization · Evolutionary Algorithms
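The generation loop described above reads like a standard evolutionary algorithm in which the LLM plays the variation operator. A minimal sketch on a toy OneMax problem, with a plain heuristic standing in for the LLM prompt (`stub_proposer` and all other names are illustrative, not from the paper):

```python
import random

def evolve(fitness, propose_offspring, pop, generations=30, seed=0):
    """LMEA-style loop: each generation, a proposer (an LLM in LMEA, a stub
    here) selects parents and returns offspring via crossover and mutation;
    the top individuals survive to the next generation."""
    rng = random.Random(seed)
    for _ in range(generations):
        offspring = propose_offspring(pop, rng)
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:len(pop)]
    return max(pop, key=fitness)

def stub_proposer(pop, rng):
    """Stand-in for prompting the LLM on a OneMax bit-string problem:
    tournament-select two parents, one-point crossover, bit-flip mutation."""
    parents = sorted(rng.sample(pop, 4), key=sum, reverse=True)[:2]
    cut = rng.randrange(1, len(parents[0]))
    child = parents[0][:cut] + parents[1][cut:]
    child = [b ^ (rng.random() < 0.1) for b in child]
    return [child]
```

In LMEA itself the proposer is a natural-language prompt that asks the LLM to pick parents and describe the offspring, applied to combinatorial problems rather than bit strings.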

Dynamic In-Context Learning from Nearest Neighbors for Bundle Generation

no code implementations • 26 Dec 2023 • Zhu Sun, Kaidong Feng, Jie Yang, Xinghua Qu, Hui Fang, Yew-Soon Ong, Wenyuan Liu

To enhance reliability and mitigate the hallucination issue, we develop (1) a self-correction strategy to foster mutual improvement in both tasks without supervision signals; and (2) an auto-feedback mechanism to recurrently offer dynamic supervision based on the distinct mistakes made by ChatGPT on various neighbor sessions.

Hallucination · In-Context Learning +2
