no code implementations • 26 Feb 2025 • Yuxin Liu, M. Amin Rahimian
Learning with continuous signals in the private regime is more efficient, as smooth randomized response enhances the log-likelihood ratio over time, improving information aggregation.
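A minimal, purely illustrative sketch of the idea (not the paper's mechanism): here continuous signals are privatized with additive Gaussian noise as a stand-in for smooth randomized response, and the cumulative log-likelihood ratio between two hypotheses still grows with the number of private reports.

```python
# Illustrative only: privatize continuous signals with Gaussian noise and
# track the cumulative log-likelihood ratio (LLR) between two hypotheses.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, sigma_dp, T = 0.5, 1.0, 2.0, 200  # signal mean, signal/privacy noise scales, horizon

signals = rng.normal(mu, sigma, size=T)               # true state is +mu
reports = signals + rng.normal(0.0, sigma_dp, size=T)  # privatized reports

# Under either hypothesis (+mu vs. -mu) a report is Gaussian with variance
# sigma^2 + sigma_dp^2, so each report contributes an LLR linear in its value.
v = sigma**2 + sigma_dp**2
cumulative_llr = np.cumsum(2 * mu * reports / v)

print(f"LLR after {T} reports: {cumulative_llr[-1]:.2f}")  # grows roughly linearly in T
```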
no code implementations • 24 Feb 2025 • Jiahe Li, Xin Chen, Fanqi Shen, Junru Chen, Yuxin Liu, Daoze Zhang, Zhizhang Yuan, Fang Zhao, Meng Li, Yang Yang
Neurological disorders represent significant global health challenges, driving the advancement of brain signal analysis methods.
no code implementations • 16 Feb 2025 • Yuxin Liu, Zhenxi Song, Guoyang Xu, ZiRui Wang, Feng Wan, Yong Hu, Min Zhang, Zhiguo Zhang
Brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEP) is a popular paradigm for its simplicity and high information transfer rate (ITR).
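For context, ITR for an N-target speller is commonly computed with the standard Wolpaw formula; the small helper below (ours, not from the paper) evaluates it in bits per minute.

```python
# Standard information transfer rate (ITR) for an N-target BCI:
# bits per selection, scaled to bits per minute by the selection time.
import math

def itr_bits_per_min(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Wolpaw ITR in bits/min for n_targets classes at the given accuracy."""
    if accuracy <= 1.0 / n_targets:
        return 0.0
    bits = math.log2(n_targets)
    if accuracy < 1.0:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1))
    return bits * 60.0 / seconds_per_selection

# e.g., a 40-target SSVEP speller at 90% accuracy and 1.5 s per selection
print(f"{itr_bits_per_min(40, 0.90, 1.5):.1f} bits/min")
```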
no code implementations • 15 Jan 2025 • Weizhen Wang, Chenda Duan, Zhenghao Peng, Yuxin Liu, Bolei Zhou
Vision Language Models (VLMs) demonstrate significant potential as embodied AI agents for various mobility applications.
no code implementations • 14 Jan 2025 • ZiRui Wang, Zhenxi Song, Yi Guo, Yuxin Liu, Guoyang Xu, Min Zhang, Zhiguo Zhang
EEG decoding algorithms face challenges such as data sparsity, subject variability, and the need for precise annotations; addressing these is vital for advancing brain-computer interfaces and improving disease diagnosis.
1 code implementation • 2 Oct 2024 • Yuxin Liu, Rongjun Ge, Yuting He, Zhan Wu, Shangwen Yang, Yuan Gao, Chenyu You, Ge Wang, Yang Chen, Shuo Li
However, the resulting trade-offs degrade image quality, limiting clinical acceptability.
1 code implementation • 30 Aug 2024 • Guoyang Xu, Junqi Xue, Yuxin Liu, ZiRui Wang, Min Zhang, Zhenxi Song, Zhiguo Zhang
Multimodal sentiment analysis aims to learn representations from different modalities to identify human emotions.
no code implementations • 2 Aug 2024 • Yuxin Liu, Jimin Lin, Achintya Gopal
Traditional approaches to estimating beta in finance often involve rigid assumptions and fail to adequately capture beta dynamics, limiting their effectiveness in use cases like hedging.
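For contrast with the paper's approach, a minimal sketch of the traditional rolling-OLS beta estimate it alludes to (simulated data; the window length is an arbitrary choice of ours):

```python
# Traditional rolling beta: beta_t = Cov(r_asset, r_mkt) / Var(r_mkt) over a window.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
r_mkt = pd.Series(rng.normal(0, 0.01, n))               # simulated market returns
true_beta = 1.2
r_asset = true_beta * r_mkt + rng.normal(0, 0.005, n)    # simulated asset returns

window = 60  # trading days
rolling_beta = r_asset.rolling(window).cov(r_mkt) / r_mkt.rolling(window).var()

print(rolling_beta.dropna().tail())  # hovers around the true beta of 1.2
```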
no code implementations • 24 Mar 2024 • Lijin Wu, Shanshan Lei, Feilong Liao, Yuanjun Zheng, Yuxin Liu, Wentao Fu, Hao Song, Jiajun Zhou
As the number of IoT devices increases, security concerns become more prominent.
1 code implementation • 31 Jan 2024 • Benoit Baudry, Khashayar Etemadi, Sen Fang, Yogya Gamage, Yi Liu, Yuxin Liu, Martin Monperrus, Javier Ron, André Silva, Deepika Tiwari
The results show that LLMs can successfully generate realistic test data generators in a wide range of domains at all three levels of integrability.
no code implementations • 30 Jan 2024 • Thomas Degris, Khurram Javed, Arsalan SharifNassab, Yuxin Liu, Richard Sutton
We conclude by suggesting that combining both approaches could be a promising future direction to improve the performance of neural networks in continual learning.
no code implementations • 21 Nov 2023 • Yuxin Liu, Minshan Xie, Hanyuan Liu, Tien-Tsin Wong
In this paper, we propose a synchronized multi-view diffusion approach that allows the diffusion processes from different views to reach a consensus on the generated content early in the process, thereby ensuring texture consistency.
no code implementations • 4 Oct 2023 • Zhihao Zong, Fazhi He, Rubin Fan, Yuxin Liu
Computer-Aided Design (CAD), especially feature-based parametric CAD, plays an important role in modern industry and society.
no code implementations • 1 Jun 2023 • Jinbo Xing, Menghan Xia, Yuxin Liu, Yuechen Zhang, Yong Zhang, Yingqing He, Hanyuan Liu, Haoxin Chen, Xiaodong Cun, Xintao Wang, Ying Shan, Tien-Tsin Wong
Our method, dubbed Make-Your-Video, involves joint-conditional video generation using a Latent Diffusion Model that is pre-trained for still image synthesis and then extended to video generation by introducing temporal modules.
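A generic temporal self-attention block of the kind often inserted into image-diffusion backbones for video; this only sketches what a "temporal module" can look like, not Make-Your-Video's actual architecture.

```python
# Generic temporal self-attention over the frame axis of a video latent
# (B, C, T, H, W); illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = x.shape
        # Attend across time independently at each spatial location.
        y = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, t, c)
        y = self.norm(y)
        y, _ = self.attn(y, y, y)
        y = y.reshape(b, h, w, t, c).permute(0, 4, 3, 1, 2)
        return x + y  # residual, so pretrained spatial features are preserved

x = torch.randn(2, 64, 8, 16, 16)            # a batch of 8-frame latents
print(TemporalAttention(64)(x).shape)         # torch.Size([2, 64, 8, 16, 16])
```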
1 code implementation • 17 Oct 2022 • Yongchang Hao, Yuxin Liu, Lili Mou
We additionally propose a simple modification to stabilize the RL training on non-parallel datasets with our induced reward function.
no code implementations • 12 Oct 2022 • Yuxin Liu, Yawen Li, Yingxia Shao, Zeli Guan
Therefore, a hypergraph neural network model based on dual-channel convolution is proposed.
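For reference, a single-channel hypergraph convolution in the standard form X' = σ(D_v^{-1/2} H W D_e^{-1} Hᵀ D_v^{-1/2} X Θ); the paper's dual-channel variant is not reproduced here, and all names in the sketch are ours.

```python
# Standard (single-channel) hypergraph convolution, shown only to make the
# basic operation concrete; the dual-channel model is not reproduced.
import numpy as np

def hypergraph_conv(X, H, Theta, edge_w=None):
    """X: (n, d_in) node features, H: (n, m) incidence matrix, Theta: (d_in, d_out) weights."""
    n, m = H.shape
    w = np.ones(m) if edge_w is None else np.asarray(edge_w)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w))   # node degrees
    De_inv = np.diag(1.0 / H.sum(axis=0))         # hyperedge degrees
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)         # ReLU

# Toy incidence matrix: 6 nodes, 3 hyperedges (every node/edge has nonzero degree).
H = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=float)
rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))
print(hypergraph_conv(X, H, rng.normal(size=(4, 8))).shape)  # (6, 8)
```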
no code implementations • 30 Mar 2021 • Weiqing Min, Zhiling Wang, Yuxin Liu, Mengjiang Luo, Liping Kang, Xiaoming Wei, Xiaolin Wei, Shuqiang Jiang
Food2K can be further explored to benefit additional food-relevant tasks, including emerging and more complex ones (e.g., nutritional understanding of food), and models trained on Food2K can serve as backbones to improve performance on such tasks.
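A minimal backbone-transfer sketch of that last point, using torchvision's ImageNet-pretrained ResNet-50 as a stand-in (Food2K-pretrained weights, where available, would be loaded and reused the same way; the downstream class count is hypothetical).

```python
# Reuse a pretrained classifier as a frozen feature extractor for a
# downstream food task; only the new head is trained.
import torch
import torch.nn as nn
from torchvision import models

num_downstream_classes = 101  # e.g., a smaller food dataset

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():              # freeze the pretrained features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_downstream_classes)  # new head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # train head only
logits = backbone(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 101])
```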
1 code implementation • 10 Oct 2020 • Qiansheng Wang, Yuxin Liu, Chengguo Lv, Zhen Wang, Guohong Fu
Open-domain response generation is the task of generating sensible and informative responses to the source sentence.
no code implementations • 13 Mar 2020 • Gabriel F. N. Gonçalves, Assen Batchvarov, Yuyi Liu, Yuxin Liu, Lachlan Mason, Indranil Pan, Omar K. Matar
In chemical process engineering, surrogate models of complex systems are often necessary for tasks such as domain exploration, sensitivity analysis of design parameters, and optimization.
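A typical surrogate-modeling workflow of the kind described, sketched with a toy "simulator" and a scikit-learn Gaussian process (all functions and parameter choices here are illustrative, not the paper's setup).

```python
# Fit a Gaussian-process surrogate to a few expensive simulator runs,
# then query the cheap surrogate for exploration or optimization.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulator(x):                  # stand-in for a costly process model
    return np.sin(3 * x) + 0.1 * x**2

rng = np.random.default_rng(3)
X_train = rng.uniform(0, 3, size=(12, 1))    # a dozen design points
y_train = expensive_simulator(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_train, y_train)

X_query = np.linspace(0, 3, 5).reshape(-1, 1)
mean, std = surrogate.predict(X_query, return_std=True)  # prediction + uncertainty
print(np.c_[X_query.ravel(), mean, std])
```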