no code implementations • 14 Aug 2024 • Jingyi Yang, Zitong Yu, Xiuming Ni, Jia He, Hui Li
In videos containing spoofed faces, we may uncover the spoofing evidence based on photometric or dynamic abnormalities, or even a combination of both.
no code implementations • 30 Jul 2024 • Sara Abdali, Jia He, CJ Barberan, Richard Anarfi
The advent of Large Language Models (LLMs) has garnered significant popularity and wielded immense power across various domains within Natural Language Processing (NLP).
no code implementations • 11 Jul 2024 • Jingyi Yang, Zitong Yu, Xiuming Ni, Jia He, Hui Li
Adversarial learning and meta-learning techniques have been adopted to learn domain-invariant representations.
no code implementations • 24 May 2024 • Jia He, Bonan Li, Ge Yang, Ziwen Liu
Solving 3D medical inverse problems such as image restoration and reconstruction is crucial in the modern medical field.
no code implementations • 29 Mar 2024 • Huiyuan Yu, Jia He, Maggie Cheng
This paper advances OMP on two fronts: it offers a fast algorithm for the orthogonal projection of the input signal at each iteration, and a new selection criterion for making the greedy choice, which reduces the number of iterations needed to recover the signal.
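For context, a minimal sketch of the classic Orthogonal Matching Pursuit baseline that the paper improves on is shown below. This is a textbook implementation, not the paper's method: the paper's faster per-iteration projection and new selection criterion are not reproduced here, and the function name and signature are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Classic OMP: greedily recover a k-sparse x with y ~ A @ x.

    Textbook baseline for illustration only; the paper replaces the
    per-iteration least-squares projection with a faster algorithm and
    uses a different greedy selection criterion.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Greedy choice: column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection: least-squares fit on the chosen support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

Each iteration enlarges the support by one atom and re-projects `y` onto the span of the selected columns, which is exactly the step whose cost the paper targets.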
no code implementations • 19 Mar 2024 • Sara Abdali, Richard Anarfi, CJ Barberan, Jia He
Large language models (LLMs) have significantly transformed the landscape of Natural Language Processing (NLP).
no code implementations • 9 Mar 2024 • Sara Abdali, Richard Anarfi, CJ Barberan, Jia He
Large Language Models (LLMs) have revolutionized the field of Natural Language Generation (NLG) by demonstrating an impressive ability to generate human-like text.
no code implementations • 21 Sep 2023 • Jia He, Ziye Jia, Chao Dong, Junyu Liu, Qihui Wu, Jingxian Liu
Unmanned aerial vehicles (UAVs) are recognized as promising technologies for area coverage due to their flexibility and adaptability.
1 code implementation • 25 May 2023 • Shuo Yu, Hongyan Xue, Xiang Ao, Feiyang Pan, Jia He, Dandan Tu, Qing He
In practice, a set of formulaic alphas is often used together for better modeling precision, so we need to find synergistic formulaic alpha sets that work well together.
no code implementations • 21 Mar 2023 • Dapeng Li, Feiyang Pan, Jia He, Zhiwei Xu, Dandan Tu, Guoliang Fan
In high-dimensional time-series analysis, it is essential to have a set of key factors (namely, the style factors) that explain the change of the observed variable.
no code implementations • 22 Jul 2022 • Feiyang Pan, Tongzhe Zhang, Ling Luo, Jia He, Shuoling Liu
On the one hand, a continuous action space using percentage changes in prices is preferred for generalization.
no code implementations • 29 Sep 2021 • Mengda Huang, Feiyang Pan, Jia He, Xiang Ao, Qing He
Constrained Reinforcement Learning (CRL) has attracted broad interest in recent years; it pursues the dual goals of maximizing long-term returns and constraining costs.
no code implementations • NeurIPS 2020 • Feiyang Pan, Jia He, Dandan Tu, Qing He
In complex and noisy settings, model-based RL tends to have trouble using the model if it does not know when to trust it.
no code implementations • 11 Oct 2019 • Changying Du, Jia He, Changde Du, Fuzhen Zhuang, Qing He, Guoping Long
Existing multi-view learning methods based on kernel functions either require the user to select and tune a single predefined kernel, or must compute and store many Gram matrices to perform multiple kernel learning.
no code implementations • 10 Oct 2019 • Changying Du, Fuzhen Zhuang, Jia He, Qing He, Guoping Long
In real world machine learning applications, testing data may contain some meaningful new categories that have not been seen in labeled training data.