no code implementations • 17 Dec 2024 • Tiankai Xie, Jiaqing Chen, Yaoqing Yang, Caleb Geniesse, Ge Shi, Ajinkya Chaudhari, John Kevin Cava, Michael W. Mahoney, Talita Perciano, Gunther H. Weber, Ross Maciejewski
Modern machine learning often relies on optimizing a neural network's parameters using a loss function to learn complex features.
no code implementations • 4 Dec 2024 • Fu Lei, Ge Shi
In today's complex and volatile financial market environment, risk management of multi-asset portfolios faces significant challenges.
no code implementations • 19 Nov 2024 • Caleb Geniesse, Jiaqing Chen, Tiankai Xie, Ge Shi, Yaoqing Yang, Dmitriy Morozov, Talita Perciano, Michael W. Mahoney, Ross Maciejewski, Gunther H. Weber
After describing this new topological landscape profile representation, we show how the shape of loss landscapes can reveal new details about model performance and learning dynamics, highlighting several use cases, including image segmentation (e.g., UNet) and scientific machine learning (e.g., physics-informed neural networks).
no code implementations • 6 Sep 2024 • Xiaoyi Liu, Zhou Yu, Lianghao Tan, Yafeng Yan, Ge Shi
To further enhance classification accuracy, we developed ensemble models employing max voting, average voting, and stacking, resulting in accuracies of 0.803, 0.82, and 0.83, respectively.
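The three ensemble strategies named above can be sketched in a few lines. This is a minimal illustration of max voting, average voting, and stacking; the classifier outputs and the stand-in weighted meta-combiner are illustrative assumptions, not the paper's actual models.

```python
# Hedged sketch of three ensemble strategies: max (majority) voting over
# hard labels, average voting over class probabilities, and a stacking-style
# meta-combination. All inputs below are illustrative.

def max_voting(predictions):
    """Majority vote over hard class predictions from the base models."""
    return max(set(predictions), key=predictions.count)

def average_voting(prob_lists):
    """Average per-class probabilities across models, then take the argmax."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(n_classes)]
    return avg.index(max(avg))

def stacking(prob_lists, weights):
    """Stacking trains a meta-learner on base-model outputs; a fixed
    weighted combination stands in for that learned meta-model here."""
    n_classes = len(prob_lists[0])
    combined = [sum(w * p[i] for w, p in zip(weights, prob_lists))
                for i in range(n_classes)]
    return combined.index(max(combined))

preds = [1, 0, 1]                              # hard labels from three base models
probs = [[0.4, 0.6], [0.7, 0.3], [0.2, 0.8]]   # their soft (probability) outputs

print(max_voting(preds))                  # -> 1 (majority class)
print(average_voting(probs))              # -> 1 (highest mean probability)
print(stacking(probs, [0.2, 0.3, 0.5]))   # -> 1 (weighted combination)
```

In practice the meta-learner in stacking would itself be fit on held-out base-model predictions rather than hand-weighted.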
no code implementations • 2 Sep 2024 • Menglin Liu, Ge Shi
Recent advancements in large language models (LLMs) have opened new avenues for enhancing text classification efficiency in political science, surpassing traditional machine learning methods that often require extensive feature engineering, human labeling, and task-specific training.
1 code implementation • 17 Jun 2024 • Ge Shi, Ziwen Kan, Jason Smucny, Ian Davidson
In this study, we examine the efficacy of post-hoc local attribution methods in identifying features with predictive power from irrelevant ones in domains characterized by a low signal-to-noise ratio (SNR), a common scenario in real-world machine learning applications.
no code implementations • 23 Mar 2024 • Hongzheng Li, Ruojin Wang, Ge Shi, Xing Lv, Lei Lei, Chong Feng, Fang Liu, JinKun Lin, Yangguang Mei, Lingnan Xu
In this paper, we introduce RAAMove, a comprehensive multi-domain corpus dedicated to the annotation of move structures in RA abstracts.
no code implementations • 14 Feb 2024 • Ge Shi, Zhili Yang
Then we feed the output of the optical flow network into a fully convolutional SegNet model.
no code implementations • 25 Jan 2024 • Zeyu Xi, Ge Shi, Xuefen Li, Junchi Yan, Zun Li, Lifang Wu, Zilin Liu, Liang Wang
We develop a knowledge-guided entity-aware video captioning network (KEANet), in encoder-decoder form, based on a candidate player list, for basketball live text broadcast.
no code implementations • 7 Oct 2023 • Shuyang Liu, Zixuan Chen, Ge Shi, Ji Wang, Changjie Fan, Yu Xiong, Runze Wu, Yujing Hu, Ze Ji, Yang Gao
The selection of appropriate baselines in IG is crucial for crafting meaningful and unbiased explanations of model predictions in diverse settings.
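The role of the baseline in Integrated Gradients (IG) can be seen in a toy example. This is a minimal sketch on a one-dimensional differentiable function; the model `f` and the baselines are illustrative assumptions, not the setting studied in the paper.

```python
# Minimal sketch of Integrated Gradients (IG) on a toy 1-D model,
# showing how the choice of baseline changes the attribution.
# By IG's completeness property, the attribution should approximate
# f(x) - f(baseline), so different baselines yield different explanations.

def f(x):
    return x * x  # toy differentiable "model" (illustrative)

def grad_f(x):
    return 2.0 * x  # its exact gradient

def integrated_gradients(x, baseline, steps=100):
    """Riemann-sum approximation of IG:
    (x - baseline) * average gradient along the straight path."""
    total = 0.0
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += grad_f(point)
    return (x - baseline) * total / steps

x = 2.0
print(integrated_gradients(x, baseline=0.0))  # ~4.0 = f(2) - f(0)
print(integrated_gradients(x, baseline=1.0))  # ~3.0 = f(2) - f(1)
```

Since the attribution sums to f(x) − f(baseline), a poorly chosen baseline can shift credit onto features that carry no signal, which is exactly why baseline selection matters.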
no code implementations • 16 May 2023 • Bo Wang, Heyan Huang, Xiaochi Wei, Ge Shi, Xiao Liu, Chong Feng, Tong Zhou, Shuaiqiang Wang, Dawei Yin
Event extraction aims to recognize pre-defined event triggers and arguments from text, a task that suffers from a lack of high-quality annotations.
no code implementations • 28 Feb 2023 • Xianglong Lang, Zhuming Wang, Zun Li, Meng Tian, Ge Shi, Lifang Wu, Liang Wang
Specifically, the framework consists of a Visual Representation Module to extract individual appearance features, a Knowledge-Augmented Semantic Relation Module to explore semantic representations of individual actions, and a Knowledge-Semantic-Visual Interaction Module to integrate visual and semantic information through the knowledge.
no code implementations • ACL 2022 • Xiao Liu, Heyan Huang, Ge Shi, Bo Wang
We consider event extraction in a generative manner with template-based conditional generation.
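The template-based framing can be sketched as follows: each event type gets a natural-language template with argument slots, a generative model is asked to fill the slots, and the filled text is parsed back into structured arguments. The template text, slot names, and the example model output below are all illustrative assumptions, not the paper's actual templates.

```python
# Hedged sketch of template-based conditional generation for event
# extraction: align a model's filled-in template against the original
# slotted template to recover argument roles. Template and output are
# hypothetical examples for an "Attack" event type.
import re

TEMPLATE = "<Attacker> attacked <Target> using <Instrument> at <Place>"

def parse_filled_template(filled, template=TEMPLATE):
    """Turn each <Slot> into a named capture group, then match the
    generated sentence to read the arguments back out."""
    pattern = re.escape(template)
    for slot in re.findall(r"<(\w+)>", template):
        pattern = pattern.replace(re.escape(f"<{slot}>"), f"(?P<{slot}>.+?)")
    m = re.fullmatch(pattern, filled)
    return m.groupdict() if m else {}

# A hypothetical generative-model output for the template above:
output = "the rebels attacked the convoy using rockets at the border"
print(parse_filled_template(output))
# -> {'Attacker': 'the rebels', 'Target': 'the convoy',
#     'Instrument': 'rockets', 'Place': 'the border'}
```

The appeal of this framing is that the template carries the event schema in natural language, so the same generative model can serve many event types without task-specific classification heads.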
no code implementations • 26 Jan 2022 • Sinuo Deng, Lifang Wu, Ge Shi, Lehao Xing, Meng Jian, Ye Xiang
We first introduce a prompt tuning method that mimics the pretraining objective of CLIP and thus can leverage the rich image and text semantics entailed in CLIP.
no code implementations • EMNLP 2018 • Ge Shi, Chong Feng, Lifu Huang, Boliang Zhang, Heng Ji, Lejian Liao, He-Yan Huang
Relation Extraction suffers from a dramatic performance decrease when a model trained on one genre is applied directly to a new genre, due to distinct feature distributions.