no code implementations • 13 Feb 2025 • Zihao Li, Xiao Lin, Zhining Liu, Jiaru Zou, Ziwei Wu, Lecheng Zheng, Dongqi Fu, Yada Zhu, Hendrik Hamann, Hanghang Tong, Jingrui He
While many advances in time series models focus exclusively on numerical data, research on multimodal time series, particularly those involving contextual textual information commonly encountered in real-world scenarios, remains in its infancy.
2 code implementations • 11 Oct 2024 • Bing Zhang, Mikio Takeuchi, Ryo Kawahara, Shubhi Asthana, Md. Maruf Hossain, Guang-jie Ren, Kate Soule, Yada Zhu
The advancement of large language models (LLMs) has made rigorous and systematic evaluation of the complex tasks they perform increasingly challenging, especially in enterprise applications.
no code implementations • 8 Aug 2024 • Dongqi Fu, Yada Zhu, Hanghang Tong, Kommy Weldemariam, Onkar Bhardwaj, Jingrui He
Understanding the causal interaction of time series variables can contribute to time series data analysis for many real-world applications, such as climate forecasting and extreme weather alerts.
1 code implementation • 15 Jul 2024 • Junhong Lin, Xiaojie Guo, Shuaicheng Zhang, Dawei Zhou, Yada Zhu, Julian Shun
However, existing benchmarks for graph learning often focus on heterogeneous graphs with homophily or homogeneous graphs with heterophily, leaving a gap in understanding how methods perform on graphs that are both heterogeneous and heterophilic.
1 code implementation • 13 Jun 2024 • Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Yada Zhu, Hendrik Hamann, Hanghang Tong
Data collected in the real world often encapsulates historical discrimination against disadvantaged groups and individuals.
no code implementations • 18 Apr 2024 • Yikun Ban, Ishika Agarwal, Ziwei Wu, Yada Zhu, Kommy Weldemariam, Hanghang Tong, Jingrui He
We study both stream-based and pool-based active learning with neural network approximations.
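The pool-based setting mentioned in this snippet can be illustrated with the classic uncertainty-sampling heuristic. This is a minimal numpy sketch of that generic baseline, not the paper's neural-network method; the function name and toy probabilities are illustrative.

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Pick the k pool points whose predicted class probabilities are
    closest to uniform, i.e. where the model is least certain."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:k]  # highest-entropy indices first

# toy pool: 4 unlabeled points with binary class probabilities
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.50, 0.50],
                  [0.80, 0.20]])
picked = uncertainty_sampling(probs, 2)  # points 2 and 1 are most uncertain
```

In the stream-based variant the same score would instead be thresholded one point at a time as examples arrive.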
1 code implementation • 17 Apr 2024 • Yue Zhou, Yada Zhu, Diego Antognini, Yoon Kim, Yang Zhang
This paper studies the relationship between the surface form of a mathematical problem and its solvability by large language models.
1 code implementation • 20 Oct 2023 • Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group (e.g., female) in graph-based applications.
no code implementations • 29 Sep 2023 • Junmo Kang, Hongyin Luo, Yada Zhu, Jacob Hansen, James Glass, David Cox, Alan Ritter, Rogerio Feris, Leonid Karlinsky
Recent works have demonstrated the effectiveness of self-alignment in which a large language model is aligned to follow general instructions using instructional data generated from the model itself starting from a handful of human-written seeds.
1 code implementation • 27 Aug 2023 • Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Hyunsik Yoo, David Zhou, Zhe Xu, Yada Zhu, Kommy Weldemariam, Jingrui He, Hanghang Tong
In this work, we approach the root cause of class-imbalance bias from a topological paradigm.
no code implementations • 29 May 2023 • Dingsu Wang, Yuchen Yan, Ruizhong Qiu, Yada Zhu, Kaiyu Guan, Andrew J Margenot, Hanghang Tong
First, we define the problem of imputation over networked time series (NTS), which contain missing values in both the node time series features and the graph structure.
no code implementations • 17 May 2023 • Haohui Wang, Baoyu Jing, Kaize Ding, Yada Zhu, Wei Cheng, Si Zhang, Yonghui Fan, Liqing Zhang, Dawei Zhou
To bridge this gap, we propose a generalization bound for long-tail classification on graphs by formulating the problem in the fashion of multi-task learning, i.e., each task corresponds to the prediction of one particular class.
no code implementations • 30 Mar 2023 • Lecheng Zheng, Dawei Zhou, Hanghang Tong, Jiejun Xu, Yada Zhu, Jingrui He
In addition, we propose a generic context sampling strategy for graph generative models, which is proven to be capable of fairly capturing the contextual information of each group with a high probability.
1 code implementation • 11 Feb 2023 • Lecheng Zheng, Yada Zhu, Jingrui He
We also derive insights regarding the relative performance of the proposed regularizers in various scenarios.
no code implementations • 25 Jan 2023 • Baoyu Jing, Yuchen Yan, Kaize Ding, Chanyoung Park, Yada Zhu, Huan Liu, Hanghang Tong
Most recent bipartite graph SSL methods are based on contrastive learning which learns embeddings by discriminating positive and negative node pairs.
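The positive-vs-negative discrimination described here is typically scored with an InfoNCE-style loss. The sketch below is a generic single-anchor version in numpy, not this paper's bipartite-graph formulation; the function name, temperature, and toy embeddings are assumptions for illustration.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss for one anchor embedding:
    pull the positive pair together, push negative pairs apart."""
    def sim(a, b):  # cosine similarity
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.1])   # nearly aligned with the anchor
negative = np.array([0.0, 1.0])   # orthogonal to the anchor
easy = info_nce(anchor, positive, [negative])  # correct pairing: low loss
hard = info_nce(anchor, negative, [positive])  # swapped pairing: high loss
```

Minimizing this loss over many node pairs is what drives the embeddings to discriminate positives from negatives.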
no code implementations • 27 Sep 2022 • Baoyu Jing, Si Zhang, Yada Zhu, Bin Peng, Kaiyu Guan, Andrew Margenot, Hanghang Tong
In this paper, we show both theoretically and empirically that the uncertainty could be effectively reduced by retrieving relevant time series as references.
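The retrieval step this snippet alludes to can be sketched with a simple correlation-based nearest-series lookup. This is a minimal stand-in, assuming correlation as the relevance measure; the paper's actual retrieval mechanism is not described in the snippet, and the function name and toy series are illustrative.

```python
import numpy as np

def retrieve_references(target, pool, k=1):
    """Return indices of the k pool series most correlated with the
    target series, to serve as references for its prediction."""
    corr = np.array([np.corrcoef(target, s)[0, 1] for s in pool])
    return np.argsort(-corr)[:k]  # most-correlated indices first

t = np.linspace(0.0, 2 * np.pi, 50)
target = np.sin(t)
pool = [np.sin(t) + 0.1,  # near-duplicate of the target
        np.cos(t),        # phase-shifted distractor
        -np.sin(t)]       # anti-correlated distractor
refs = retrieve_references(target, pool, k=1)
```

The retrieved references can then be fed to the forecaster alongside the target series to reduce predictive uncertainty.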
1 code implementation • 15 Aug 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints.
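An adversarial view "within reasonable constraints" is commonly built with a bounded gradient-sign perturbation. The sketch below shows that generic FGSM-style step on node features, assuming an epsilon-infinity-ball as the constraint; it is not ARIEL's actual construction, and the variable names and surrogate gradient are illustrative.

```python
import numpy as np

def adversarial_view(x, grad, eps=0.1):
    """FGSM-style step: move node features in the sign direction of a
    loss gradient, so the perturbation stays inside an eps-ball."""
    return x + eps * np.sign(grad)

# toy node-feature matrix and a surrogate gradient
x = np.zeros((3, 2))
grad = np.array([[1.0, -2.0], [0.5, 0.0], [-1.0, 3.0]])
x_adv = adversarial_view(x, grad)  # each entry moves by at most eps
```

Contrasting the original graph against such a perturbed view yields harder, more informative negative samples than random augmentation.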
no code implementations • 31 May 2022 • Baoyu Jing, Yuchen Yan, Yada Zhu, Hanghang Tong
We theoretically prove that COIN effectively increases the mutual information of node embeddings, and that this quantity is upper-bounded by the prior distributions of nodes.
1 code implementation • 14 Feb 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
Contrastive learning is an effective unsupervised method in graph representation learning.
no code implementations • 2 Dec 2021 • Zixuan Yuan, Yada Zhu, Wei Zhang, Ziming Huang, Guangnan Ye, Hui Xiong
Earnings calls (ECs), the periodic teleconferences of publicly traded companies, have been extensively studied as an essential market indicator because of their high analytical value for corporate fundamentals.
no code implementations • ACL 2021 • Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang
With recent advances in natural language processing, state-of-the-art models and datasets have grown so large that they challenge the application of sample-based explanation methods in many respects, such as explanation interpretability, efficiency, and faithfulness.
1 code implementation • 9 Jun 2021 • Wei Zhang, Ziming Huang, Yada Zhu, Guangnan Ye, Xiaodong Cui, Fan Zhang
With recent advances in natural language processing, state-of-the-art models and datasets have grown so large that they challenge the application of sample-based explanation methods in many respects, such as explanation interpretability, efficiency, and faithfulness.
1 code implementation • 19 May 2021 • Lecheng Zheng, JinJun Xiong, Yada Zhu, Jingrui He
We first provide a theoretical analysis showing that the vanilla contrastive learning loss easily leads to the sub-optimal solution in the presence of false negative pairs, whereas the proposed weighted loss could automatically adjust the weight based on the similarity of the learned representations to mitigate this issue.
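The idea of a similarity-adjusted weight can be sketched as a variant of the standard contrastive loss that down-weights negatives which look too much like the anchor (likely false negatives). This is a hedged illustration, assuming the particular weighting `w = 1 - similarity`; the paper's actual weighted loss is not specified in the snippet, and all names and toy inputs are illustrative.

```python
import numpy as np

def weighted_contrastive(anchor, positive, negatives, tau=0.5):
    """Contrastive loss whose negative terms are down-weighted when a
    negative is highly similar to the anchor (a likely false negative)."""
    def sim(a, b):  # cosine similarity
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = 0.0
    for n in negatives:
        s = sim(anchor, n)
        w = 1.0 - max(s, 0.0)  # weight shrinks as similarity grows
        neg += w * np.exp(s / tau)
    return -np.log(pos / (pos + neg))

anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.05])
# a "negative" nearly identical to the anchor contributes almost nothing
loss = weighted_contrastive(anchor, positive, [np.array([1.0, 0.01])])
```

With the vanilla loss, the near-duplicate negative would be pushed away at full strength and drag the solution toward a sub-optimal point; the weighting suppresses that term.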
1 code implementation • 15 Feb 2021 • Baoyu Jing, Hanghang Tong, Yada Zhu
We propose a novel model called Network of Tensor Time Series, which comprises two modules: a Tensor Graph Convolutional Network (TGCN) and a Tensor Recurrent Neural Network (TRNN).
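The two-module design pairs a spatial (graph) component with a temporal (recurrent) one. The numpy sketch below shows that generic coupling with an ordinary graph convolution feeding an ordinary RNN; it is a stand-in for the paper's tensorized TGCN/TRNN, not their actual formulation, and all shapes and names are illustrative.

```python
import numpy as np

def graph_conv(A, X, W):
    """Spatial module: average each node's neighbor features over the
    adjacency A, then apply a learned projection W (stand-in for TGCN)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    return np.tanh((A / deg) @ X @ W)

def rnn_over_time(H_seq, U, V):
    """Temporal module: a plain RNN run over the sequence of
    graph-convolved snapshots (stand-in for TRNN)."""
    h = np.zeros((H_seq.shape[1], U.shape[1]))
    for H_t in H_seq:
        h = np.tanh(H_t @ U + h @ V)
    return h

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [1.0, 0.0]])  # 2-node graph
X_seq = rng.normal(size=(5, 2, 3))      # 5 time steps, 3 features per node
W = rng.normal(size=(3, 4))
U, V = rng.normal(size=(4, 2)), rng.normal(size=(2, 2))
H_seq = np.stack([graph_conv(A, X_t, W) for X_t in X_seq])
h_final = rnn_over_time(H_seq, U, V)    # one hidden state per node
```

Each time step is first smoothed over the graph, and the recurrence then carries information along the time axis.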
1 code implementation • 9 Feb 2020 • Yunan Ye, Hengzhi Pei, Boxin Wang, Pin-Yu Chen, Yada Zhu, Jun Xiao, Bo Li
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
no code implementations • 17 Oct 2019 • Di Chen, Yada Zhu, Xiaodong Cui, Carla P. Gomes
Real-world applications often involve domain-specific and task-based performance objectives that are not captured by the standard machine learning losses, but are critical for decision making.
no code implementations • 19 Sep 2019 • Giovanni Mariani, Yada Zhu, Jianbo Li, Florian Scheidegger, Roxana Istrate, Costas Bekas, A. Cristiano I. Malossi
Sound financial theories demonstrate that, in an efficient marketplace, all information available today, including expectations about future events, is represented in today's prices, whereas the future price trend is driven by uncertainty.
Computational Finance • Statistical Finance