1 code implementation • 26 Dec 2024 • Senbin Zhu, Chenyuan He, Hongde Liu, Pengcheng Dong, Hanjie Zhao, Yuchen Yan, Yuxiang Jia, Hongying Zan, Min Peng
To address this, we have constructed the largest English and Chinese financial entity-level sentiment analysis datasets to date.
1 code implementation • 12 Dec 2024 • Ben Liu, Jihai Zhang, Fangquan Lin, Cheng Yang, Min Peng
Fundamentally, applying LLMs to KGC introduces several critical challenges, including a vast set of candidate entities, the hallucination issue of LLMs, and under-exploitation of the graph structure.
no code implementations • 20 Aug 2024 • Qianqian Xie, Dong Li, Mengxi Xiao, Zihao Jiang, Ruoyu Xiang, Xiao Zhang, Zhengyu Chen, Yueru He, Weiguang Han, Yuzhe Yang, Shunian Chen, Yifei Zhang, Lihang Shen, Daniel Kim, Zhiwei Liu, Zheheng Luo, Yangyang Yu, Yupeng Cao, Zhiyang Deng, Zhiyuan Yao, Haohang Li, Duanyu Feng, Yongfu Dai, VijayaSai Somasundaram, Peng Lu, Yilun Zhao, Yitao Long, Guojun Xiong, Kaleb Smith, Honghai Yu, Yanzhao Lai, Min Peng, Jianyun Nie, Jordan W. Suchow, Xiao-Yang Liu, Benyou Wang, Alejandro Lopez-Lira, Jimin Huang, Sophia Ananiadou
We begin with FinLLaMA, pre-trained on a 52 billion token financial corpus, incorporating text, tables, and time-series data to embed comprehensive financial knowledge.
2 code implementations • 10 Mar 2024 • Gang Hu, Ke Qin, Chenhan Yuan, Min Peng, Alejandro Lopez-Lira, Benyou Wang, Sophia Ananiadou, Jimin Huang, Qianqian Xie
While the progression of Large Language Models (LLMs) has notably propelled financial analysis, their application has largely been confined to a single language, leaving the potential of bilingual Chinese-English capability untapped.
1 code implementation • 26 Feb 2024 • Mengxi Xiao, Qianqian Xie, Ziyan Kuang, Zhicheng Liu, Kailai Yang, Min Peng, Weiguang Han, Jimin Huang
Large Language Models (LLMs) can play a vital role in psychotherapy by adeptly handling the crucial task of cognitive reframing and overcoming challenges such as shame, distrust, therapist skill variability, and resource scarcity.
2 code implementations • 20 Feb 2024 • Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, Jimin Huang
Our evaluation of 15 representative LLMs, including GPT-4, ChatGPT, and the latest Gemini, reveals several key findings: While LLMs excel in IE and textual analysis, they struggle with advanced reasoning and complex tasks like text generation and forecasting.
1 code implementation • 12 Feb 2024 • Xiao Zhang, Ruoyu Xiang, Chenhan Yuan, Duanyu Feng, Weiguang Han, Alejandro Lopez-Lira, Xiao-Yang Liu, Sophia Ananiadou, Min Peng, Jimin Huang, Qianqian Xie
We evaluate our model and existing LLMs using FLARE-ES, the first comprehensive bilingual evaluation benchmark with 21 datasets covering 9 tasks.
2 code implementations • 8 Jun 2023 • Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, Jimin Huang
This paper introduces PIXIU, a comprehensive framework including the first financial LLM based on fine-tuning LLaMA with instruction data, the first instruction data with 136K data samples to support the fine-tuning, and an evaluation benchmark with 5 tasks and 9 datasets.
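The instruction-tuning data described above can be illustrated with a minimal sketch of what one financial instruction sample might look like; the field names and content are hypothetical, not PIXIU's actual schema:

```python
# Hedged sketch: one hypothetical financial instruction-tuning record.
# Field names are illustrative, not PIXIU's actual data schema.
import json

sample = {
    "instruction": "Classify the sentiment of the following financial "
                   "headline as positive, negative, or neutral.",
    "input": "Company X beats quarterly earnings estimates.",
    "output": "positive",
}

# Instruction-tuning pipelines commonly serialize such records as JSON lines.
line = json.dumps(sample)
```

Fine-tuning a base model on many such (instruction, input, output) triples is what turns a general LLM into a task-following financial assistant.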
1 code implementation • 13 May 2023 • Wenjie Xu, Ben Liu, Miao Peng, Xu Jia, Min Peng
We train our model with a masking strategy that converts the TKGC task into a masked token prediction task, which can leverage the semantic information in pre-trained language models.
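The masking strategy above can be sketched as follows: a temporal KG quadruple is serialized into a token sequence with one slot masked for the model to predict. The serialization format and slot names here are assumptions for illustration, not the paper's exact scheme:

```python
# Hedged sketch: converting a temporal KG quadruple (subject, relation,
# object, timestamp) into a masked-token-prediction example. The textual
# format is an assumption, not the paper's exact serialization.
MASK = "[MASK]"

def make_masked_example(subj, rel, obj, time, mask_slot="object"):
    """Serialize a quadruple and mask one slot as the prediction target."""
    slots = {"subject": subj, "relation": rel, "object": obj, "time": time}
    target = slots[mask_slot]
    slots[mask_slot] = MASK
    text = f"{slots['subject']} {slots['relation']} {slots['object']} {slots['time']}"
    return text, target

text, target = make_masked_example("Obama", "visit", "France", "2014-04-02")
# text   -> "Obama visit [MASK] 2014-04-02"
# target -> "France"
```

A pre-trained masked language model can then score candidate entities for the `[MASK]` position, which is how the semantic knowledge in the language model is brought to bear on completion.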
no code implementations • 10 Apr 2023 • Qianqian Xie, Weiguang Han, Yanzhao Lai, Min Peng, Jimin Huang
Recently, large language models (LLMs) like ChatGPT have demonstrated remarkable performance across a variety of natural language processing tasks.
no code implementations • 1 Apr 2023 • Weiguang Han, Jimin Huang, Qianqian Xie, Boyi Zhang, Yanzhao Lai, Min Peng
Although pair trading is the simplest hedging strategy for an investor to eliminate market risk, it remains a great challenge for reinforcement learning (RL) methods to perform pair trading as well as human experts.
1 code implementation • 4 Feb 2023 • Min Peng, Chongyang Wang, Yu Shi, Xiang-Dong Zhou
This paper presents a new method for end-to-end Video Question Answering (VideoQA), departing from the current trend of large-scale pre-training with heavy feature extractors.
1 code implementation • 25 Jan 2023 • Weiguang Han, Boyi Zhang, Qianqian Xie, Min Peng, Yanzhao Lai, Jimin Huang
For pair selection, ignoring trading performance results in assets with irrelevant price movements being selected, while an agent trained only for trading can overfit to the selected assets, lacking historical information about other assets.
Ranked #1 on Pair Trading on S&P 500
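Classical pair trading, the task setting of this paper, can be sketched with a textbook spread z-score rule; this is a standard mean-reversion illustration, not the paper's RL-based method:

```python
# Textbook pair-trading signal (NOT the paper's RL approach): when the
# z-score of the price spread between two co-moving assets deviates far
# from its historical mean, bet on the spread reverting.
from statistics import mean, stdev

def spread_zscore(prices_a, prices_b):
    """Z-score of the latest spread relative to the spread history."""
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    mu, sigma = mean(spread), stdev(spread)
    return (spread[-1] - mu) / sigma

def signal(z, entry=2.0):
    if z > entry:
        return "short A / long B"   # spread unusually wide
    if z < -entry:
        return "long A / short B"   # spread unusually narrow
    return "hold"
```

An RL agent replaces the fixed entry threshold with a learned policy; the paper's point is that selecting which pair to trade and learning when to trade it should be optimized jointly rather than in isolation.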
1 code implementation • 10 Oct 2022 • Miao Peng, Ben Liu, Qianqian Xie, Wenjie Xu, Hua Wang, Min Peng
Specifically, we first exploit the network schema as a prior constraint to sample negatives, and pre-train our model by employing a multi-level contrastive learning method that captures both prior schema and contextual information.
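The two ideas in the sentence above, schema-constrained negative sampling and a contrastive objective, can be sketched in miniature. The toy schema, entity names, and scoring inputs are illustrative assumptions, not the paper's model:

```python
# Hedged sketch: (1) a network schema (allowed entity type per relation slot)
# used to constrain negative sampling, and (2) an InfoNCE-style contrastive
# loss over similarity scores. All names here are toy examples.
import math
import random

# Toy schema: relation -> entity type allowed in the object slot.
schema = {"works_for": "organization"}
entity_type = {"alice": "person", "acme": "organization", "globex": "organization"}

def schema_negatives(relation, entities, gold, k=2, seed=0):
    """Sample negatives that respect the schema's type constraint."""
    allowed = [e for e in entities
               if entity_type[e] == schema[relation] and e != gold]
    random.seed(seed)
    return random.sample(allowed, min(k, len(allowed)))

def info_nce(pos_score, neg_scores):
    """InfoNCE loss: -log( e^pos / (e^pos + sum_i e^neg_i) )."""
    denom = math.exp(pos_score) + sum(math.exp(s) for s in neg_scores)
    return -math.log(math.exp(pos_score) / denom)
```

Type-consistent negatives (e.g., another organization rather than a person for `works_for`) are harder to distinguish from the gold answer, which is what makes them informative for contrastive pre-training.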
1 code implementation • 9 May 2022 • Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou
With multiscale sampling, RMI iterates the interaction between appearance-motion information at each scale and the question embeddings to build multilevel question-guided visual representations.
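Multiscale temporal sampling over a video can be sketched as a pyramid of frame-index sets, one per scale, whose features would then interact with the question embedding level by level. The doubling-stride scheme below is an assumption for illustration, not RMI's exact design:

```python
# Illustrative sketch of multiscale temporal sampling: each coarser scale
# samples frames at twice the stride of the previous one, producing a
# pyramid of frame index sets. The stride scheme is an assumption, not
# RMI's exact sampling design.
def multiscale_indices(num_frames, num_scales=3):
    pyramid = []
    for s in range(num_scales):
        stride = 2 ** s              # stride doubles at each coarser scale
        pyramid.append(list(range(0, num_frames, stride)))
    return pyramid

# e.g. 8 frames, 3 scales:
# [[0, 1, 2, 3, 4, 5, 6, 7], [0, 2, 4, 6], [0, 4]]
```

Fine scales preserve motion detail while coarse scales capture long-range appearance context, giving the question embedding different temporal granularities to attend over.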
1 code implementation • 10 Sep 2021 • Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou
Targeting these issues, this paper proposes a novel Temporal Pyramid Transformer (TPT) model with multimodal interaction for VideoQA.
no code implementations • NAACL 2021 • Qianqian Xie, Jimin Huang, Pan Du, Min Peng, Jian-Yun Nie
T-VGAE inherits the interpretability of the topic model and the efficient information propagation mechanism of VGAE.
no code implementations • 15 Mar 2021 • Jiaxin Pan, Min Peng, Yiyan Zhang
How to model dependencies between entities in different sentences of a document remains a great challenge.
no code implementations • 30 Sep 2020 • Guoqing Luo, Jiaxin Pan, Min Peng
Distant supervision has been widely used for relation extraction but suffers from the noisy labeling problem.
1 code implementation • 19 Sep 2020 • Min Peng, Chongyang Wang, Yuan Gao, Tao Bi, Tong Chen, Yu Shi, Xiang-Dong Zhou
As a spontaneous facial expression of emotion, a micro-expression reveals underlying emotion that cannot be consciously controlled.
1 code implementation • 24 Apr 2019 • Chongyang Wang, Min Peng, Temitayo A. Olugbade, Nicholas D. Lane, Amanda C. De C. Williams, Nadia Bianchi-Berthouze
For people with chronic pain, the assessment of protective behavior during physical functioning is essential to understand their subjective pain-related experiences (e.g., fear and anxiety toward pain and injury) and how they deal with such experiences (avoidance or reliance on specific body joints), with the ultimate goal of guiding intervention.
1 code implementation • 7 Apr 2019 • Min Peng, Chongyang Wang, Tao Bi, Tong Chen, Xiangdong Zhou, Yu Shi
As researchers working on such topics move to learn from the nature of micro-expressions, the practice of using deep learning techniques has evolved from processing the entire micro-expression video clip to recognition on the apex frame.
1 code implementation • 6 Nov 2018 • Chongyang Wang, Min Peng, Tao Bi, Tong Chen
The existence of micro-expressions in small, local facial areas and the limited size of available databases still constrain recognition accuracy for such emotional facial behavior.
no code implementations • ACL 2018 • Min Peng, Qianqian Xie, Yanchun Zhang, Hua Wang, Xiuzhen Zhang, Jimin Huang, Gang Tian
Topic models with sparsity enhancement have been proven to be effective at learning discriminative and coherent latent topics of short texts, which is critical to many scientific and engineering applications.