Search Results for author: Xiaofen Xing

Found 18 papers, 12 papers with code

Modeling Compositionality with Dependency Graph for Dialogue Generation

no code implementations • NAACL (SUKI) 2022 • Xiaofeng Chen, YiRong Chen, Xiaofen Xing, Xiangmin Xu, Wenjing Han, Qianfeng Tie

Because of the compositionality of natural language, syntactic structure, which encodes the relationships between words, is a key factor for semantic understanding.

Dialogue Generation

FinReport: Explainable Stock Earnings Forecasting via News Factor Analyzing Model

1 code implementation • 5 Mar 2024 • Xiangyu Li, Xinjie Shen, Yawen Zeng, Xiaofen Xing, Jin Xu

However, unlike financial institutions, ordinary investors find it difficult to mine factors and analyze news.

Stock Market Prediction

PointCore: Efficient Unsupervised Point Cloud Anomaly Detector Using Local-Global Features

1 code implementation • 4 Mar 2024 • Baozhu Zhao, Qiwei Xiong, Xiaohan Zhang, Jingfeng Guo, Qi Liu, Xiaofen Xing, Xiangmin Xu

Three-dimensional point cloud anomaly detection that aims to detect anomaly data points from a training set serves as the foundation for a variety of applications, including industrial inspection and autonomous driving.

Anomaly Detection • Autonomous Driving

SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations

no code implementations • 1 Nov 2023 • YiRong Chen, Xiaofen Xing, Jingkai Lin, huimin zheng, Zhenyu Wang, Qi Liu, Xiangmin Xu

Large language models (LLMs) have been widely applied in various fields due to their excellent capability for memorizing knowledge and chain of thought (CoT).

BianQue: Balancing the Questioning and Suggestion Ability of Health LLMs with Multi-turn Health Conversations Polished by ChatGPT

1 code implementation • 24 Oct 2023 • YiRong Chen, Zhenyu Wang, Xiaofen Xing, huimin zheng, Zhipei Xu, Kai Fang, Junhong Wang, Sihang Li, Jieling Wu, Qi Liu, Xiangmin Xu

Large language models (LLMs) have performed well in providing general and extensive health suggestions in single-turn conversations, exemplified by systems such as ChatGPT, ChatGLM, ChatDoctor, and DoctorGLM.

CorrTalk: Correlation Between Hierarchical Speech and Facial Activity Variances for 3D Animation

no code implementations • 17 Oct 2023 • Zhaojie Chu, Kailing Guo, Xiaofen Xing, Yilin Lan, Bolun Cai, Xiangmin Xu

In this study, we propose a novel framework, CorrTalk, which effectively establishes the temporal correlation between hierarchical speech features and facial activities of different intensities across distinct regions.

LAPP: Layer Adaptive Progressive Pruning for Compressing CNNs from Scratch

no code implementations • 25 Sep 2023 • Pucheng Zhai, Kailing Guo, Fang Liu, Xiaofen Xing, Xiangmin Xu

Therefore, the pruning strategy gradually prunes the network and automatically determines an appropriate pruning rate for each layer.
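The idea of progressively pruning while assigning each layer its own rate can be sketched as follows. This is an illustrative scheme, not LAPP's actual criterion: the per-layer rate here is derived from relative mean weight magnitude, which is an assumption made for the example.

```python
import numpy as np

def progressive_prune(weights, step, total_steps, final_rate=0.5):
    """Illustrative layer-adaptive progressive pruning.

    The global sparsity target ramps up linearly over training steps,
    and each layer receives its own rate based on relative L1
    importance (an assumed criterion, for illustration only)."""
    # global sparsity schedule: ramp linearly from 0 to final_rate
    target = final_rate * (step + 1) / total_steps
    # layers with lower mean |w| are pruned more aggressively
    norms = np.array([np.abs(w).mean() for w in weights])
    scale = norms.max() / norms            # >= 1 for weaker layers
    rates = np.clip(target * scale / scale.mean(), 0.0, 0.9)
    pruned = []
    for w, r in zip(weights, rates):
        thresh = np.quantile(np.abs(w), r)  # magnitude threshold
        pruned.append(np.where(np.abs(w) >= thresh, w, 0.0))
    return pruned, rates
```

Calling this once per training step with an increasing `step` yields the gradual schedule described above.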

Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition

1 code implementation • 20 Jul 2023 • Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu

Although PTMs shed new light on artificial general intelligence, they are constructed with general tasks in mind, and thus, their efficacy for specific tasks can be further improved.

Speech Emotion Recognition

DWFormer: Dynamic Window transFormer for Speech Emotion Recognition

1 code implementation • 3 Mar 2023 • Shuaiqi Chen, Xiaofen Xing, Weibin Zhang, Weidong Chen, Xiangmin Xu

The self-attention mechanism is applied within windows to capture temporally important information locally in a fine-grained way.

Speech Emotion Recognition
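The windowed self-attention described above can be sketched minimally as follows. This is a single-head sketch with fixed, non-overlapping windows and no learned projections, meant only to illustrate locality; DWFormer's windows are dynamic, which this example does not model.

```python
import numpy as np

def windowed_self_attention(x, window):
    """Self-attention restricted to non-overlapping windows along time.

    x: (T, d) array of frame-level features. Each output frame attends
    only to frames within its own window (minimal sketch, no learned
    query/key/value projections)."""
    T, d = x.shape
    out = np.zeros_like(x)
    for start in range(0, T, window):
        seg = x[start:start + window]                  # (w, d) window
        scores = seg @ seg.T / np.sqrt(d)              # attention logits
        scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)       # row-wise softmax
        out[start:start + window] = attn @ seg         # weighted mix
    return out
```

Restricting attention to windows reduces the cost from O(T^2) to O(T * window) and focuses each frame on its local temporal context.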

SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing

1 code implementation • 27 Feb 2023 • Weidong Chen, Xiaofen Xing, Xiangmin Xu, Jianxin Pang, Lan Du

Paralinguistic speech processing is important in addressing many issues, such as sentiment and neurocognitive disorder analyses.

Alzheimer's Disease Detection • Speech Emotion Recognition

Compact Model Training by Low-Rank Projection with Energy Transfer

1 code implementation • 12 Apr 2022 • Kailing Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu

In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance.

Low-rank compression
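The core low-rank projection step can be sketched with a truncated SVD; this shows only the basic projection onto rank-k matrices, while LRPET's energy-transfer rescaling is omitted here.

```python
import numpy as np

def low_rank_project(w, rank):
    """Project a weight matrix onto the set of rank-`rank` matrices.

    Truncated SVD gives the best rank-k approximation in Frobenius
    norm (Eckart-Young). LRPET's energy-transfer step, which
    compensates for the energy removed by truncation, is not shown."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]
```

Applying this projection periodically during training keeps the network close to a low-rank (hence compressible) parameterization while still optimizing with standard gradient descent.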

Weight Evolution: Improving Deep Neural Networks Training through Evolving Inferior Weight Values

1 code implementation • 9 Oct 2021 • Zhenquan Lin, Kailing Guo, Xiaofen Xing, Xiangmin Xu

Comprehensive experiments show that WE outperforms the other reactivation methods and plug-in training methods with typical convolutional neural networks, especially lightweight networks.

Listwise View Ranking for Image Cropping

1 code implementation • 14 May 2019 • Weirui Lu, Xiaofen Xing, Bolun Cai, Xiangmin Xu

However, the performance of ranking-based methods is often poor, mainly for two reasons: 1) image cropping is a listwise ranking task rather than a pairwise comparison; 2) the rescaling caused by the pooling layer and the deformation introduced during view generation damage composition learning.

Image Cropping
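The listwise-versus-pairwise distinction above can be illustrated with a generic listwise objective. The ListNet-style loss below scores all candidate crops of an image jointly, rather than comparing crops two at a time; it is a standard listwise loss shown for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def listwise_loss(scores, targets):
    """ListNet-style listwise ranking loss.

    Cross-entropy between the softmax distribution over predicted view
    scores and the softmax over ground-truth scores for all candidate
    crops of one image (generic illustration of a listwise objective)."""
    def softmax(v):
        e = np.exp(v - v.max())        # shift for numerical stability
        return e / e.sum()
    p, q = softmax(targets), softmax(scores)
    return -np.sum(p * np.log(q + 1e-12))
```

Because the loss couples every candidate's score through the softmax, improving the rank of one crop necessarily reshapes the whole predicted ordering, which pairwise losses do not enforce.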

BIT: Biologically Inspired Tracker

1 code implementation • 23 Apr 2019 • Bolun Cai, Xiangmin Xu, Xiaofen Xing, Kui Jia, Jie Miao, DaCheng Tao

Visual tracking is challenging due to image variations caused by various factors, such as object deformation, scale change, illumination change and occlusion.

Visual Tracking

Manifold Regularized Slow Feature Analysis for Dynamic Texture Recognition

no code implementations • 9 Jun 2017 • Jie Miao, Xiangmin Xu, Xiaofen Xing, DaCheng Tao

However, complex temporal variations require high-level semantic representations to fully achieve temporal slowness, and thus it is impractical to learn a high-level representation from dynamic textures directly by SFA.

Dynamic Texture Recognition • Scene Recognition
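The temporal slowness that SFA optimizes has a simple quantitative form: the mean squared discrete-time derivative of a feature sequence. A minimal sketch of that measure:

```python
import numpy as np

def slowness(y):
    """Temporal slowness of a feature sequence y with shape (T, d).

    SFA seeks features that minimize this quantity: the mean squared
    frame-to-frame change. Lower values mean slower, more stable
    features over time."""
    return np.mean(np.diff(y, axis=0) ** 2)
```

A representation that varies rapidly from frame to frame scores high, while a semantically stable one scores near zero, which is why simple low-level features struggle to achieve slowness on complex dynamic textures.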
