Search Results for author: Bing Xu

Found 29 papers, 15 papers with code

HITMI&T at SemEval-2022 Task 4: Investigating Task-Adaptive Pretraining And Attention Mechanism On PCL Detection

no code implementations SemEval (NAACL) 2022 Zihang Liu, Yancheng He, Feiqing Zhuang, Bing Xu

For subtask 1, i.e. judging whether a sentence is PCL, we retrain the model on task-specific data and represent each sentence by concatenating [CLS] with the keyword representations of the last three layers; for subtask 2, i.e. judging the PCL type of the sentence, in addition to the same method as subtask 1, we apply a loss function designed for multi-label text classification.

Multi Label Text Classification Multi-Label Text Classification +2
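
A minimal sketch of the sentence-representation idea described in this entry, assuming a Hugging Face transformers encoder; the checkpoint name is a placeholder (the paper uses a task-adaptively pretrained model), and the keyword-representation part is omitted:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint; the paper's task-adaptively pretrained model would be used instead.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

sentence = "They need our help to get back on their feet."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states holds the embedding layer plus every encoder layer;
# take the [CLS] vector of each of the last three layers and concatenate them.
cls_last3 = [h[:, 0, :] for h in outputs.hidden_states[-3:]]
sentence_repr = torch.cat(cls_last3, dim=-1)  # shape: (1, 3 * hidden_size)
```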

CAN-GRU: a Hierarchical Model for Emotion Recognition in Dialogue

no code implementations CCL 2020 Ting Jiang, Bing Xu, Tiejun Zhao, Sheng Li

In the first layer, in order to extract textual features of utterances, we propose a convolutional self-attention network (CAN).

Emotion Recognition Opinion Mining
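
A rough sketch of a convolution-plus-self-attention block for utterance features, as a reading aid only; the dimensions, pooling, and exact composition here are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Illustrative convolution + self-attention block for utterance features.
    Sizes and composition are assumptions, not the paper's CAN architecture."""
    def __init__(self, dim=300, heads=4, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                 # x: (batch, seq_len, dim) word embeddings
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local n-gram features
        out, _ = self.attn(h, h, h)                       # self-attention over convolved features
        return out.mean(dim=1)                            # one feature vector per utterance

utterances = torch.randn(8, 20, 300)        # 8 utterances, 20 tokens, 300-d embeddings
features = ConvSelfAttention()(utterances)  # (8, 300), fed to a GRU over the dialogue
```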

5G Direct Position Estimation for Precise Localization in Dense Urban Area

no code implementations25 Feb 2025 Sijia Li, Sergio Vicenzo, Bing Xu

In recent years, the fifth-generation (5G) new radio (NR) signals have emerged as a promising supplementary resource for urban navigation.

Position

Multipath Mitigation Technology-integrated GNSS Direct Position Estimation Plug-in Module

no code implementations20 Nov 2024 Sergio Vicenzo, Bing Xu

To encourage further research on DPE by the GNSS community, we propose a DPE plug-in module that can be integrated into the conventional 2SP software-defined receivers (SDRs).

Position

PMoL: Parameter Efficient MoE for Preference Mixing of LLM Alignment

no code implementations2 Nov 2024 Dongxu Liu, Bing Xu, Yinzhuo Chen, Bufan Xu, Wenpeng Lu, Muyun Yang, Tiejun Zhao

Reinforcement Learning from Human Feedback (RLHF) has been proven to be an effective method for preference alignment of large language models (LLMs) and is widely used in the post-training process of LLMs.

Mitigating the Bias of Large Language Model Evaluation

1 code implementation25 Sep 2024 Hongli Zhou, Hui Huang, Yunfei Long, Bing Xu, Conghui Zhu, Hailong Cao, Muyun Yang, Tiejun Zhao

Recently, there has been a trend of evaluating Large Language Model (LLM) output quality in the LLM-as-a-Judge fashion, namely leveraging another LLM to evaluate the current output.

Instruction Following Language Modeling +3
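
One well-known bias of LLM-as-a-Judge is position bias: the verdict can change when the two candidate answers are swapped. A minimal probe is sketched below; `call_judge_llm` is a hypothetical helper standing in for whatever judge model is being evaluated, not an API from the paper:

```python
# Hypothetical prompt template for a pairwise judge.
PROMPT = (
    "You are a judge. Given the question and two answers, reply 'A' or 'B' "
    "for the better answer.\nQuestion: {q}\nAnswer A: {a}\nAnswer B: {b}"
)

def is_position_consistent(call_judge_llm, question, ans1, ans2):
    """Return True if the judge prefers the same underlying answer in both orderings."""
    first = call_judge_llm(PROMPT.format(q=question, a=ans1, b=ans2))
    swapped = call_judge_llm(PROMPT.format(q=question, a=ans2, b=ans1))
    return (first.strip() == "A") == (swapped.strip() == "B")
```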

Exploring Public Attention in the Circular Economy through Topic Modelling with Twin Hyperparameter Optimisation

1 code implementation16 May 2024 Junhao Song, Yingfang Yuan, Kaiwen Chang, Bing Xu, Jin Xuan, Wei Pang

To advance the circular economy (CE), it is crucial to gain insights into the evolution of public attention, cognitive pathways of the masses concerning circular products, and to identify primary concerns.

Dynamic Topic Modeling Hyperparameter Optimization +1

Self-Evaluation of Large Language Model based on Glass-box Features

1 code implementation7 Mar 2024 Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Bing Xu, Tiejun Zhao, Wenpeng Lu

The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods.

Language Modeling Language Modelling +1
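
One commonly used glass-box feature is the entropy of the model's output distribution; the sketch below computes it from raw logits. The specific feature set and aggregation used in the paper may differ:

```python
import torch
import torch.nn.functional as F

def mean_token_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Average per-token entropy of the softmax distribution.
    logits: (seq_len, vocab_size) raw scores collected during generation."""
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (seq_len,)
    return entropy.mean()

logits = torch.randn(12, 32000)                 # dummy logits for a 12-token output
confidence_feature = mean_token_entropy(logits)  # lower entropy ~ higher confidence
```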

Robust Causal Graph Representation Learning against Confounding Effects

1 code implementation18 Aug 2022 Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, Fuchun Sun

This observation reveals that there exist confounders in graphs, which may interfere with the model learning semantic information, and current graph representation learning methods have not eliminated their influence.

Graph Neural Network Graph Representation Learning

Multi-Grained Knowledge Distillation for Named Entity Recognition

no code implementations NAACL 2021 Xuan Zhou, Xiao Zhang, Chenyang Tao, Junya Chen, Bing Xu, Wei Wang, Jing Xiao

To maximally assimilate knowledge into the student model, we propose a multi-grained distillation scheme, which integrates cross entropy involved in conditional random field (CRF) and fuzzy learning. To validate the effectiveness of our proposal, we conducted a comprehensive evaluation on five NER benchmarks, reporting across-the-board performance gains relative to competing prior art.

Knowledge Distillation named-entity-recognition +2
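
A minimal sketch of the generic token-level part of such a distillation loss (soft KL term against the teacher plus hard cross entropy); the paper's scheme additionally integrates a CRF-level cross-entropy term and fuzzy learning, which are omitted here:

```python
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Token-level distillation: temperature-scaled KL against the teacher
    plus standard cross entropy on the gold tags. Illustrative only."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(2, 5, 9)             # student logits over 9 NER tags
t = torch.randn(2, 5, 9)             # teacher logits
y = torch.randint(0, 9, (2, 5))      # gold tag ids
loss = token_level_kd_loss(s, t, y)
```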

Biometric Blockchain: A Better Solution for the Security and Trust of Food Logistics

no code implementations21 Jul 2019 Bing Xu, Tobechukwu Agbele, Richard Jiang

The advantage of using BBC in food logistics is clear: it can not only identify whether the data or labels are authentic, but also clearly record who is responsible for the secured data or labels.

Training Deep Nets with Sublinear Memory Cost

6 code implementations21 Apr 2016 Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin

In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation.
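
The core trick is to drop intermediate activations and recompute them during the backward pass. The paper's reference implementation targets MXNet; a minimal PyTorch sketch of the same idea via gradient checkpointing:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A deep stack of layers; only the activations at segment boundaries are kept,
# everything in between is recomputed during the backward pass.
layers = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU()) for _ in range(64)])

x = torch.randn(32, 512, requires_grad=True)
out = checkpoint_sequential(layers, 8, x)  # 8 segments of the 64-layer stack
out.sum().backward()
```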

Revise Saturated Activation Functions

no code implementations18 Feb 2016 Bing Xu, Ruitong Huang, Mu Li

In this paper, we revise two commonly used saturated functions, the logistic sigmoid and the hyperbolic tangent (tanh).
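
As an illustration of the kind of revision studied, below is a leaky-style "penalized" tanh that scales the negative half; the exact variants and parameters proposed in the paper may differ, so treat this as a sketch only:

```python
import torch

def penalized_tanh(x: torch.Tensor, a: float = 0.25) -> torch.Tensor:
    """tanh with its negative half scaled down by a factor a (0 < a < 1).
    Illustrative only; see the paper for the precise revisions of sigmoid and tanh."""
    t = torch.tanh(x)
    return torch.where(t > 0, t, a * t)

x = torch.linspace(-3, 3, steps=7)
print(penalized_tanh(x))
```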

MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems

2 code implementations3 Dec 2015 Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, Zheng Zhang

This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion.

BIG-bench Machine Learning Clustering +2
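
A minimal sketch of the two programming styles the paper unifies, assuming the classic MXNet 1.x Python API (the project has since been retired to the Apache Attic):

```python
import mxnet as mx

# Imperative style: NDArray operations execute eagerly.
a = mx.nd.ones((2, 3))
b = a * 2 + 1

# Symbolic style: declare a computation graph first, then bind data to it later.
data = mx.sym.Variable("data")
net = mx.sym.FullyConnected(data=data, num_hidden=10, name="fc1")
net = mx.sym.Activation(data=net, act_type="relu", name="relu1")
print(net.list_arguments())  # ['data', 'fc1_weight', 'fc1_bias']
```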

Learning with a Strong Adversary

1 code implementation10 Nov 2015 Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvari

The robustness of neural networks to intended perturbations has recently attracted significant attention.

General Classification
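
The paper casts training as a min-max game against a strong adversary; the sketch below shows only a simple gradient-based inner maximization step (the sign of the loss gradient), not the paper's full formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def strongest_step_perturbation(model, x, y, eps=0.1):
    """One gradient step toward the perturbation that most increases the loss
    within an eps-ball. Illustrative inner step only."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))   # toy classifier
x, y = torch.rand(16, 1, 28, 28), torch.randint(0, 10, (16,))
x_adv = strongest_step_perturbation(model, x, y)
```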

Empirical Evaluation of Rectified Activations in Convolutional Network

2 code implementations5 May 2015 Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li

In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).

General Classification Image Classification
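
All four activations compared in the paper are available in PyTorch; a quick sketch, where the RReLU bounds shown are the library defaults rather than any specific experimental setting from the paper:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 8)

relu = nn.ReLU()                            # max(0, x)
leaky = nn.LeakyReLU(negative_slope=0.01)   # fixed small slope for x < 0
prelu = nn.PReLU()                          # negative slope learned during training
rrelu = nn.RReLU(lower=1/8, upper=1/3)      # slope sampled at train time, averaged at test time

for act in (relu, leaky, prelu, rrelu):
    print(type(act).__name__, act(x).shape)
```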

Generative Adversarial Nets

1 code implementation NeurIPS 2014 Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

We propose a new framework for estimating generative models via adversarial nets, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.

Generative Adversarial Networks
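
A compact sketch of the two-player training procedure described above; the network architectures and hyperparameters are placeholders, not the paper's experimental setup:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(32, 784)   # stand-in for a batch of real data
z = torch.randn(32, 64)      # noise for the generator

# Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(G(z).detach()), torch.zeros(32, 1))
d_loss.backward()
opt_d.step()

# Generator step: maximize the probability of D making a mistake on G(z).
opt_g.zero_grad()
g_loss = bce(D(G(z)), torch.ones(32, 1))
g_loss.backward()
opt_g.step()
```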

185 code implementations Proceedings of the 27th International Conference on Neural Information Processing Systems 2014 Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.
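
The adversarial process summarized above corresponds to the two-player minimax game over the value function V(D, G):

\[
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
\]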

Combination of Diverse Ranking Models for Personalized Expedia Hotel Searches

no code implementations29 Nov 2013 Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang

The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, user's aggregated purchase history and competitive online travel agency information for each potential hotel choice.

BIG-bench Machine Learning Feature Engineering

Horizontal and Vertical Ensemble with Deep Representation for Classification

no code implementations12 Jun 2013 Jingjing Xie, Bing Xu, Zhang Chuang

Representation learning, especially by means of deep learning, has been widely applied in classification.

Classification General Classification +1
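
The common core of such ensembling is averaging class probabilities from several predictors; a minimal sketch follows. How the members are produced (different epochs, depths, or representations, as in the paper's horizontal and vertical schemes) is left unspecified here:

```python
import numpy as np

def average_probability_ensemble(prob_list):
    """Average class-probability predictions from several ensemble members
    and take the argmax. Illustrative only; member construction is unspecified."""
    stacked = np.stack(prob_list, axis=0)       # (n_members, n_samples, n_classes)
    return stacked.mean(axis=0).argmax(axis=1)  # (n_samples,) predicted class ids

members = [np.random.dirichlet(np.ones(10), size=5) for _ in range(3)]
labels = average_probability_ensemble(members)
```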
