no code implementations • SemEval (NAACL) 2022 • Zihang Liu, Yancheng He, Feiqing Zhuang, Bing Xu
For subtask 1, judging whether a sentence contains PCL, the model is retrained on task-specific data, and the [CLS] token is spliced with the keyword representations from the last three layers to form the sentence representation; for subtask 2, judging the PCL type of the sentence, the same method as in subtask 1 is used, together with a loss function designed for multi-label text classification.
Multi-Label Text Classification
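As a rough illustration of the representation splicing described above, the sketch below pools the [CLS] token from the final layer and keyword-position hidden states from the last three BERT layers, then concatenates them into a sentence vector. The model name, example sentence, and keyword positions are illustrative assumptions, not the authors' setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical choices: base model, sentence, and keyword token positions.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

sentence = "They need our help, the poor things."
keyword_positions = [4]  # assumed indices of PCL keyword tokens

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states            # embeddings + one tensor per layer
cls_vec = hidden_states[-1][:, 0, :]             # [CLS] from the final layer
keyword_vecs = [hidden_states[layer][:, keyword_positions, :].mean(dim=1)
                for layer in (-1, -2, -3)]       # keyword states from the last three layers
sentence_repr = torch.cat([cls_vec, *keyword_vecs], dim=-1)  # spliced sentence representation
print(sentence_repr.shape)                        # (1, 4 * hidden_size)
```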
no code implementations • CCL 2020 • Ting Jiang, Bing Xu, Tiejun Zhao, Sheng Li
In the first layer, to extract textual features of utterances, we propose a convolutional self-attention network (CAN).
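A minimal sketch of this general idea, combining a convolution for local features with self-attention for global interactions, is given below. It is a generic PyTorch composition under assumed layer sizes, not the exact CAN architecture of the paper.

```python
import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    """Generic convolution + self-attention layer for utterance features (a sketch)."""
    def __init__(self, dim, kernel_size=3, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                      # x: (batch, seq_len, dim)
        local = self.conv(x.transpose(1, 2)).transpose(1, 2)   # local n-gram features
        attended, _ = self.attn(local, local, local)           # global token interactions
        return self.norm(x + attended)                         # residual + normalization

# Usage: a batch of 2 utterances, 10 tokens each, 64-dim embeddings.
layer = ConvSelfAttention(64)
out = layer(torch.randn(2, 10, 64))
```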
no code implementations • 25 Feb 2025 • Sijia Li, Sergio Vicenzo, Bing Xu
In recent years, fifth-generation (5G) new radio (NR) signals have emerged as a promising supplementary resource for urban navigation.
1 code implementation • 17 Feb 2025 • Hui Huang, Jiaheng Liu, Yancheng He, Shilong Li, Bing Xu, Conghui Zhu, Muyun Yang, Tiejun Zhao
Complex instruction-following with elaborate constraints is imperative for Large Language Models (LLMs).
no code implementations • 20 Nov 2024 • Sergio Vicenzo, Bing Xu
To encourage further research on direct position estimation (DPE) by the GNSS community, we propose a DPE plug-in module that can be integrated into conventional two-step positioning (2SP) software-defined receivers (SDRs).
no code implementations • 2 Nov 2024 • Dongxu Liu, Bing Xu, Yinzhuo Chen, Bufan Xu, Wenpeng Lu, Muyun Yang, Tiejun Zhao
Reinforcement Learning from Human Feedback (RLHF) has been proven to be an effective method for preference alignment of large language models (LLMs) and is widely used in the post-training process of LLMs.
1 code implementation • 25 Sep 2024 • Hongli Zhou, Hui Huang, Yunfei Long, Bing Xu, Conghui Zhu, Hailong Cao, Muyun Yang, Tiejun Zhao
Recently, there has been a trend of evaluating Large Language Model (LLM) output quality in the style of LLM-as-a-Judge, namely leveraging another LLM to evaluate the quality of the current output.
1 code implementation • 16 May 2024 • Junhao Song, Yingfang Yuan, Kaiwen Chang, Bing Xu, Jin Xuan, Wei Pang
To advance the circular economy (CE), it is crucial to gain insights into the evolution of public attention and the public's cognitive pathways concerning circular products, and to identify their primary concerns.
1 code implementation • 7 Mar 2024 • Hui Huang, Yingqi Qu, Jing Liu, Muyun Yang, Bing Xu, Tiejun Zhao, Wenpeng Lu
The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods.
1 code implementation • 5 Mar 2024 • Hui Huang, Yingqi Qu, Xingyuan Bu, Hongli Zhou, Jing Liu, Muyun Yang, Bing Xu, Tiejun Zhao
Alternatively, other works have fine-tuned judge models based on open-source LLMs to serve as evaluators.
1 code implementation • 18 Aug 2022 • Hang Gao, Jiangmeng Li, Wenwen Qiang, Lingyu Si, Bing Xu, Changwen Zheng, Fuchun Sun
This observation reveals that confounders exist in graphs, which may interfere with the model's learning of semantic information, and that current graph representation learning methods have not eliminated their influence.
1 code implementation • Findings (NAACL) 2022 • Zhen Li, Bing Xu, Conghui Zhu, Tiejun Zhao
Compared with unimodal data, multimodal data can provide more features to help the model analyze sentiment.
no code implementations • NAACL 2021 • Xuan Zhou, Xiao Zhang, Chenyang Tao, Junya Chen, Bing Xu, Wei Wang, Jing Xiao
To maximally assimilate knowledge into the student model, we propose a multi-grained distillation scheme, which integrates the cross entropy involved in the conditional random field (CRF) with fuzzy learning. To validate the effectiveness of our proposal, we conducted a comprehensive evaluation on five NER benchmarks, reporting across-the-board performance gains relative to competing prior art.
no code implementations • SEMEVAL 2020 • Zhen Li, Yaojie Zhang, Bing Xu, Tiejun Zhao
Emotion recognition for Internet memes has drawn the attention of many researchers.
no code implementations • 19 Nov 2019 • Bing Xu, Andrew Tulloch, Yunpeng Chen, Xiaomeng Yang, Lin Qiao
We propose a new building block, IdleBlock, which naturally prunes connections within the block.
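The sketch below illustrates the idle-connection idea under the assumption that a fixed fraction of input channels bypasses the block's transform entirely and is concatenated back to the output; the inner transform and the idle ratio are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class IdleBlock(nn.Module):
    """Sketch: part of the input stays idle (uncomputed), the rest is transformed."""
    def __init__(self, channels, idle_ratio=0.5):
        super().__init__()
        self.idle = int(channels * idle_ratio)       # channels that skip computation
        active = channels - self.idle
        self.transform = nn.Sequential(               # an assumed depthwise-separable transform
            nn.Conv2d(active, active, 3, padding=1, groups=active),
            nn.BatchNorm2d(active),
            nn.ReLU(inplace=True),
            nn.Conv2d(active, active, 1),
            nn.BatchNorm2d(active),
        )

    def forward(self, x):
        idle, active = x[:, :self.idle], x[:, self.idle:]
        return torch.cat([idle, self.transform(active)], dim=1)

# Usage: output keeps the same shape as the input.
block = IdleBlock(64, idle_ratio=0.5)
y = block(torch.randn(1, 64, 32, 32))
```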
no code implementations • 21 Jul 2019 • Bing Xu, Tobechukwu Agbele, Richard Jiang
The advantage of using BBC in food logistics is clear: it can not only identify whether the data or labels are authentic, but also clearly record who is responsible for the secured data or labels.
no code implementations • SEMEVAL 2019 • Yaojie Zhang, Bing Xu, Tiejun Zhao
Our macro-averaged F1-score in sub-task A is 0.768, ranking 28th out of 103.
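For reference, the macro-averaged F1 metric simply averages per-class F1 scores with equal weight per class; a toy computation with scikit-learn (the labels here are made up) looks like this:

```python
from sklearn.metrics import f1_score

# Macro-F1: compute F1 for each class, then take the unweighted mean.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
print(f1_score(y_true, y_pred, average="macro"))
```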
28 code implementations • ICCV 2019 • Yunpeng Chen, Haoqi Fan, Bing Xu, Zhicheng Yan, Yannis Kalantidis, Marcus Rohrbach, Shuicheng Yan, Jiashi Feng
Similarly, the output feature maps of a convolution layer can also be seen as a mixture of information at different frequencies.
Ranked #151 on Action Classification on Kinetics-400
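A minimal sketch of an octave convolution in this spirit is shown below: channels are split into a full-resolution high-frequency path and a half-resolution low-frequency path, with four convolutions exchanging information between the two. The split ratio and layer sizes are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """Sketch of octave convolution: separate high- and low-frequency feature paths."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        in_lo = int(alpha * in_ch)
        in_hi = in_ch - in_lo
        out_lo = int(alpha * out_ch)
        out_hi = out_ch - out_lo
        pad = kernel_size // 2
        self.hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=pad)  # high -> high
        self.hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=pad)  # high -> low
        self.lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=pad)  # low  -> high
        self.ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=pad)  # low  -> low

    def forward(self, x_hi, x_lo):
        # High-frequency output: same-resolution conv plus upsampled low-frequency path.
        y_hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2, mode="nearest")
        # Low-frequency output: conv on the low path plus pooled high-frequency path.
        y_lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return y_hi, y_lo

# Usage: 64 channels split 50/50 between frequencies; low path is half resolution.
oct_conv = OctaveConv(64, 64, alpha=0.5)
y_hi, y_lo = oct_conv(torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16))
```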
no code implementations • SEMEVAL 2017 • Jingjing Zhao, Yan Yang, Bing Xu
A CNN method for the sentiment classification task in Task 4A of SemEval 2017 is presented.
6 code implementations • 21 Apr 2016 • Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin
In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation.
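In practice, this trade of recomputation for memory can be tried with PyTorch's built-in sequential checkpointing; the sketch below is a generic illustration (layer sizes and segment count are arbitrary), not the paper's own implementation. Only activations at segment boundaries are kept, and the rest are recomputed during the backward pass.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# A deep stack whose activations would normally all be stored for backprop.
layers = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU()) for _ in range(64)])

x = torch.randn(32, 512, requires_grad=True)

# Split into 8 segments (roughly sqrt of depth): activation memory drops from
# O(n) to roughly O(sqrt(n)) at the cost of one extra forward pass per segment.
out = checkpoint_sequential(layers, 8, x)
out.sum().backward()
```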
no code implementations • 18 Feb 2016 • Bing Xu, Ruitong Huang, Mu Li
In this paper, we revise two commonly used saturated functions, the logistic sigmoid and the hyperbolic tangent (tanh).
2 code implementations • 3 Dec 2015 • Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, Zheng Zhang
This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion.
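A small sketch of that mixed style, assuming the classic MXNet 1.x API (the project is now archived): NDArray operations run imperatively, a Symbol graph is declared and then bound and executed, and the two can feed each other.

```python
import mxnet as mx

# Imperative tensor operations (NDArray): executed eagerly, like NumPy.
a = mx.nd.ones((2, 3))
b = a * 2 + 1

# Symbolic expression: declare the computation graph first, then bind and run it.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=4)
net = mx.sym.Activation(data=net, act_type='relu')

executor = net.simple_bind(ctx=mx.cpu(), data=(2, 3))
executor.arg_dict['fc1_weight'][:] = 0.1   # fill parameters so the run is deterministic
executor.arg_dict['fc1_bias'][:] = 0.0
executor.forward(is_train=False, data=b)    # feed the imperatively computed NDArray
print(executor.outputs[0].asnumpy())
```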
1 code implementation • 10 Nov 2015 • Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvari
The robustness of neural networks to intended perturbations has recently attracted significant attention.
2 code implementations • 5 May 2015 • Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), the leaky rectified linear unit (Leaky ReLU), the parametric rectified linear unit (PReLU), and a new randomized leaky rectified linear unit (RReLU).
Ranked #200 on Image Classification on CIFAR-100
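All four activations are available in PyTorch, so a side-by-side comparison is straightforward; the sketch below simply evaluates each on a small input (the slope settings are library defaults or arbitrary choices, not tuned values from the paper).

```python
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, steps=7)

activations = {
    "ReLU": nn.ReLU(),                         # max(0, x)
    "LeakyReLU": nn.LeakyReLU(0.01),           # fixed small negative slope
    "PReLU": nn.PReLU(init=0.25),              # negative slope learned during training
    "RReLU": nn.RReLU(lower=1/8, upper=1/3),   # negative slope sampled uniformly at train time
}

for name, act in activations.items():
    print(name, act(x))
```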
1 code implementation • NeurIPS 2014 • Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
We propose a new framework for estimating generative models via adversarial nets, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.
185 code implementations • Proceedings of the 27th International Conference on Neural Information Processing Systems 2014 • Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake.
Super-Resolution
Time-Series Few-Shot Learning with Heterogeneous Channels
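A toy version of the minimax training procedure described above, using small MLPs and a synthetic 2-D "data distribution" purely for illustration; the non-saturating generator loss (labeling generated samples as real) is used, as is common in practice.

```python
import torch
import torch.nn as nn

# G maps noise to samples; D outputs the probability a sample came from the data rather than G.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0            # stand-in "data distribution"
    fake = G(torch.randn(64, 8))

    # Train D to distinguish real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train G to maximize the probability of D making a mistake.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```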
no code implementations • 29 Nov 2013 • Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang
The ICDM Challenge 2013 applies machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, the location attractiveness of hotels, users' aggregated purchase history, and competitive online travel agency information for each potential hotel choice.
12 code implementations • 1 Jul 2013 • Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, Yoshua Bengio
The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge.
Ranked #15 on Facial Expression Recognition (FER) on FER2013
no code implementations • 12 Jun 2013 • Jingjing Xie, Bing Xu, Zhang Chuang
Representation learning, especially that based on deep learning, has been widely applied in classification.