Search Results for author: Jing Lu

Found 49 papers, 15 papers with code

Conundrums in Event Coreference Resolution: Making Sense of the State of the Art

no code implementations EMNLP 2021 Jing Lu, Vincent Ng

Despite recent promising results on the application of span-based models for event reference interpretation, there is a lack of understanding of what has been improved.

coreference-resolution Event Coreference Resolution

Multi-stage Training with Improved Negative Contrast for Neural Passage Retrieval

no code implementations EMNLP 2021 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In the context of neural passage retrieval, we study three promising techniques: synthetic data generation, negative sampling, and fusion.

Passage Retrieval Retrieval +1

Conundrums in Entity Coreference Resolution: Making Sense of the State of the Art

no code implementations EMNLP 2020 Jing Lu, Vincent Ng

Despite the significant progress on entity coreference resolution observed in recent years, there is a general lack of understanding of what has been improved.

coreference-resolution

Active headrest combined with a depth camera-based ear-positioning system

no code implementations25 Dec 2023 Yuteng Liu, Haowen Li, Haishan Zou, Jing Lu, Zhibin Lin

Active headrests can reduce low-frequency noise around the ears using an active noise control (ANC) system.

Position

LabelCraft: Empowering Short Video Recommendations with Automated Label Crafting

1 code implementation18 Dec 2023 Yimeng Bai, Yang Zhang, Jing Lu, Jianxin Chang, Xiaoxue Zang, Yanan Niu, Yang song, Fuli Feng

Through meta-learning techniques, LabelCraft effectively addresses the bi-level optimization hurdle posed by the recommender and labeling models, enabling the automatic acquisition of intricate label generation mechanisms. Extensive experiments on real-world datasets corroborate LabelCraft's excellence across varied operational metrics, encompassing usage time, user engagement, and retention.

Meta-Learning Model Optimization
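
The bi-level pattern described above, where a labeling model supervises the recommender and is itself tuned by meta-learning, can be sketched in a few lines. This is a toy illustration under assumed shapes and a hypothetical meta objective, not the authors' LabelCraft code.

```python
import torch

# Toy bi-level sketch: a labeler turns raw feedback into training labels, the
# recommender takes one differentiable gradient step on those labels, and the
# labeler is updated through that step against a held-out meta objective.
torch.manual_seed(0)
d = 8
x = torch.randn(64, d)                  # synthetic user/item features
feedback = torch.randn(64, 3)           # synthetic raw signals (e.g. time, likes, follows)
meta_x, meta_y = torch.randn(32, d), torch.rand(32, 1)   # hypothetical meta data

w_rec = torch.zeros(d, 1, requires_grad=True)   # toy linear recommender
w_lab = torch.zeros(3, 1, requires_grad=True)   # toy linear labeler
opt_lab = torch.optim.SGD([w_lab], lr=1e-2)
inner_lr = 0.1

for step in range(200):
    labels = torch.sigmoid(feedback @ w_lab)                  # crafted labels
    inner_loss = ((x @ w_rec - labels) ** 2).mean()
    g = torch.autograd.grad(inner_loss, w_rec, create_graph=True)[0]
    w_rec_lookahead = w_rec - inner_lr * g                    # differentiable inner step
    meta_loss = ((meta_x @ w_rec_lookahead - meta_y) ** 2).mean()
    opt_lab.zero_grad()
    meta_loss.backward()                                      # gradients reach the labeler
    opt_lab.step()
    w_rec.data -= inner_lr * g.detach()                       # commit the recommender step
```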

Personalized speech enhancement combining band-split RNN and speaker attentive module

no code implementations20 Feb 2023 Xiaohuai Le, Li Chen, Chao He, Yiqing Guo, Cheng Chen, Xianjun Xia, Jing Lu

Target speaker information can be utilized in speech enhancement (SE) models to more effectively extract the desired speech.

Speech Enhancement

TWIN: TWo-stage Interest Network for Lifelong User Behavior Modeling in CTR Prediction at Kuaishou

no code implementations5 Feb 2023 Jianxin Chang, Chenbin Zhang, Zhiyi Fu, Xiaoxue Zang, Lin Guan, Jing Lu, Yiqun Hui, Dewei Leng, Yanan Niu, Yang song, Kun Gai

For the user-item cross features, we compress each into a one-dimensional bias term in the attention score calculation to save computational cost.

Click-Through Rate Prediction
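
As a rough illustration of folding a user-item cross feature into the attention score as a single scalar bias, a minimal numpy sketch follows; the projections, shapes, and feature splits are placeholders rather than TWIN's actual architecture.

```python
import numpy as np

def attention_with_bias(query, behavior_keys, behavior_values, cross_bias):
    """Scaled dot-product attention over a behavior sequence where each
    user-item cross feature enters only as a scalar bias on the score.

    query:           (d,)    target-item representation
    behavior_keys:   (n, d)  one key per historical behavior
    behavior_values: (n, d)
    cross_bias:      (n,)    precomputed scalar per (user, behavior) cross feature
    """
    d = query.shape[0]
    scores = behavior_keys @ query / np.sqrt(d) + cross_bias   # bias added to the scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ behavior_values                           # attended user interest

rng = np.random.default_rng(0)   # hypothetical shapes for a quick check
out = attention_with_bias(rng.normal(size=16), rng.normal(size=(50, 16)),
                          rng.normal(size=(50, 16)), rng.normal(size=50))
```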

Few-Shot Class-Incremental Learning via Class-Aware Bilateral Distillation

1 code implementation CVPR 2023 Linglan Zhao, Jing Lu, Yunlu Xu, Zhanzhan Cheng, Dashan Guo, Yi Niu, Xiangzhong Fang

While knowledge distillation, a prevailing technique in CIL, can alleviate the catastrophic forgetting of older classes by regularizing outputs between the current and previous models, it fails to consider the overfitting risk of novel classes in FSCIL.

Few-Shot Class-Incremental Learning General Knowledge +3
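
The distillation term mentioned above, which regularizes the current model's outputs toward those of the previous-session model, is commonly implemented as a temperature-softened KL divergence; the sketch below shows that generic form, not the paper's class-aware bilateral variant, and lambda_kd and num_old_classes are hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic output distillation: KL divergence between temperature-softened
    predictions of the previous-session (teacher) and current (student) models."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# usage on an incremental session (weighting and class split are placeholders):
# loss = F.cross_entropy(student_logits, labels) \
#        + lambda_kd * distillation_loss(student_logits[:, :num_old_classes], teacher_logits)
```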

HyperMatch: Noise-Tolerant Semi-Supervised Learning via Relaxed Contrastive Constraint

no code implementations CVPR 2023 Beitong Zhou, Jing Lu, Kerui Liu, Yunlu Xu, Zhanzhan Cheng, Yi Niu

Recent applications of Contrastive Learning in Semi-Supervised Learning (SSL) have demonstrated significant advancements, owing to its exceptional ability to learn class-aware cluster representations and to fully exploit massive unlabeled data.

Contrastive Learning

Distributed Active Noise Control System Based on a Block Diffusion FxLMS Algorithm with Bidirectional Communication

no code implementations28 Dec 2022 Tianyou Li, Hongji Duan, Sipei Zhao, Jing Lu, Ian S. Burnett

Recently, distributed active noise control systems based on diffusion adaptation have attracted significant research interest due to their balance between computational complexity and stability compared to conventional centralized and decentralized adaptation schemes.
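
For background, the diffusion variants build on the single-channel FxLMS update, in which the reference signal is filtered through an estimate of the secondary path before the LMS weight update. A minimal numpy sketch under placeholder path models (not the block-diffusion algorithm of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
x = rng.normal(size=N)                    # reference noise signal
P = 0.1 * rng.normal(size=32)             # primary path (placeholder)
S = np.array([0.0, 0.5, 0.3, 0.1])        # secondary path (placeholder)
S_hat = S.copy()                          # assume a perfect secondary-path estimate

L = 64                                    # control filter length
w = np.zeros(L)
mu = 1e-3

d = np.convolve(x, P)[:N]                 # disturbance at the error microphone
x_buf = np.zeros(L)                       # reference buffer [x(n), x(n-1), ...]
fx_buf = np.zeros(L)                      # filtered-reference buffer
y_buf = np.zeros(len(S))                  # anti-noise buffer through the secondary path
e = np.zeros(N)

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                         # anti-noise sample
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    e[n] = d[n] + S @ y_buf               # residual picked up at the error microphone
    fx = S_hat @ x_buf[:len(S_hat)]       # reference filtered by the path estimate
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w -= mu * e[n] * fx_buf               # FxLMS weight update
```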

HYRR: Hybrid Infused Reranking for Passage Retrieval

no code implementations20 Dec 2022 Jing Lu, Keith Hall, Ji Ma, Jianmo Ni

We present Hybrid Infused Reranking for Passage Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models.

Passage Retrieval Retrieval

Distilling Object Detectors With Global Knowledge

1 code implementation17 Oct 2022 Sanli Tang, Zhongyu Zhang, Zhanzhan Cheng, Jing Lu, Yunlu Xu, Yi Niu, Fan He

Then, a robust distilling module (RDM) is applied to construct the global knowledge based on the prototypes and to filter out noisy global and local knowledge by measuring the discrepancy of the representations in the two feature spaces.

Knowledge Distillation Object +2

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations12 Oct 2022 Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.

Promptagator: Few-shot Dense Retrieval From 8 Examples

no code implementations23 Sep 2022 Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages a large language model (LLM) as a few-shot query generator and creates task-specific retrievers based on the generated data.

Information Retrieval Natural Questions +1
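
A rough sketch of the few-shot query-generation step: a handful of (document, query) exemplars are packed into a prompt and an LLM completes a query for each new document. The prompt wording and the `generate` callable are hypothetical, and the paper's round-trip filtering is not reproduced.

```python
from typing import Callable, List, Tuple

def build_fewshot_prompt(examples: List[Tuple[str, str]], new_document: str) -> str:
    """Assemble a few-shot prompt asking for a task-specific query for a new document."""
    parts = [f"Document: {doc}\nQuery: {query}\n" for doc, query in examples]
    parts.append(f"Document: {new_document}\nQuery:")
    return "\n".join(parts)

def generate_synthetic_queries(examples: List[Tuple[str, str]],
                               documents: List[str],
                               generate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """`generate` is any text-completion function (an LLM API call or local model)."""
    return [(doc, generate(build_fewshot_prompt(examples, doc)).strip())
            for doc in documents]
```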

Rapid-Flooding Time Synchronization for Large-Scale Wireless Sensor Networks

no code implementations30 Jul 2022 Fanrong Shi, Xianguo Tuo, Simon X. Yang, Jing Lu, Huailiang Li

Accurate and fast-convergent time synchronization is very important for wireless sensor networks.

Semi-blind source separation using convolutive transfer function for nonlinear acoustic echo cancellation

1 code implementation4 Jul 2022 Guoliang Cheng, Lele Liao, Kai Chen, Yuxiang Hu, Changbao Zhu, Jing Lu

The recently proposed semi-blind source separation (SBSS) method for nonlinear acoustic echo cancellation (NAEC) outperforms adaptive NAEC in attenuating the nonlinear acoustic echo.

Acoustic echo cancellation blind source separation

A light-weight full-band speech enhancement model

1 code implementation29 Jun 2022 Qinwen Hu, Zhongshu Hou, Xiaohuai Le, Jing Lu

Deep neural network based full-band speech enhancement systems face the challenges of high computational resource demands and imbalanced frequency distributions.

Speech Enhancement

PMAL: Open Set Recognition via Robust Prototype Mining

no code implementations16 Mar 2022 Jing Lu, Yunlu Xu, Hao Li, Zhanzhan Cheng, Yi Niu

Accordingly, the embedding space can be better optimized to discriminate among the predefined classes and between known and unknown samples.

Open Set Learning

Out-of-Domain Semantics to the Rescue! Zero-Shot Hybrid Retrieval Models

no code implementations25 Jan 2022 Tao Chen, Mingyang Zhang, Jing Lu, Michael Bendersky, Marc Najork

In this work, we carefully select five datasets, including two in-domain datasets and three out-of-domain datasets with different levels of domain shift, and study the generalization of a deep model in a zero-shot setting.

Language Modelling Passage Retrieval +1

Large Dual Encoders Are Generalizable Retrievers

2 code implementations15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization Retrieval +1

ICDAR 2021 Competition on Scene Video Text Spotting

no code implementations26 Jul 2021 Zhanzhan Cheng, Jing Lu, Baorui Zou, Shuigeng Zhou, Fei Wu

During the competition period (opened on 1st March 2021 and closed on 11th April 2021), a total of 24 teams participated in the three proposed tasks, submitting 46 valid entries.

Task 2 Text Detection +2

Constrained Multi-Task Learning for Event Coreference Resolution

1 code implementation NAACL 2021 Jing Lu, Vincent Ng

We propose a neural event coreference model in which event coreference is jointly trained with five tasks: trigger detection, entity coreference, anaphoricity determination, realis detection, and argument extraction.

coreference-resolution Event Coreference Resolution +1

Coronary Plaque Analysis for CT Angiography Clinical Research

no code implementations11 Jan 2021 Felix Denzinger, Michael Wels, Christian Hopfgartner, Jing Lu, Max Schöbinger, Andreas Maier, Michael Sühling

However, to enable clinical research with the help of these algorithms, a software solution, which enables manual correction, comprehensive visual feedback and tissue analysis capabilities, is needed.

Segmentation

Event Coreference Resolution with Non-Local Information

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 Jing Lu, Vincent Ng

We present two extensions to a state-of-the-art joint model for event coreference resolution, which involve incorporating (1) a supervised topic model for improving trigger detection by providing global context, and (2) a preprocessing module that seeks to improve event coreference by discarding unlikely candidate antecedents of an event mention using discourse contexts computed based on salient entities.

coreference-resolution Event Coreference Resolution

Semi-Blind Source Separation for Nonlinear Acoustic Echo Cancellation

1 code implementation25 Oct 2020 Guoliang Cheng, Lele Liao, Hongsheng Chen, Jing Lu

Unlike the commonly utilized adaptive algorithm, the proposed SBSS is based on the independence between the near-end signal and the reference signals, and is less sensitive to the mismatch of nonlinearity between the numerical and actual models.

Acoustic echo cancellation blind source separation

Neural Passage Retrieval with Improved Negative Contrast

no code implementations23 Oct 2020 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In this paper, we explore the effects of negative sampling in dual encoder models used to retrieve passages for automatic question answering.

Open-Domain Question Answering Passage Retrieval +3
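
The negative-contrast setups studied here build on the standard dual-encoder objective, where the other passages in a batch (plus any mined hard negatives) act as negatives in a softmax over similarities. A generic sketch of that baseline loss, with hypothetical encoder names:

```python
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_emb, passage_emb, temperature=0.05):
    """Dual-encoder loss with in-batch negatives.

    query_emb:   (B, d) encoded questions
    passage_emb: (B, d) encoded gold passages; passage j is a negative for query i != j
    """
    q = F.normalize(query_emb, dim=1)
    p = F.normalize(passage_emb, dim=1)
    logits = q @ p.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)     # diagonal entries are the positives

# usage with hypothetical encoders:
# loss = in_batch_negative_loss(question_encoder(q_batch), passage_encoder(p_batch))
```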

Kalman Filtering Attention for User Behavior Modeling in CTR Prediction

no code implementations NeurIPS 2020 Hu Liu, Jing Lu, Xiwei Zhao, Sulong Xu, Hao Peng, Yutong Liu, Zehua Zhang, Jian Li, Junsheng Jin, Yongjun Bao, Weipeng Yan

First, conventional attention mechanisms mostly limit the attention field to a single user's behaviors, which is not suitable in e-commerce, where users often hunt for new demands that are irrelevant to any historical behaviors.

Click-Through Rate Prediction

Improving Partition-Block-Based Acoustic Echo Canceler in Under-Modeling Scenarios

no code implementations10 Aug 2020 Wenzhi Fan, Jing Lu

Recently, a partitioned-block-based frequency-domain Kalman filter (PFKF) has been proposed for acoustic echo cancellation.

Acoustic echo cancellation

Category-Specific CNN for Visual-aware CTR Prediction at JD.com

no code implementations18 Jun 2020 Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan

Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late-fuse the visual and non-visual features for the final CTR prediction.

Click-Through Rate Prediction
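
The late-fusion baseline described above, where off-the-shelf CNN features are concatenated with non-visual features before the prediction head, looks roughly like the following sketch; the backbone, feature dimensions, and head are placeholders (torchvision >= 0.13 API assumed).

```python
import torch
import torch.nn as nn
import torchvision.models as models

class LateFusionCTR(nn.Module):
    """Toy late-fusion CTR model: frozen off-the-shelf CNN image features are
    concatenated with non-visual features and fed to an MLP head."""
    def __init__(self, nonvisual_dim=64, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=None)        # off-the-shelf CNN (placeholder)
        backbone.fc = nn.Identity()                     # expose the 512-d pooled features
        self.backbone = backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False                     # visual features are not fine-tuned
        self.head = nn.Sequential(
            nn.Linear(512 + nonvisual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, image, nonvisual):
        with torch.no_grad():
            v = self.backbone(image)                    # (B, 512) visual features
        x = torch.cat([v, nonvisual], dim=1)            # late fusion by concatenation
        return torch.sigmoid(self.head(x)).squeeze(1)   # predicted CTR

# ctr = LateFusionCTR()(torch.randn(4, 3, 224, 224), torch.randn(4, 64))
```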

Object-QA: Towards High Reliable Object Quality Assessment

no code implementations27 May 2020 Jing Lu, Baorui Zou, Zhanzhan Cheng, ShiLiang Pu, Shuigeng Zhou, Yi Niu, Fei Wu

In this paper, we define the problem of object quality assessment for the first time and propose an effective approach named Object-QA to assess highly reliable quality scores for object images.

Object Object Recognition +1

TRIE: End-to-End Text Reading and Information Extraction for Document Understanding

1 code implementation27 May 2020 Peng Zhang, Yunlu Xu, Zhanzhan Cheng, ShiLiang Pu, Jing Lu, Liang Qiao, Yi Niu, Fei Wu

Since real-world ubiquitous documents (e.g., invoices, tickets, resumes and leaflets) contain rich information, automatic document image understanding has become a hot topic.

document understanding

Nonlinear Residual Echo Suppression Based on Multi-stream Conv-TasNet

1 code implementation15 May 2020 Hongsheng Chen, Teng Xiang, Kai Chen, Jing Lu

Acoustic echo cannot be entirely removed by linear adaptive filters due to the nonlinear relationship between the echo and far-end signal.

Acoustic echo cancellation
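
As context for the residual-echo problem, the linear baseline is typically an adaptive filter (e.g. NLMS) identifying the echo path from the far-end signal to the microphone; any nonlinearity in the loudspeaker path leaves a residual that such a filter cannot remove. A minimal NLMS sketch with a synthetic, purely linear echo path:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 128
far_end = rng.normal(size=N)
echo_path = rng.normal(size=L) * np.exp(-np.arange(L) / 20)    # synthetic room response
mic = np.convolve(far_end, echo_path)[:N] + 0.01 * rng.normal(size=N)

w = np.zeros(L)                    # adaptive echo-path estimate
mu, eps = 0.5, 1e-6
x_buf = np.zeros(L)
err = np.zeros(N)

for n in range(N):
    x_buf = np.roll(x_buf, 1); x_buf[0] = far_end[n]
    echo_hat = w @ x_buf
    err[n] = mic[n] - echo_hat                          # residual sent back to the far end
    w += mu * err[n] * x_buf / (x_buf @ x_buf + eps)    # NLMS update
```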

Sampling Wisely: Deep Image Embedding by Top-K Precision Optimization

no code implementations ICCV 2019 Jing Lu, Chaofan Xu, Wei Zhang, Ling-Yu Duan, Tao Mei

Consequently, the gradient descent direction on the training loss is mostly inconsistent with the direction of optimizing the evaluation metric of interest.

Image Retrieval

Position Focused Attention Network for Image-Text Matching

1 code implementation23 Jul 2019 Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan

Then, an attention mechanism is proposed to model the relations between the image region and blocks and generate the valuable position feature, which will be further utilized to enhance the region expression and model a more reliable relationship between the visual image and the textual sentence.

Image-text matching Position +2

You Only Recognize Once: Towards Fast Video Text Spotting

1 code implementation8 Mar 2019 Zhanzhan Cheng, Jing Lu, Yi Niu, ShiLiang Pu, Fei Wu, Shuigeng Zhou

Video text spotting is still an important research topic due to its various real-world applications.

Text Detection Text Spotting

Online Learning: A Comprehensive Survey

no code implementations8 Feb 2018 Steven C. H. Hoi, Doyen Sahoo, Jing Lu, Peilin Zhao

Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time.

BIG-bench Machine Learning Decision Making
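
The online protocol surveyed here processes one instance at a time: predict, receive the label, suffer a loss, update the hypothesis. A minimal sketch with online logistic regression on a synthetic stream:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)        # unknown target concept (synthetic)
w = np.zeros(d)                    # learner's hypothesis
lr, mistakes, T = 0.1, 0, 5000

for t in range(T):
    x = rng.normal(size=d)                       # instance arrives
    y_hat = 1 if w @ x > 0 else -1               # predict before seeing the label
    y = 1 if w_true @ x > 0 else -1              # label revealed
    mistakes += int(y_hat != y)
    grad = -y * x / (1 + np.exp(y * (w @ x)))    # logistic loss gradient on this instance
    w -= lr * grad                               # update and move to the next instance

print(f"online mistake rate: {mistakes / T:.3f}")
```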

Online Deep Learning: Learning Deep Neural Networks on the Fly

4 code implementations10 Nov 2017 Doyen Sahoo, Quang Pham, Jing Lu, Steven C. H. Hoi

Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task.

Joint Learning for Event Coreference Resolution

no code implementations ACL 2017 Jing Lu, Vincent Ng

While joint models have been developed for many NLP tasks, the vast majority of event coreference resolvers, including the top-performing resolvers competing in the recent TAC KBP 2016 Event Nugget Detection and Coreference task, are pipeline-based, where the propagation of errors from the trigger detection component to the event coreference component is a major performance limiting factor.

coreference-resolution Event Coreference Resolution

Joint Inference for Event Coreference Resolution

no code implementations COLING 2016 Jing Lu, Deepak Venugopal, Vibhav Gogate, Vincent Ng

Event coreference resolution is a challenging problem since it relies on several components of the information extraction pipeline that typically yield noisy outputs.

coreference-resolution Event Coreference Resolution

SOL: A Library for Scalable Online Learning Algorithms

1 code implementation28 Oct 2016 Yue Wu, Steven C. H. Hoi, Chenghao Liu, Jing Lu, Doyen Sahoo, Nenghai Yu

SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data.

BIG-bench Machine Learning General Classification +1

Event Coreference Resolution with Multi-Pass Sieves

no code implementations LREC 2016 Jing Lu, Vincent Ng

Multi-pass sieve approaches have been successfully applied to entity coreference resolution and many other tasks in natural language processing (NLP), owing in part to the ease of designing high-precision rules for these tasks.

Avg coreference-resolution +2
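
A rough skeleton of the multi-pass sieve idea: an ordered list of high-precision rules is applied pass by pass, each pass merging the clusters built by the earlier, more precise passes. The mention representation and the example rule below are hypothetical placeholders, not the paper's sieves.

```python
from typing import Callable, Dict, List

Mention = Dict[str, object]   # e.g. {"id": 3, "trigger": "attack", "sentence": 7}
Rule = Callable[[List[Mention], List[Mention]], bool]   # should these clusters merge?

def multi_pass_sieve(mentions: List[Mention], sieves: List[Rule]) -> List[List[Mention]]:
    """Apply sieves in order of decreasing precision; later passes operate on the
    clusters produced by earlier passes."""
    clusters = [[m] for m in mentions]              # start from singleton clusters
    for rule in sieves:
        merged = True
        while merged:                               # rescan after every merge
            merged = False
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    if rule(clusters[i], clusters[j]):
                        clusters[i].extend(clusters.pop(j))
                        merged = True
                        break
                if merged:
                    break
    return clusters

def same_trigger_nearby(c1, c2, window=2):          # hypothetical high-precision sieve
    return any(m1["trigger"] == m2["trigger"] and abs(m1["sentence"] - m2["sentence"]) <= window
               for m1 in c1 for m2 in c2)
```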

Detection and Visualization of Endoleaks in CT Data for Monitoring of Thoracic and Abdominal Aortic Aneurysm Stents

no code implementations9 Feb 2016 Jing Lu, Jan Egger, Andreas Wimmer, Stefan Großkopf, Bernd Freisleben

The aneurysm segmentation includes two steps: first, the inner boundary is segmented based on a grey level model with two thresholds; then, an adapted active contour model approach is applied to the more complicated outer boundary segmentation, with its initialization based on the available inner boundary segmentation.

Anatomy Segmentation
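
The first step described above, a grey-level model with two thresholds used to delineate the contrast-filled lumen, can be illustrated generically; the intensity bounds and the largest-component cleanup below are placeholders rather than the paper's calibrated model.

```python
import numpy as np
from scipy import ndimage

def inner_boundary_mask(ct_slice_hu, low_hu=200.0, high_hu=600.0):
    """Generic two-threshold step: keep voxels whose intensity falls inside the
    grey-level window (placeholder Hounsfield bounds), then keep the largest
    connected component as the lumen mask used to initialize the active contour."""
    mask = (ct_slice_hu >= low_hu) & (ct_slice_hu <= high_hu)
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)

# mask = inner_boundary_mask(slice_in_hounsfield_units)
```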

Budget Online Multiple Kernel Learning

no code implementations16 Nov 2015 Jing Lu, Steven C. H. Hoi, Doyen Sahoo, Peilin Zhao

To overcome this drawback, we present a novel framework of Budget Online Multiple Kernel Learning (BOMKL) and propose a new Sparse Passive-Aggressive learning algorithm to perform effective budget online learning.

General Classification
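
For reference, the classic single-kernel Passive-Aggressive update that this line of work builds on is shown below; the budget constraint and the multiple-kernel combination of BOMKL are not reproduced here.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """PA-I update for binary classification: stay passive when the hinge loss is
    zero, otherwise move just enough (capped by C) to satisfy the margin."""
    loss = max(0.0, 1.0 - y * (w @ x))
    tau = min(C, loss / (x @ x + 1e-12))
    return w + tau * y * x

# online usage over a stream of (x, y) pairs with y in {-1, +1}:
# for x, y in stream:
#     w = pa_update(w, x, y)
```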
