Search Results for author: Ke Li

Found 193 papers, 81 papers with code

C^3KG: A Chinese Commonsense Conversation Knowledge Graph

1 code implementation Findings (ACL) 2022 Dawei Li, Yanran Li, Jiayi Zhang, Ke Li, Chen Wei, Jianwei Cui, Bin Wang

Existing commonsense knowledge bases often organize tuples in an isolated manner, which is deficient for commonsense conversational models to plan the next steps.

RESTORE: Towards Feature Shift for Vision-Language Prompt Learning

1 code implementation10 Mar 2024 Yuncheng Yang, Chuyan Zhang, Zuopeng Yang, Yuting Gao, Yulei Qin, Ke Li, Xing Sun, Jie Yang, Yun Gu

Prompt learning is effective for fine-tuning foundation models to improve their generalization across a variety of downstream tasks.

Towards Multimodal Sentiment Analysis Debiasing via Bias Purification

no code implementations8 Mar 2024 Dingkang Yang, Mingcheng Li, Dongling Xiao, Yang Liu, Kun Yang, Zhaoyu Chen, Yuzheng Wang, Peng Zhai, Ke Li, Lihua Zhang

In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases.

counterfactual Counterfactual Inference +1

Towards Multimodal Human Intention Understanding Debiasing via Subject-Deconfounding

no code implementations8 Mar 2024 Dingkang Yang, Dongling Xiao, Ke Li, Yuzheng Wang, Zhaoyu Chen, Jinjie Wei, Lihua Zhang

Multimodal intention understanding (MIU) is an indispensable component of human expression analysis (e.g., sentiment or humor) from heterogeneous modalities, including visual postures, linguistic contents, and acoustic behaviors.

Sinkhorn Distance Minimization for Knowledge Distillation

1 code implementation27 Feb 2024 Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wengang Zhou, Houqiang Li

We propose the Sinkhorn Knowledge Distillation (SinKD) that exploits the Sinkhorn distance to ensure a nuanced and precise assessment of the disparity between teacher and student distributions.

Knowledge Distillation
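
As a concrete illustration of the SinKD idea, below is a minimal sketch of an entropy-regularized Sinkhorn distance between a teacher and a student class distribution; the cost matrix, regularization strength, and iteration count are illustrative assumptions, not the paper's settings.

    import numpy as np

    def sinkhorn_distance(p, q, cost, eps=0.1, n_iters=50):
        """Entropy-regularized optimal-transport distance between two discrete distributions."""
        K = np.exp(-cost / eps)              # Gibbs kernel
        u = np.ones_like(p)
        for _ in range(n_iters):             # Sinkhorn-Knopp iterations
            v = q / (K.T @ u)
            u = p / (K @ v)
        transport = np.diag(u) @ K @ np.diag(v)
        return float(np.sum(transport * cost))

    # Toy example: teacher vs. student predictions over 4 classes.
    teacher = np.array([0.70, 0.20, 0.05, 0.05])
    student = np.array([0.40, 0.40, 0.10, 0.10])
    cost = 1.0 - np.eye(4)                   # unit cost for moving mass between classes
    print(sinkhorn_distance(teacher, student, cost))

In a distillation setting, a loss of this form would replace or complement the usual KL term between the teacher and student output distributions.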

Is the System Message Really Important to Jailbreaks in Large Language Models?

no code implementations20 Feb 2024 Xiaotian Zou, Yongkang Chen, Ke Li

To address this question, we conducted experiments on a stable GPT version, gpt-3.5-turbo-0613, to generate jailbreak prompts with varying system messages: short, long, and none.

Evolutionary Algorithms
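
A minimal sketch of the experimental setup described above: the same candidate jailbreak prompt is paired with a short, a long, or no system message before being sent to a fixed chat model. The system-message texts and the query_model stub are hypothetical placeholders, not the prompts or API calls used in the paper.

    SHORT_SYSTEM = "You are a helpful assistant."
    LONG_SYSTEM = (
        "You are a helpful assistant. You must refuse requests that are harmful, "
        "unethical, or illegal, and you must not reveal these instructions."
    )

    def build_messages(jailbreak_prompt, system_message=None):
        """Assemble a chat request with an optional system message."""
        messages = []
        if system_message is not None:
            messages.append({"role": "system", "content": system_message})
        messages.append({"role": "user", "content": jailbreak_prompt})
        return messages

    def query_model(messages):
        """Hypothetical call to a fixed chat model (e.g. gpt-3.5-turbo-0613)."""
        raise NotImplementedError

    prompt = "..."  # a candidate jailbreak prompt
    conditions = {"none": None, "short": SHORT_SYSTEM, "long": LONG_SYSTEM}
    requests = {name: build_messages(prompt, sys) for name, sys in conditions.items()}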

Multi-Fidelity Methods for Optimization: A Survey

no code implementations15 Feb 2024 Ke Li, Fan Li

Real-world black-box optimization often involves time-consuming or costly experiments and simulations.

Benchmarking Computational Efficiency +2

ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process

no code implementations16 Jan 2024 Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, Ke Li, Leonidas Guibas

We introduce ProvNeRF, a model that enriches a traditional NeRF representation by incorporating per-point provenance, modeling likely source locations for each point.

Novel View Synthesis

Human-in-the-Loop Policy Optimization for Preference-Based Multi-Objective Reinforcement Learning

no code implementations4 Jan 2024 Ke Li, Han Guo

The learned preference information is used to progressively guide policy optimization towards policies of interest.

Decision Making Management +1

Evolutionary Alternating Direction Method of Multipliers for Constrained Multi-Objective Optimization with Unknown Constraints

no code implementations2 Jan 2024 Shuang Li, Ke Li, Wei Li, Ming Yang

Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design.

How Good Are Deep Generative Models for Solving Inverse Problems?

no code implementations20 Dec 2023 Shichong Peng, Alireza Moazeni, Ke Li

We assess the validity of these models' outputs as solutions to the inverse problems and conduct a thorough analysis of the reliability of the models' estimates of uncertainty over the solution.

Super-Resolution valid

Weakly Supervised Open-Vocabulary Object Detection

no code implementations19 Dec 2023 Jianghang Lin, Yunhang Shen, Bingquan Wang, Shaohui Lin, Ke Li, Liujuan Cao

Despite weakly supervised object detection (WSOD) being a promising step toward evading strong instance-level annotations, its capability is confined to closed-set categories within a single training dataset.

Attribute Novel Concepts +6

SPD-DDPM: Denoising Diffusion Probabilistic Models in the Symmetric Positive Definite Space

1 code implementation13 Dec 2023 Yunchen Li, Zhou Yu, Gaoqi He, Yunhang Shen, Ke Li, Xing Sun, Shaohui Lin

On the other hand, the model unconditionally learns the probability distribution of the data $p(X)$ and generates samples that conform to this distribution.

Denoising Traffic Prediction

MMICT: Boosting Multi-Modal Fine-Tuning with In-Context Examples

no code implementations11 Dec 2023 Tao Chen, Enwei Zhang, Yuting Gao, Ke Li, Xing Sun, Yan Zhang, Hui Li

Although In-Context Learning (ICL) brings remarkable performance gains to Large Language Models (LLMs), the improvements remain lower than fine-tuning on downstream tasks.

In-Context Learning

Adaptive Feature Selection for No-Reference Image Quality Assessment using Contrastive Mitigating Semantic Noise Sensitivity

no code implementations11 Dec 2023 Xudong Li, Timin Gao, Xiawu Zheng, Runze Hu, Jingyuan Zheng, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Yan Zhang, Rongrong Ji

The current state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods typically use feature extraction in upstream backbone networks, which assumes that all extracted features are relevant.

Contrastive Learning feature selection +2

Constrained Bayesian Optimization Under Partial Observations: Balanced Improvements and Provable Convergence

1 code implementation6 Dec 2023 Shengbo Wang, Ke Li

We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization.

Bayesian Optimization

Towards the Inferrence of Structural Similarity of Combinatorial Landscapes

no code implementations5 Dec 2023 Mingyu Huang, Ke Li

However, due to the black-box nature of combinatorial optimization, it is far from trivial to infer such similarity in real-world scenarios.

Combinatorial Optimization

Rethinking Urban Mobility Prediction: A Super-Multivariate Time Series Forecasting Approach

1 code implementation4 Dec 2023 Jinguo Cheng, Ke Li, Yuxuan Liang, Lijun Sun, Junchi Yan, Yuankai Wu

To address this challenge, we present the Super-Multivariate Urban Mobility Transformer (SUMformer), which utilizes a specially designed attention mechanism to calculate temporal and cross-variable correlations and reduce computational costs stemming from a large number of time series.

Multivariate Time Series Forecasting Time Series +1

Aligning and Prompting Everything All at Once for Universal Visual Perception

1 code implementation4 Dec 2023 Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, Rongrong Ji

However, predominant paradigms, driven by casting instance-level tasks as an object-word alignment, bring heavy cross-modality interaction, which is not effective in prompting object detection and visual grounding.

Object object-detection +6

Less is More: Learning Reference Knowledge Using No-Reference Image Quality Assessment

no code implementations1 Dec 2023 Xudong Li, Jingyuan Zheng, Xiawu Zheng, Runze Hu, Enwei Zhang, Yuting Gao, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Yan Zhang, Rongrong Ji

Concretely, by innovatively introducing a novel feature distillation method in IQA, we propose a new framework to learn comparative knowledge from non-aligned reference images.

Inductive Bias No-Reference Image Quality Assessment +1

On the Hyperparameter Landscapes of Machine Learning Algorithms

no code implementations23 Nov 2023 Mingyu Huang, Ke Li

Despite the recent success of a plethora of hyperparameter optimization (HPO) methods for machine learning (ML) models, the intricate interplay between model hyperparameters (HPs) and predictive losses (a.k.a. fitness), which is a key prerequisite for understanding HPO, remains notably underexplored in our community.

Hyperparameter Optimization Transfer Learning

NeRF Revisited: Fixing Quadrature Instability in Volume Rendering

no code implementations NeurIPS 2023 Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, Ke Li

Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum that corresponds to the exact integral along the ray under piecewise constant volume density.
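
For reference, here is a minimal sketch of that piecewise-constant quadrature for a single ray: the integral is approximated by per-segment opacities weighted by the accumulated transmittance. Variable names and sample values are illustrative only, not the paper's formulation.

    import numpy as np

    def render_ray(sigmas, colors, deltas):
        """sigmas: (N,) densities, colors: (N, 3) radiance, deltas: (N,) segment lengths."""
        alphas = 1.0 - np.exp(-sigmas * deltas)                           # per-segment opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]    # transmittance T_i
        weights = trans * alphas
        return (weights[:, None] * colors).sum(axis=0)                    # finite-sum approximation

    sigmas = np.array([0.1, 0.8, 2.0])
    colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    deltas = np.array([0.5, 0.5, 0.5])
    print(render_ray(sigmas, colors, deltas))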

Woodpecker: Hallucination Correction for Multimodal Large Language Models

1 code implementation24 Oct 2023 Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, Enhong Chen

Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content.

Hallucination

Solving Expensive Optimization Problems in Dynamic Environments with Meta-learning

no code implementations19 Oct 2023 Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li

Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, the attempts to develop such approaches in dynamic environments remain largely unexplored.

Bayesian Optimization Meta-Learning

Dynamic ASR Pathways: An Adaptive Masking Approach Towards Efficient Pruning of A Multilingual ASR Model

no code implementations22 Sep 2023 Jiamin Xie, Ke Li, Jinxi Guo, Andros Tjandra, Yuan Shangguan, Leda Sari, Chunyang Wu, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli

In this work, we propose the use of an adaptive masking approach in two scenarios for pruning a multilingual ASR model efficiently, each resulting in sparse monolingual models or a sparse multilingual model (named as Dynamic ASR Pathways).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Masked Autoencoders are Efficient Class Incremental Learners

1 code implementation ICCV 2023 Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

Moreover, MAEs can reliably reconstruct original input images from randomly selected patches, which we use to store exemplars from past tasks more efficiently for CIL.

Class Incremental Learning Incremental Learning

MonoNeRD: NeRF-like Representations for Monocular 3D Object Detection

1 code implementation ICCV 2023 Junkai Xu, Liang Peng, Haoran Cheng, Hao Li, Wei Qian, Ke Li, Wenxiao Wang, Deng Cai

To the best of our knowledge, this work is the first to introduce volume rendering for M3D, and demonstrates the potential of implicit reconstruction for image-based 3D perception.

Monocular 3D Object Detection Object +1

LiDAR Meta Depth Completion

1 code implementation24 Jul 2023 Wolfgang Boettcher, Lukas Hoyer, Ozan Unal, Ke Li, Dengxin Dai

While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns.

Depth Completion Monocular Depth Estimation

Prompting Large Language Models with Speech Recognition Abilities

no code implementations21 Jul 2023 Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, Wenhan Xiong, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer

Furthermore, we perform ablation studies to investigate whether the LLM can be completely frozen during training to maintain its original capabilities, scaling up the audio encoder, and increasing the audio encoder striding to generate fewer embeddings.

Abstractive Text Summarization Automatic Speech Recognition +3

PAPR: Proximity Attention Point Rendering

1 code implementation NeurIPS 2023 Yanshu Zhang, Shichong Peng, Alireza Moazeni, Ke Li

PAPR effectively learns point cloud positions to represent the correct scene geometry, even when the initialization drastically differs from the target geometry.

Multi-View 3D Reconstruction Neural Rendering +1

Model-Assisted Probabilistic Safe Adaptive Control With Meta-Bayesian Learning

no code implementations3 Jul 2023 Shengbo Wang, Ke Li, Yin Yang, Yuting Cao, TingWen Huang, Shiping Wen

Specifically, with the help of CBF method, we learn the inherent and external uncertainties by a unified adaptive Bayesian linear regression (ABLR) model, which consists of a forward neural network (NN) and a Bayesian output layer.

Meta-Learning Safe Exploration
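
A minimal sketch of the ABLR-style idea, assuming a fixed feature map standing in for the forward neural network and a Bayesian linear output layer with a closed-form Gaussian posterior; the prior and noise precisions below are arbitrary illustrative values, not the paper's.

    import numpy as np

    def features(x):
        """Stand-in for the forward neural network: a small fixed random-feature map."""
        rng = np.random.RandomState(0)
        W = rng.randn(x.shape[1], 16)
        return np.tanh(x @ W)

    def bayes_linear_posterior(phi, y, alpha=1.0, beta=25.0):
        """Gaussian posterior over output-layer weights (prior precision alpha, noise precision beta)."""
        A = alpha * np.eye(phi.shape[1]) + beta * phi.T @ phi
        mean = beta * np.linalg.solve(A, phi.T @ y)
        cov = np.linalg.inv(A)
        return mean, cov

    X = np.random.rand(50, 3)
    y = np.sin(X.sum(axis=1))
    mean, cov = bayes_linear_posterior(features(X), y)

    phi_new = features(np.random.rand(1, 3))
    pred_mean = phi_new @ mean
    pred_var = 1.0 / 25.0 + phi_new @ cov @ phi_new.T    # predictive mean and uncertainty
    print(pred_mean, pred_var)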

MEMD-ABSA: A Multi-Element Multi-Domain Dataset for Aspect-Based Sentiment Analysis

1 code implementation29 Jun 2023 Hongjie Cai, Nan Song, Zengzhi Wang, Qiming Xie, Qiankun Zhao, Ke Li, Siwei Wu, Shijie Liu, Jianfei Yu, Rui Xia

Aspect-based sentiment analysis is a long-standing research interest in the field of opinion mining, and in recent years, researchers have gradually shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA tasks.

Aspect-Based Sentiment Analysis Opinion Mining +1

A Survey on Multimodal Large Language Models

1 code implementation23 Jun 2023 Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, Enhong Chen

Multimodal Large Language Model (MLLM) recently has been a new rising research hotspot, which uses powerful Large Language Models (LLMs) as a brain to perform multimodal tasks.

In-Context Learning Language Modelling +4

MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models

3 code implementations23 Jun 2023 Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji

Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image.

Benchmarking Language Modelling +3

Learning from Visual Observation via Offline Pretrained State-to-Go Transformer

no code implementations NeurIPS 2023 Bohan Zhou, Ke Li, Jiechuan Jiang, Zongqing Lu

Learning from visual observation (LfVO), aiming at recovering policies from only visual observation data, is promising yet a challenging problem.

reinforcement-learning

Multi-modal Queried Object Detection in the Wild

1 code implementation NeurIPS 2023 Yifan Xu, Mengdan Zhang, Chaoyou Fu, Peixian Chen, Xiaoshan Yang, Ke Li, Changsheng Xu

To address the learning inertia problem brought by the frozen detector, a vision conditioned masked language prediction strategy is proposed.

Few-Shot Object Detection Object +2

READ: Recurrent Adaptation of Large Transformers

no code implementations24 May 2023 Sid Wang, John Nguyen, Ke Li, Carole-Jean Wu

However, fine-tuning all pre-trained model parameters becomes impractical as the model size and number of tasks increase.

Transfer Learning

Compressing neural network by tensor network with exponentially fewer variational parameters

no code implementations10 May 2023 Yong Qing, Peng-Fei Zhou, Ke Li, Shi-Ju Ran

Neural network (NN) designed for challenging machine learning tasks is in general a highly nonlinear mapping that contains massive variational parameters.

Tensor Networks

DiffFacto: Controllable Part-Based 3D Point Cloud Generation with Cross Diffusion

no code implementations ICCV 2023 Kiyohiro Nakayama, Mikaela Angelina Uy, Jiahui Huang, Shi-Min Hu, Ke Li, Leonidas J Guibas

We propose a factorization that models independent part style and part configuration distributions and presents a novel cross-diffusion network that enables us to generate coherent and plausible shapes under our proposed factorization.

Point Cloud Generation

SketchXAI: A First Look at Explainability for Human Sketches

no code implementations CVPR 2023 Zhiyu Qu, Yulia Gryaditskaya, Ke Li, Kaiyue Pang, Tao Xiang, Yi-Zhe Song

Following this, we design a simple explainability-friendly sketch encoder that accommodates the intrinsic properties of strokes: shape, location, and order.

Explainable artificial intelligence Explainable Artificial Intelligence (XAI) +1

Diffusion models with location-scale noise

no code implementations12 Apr 2023 Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman

Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it.

HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion

no code implementations10 Apr 2023 Yu Wang, Shuhui Bu, Lin Chen, Yifei Dong, Kun Li, Xuefeng Cao, Ke Li

First, the point cloud is divided into small patches, and a matching patch set is selected based on global descriptors and spatial distribution, which constitutes the coarse matching process.

Point Cloud Registration
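
A minimal sketch of the coarse matching step described above, assuming a crude patch split and a hand-crafted 6-dimensional global descriptor; the actual method's descriptors, spatial-distribution checks, and matching criteria are more sophisticated.

    import numpy as np

    def split_into_patches(points, patch_size=256):
        """Crude spatial split of an (N, 3) point cloud into fixed-size patches."""
        order = np.argsort(points[:, 0])
        return [points[order[i:i + patch_size]] for i in range(0, len(points), patch_size)]

    def global_descriptor(patch):
        """Toy global descriptor: patch centroid concatenated with per-axis spread."""
        centered = patch - patch.mean(axis=0)
        return np.concatenate([patch.mean(axis=0), centered.std(axis=0)])

    def coarse_match(patches_a, patches_b):
        """For each patch in A, pick the nearest patch in B in descriptor space."""
        desc_a = np.stack([global_descriptor(p) for p in patches_a])
        desc_b = np.stack([global_descriptor(p) for p in patches_b])
        dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
        return dists.argmin(axis=1)

    cloud_a = np.random.rand(1024, 3)
    cloud_b = np.random.rand(1024, 3)
    print(coarse_match(split_into_patches(cloud_a), split_into_patches(cloud_b)))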

RFAConv: Innovating Spatial Attention and Standard Convolutional Operation

1 code implementation6 Apr 2023 Xin Zhang, Chen Liu, Degang Yang, Tingting Song, Yichen Ye, Ke Li, Yingze Song

In this paper, we propose a new perspective on the effectiveness of spatial attention, which is that the spatial attention mechanism essentially solves the problem of convolutional kernel parameter sharing.

Classification Object Detection +1

SoftCLIP: Softer Cross-modal Alignment Makes CLIP Stronger

no code implementations30 Mar 2023 Yuting Gao, Jinfeng Liu, Zihan Xu, Tong Wu, Enwei Zhang, Wei Liu, Jie Yang, Ke Li, Xing Sun

During the preceding biennium, vision-language pre-training has achieved noteworthy success on several downstream tasks.

Zero-Shot Learning

SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates

no code implementations CVPR 2023 Mikaela Angelina Uy, Ricardo Martin-Brualla, Leonidas Guibas, Ke Li

To address this issue, we introduce SCADE, a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views for in-the-wild indoor scenes.

3D Reconstruction Monocular Depth Estimation +1

CLIP4MC: An RL-Friendly Vision-Language Model for Minecraft

1 code implementation19 Mar 2023 Ziluo Ding, Hao Luo, Ke Li, Junpeng Yue, Tiejun Huang, Zongqing Lu

One of the essential missions in the AI research community is to build an autonomous embodied agent that can attain high-level performance across a wide spectrum of tasks.

Contrastive Learning Language Modelling +1

Practical Cross-System Shilling Attacks with Limited Access to Data

1 code implementation14 Feb 2023 Meifang Zeng, Ke Li, Bingchuan Jiang, Liujuan Cao, Hui Li

With the idea of Cross-system Attack, we design a Practical Cross-system Shilling Attack (PC-Attack) framework that requires little information about the victim RS model and the target RS data for conducting attacks.

Recommendation Systems

Quality Indicators for Preference-based Evolutionary Multi-objective Optimization Using a Reference Point: A Review and Analysis

1 code implementation28 Jan 2023 Ryoji Tanabe, Ke Li

Some quality indicators have been proposed for benchmarking preference-based evolutionary multi-objective optimization algorithms using a reference point.

Benchmarking Decision Making

Photo Pre-Training, but for Sketch

1 code implementation CVPR 2023 Ke Li, Kaiyue Pang, Yi-Zhe Song

This lack of sketch data has imposed on the community a few "peculiar" design choices -- the most representative of them all is perhaps the coerced utilisation of photo-based pre-training (i.e., no sketch), for many core tasks that otherwise dictate specific sketch understanding.

Sketch-Based Image Retrieval

Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation

no code implementations CVPR 2023 Phoenix Neale Williams, Ke Li

However, existing methods often struggle to simultaneously minimize the number of modified pixels and the size of the modifications, often requiring a large number of queries and assuming unrestricted access to the targeted DNN.

Adversarial Attack

Fewer is More: Efficient Object Detection in Large Aerial Images

1 code implementation26 Dec 2022 Xingxing Xie, Gong Cheng, Qingyang Li, Shicheng Miao, Ke Li, Junwei Han

Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, no matter whether there exist objects or not.

Object object-detection +1

Robust Saliency Guidance for Data-free Class Incremental Learning

no code implementations16 Dec 2022 Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

Data-Free Class Incremental Learning (DFCIL) aims to sequentially learn tasks with access only to data from the current one.

Class Incremental Learning Incremental Learning

Improving Fast-slow Encoder based Transducer with Streaming Deliberation

no code implementations15 Dec 2022 Ke Li, Jay Mahadeokar, Jinxi Guo, Yangyang Shi, Gil Keren, Ozlem Kalinli, Michael L. Seltzer, Duc Le

Experiments on Librispeech and in-house data show relative WER reductions (WERRs) from 3% to 5% with a slight increase in model size and negligible extra token emission latency compared with fast-slow encoder based transducer.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis

1 code implementation25 Nov 2022 Shichong Peng, Alireza Moazeni, Ke Li

A persistent challenge in conditional image synthesis has been to generate diverse output images from the same input image despite only one output image being observed per input image.

Image Generation Image Super-Resolution

Immersive Neural Graphics Primitives

1 code implementation24 Nov 2022 Ke Li, Tim Rolff, Susanne Schmidt, Reinhard Bacher, Simone Frintrop, Wim Leemans, Frank Steinicke

In this paper, we present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR allowing users to freely move their heads to explore complex real-world scenes.

Benchmarking Super-Resolution

A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments

no code implementations5 Nov 2022 Ke Li, Renzhi Chen, Xin Yao

Many real-world problems are usually computationally costly and the objective functions evolve over time.

Transfer Learning

Joint Audio/Text Training for Transformer Rescorer of Streaming Speech Recognition

no code implementations31 Oct 2022 Suyoun Kim, Ke Li, Lucas Kabela, Rongqing Huang, Jiedan Zhu, Ozlem Kalinli, Duc Le

In this work, we present our Joint Audio/Text training method for Transformer Rescorer, to leverage unpaired text-only data which is relatively cheaper than paired audio-text data.

speech-recognition Speech Recognition

Micro and Macro Level Graph Modeling for Graph Variational Auto-Encoders

1 code implementation30 Oct 2022 Kiarash Zahirnia, Oliver Schulte, Parmis Naddaf, Ke Li

We utilize the micro-macro objective to improve graph generation with a GraphVAE, a well-established model based on graph-level latent variables, that provides fast training and generation time for medium-sized graphs.

Graph Generation

Augmentor or Filter? Reconsider the Role of Pre-trained Language Model in Text Classification Augmentation

1 code implementation6 Oct 2022 Heng Yang, Ke Li

Our study shows that, even when employing pre-trained language models, existing text augmentation methods generate numerous low-quality instances and lead to the feature space shift problem in augmentation instances.

Language Modelling Sentence +5

Long-Tailed Class Incremental Learning

1 code implementation1 Oct 2022 Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng

However, conventional CIL methods consider a balanced distribution for each new task, which ignores the prevalence of long-tailed distributions in the real world.

Class Incremental Learning Incremental Learning

Defend Data Poisoning Attacks on Voice Authentication

no code implementations9 Sep 2022 Ke Li, Cameron Baird, Dan Lin

With the advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a type of biometric authentication option in many scenes of our daily life, especially the growing market of web services.

Data Poisoning Ensemble Learning +1

LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal

1 code implementation27 Aug 2022 Hong Yang, Gongrui Nan, Mingbao Lin, Fei Chao, Yunhang Shen, Ke Li, Rongrong Ji

Finally, the LSA modules are further developed to fully use the prior information in non-shadow regions to cleanse the shadow regions.

Shadow Removal

PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis

2 code implementations2 Aug 2022 Heng Yang, Chen Zhang, Ke Li

The advancement of aspect-based sentiment analysis (ABSA) has highlighted the lack of a user-friendly framework that can largely lower the difficulty of reproducing state-of-the-art ABSA performance, especially for beginners.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +5

Efficient Decoder-free Object Detection with Transformers

2 code implementations14 Jun 2022 Peixian Chen, Mengdan Zhang, Yunhang Shen, Kekai Sheng, Yuting Gao, Xing Sun, Ke Li, Chunhua Shen

A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, with the price of bringing considerable computation burden for inference.

Object Object Detection

Learning Best Combination for Efficient N:M Sparsity

1 code implementation14 Jun 2022 Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji

In this paper, we show that the N:M learning can be naturally characterized as a combinatorial problem which searches for the best combination candidate within a finite collection.
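
A minimal sketch of that combinatorial view, assuming a simple magnitude criterion: for every group of M consecutive weights, all C(M, N) keep-masks are enumerated and the best-scoring one is selected. The paper learns this choice during training rather than fixing the criterion as done here.

    from itertools import combinations
    import numpy as np

    def best_nm_mask(weights, n=2, m=4):
        """Return a binary mask keeping n of every m consecutive weights (magnitude criterion)."""
        w = weights.reshape(-1, m)
        mask = np.zeros_like(w)
        for row, group in enumerate(w):
            # Enumerate every candidate combination and keep the one preserving the most magnitude.
            best = max(combinations(range(m), n),
                       key=lambda idx: np.abs(group[list(idx)]).sum())
            mask[row, list(best)] = 1.0
        return mask.reshape(weights.shape)

    w = np.array([0.9, -0.1, 0.3, -1.2, 0.05, 0.7, -0.6, 0.2])
    print(best_nm_mask(w))   # keeps the two largest-magnitude entries in each group of four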

Evolutionary Multi-Task Injection Testing on Web Application Firewalls

1 code implementation12 Jun 2022 Ke Li, Heng Yang, Willem Visser

In this paper, we propose DaNuoYi, an automatic injection testing tool that simultaneously generates test inputs for multiple types of injection attacks on a WAF.

Multi-Task Learning Translation

Do We Really Need to Use Constraint Violation in Constrained Evolutionary Multi-Objective Optimization?

no code implementations28 May 2022 Shuang Li, Ke Li, Wei Li

Constraint violation has been a building block to design evolutionary multi-objective optimization algorithms for solving constrained multi-objective optimization problems.

Data-Driven Evolutionary Multi-Objective Optimization Based on Multiple-Gradient Descent for Disconnected Pareto Fronts

no code implementations28 May 2022 Renzhi Chen, Ke Li

Data-driven evolutionary multi-objective optimization (EMO) has been recognized as an effective approach for multi-objective optimization problems with expensive objective functions.

C3KG: A Chinese Commonsense Conversation Knowledge Graph

1 code implementation6 Apr 2022 Dawei Li, Yanran Li, Jiayi Zhang, Ke Li, Chen Wei, Jianwei Cui, Bin Wang

Existing commonsense knowledge bases often organize tuples in an isolated manner, which is deficient for commonsense conversational models to plan the next steps.

Interactive Evolutionary Multi-Objective Optimization via Learning-to-Rank

no code implementations6 Apr 2022 Ke Li, Guiyu Lai, Xin Yao

Bearing this in mind, this paper develops a framework for designing preference-based EMO algorithms to find SOI in an interactive manner.

Decision Making Learning-To-Rank

Training-free Transformer Architecture Search

1 code implementation CVPR 2022 Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji

Recently, Vision Transformer (ViT) has achieved remarkable success in several computer vision tasks.

ARM: Any-Time Super-Resolution Method

1 code implementation21 Mar 2022 Bohong Chen, Mingbao Lin, Kekai Sheng, Mengdan Zhang, Peixian Chen, Ke Li, Liujuan Cao, Rongrong Ji

To that effect, we construct an Edge-to-PSNR lookup table that maps the edge score of an image patch to the PSNR performance for each subnet, together with a set of computation costs for the subnets.

Image Super-Resolution
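
A minimal sketch of how such a lookup table could be used at inference time: bucket a patch's edge score, read off each subnet's expected PSNR, and trade it against the subnet's cost. The bin edges, PSNR values, costs, and trade-off weight below are all made-up placeholders, not values from the paper.

    import numpy as np

    EDGE_BINS = np.linspace(0.0, 1.0, 5)                    # 4 edge-score buckets
    PSNR_TABLE = {                                          # subnet -> expected PSNR per bucket (dB)
        "small":  np.array([36.0, 33.5, 31.0, 29.0]),
        "medium": np.array([36.2, 34.5, 32.5, 30.5]),
        "large":  np.array([36.3, 35.0, 33.8, 32.0]),
    }
    COSTS = {"small": 1.0, "medium": 2.5, "large": 6.0}     # relative compute cost

    def pick_subnet(edge_score, tradeoff=0.3):
        """Choose the subnet with the best PSNR-minus-cost score for this patch."""
        b = min(np.digitize(edge_score, EDGE_BINS) - 1, 3)
        scores = {name: PSNR_TABLE[name][b] - tradeoff * COSTS[name] for name in PSNR_TABLE}
        return max(scores, key=scores.get)

    print(pick_subnet(0.05))   # flat patch -> cheap subnet
    print(pick_subnet(0.9))    # edge-rich patch -> larger subnet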

Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks

1 code implementation8 Mar 2022 Yunshan Zhong, Mingbao Lin, Xunchao Li, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

However, these methods suffer from severe performance degradation when quantizing the SR models to ultra-low precision (e.g., 2-bit and 3-bit) with the low-cost layer-wise quantizer.

Quantization Super-Resolution

CF-ViT: A General Coarse-to-Fine Method for Vision Transformer

1 code implementation8 Mar 2022 Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, Rongrong Ji

Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) The coarse-grained patch splitting can locate informative regions of an input image.

Automated Few-Shot Time Series Forecasting based on Bi-level Programming

no code implementations7 Mar 2022 Jiangjiao Xu, Ke Li

One of the critical challenges of time series renewable energy forecasting is the lack of historical data to train an adequate predictive model.

Decision Making Few-Shot Learning +3

Art-Attack: Black-Box Adversarial Attack via Evolutionary Art

no code implementations7 Mar 2022 Phoenix Williams, Ke Li

To evaluate the effectiveness of our proposed method, we attack three state-of-the-art image classification models trained on the CIFAR-10 dataset in a targeted manner.

Adversarial Attack Image Classification

Variational Model Inversion Attacks

1 code implementation NeurIPS 2021 Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani

In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy.

Unlocking the Secrets of Software Configuration Landscapes-Ruggedness, Accessibility, Escapability, and Transferability

no code implementations5 Jan 2022 Mingyu Huang, Peili Mao, Ke Li

Modern software systems are often highly configurable to tailor varied requirements from diverse stakeholders.

RMNet: Equivalently Removing Residual Connection from Networks

1 code implementation1 Nov 2021 Fanxu Meng, Hao Cheng, Jiaxin Zhuang, Ke Li, Xing Sun

In this paper, we aim to remedy this problem and propose to remove the residual connection in a vanilla ResNet equivalently by a reserving and merging (RM) operation on ResBlock.

Network Pruning

Anchor-free Oriented Proposal Generator for Object Detection

1 code implementation5 Oct 2021 Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, Junwei Han

Nowadays, oriented detectors mostly use horizontal boxes as intermedium to derive oriented boxes from them.

Object object-detection +2

Self-supervised Models are Good Teaching Assistants for Vision Transformers

no code implementations29 Sep 2021 Haiyan Wu, Yuting Gao, Ke Li, Yinqi Zhang, Shaohui Lin, Yuan Xie, Xing Sun

These findings motivate us to introduce a self-supervised teaching assistant (SSTA) besides the commonly used supervised teacher to improve the performance of transformers.

Image Classification Knowledge Distillation

Generating Unobserved Alternatives with Tower Implicit Model (TIM)

no code implementations29 Sep 2021 Shichong Peng, Seyed Alireza Moazenipourasil, Ke Li

We consider problems where multiple predictions can be considered correct, but only one of them is given as supervision.

regression

A Two-Stage Framework to Generate Video Chapter

no code implementations29 Sep 2021 Canyu Le, Zhiyuan Tang, Ke Li, Jiandong Yang

On top of this dataset, we propose a two-stage framework to perform chapter localization and chapter title generation.

Vocal Bursts Valence Prediction

Private Language Model Adaptation for Speech Recognition

no code implementations28 Sep 2021 Zhe Liu, Ke Li, Shreyan Bakshi, Fuchun Peng

Speech model adaptation is crucial to handle the discrepancy between server-side proxy training data and actual data received on local devices of users.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Gotta Go Fast with Score-Based Generative Models

no code implementations NeurIPS Workshop DLDE 2021 Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas

Score-based (denoising diffusion) generative models have recently gained a lot of success in generating realistic and diverse data.

Denoising

Regret Lower Bound and Optimal Algorithm for High-Dimensional Contextual Linear Bandit

no code implementations23 Sep 2021 Ke Li, Yun Yang, Naveen N. Narisetty

This new lower bound unifies existing regret bound results that have different dependencies on T due to the use of different values of margin parameter $\alpha$ explicitly implied by their assumptions.

Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation

no code implementations12 Sep 2021 Ke Li, Renzhi Chen

Data-driven evolutionary optimization can be used to search for a set of non-dominated trade-off solutions, where the expensive objective functions are approximated as a surrogate model.

Fine-grained Data Distribution Alignment for Post-Training Quantization

1 code implementation9 Sep 2021 Yunshan Zhong, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji

While post-training quantization receives popularity mostly due to its evasion in accessing the original complete training dataset, its poor performance also stems from scarce images.

Quantization

Decomposition Multi-Objective Evolutionary Optimization: From State-of-the-Art to Future Opportunities

no code implementations21 Aug 2021 Ke Li

Decomposition has been the mainstream approach in the classic mathematical programming for multi-objective optimization and multi-criterion decision-making.

Decision Making

Variational Attention: Propagating Domain-Specific Knowledge for Multi-Domain Learning in Crowd Counting

1 code implementation ICCV 2021 Binghui Chen, Zhaoyi Yan, Ke Li, Pengyu Li, Biao Wang, WangMeng Zuo, Lei Zhang

In crowd counting, due to the problem of laborious labelling, collecting a new large-scale dataset with plentiful images of large diversity in density, scene, etc. is perceived as intractable.

Crowd Counting

DeepExpress: Heterogeneous and Coupled Sequence Modeling for Express Delivery Prediction

no code implementations18 Aug 2021 Siyuan Ren, Bin Guo, Longbing Cao, Ke Li, Jiaqi Liu, Zhiwen Yu

To address these issues, we propose DeepExpress - a deep-learning based express delivery sequence prediction model, which extends the classic seq2seq framework to learning complex coupling between sequence and features.

Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer

1 code implementation3 Aug 2021 Yifan Xu, Zhijie Zhang, Mengdan Zhang, Kekai Sheng, Ke Li, WeiMing Dong, Liqing Zhang, Changsheng Xu, Xing Sun

Vision transformers (ViTs) have recently received explosive popularity, but the huge computational cost is still a severe issue.

Image Classification

Multimodal Shape Completion via IMLE

no code implementations30 Jun 2021 Himanshu Arora, Saurabh Mishra, Shichong Peng, Ke Li, Ali Mahdavi-Amiri

Shape completion is the problem of completing partial input shapes such as partial scans.

Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning

no code implementations29 Jun 2021 Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu

Our experiments demonstrate a significant improvement in the realism of the generated graph structures, typically by 1-2 orders of magnitude in graph structure metrics, compared to leading graph VAE and GAN models.

Graph Representation Learning

Cascading Modular Network (CAM-Net) for Multimodal Image Synthesis

no code implementations16 Jun 2021 Shichong Peng, Alireza Moazeni, Ke Li

Deep generative models such as GANs have driven impressive advances in conditional image synthesis in recent years.

Image Generation

Deep Medial Fields

no code implementations7 Jun 2021 Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi

Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently re-gained popularity in encoding 3D solid shape in a functional form.

Gotta Go Fast When Generating Data with Score-Based Models

1 code implementation28 May 2021 Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas

For high-resolution images, our method leads to significantly higher quality samples than all other methods tested.

Ranked #8 on Image Generation on CIFAR-10 (Inception score metric)

Image Generation

Unsupervised Discriminative Learning of Sounds for Audio Event Classification

no code implementations19 May 2021 Sascha Hornauer, Ke Li, Stella X. Yu, Shabnam Ghaffarzadegan, Liu Ren

Recent progress in network-based audio event classification has shown the benefit of pre-training models on visual data such as ImageNet.

Classification Transfer Learning

Towards an Online Empathetic Chatbot with Emotion Causes

no code implementations11 May 2021 Yanran Li, Ke Li, Hongke Ning, Xiaoqiang Xia, Yalong Guo, Chen Wei, Jianwei Cui, Bin Wang

Existing emotion-aware conversational models usually focus on controlling the response contents to align with a specific emotion class, whereas empathy is the ability to understand and concern the feelings and experience of others.

Chatbot

ISTR: End-to-End Instance Segmentation with Transformers

1 code implementation3 May 2021 Jie Hu, Liujuan Cao, Yao Lu, Shengchuan Zhang, Yan Wang, Ke Li, Feiyue Huang, Ling Shao, Rongrong Ji

However, such an upgrade is not applicable to instance segmentation, due to its significantly higher output dimensions compared to object detection.

Instance Segmentation object-detection +3

DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning

2 code implementations19 Apr 2021 Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen

Specifically, we find the final embedding obtained by the mainstream SSL methods contains the most fruitful information, and propose to distill the final embedding to maximally transmit a teacher's knowledge to a lightweight model by constraining the last embedding of the student to be consistent with that of the teacher.

Contrastive Learning Representation Learning +1
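
A minimal sketch of the final-embedding distillation constraint, assuming an MSE loss on L2-normalized embeddings and a hypothetical linear projection to match teacher and student dimensions; DisCo's actual objective and architectural details may differ.

    import numpy as np

    def embedding_consistency_loss(student_emb, teacher_emb):
        """MSE between L2-normalized student and teacher embeddings of shape (batch, dim)."""
        s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
        t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
        return float(np.mean((s - t) ** 2))

    student = np.random.randn(8, 128)    # lightweight model's final embedding
    teacher = np.random.randn(8, 2048)   # larger teacher's final embedding
    proj = np.random.randn(2048, 128) / np.sqrt(2048)   # hypothetical dimension-matching projection
    print(embedding_consistency_loss(student, teacher @ proj))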

Pose Recognition with Cascade Transformers

2 code implementations CVPR 2021 Ke Li, Shijie Wang, Xiang Zhang, Yifan Xu, Weijian Xu, Zhuowen Tu

Here we utilize the encoder-decoder structure in Transformers to perform regression-based person and keypoint detection that is general-purpose and requires less heuristic design compared with the existing approaches.

Keypoint Detection regression

speechocean762: An Open-Source Non-native English Speech Corpus For Pronunciation Assessment

2 code implementations3 Apr 2021 Junbo Zhang, Zhiwen Zhang, Yongqing Wang, Zhiyong Yan, Qiong Song, YuKai Huang, Ke Li, Daniel Povey, Yujun Wang

This paper introduces a new open-source speech corpus named "speechocean762" designed for pronunciation assessment use, consisting of 5000 English utterances from 250 non-native speakers, where half of the speakers are children.

Phone-level pronunciation scoring Sentence +1

On Evolving Attention Towards Domain Adaptation

no code implementations25 Mar 2021 Kekai Sheng, Ke Li, Xiawu Zheng, Jian Liang, WeiMing Dong, Feiyue Huang, Rongrong Ji, Xing Sun

However, considering that the configuration of attention, i.e., the type and position of the attention module, affects performance significantly, it is more general to optimize the attention configuration automatically so that it is specialized for an arbitrary UDA scenario.

Partial Domain Adaptation Unsupervised Domain Adaptation

An Improved Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization

no code implementations10 Mar 2021 Xinyu Shan, Ke Li

Constrained multi-objective optimization problems (CMOPs) are ubiquitous in real-world engineering optimization scenarios.

Vocal Bursts Valence Prediction

Multi-Objective Reinforcement Learning based Multi-Microgrid System Optimisation Problem

no code implementations10 Mar 2021 Jiangjiao Xu, Ke Li, Mohammad Abusara

The proposed model consists of three layers, smart grid layer, independent system operator (ISO) layer and power grid layer.

energy management Management +2

A Parallelizable Lattice Rescoring Strategy with Neural Language Models

1 code implementation8 Mar 2021 Ke Li, Daniel Povey, Sanjeev Khudanpur

This paper proposes a parallel computation strategy and a posterior-based lattice expansion algorithm for efficient lattice rescoring with neural language models (LMs) for automatic speech recognition.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Combinatorial Bandits under Strategic Manipulations

1 code implementation25 Feb 2021 Jing Dong, Ke Li, Shuai Li, Baoxiang Wang

Strategic behavior against sequential learning methods, such as "click framing" in real recommendation systems, have been widely observed.

Multi-Armed Bandits Recommendation Systems

Measurement of the absolute branching fractions for purely leptonic $D_s^+$ decays

no code implementations23 Feb 2021 BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, G. Y. Hou, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. 
Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, H. F. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, A. Q. Zhang, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

Constraining our measurement to the Standard Model expectation of lepton universality ($R=9.75$), we find the more precise results $\cal B(D_s^+\to \tau^+\nu_\tau) = (5.22\pm0.10\pm0.14)\times10^{-2}$ and $A_{\it CP}(\tau^\pm\nu_\tau) = (-0.1\pm1.9\pm1.0)\%$.

High Energy Physics - Experiment

Cross section measurement of $e^+e^- \to p\bar{p}η$ and $e^+e^- \to p\bar{p}ω$ at center-of-mass energies between 3.773 GeV and 4.6 GeV

no code implementations8 Feb 2021 M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N. Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. 
Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

Based on $14.7~\textrm{fb}^{-1}$ of $e^+e^-$ annihilation data collected with the BESIII detector at the BEPCII collider at 17 different center-of-mass energies between $3.7730~\textrm{GeV}$ and $4.5995~\textrm{GeV}$, Born cross sections of the two processes $e^+e^- \to p\bar{p}\eta$ and $e^+e^- \to p\bar{p}\omega$ are measured for the first time.

High Energy Physics - Experiment

An Empirical Study and Analysis on Open-Set Semi-Supervised Learning

no code implementations19 Jan 2021 Huixiang Luo, Hao Cheng, Fanxu Meng, Yuting Gao, Ke Li, Mengdan Zhang, Xing Sun

Pseudo-labeling (PL) and Data Augmentation-based Consistency Training (DACT) are two approaches widely used in Semi-Supervised Learning (SSL) methods.

Data Augmentation

Hyperspectral Image Super-Resolution with Spectral Mixup and Heterogeneous Datasets

2 code implementations19 Jan 2021 Ke Li, Dengxin Dai, Ender Konukoglu, Luc van Gool

With these contributions, our method is able to learn from heterogeneous datasets and lift the requirement for having a large amount of HD HSI training samples.

Data Augmentation Hyperspectral Image Super-Resolution +2

Measurements of the center-of-mass energies of $e^{+}e^{-}$ collisions at BESIII

no code implementations29 Dec 2020 BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, N Hüsken, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. 
Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

During the 2016-17 and 2018-19 running periods, the BESIII experiment collected 7.5 fb$^{-1}$ of $e^+e^-$ collision data at center-of-mass energies ranging from 4.13 to 4.44 GeV.

High Energy Physics - Experiment

One for More: Selecting Generalizable Samples for Generalizable ReID Model

1 code implementation10 Dec 2020 Enwei Zhang, Xinyang Jiang, Hao Cheng, AnCong Wu, Fufu Yu, Ke Li, Xiaowei Guo, Feng Zheng, Wei-Shi Zheng, Xing Sun

Current training objectives of existing person Re-IDentification (ReID) models only ensure that the loss decreases on the selected training batch, with no regard to performance on samples outside the batch.

Person Re-Identification

Search for the reaction $e^{+}e^{-} \rightarrow π^{+}π^{-} χ_{cJ}$ and a charmonium-like structure decaying to $χ_{cJ}π^{\pm}$ between 4.18 and 4.60 GeV

no code implementations4 Dec 2020 BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, N. Hüsken, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. 
Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Y. J. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, X. A. Xiong, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. H. Zhang, H. Y. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

We search for the process $e^{+}e^{-}\rightarrow \pi^{+}\pi^{-}\chi_{cJ}$ ($J=0, 1, 2$) and for a charged charmonium-like state in the $\pi^{\pm}\chi_{cJ}$ subsystem.

High Energy Physics - Experiment

Better Knowledge Retention through Metric Learning

no code implementations26 Nov 2020 Ke Li, Shichong Peng, Kailas Vodrahalli, Jitendra Malik

In continual learning, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories.

Continual Learning Metric Learning

DeRF: Decomposed Radiance Fields

no code implementations CVPR 2021 Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi

Moreover, we show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm for efficient and GPU-friendly rendering.
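
To make the decomposition idea concrete, the toy sketch below assigns ray samples to the nearest of a handful of "head" positions, so that each sample would be handled by a single small network; the head positions and sample counts are invented for illustration and are not the paper's learned decomposition.

```python
import numpy as np

def voronoi_assign(points, heads):
    """Assign each 3D sample point to its nearest Voronoi head (Euclidean metric)."""
    # points: (N, 3), heads: (K, 3)
    d2 = ((points[:, None, :] - heads[None, :, :]) ** 2).sum(-1)  # (N, K)
    return d2.argmin(axis=1)

# toy usage: 4 Voronoi heads, 1000 ray samples
rng = np.random.default_rng(0)
heads = rng.uniform(-1, 1, size=(4, 3))
samples = rng.uniform(-1, 1, size=(1000, 3))
cell = voronoi_assign(samples, heads)  # each sample falls in exactly one cell / sub-network
```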

A Multi-stream Convolutional Neural Network for Micro-expression Recognition Using Optical Flow and EVM

no code implementations7 Nov 2020 Jinming Liu, Ke Li, Baolin Song, Li Zhao

On the other hand, some deep learning-based methods also fail to achieve high accuracy due to problems such as database imbalance.

Micro Expression Recognition Micro-Expression Recognition +1

Generating Unobserved Alternatives

no code implementations3 Nov 2020 Shichong Peng, Ke Li

This setting differs from both the regression and class-conditional generative modelling settings: in the former, there is a unique observed output for each input, which is provided as supervision; in the latter, there are many observed outputs for each input, and many are provided as supervision.

regression Super-Resolution

University of Washington at TREC 2020 Fairness Ranking Track

no code implementations3 Nov 2020 Yunhe Feng, Daniel Saelid, Ke Li, Ruoyuan Gao, Chirag Shah

The results showed that our runs performed below par for the re-ranking task, but above average for the retrieval task.

Ethics Fairness +2

Pruning Filter in Filter

1 code implementation NeurIPS 2020 Fanxu Meng, Hao Cheng, Ke Li, Huixiang Luo, Xiaowei Guo, Guangming Lu, Xing Sun

Through extensive experiments, we demonstrate that SWP is more effective than previous FP-based methods and achieves a state-of-the-art pruning ratio on the CIFAR-10 and ImageNet datasets without an obvious accuracy drop.

Removing the Background by Adding the Background: Towards Background Robust Self-supervised Video Representation Learning

2 code implementations CVPR 2021 Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, Xing Sun

Then we force the model to pull the features of the distracting video and of the original video closer together, so that the model is explicitly encouraged to resist the background influence and focus more on the motion changes.

Representation Learning Self-Supervised Learning

Efficient MDI Adaptation for n-gram Language Models

no code implementations5 Aug 2020 Ruizhe Huang, Ke Li, Ashish Arora, Dan Povey, Sanjeev Khudanpur

This paper presents an efficient algorithm for n-gram language model adaptation under the minimum discrimination information (MDI) principle, where an out-of-domain language model is adapted to satisfy the constraints of marginal probabilities of the in-domain data.
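
As a rough illustration of marginal adaptation (not the paper's efficient algorithm), the sketch below rescales an out-of-domain bigram table by a powered ratio of in-domain to out-of-domain unigram probabilities and renormalises per history; the toy probability tables and the exponent are made up.

```python
import numpy as np

def adapt_bigram(p_out, p_in_unigram, p_out_unigram, beta=0.5):
    """Rescale an out-of-domain conditional LM so its unigram marginals move
    toward the in-domain ones (classic p_in/p_out scaling; illustrative only)."""
    # p_out: (V, V), rows = histories, columns = next-word probabilities
    alpha = (p_in_unigram / p_out_unigram) ** beta     # per-word scaling factor
    scaled = p_out * alpha[None, :]                    # scale every next-word column
    return scaled / scaled.sum(axis=1, keepdims=True)  # renormalise per history

# toy 3-word vocabulary
p_out = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.3, 0.4]])
p_in_uni = np.array([0.2, 0.3, 0.5])    # in-domain unigram marginals
p_out_uni = np.array([0.4, 0.35, 0.25])
print(adapt_bigram(p_out, p_in_uni, p_out_uni))
```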

Language Modelling

Model independent determination of the spin of the $Ω^{-}$ and its polarization alignment in $ψ(3686)\rightarrowΩ^{-}\barΩ^{+}$

no code implementations7 Jul 2020 M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, Anita, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. B. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, N. Huesken, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, X. Y. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. -B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. 
Shi, X. Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Y. J. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, X. A. Xiong, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. H. Zhang, H. Y. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, X. L. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

We present an analysis of the process $\psi(3686) \to \Omega^- \bar{\Omega}^+$ ($\Omega^-\to K^-\Lambda$, $\bar{\Omega}^+\to K^+\bar{\Lambda}$, $\Lambda\to p\pi^-$, $\bar{\Lambda}\to \bar{p}\pi^+$) based on a data set of $448\times 10^6$ $\psi(3686)$ decays collected with the BESIII detector at the BEPCII electron-positron collider.

High Energy Physics - Experiment

Knee Point Identification Based on Trade-Off Utility

1 code implementation23 May 2020 Ke Li, Haifeng Nie, Huifu Gao, Xin Yao

Knee points, characterised by the smallest trade-off loss across all objectives, are attractive to decision makers in multi-criterion decision-making.
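
The trade-off utility defined in the paper is not reproduced here, but a common simpler heuristic conveys what a knee is: on a bi-objective non-dominated front, pick the point farthest from the line through the two extreme solutions. The sketch below implements that heuristic on an invented front.

```python
import numpy as np

def knee_by_extreme_line(front):
    """Heuristic knee detection on a 2-objective non-dominated front (minimisation):
    return the point farthest from the line through the two extreme solutions."""
    front = front[np.argsort(front[:, 0])]   # sort by the first objective
    a, b = front[0], front[-1]               # extreme points of the front
    ab = b - a
    # perpendicular distance of every point to the line a-b (2D cross-product magnitude)
    d = np.abs(ab[0] * (front[:, 1] - a[1]) - ab[1] * (front[:, 0] - a[0]))
    d /= np.linalg.norm(ab)
    return front[d.argmax()]

front = np.array([[0.0, 1.0], [0.1, 0.55], [0.2, 0.3], [0.5, 0.25], [1.0, 0.2]])
print(knee_by_extreme_line(front))  # the visually "bulging" point, here [0.2, 0.3]
```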

Decision Making

Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation

1 code implementation26 Apr 2020 Hao Cheng, Fanxu Meng, Ke Li, Yuting Gao, Guangming Lu, Xing Sun, Rongrong Ji

To gain a universal improvement on both valid and invalid filters, we complement grafting with distillation (\textbf{Cultivation}) to overcome the drawback of grafting.

valid

Adaptive Operator Selection Based on Dynamic Thompson Sampling for MOEA/D

no code implementations22 Apr 2020 Lei Sun, Ke Li

In particular, each arm of our bandit learning model represents a reproduction operator and is assigned with a prior reward distribution.
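
The following sketch shows a Beta-Bernoulli bandit with the standard dynamic Thompson sampling update, where the posterior counts are capped at a threshold C so the model can track non-stationary operator rewards; the cap value and reward scaling are illustrative assumptions, not the paper's exact settings.

```python
import random

class DTSOperatorSelector:
    """Beta-Bernoulli Thompson sampling over reproduction operators (illustrative sketch)."""
    def __init__(self, n_ops, cap=100.0):
        self.alpha = [1.0] * n_ops   # prior "successes" per operator
        self.beta = [1.0] * n_ops    # prior "failures" per operator
        self.cap = cap               # threshold C of dynamic Thompson sampling

    def select(self):
        # sample one success probability from each arm's posterior, play the best
        samples = [random.betavariate(a, b) for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def update(self, op, reward):
        """reward is the observed success signal in [0, 1]."""
        if self.alpha[op] + self.beta[op] < self.cap:
            self.alpha[op] += reward
            self.beta[op] += 1.0 - reward
        else:
            # once the evidence reaches C, old observations are discounted so the
            # posterior keeps adapting to a changing reward distribution
            scale = self.cap / (self.cap + 1.0)
            self.alpha[op] = (self.alpha[op] + reward) * scale
            self.beta[op] = (self.beta[op] + 1.0 - reward) * scale
```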

Thompson Sampling

On the Combined Impact of Population Size and Sub-problem Selection in MOEA/D

no code implementations15 Apr 2020 Geoffrey Pruvost, Bilel Derbel, Arnaud Liefooghe, Ke Li, Qingfu Zhang

This paper intends to understand and to improve the working principle of decomposition-based multi-objective evolutionary algorithms.

Evolutionary Algorithms

Inclusive GAN: Improving Data and Minority Coverage in Generative Models

1 code implementation ECCV 2020 Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz

Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images.

Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text

1 code implementation CVPR 2020 Difei Gao, Ke Li, Ruiping Wang, Shiguang Shan, Xilin Chen

Then, we introduce three aggregators which guide the message passing from one graph to another to utilize the contexts in various modalities, so as to refine the features of nodes.

Question Answering Visual Question Answering (VQA)

Architecture Disentanglement for Deep Neural Networks

1 code implementation ICCV 2021 Jie Hu, Liujuan Cao, Qixiang Ye, Tong Tong, Shengchuan Zhang, Ke Li, Feiyue Huang, Rongrong Ji, Ling Shao

Based on the experimental results, we present three new findings that provide fresh insights into the inner logic of DNNs.

AutoML Disentanglement

Surrogate Assisted Evolutionary Algorithm for Medium Scale Expensive Multi-Objective Optimisation Problems

no code implementations8 Feb 2020 Xiaoran Ruan, Ke Li, Bilel Derbel, Arnaud Liefooghe

The effectiveness of our proposed algorithm is validated on benchmark problems with 10, 20, and 50 variables, in comparison with three state-of-the-art SAEAs.

Evolutionary Algorithms

Understanding the Automated Parameter Optimization on Transfer Learning for CPDP: An Empirical Study

1 code implementation8 Feb 2020 Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan

Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) The research on CPDP is far from mature, where it is "not difficult" to find a better alternative by combining existing transfer learning and classification techniques.

Transfer Learning

Routing-Led Placement of VNFs in Arbitrary Networks

no code implementations30 Jan 2020 Joseph Billingsley, Ke Li, Wang Miao, Geyong Min, Nektarios Georgalas

The ever-increasing demand for computing resources has led to the creation of hyperscale datacentres with tens of thousands of servers.

Search-Based Software Engineering for Self-Adaptive Systems: Survey, Disappointments, Suggestions and Opportunities

1 code implementation22 Jan 2020 Tao Chen, Miqing Li, Ke Li, Kalyanmoy Deb

In this paper, we provide the first systematic and comprehensive survey exclusively on SBSE for SASs, covering papers in 27 venues from 7 repositories, which eventually leads to several key statistics from the most notable 74 primary studies in this particular field of research.

Self Adaptive System

Filter Grafting for Deep Neural Networks

2 code implementations CVPR 2020 Fanxu Meng, Hao Cheng, Ke Li, Zhixin Xu, Rongrong Ji, Xing Sun, Gaungming Lu

To better perform the grafting process, we develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
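
One plausible reading of the entropy criterion is sketched below: measure each filter's information by the entropy of a histogram of its weights, and blend two networks' filters with a softmax-style coefficient favouring the higher-entropy one. The bin count and temperature are invented, and the exact weighting scheme may differ from the paper's.

```python
import numpy as np

def filter_entropy(weights, n_bins=10):
    """Histogram entropy of a conv filter's weights, a rough proxy for how much
    information the filter carries."""
    hist, _ = np.histogram(weights.ravel(), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def grafting_weight(entropy_a, entropy_b, temperature=1.0):
    """Adaptive coefficient for blending two networks' filters: the higher-entropy
    filter contributes more (softmax-style weighting)."""
    ea, eb = np.exp(entropy_a / temperature), np.exp(entropy_b / temperature)
    return ea / (ea + eb)

# toy usage: graft a near-zero ("invalid") filter with an informative one
dead = np.zeros((3, 3)) + 1e-4 * np.random.default_rng(0).normal(size=(3, 3))
alive = np.random.default_rng(1).normal(size=(3, 3))
w = grafting_weight(filter_entropy(dead), filter_entropy(alive))
grafted = w * dead + (1.0 - w) * alive
```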

Asymmetric Co-Teaching for Unsupervised Cross Domain Person Re-Identification

1 code implementation3 Dec 2019 Fengxiang Yang, Ke Li, Zhun Zhong, Zhiming Luo, Xing Sun, Hao Cheng, Xiaowei Guo, Feiyue Huang, Rongrong Ji, Shaozi Li

This procedure encourages that the selected training samples can be both clean and miscellaneous, and that the two models can promote each other iteratively.

Clustering Miscellaneous +2

Approximate Feature Collisions in Neural Nets

1 code implementation NeurIPS 2019 Ke Li, Tianhao Zhang, Jitendra Malik

Work on adversarial examples has shown that neural nets are surprisingly sensitive to adversarially chosen changes of small magnitude.

Model Adaption Object Detection System for Robot

no code implementations7 Nov 2019 Jingwen Fu, Licheng Zong, Yinbing Li, Ke Li, Bingqian Yang, Xibei Liu

Object detection for robot guidance is a crucial task for autonomous robots, and it has attracted extensive attention from researchers.

Object object-detection +2

Does Preference Always Help? A Holistic Study on Preference-Based Evolutionary Multi-Objective Optimisation Using Reference Points

no code implementations30 Sep 2019 Ke Li, Min-Hui Liao, Kalyanmoy Deb, Geyong Min, Xin Yao

The ultimate goal of multi-objective optimisation is to help a decision maker (DM) identify solution(s) of interest (SOI) achieving satisfactory trade-offs among multiple conflicting criteria.

Decision Making

D3M: A deep domain decomposition method for partial differential equations

no code implementations24 Sep 2019 Ke Li, Kejun Tang, Tianfan Wu, Qifeng Liao

A state-of-the-art deep domain decomposition method (D3M) based on the variational principle is proposed for partial differential equations (PDEs).

Object Detection in Optical Remote Sensing Images: A Survey and A New Benchmark

1 code implementation31 Aug 2019 Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, Junwei Han

However, current surveys of datasets and deep learning-based methods for object detection in optical remote sensing images are not adequate.

Object object-detection +1

Bayesian Network Based Label Correlation Analysis For Multi-label Classifier Chain

no code implementations6 Aug 2019 Ran Wang, Suhe Ye, Ke Li, Sam Kwong

Classifier chain (CC) is a multi-label learning approach that constructs a sequence of binary classifiers according to a label order.
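
A minimal classifier chain, assuming scikit-learn logistic regression as the base binary learner, looks roughly like this: each classifier in the chain sees the input features plus the labels (true at training time, predicted at test time) of all earlier positions in the chain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class SimpleClassifierChain:
    """Minimal classifier chain; `order` is a permutation of the label indices."""
    def __init__(self, order):
        self.order = list(order)
        self.models = []

    def fit(self, X, Y):
        self.models, aug = [], X.copy()
        for j in self.order:
            clf = LogisticRegression(max_iter=1000).fit(aug, Y[:, j])
            self.models.append(clf)
            aug = np.hstack([aug, Y[:, [j]]])          # feed the true label downstream
        return self

    def predict(self, X):
        aug, preds = X.copy(), {}
        for clf, j in zip(self.models, self.order):
            preds[j] = clf.predict(aug)
            aug = np.hstack([aug, preds[j][:, None]])  # propagate predictions at test time
        out = np.zeros((X.shape[0], len(self.order)), dtype=int)
        for j, p in preds.items():
            out[:, j] = p
        return out
```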

Multi-Label Learning

Semi-Supervised Adversarial Monocular Depth Estimation

no code implementations6 Aug 2019 Rongrong Ji, Ke Li, Yan Wang, Xiaoshuai Sun, Feng Guo, Xiaowei Guo, Yongjian Wu, Feiyue Huang, Jiebo Luo

In this paper, we address the problem of monocular depth estimation when only a limited number of training image-depth pairs are available.

Monocular Depth Estimation

Evidence for $Z_{c}^{\pm}$ decays into the $ρ^{\pm} η_{c}$ final state

no code implementations3 Jun 2019 M. Ablikim, M. N. Achasov, S. Ahmed, M. Albrecht, M. Alekseev, A. Amoroso, F. F. An, Q. An, Y. Bai, O. Bakina, R. Baldini Ferroli, Y. Ban, K. Begzsuren, D. W. Bennett, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, E. Boger, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, S. A. Cetin, J. Chai, J. F. Chang, W. L. Chang, G. Chelkov, G. Chen, H. S. Chen, J. C. Chen, M. L. Chen, P. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. Cheng, X. K. Chu, G. Cibinetto, F. Cossio, H. L. Dai, J. P. Dai, A. Dbeyssi, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. DeMori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, Z. L. Dou, S. X. Du, P. F. Duan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Q. Gao, X. L. Gao, Y. Gao, Y. G. Gao, Z. Gao, B. Garillon, I. Garzia, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, Y. T. Gu, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, Z. Haddadi, S. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, J. S. Huang, X. T. Huang, X. Z. Huang, Z. L. Huang, T. Hussain, W. Ikegami Andersson, M. Irshad, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. L. Jiang, X. S. Jiang, X. Y. Jiang, J. B. Jiao, Z. Jiao, D. P. Jin, S. Jin, Y. Jin, T. Johansson, A. Julin, N. Kalantar-Nayestanaki, X. S. Kang, M. Kavatsyuk, B. C. Ke, I. K. Keshk, T. Khan, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. Kurth, W. Kühn, J. S. Lange, P. Larin, L. Lavezzi, S. Leiber, H. Leithoff, C. Li, Cheng Li, D. M. Li, F. Li, F. Y. Li, G. Li, H. B. Li, H. J. Li, J. C. Li, J. W. Li, K. J. Li, Kang Li, Ke Li, Lei LI, P. L. Li, P. R. Li, Q. Y. Li, T. Li, W. D. Li, W. G. Li, X. L. Li, X. N. Li, X. Q. Li, Z. B. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, D. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. L. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Y. Liu, Ke Liu, L. D. Liu, Q. Liu, S. B. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Zhiqing Liu, Y. F. Long, X. C. Lou, H. J. Lu, J. G. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, X. N. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, J. Min, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, C. Morales Morales, N. Yu. Muchnoi, H. Muramatsu, A. Mustafa, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Niu, X. Y. Niu, S. L. Olsen, Q. Ouyang, S. Pacetti, Y. Pan, M. Papenbrock, P. Patteri, M. Pelizaeus, J. Pellegrino, H. P. Peng, Z. Y. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, C. F. Qiao, N. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, C. F. Redmer, M. Richter, M. Ripka, A. Rivetti, M. Rolo, G. Rong, Ch. Rosner, A. Sarantsev, M. Savrié, K. Schoenning, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. Y. Sheng, X. Shi, J. J. Song, W. M. Song, X. Y. Song, S. Sosio, C. Sowa, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, X. H. Sun, Y. J. Sun, Y. K Sun, Y. Z. Sun, Z. J. 
Sun, Z. T. Sun, Y. T Tan, C. J. Tang, G. Y. Tang, X. Tang, M. Tiemens, B. Tsednee, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Wang, D. Y. Wang, Dan Wang, H. H. Wang, K. Wang, L. L. Wang, L. S. Wang, M. Wang, Meng Wang, P. Wang, P. L. Wang, W. P. Wang, X. F. Wang, Y. Wang, Y. F. Wang, Z. Wang, Z. G. Wang, Z. Y. Wang, Zongyuan Wang, T. Weber, D. H. Wei, P. Weidenkaff, S. P. Wen, U. Wiedner, M. Wolke, L. H. Wu, L. J. Wu, Z. Wu, L. Xia, X. Xia, Y. Xia, D. Xiao, Y. J. Xiao, Z. J. Xiao, Y. G. Xie, Y. H. Xie, X. A. Xiong, Q. L. Xiu, G. F. Xu, J. J. Xu, L. Xu, Q. J. Xu, X. P. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Y. H. Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Z. Q. Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, J. S. Yu, C. Z. Yuan, Y. Yuan, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, B. Y. Zhang, C. C. Zhang, D. H. Zhang, H. H. Zhang, H. Y. Zhang, J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, K. Zhang, L. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yang Zhang, YaoZ hang, Yu Zhang, Z. H. Zhang, Z. P. Zhang, Z. Y. Zhang, G. Zhao, J. W. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, T. C. Zhao, Y. B. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, W. J. Zheng, Y. H. Zheng, B. Zhong, L. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, Xiaoyu Zhou, Xu Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. Zhu, S. H. Zhu, X. L. Zhu, Y. C. Zhu, Y. S. Zhu, Z. A. Zhu, J. Zhuang, B. S. Zou, J. H. Zou

We study $e^{+}e^{-}$ collisions with a $\pi^{+}\pi^{-}\pi^{0}\eta_{c}$ final state using data samples collected with the BESIII detector at center-of-mass energies $\sqrt{s}=4.226$, $4.258$, $4.358$, $4.416$, and $4.600$ GeV.

High Energy Physics - Experiment

Accelerated Sparse Recovery Under Structured Measurements

no code implementations ICLR 2019 Ke Li, Jitendra Malik

Extensive work on compressed sensing has yielded a rich collection of sparse recovery algorithms, each making different tradeoffs between recovery condition and computational efficiency.

Computational Efficiency

Visualisation of Pareto Front Approximation: A Short Survey and Empirical Comparisons

no code implementations5 Mar 2019 Huiru Gao, Haifeng Nie, Ke Li

Visualisation is an effective way to facilitate the analysis and understanding of multivariate data.

Decision Making

Which Surrogate Works for Empirical Performance Modelling? A Case Study with Differential Evolution

no code implementations30 Jan 2019 Ke Li, Zilin Xiang, Kay Chen Tan

Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.

regression

Trajectory Normalized Gradients for Distributed Optimization

no code implementations24 Jan 2019 Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik

Recently, researchers have proposed various low-precision gradient compression schemes for efficient communication in large-scale distributed optimization.

Benchmarking Distributed Optimization

Speaker Adaptation for End-to-End CTC Models

no code implementations4 Jan 2019 Ke Li, Jinyu Li, Yong Zhao, Kshitiz Kumar, Yifan Gong

We propose two approaches for speaker adaptation in end-to-end (E2E) automatic speech recognition systems.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Are All Training Examples Created Equal? An Empirical Study

no code implementations30 Nov 2018 Kailas Vodrahalli, Ke Li, Jitendra Malik

Modern computer vision algorithms often rely on very large training datasets.

Active Learning

On the Implicit Assumptions of GANs

no code implementations29 Nov 2018 Ke Li, Jitendra Malik

Generative adversarial nets (GANs) have generated a lot of excitement.

Diverse Image Synthesis from Semantic Layouts via Conditional IMLE

1 code implementation ICCV 2019 Ke Li, Tianhao Zhang, Jitendra Malik

Most existing methods for conditional image synthesis are only able to generate a single plausible image for any given input, or at best a fixed number of plausible images.

Image Generation Semantic Segmentation

Super-Resolution via Conditional Implicit Maximum Likelihood Estimation

no code implementations2 Oct 2018 Ke Li, Shichong Peng, Jitendra Malik

Single-image super-resolution (SISR) is a canonical problem with diverse applications.

Image Super-Resolution

Implicit Maximum Likelihood Estimation

1 code implementation ICLR 2019 Ke Li, Jitendra Malik

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly.
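
A minimal sketch of one IMLE-style update, assuming a PyTorch generator that maps latent vectors to flattened data, is given below: draw several samples per data point, keep the nearest one, and pull it toward the data. The actual procedure uses larger sample pools and fast nearest-neighbour search, so treat this only as the core idea.

```python
import torch

def imle_step(generator, optimizer, x_batch, latent_dim=64, n_samples=10):
    """One IMLE-style update on a batch x_batch of shape (B, D)."""
    B = x_batch.size(0)
    z = torch.randn(B, n_samples, latent_dim)
    gen = generator(z.view(-1, latent_dim)).view(B, n_samples, -1)   # (B, n_samples, D)
    dist = ((gen - x_batch.unsqueeze(1)) ** 2).sum(-1)               # squared distances
    nearest = gen[torch.arange(B), dist.argmin(dim=1)]               # closest sample per datum
    loss = ((nearest - x_batch) ** 2).sum(-1).mean()                 # pull it toward the data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```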

Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks

1 code implementation Interspeech 2018 2018 Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, Mahsa Yarmohammadi, Sanjeev Khudanpur

Time Delay Neural Networks (TDNNs), also known as one-dimensional Convolutional Neural Networks (1-d CNNs), are an efficient and well-performing neural network architecture for speech recognition.
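
The factorized-layer idea can be illustrated as follows: split a large affine transform into two low-rank factors and keep the dimension-reducing factor close to semi-orthogonal. The sketch below projects onto the semi-orthogonal set with an SVD, whereas the paper instead nudges the factor toward that set with a cheap iterative update during training; the rank and matrix shapes are illustrative.

```python
import numpy as np

def factorize(W, rank):
    """Replace a large affine transform W (out x in) by A @ B with a small interior rank."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank]                      # out x rank
    B = np.diag(s[:rank]) @ Vt[:rank]    # rank x in (the dimension-reducing factor)
    return A, B

def project_semi_orthogonal(M):
    """Project a wide matrix M (rank x in) onto the nearest M with M @ M.T = I."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

W = np.random.default_rng(0).normal(size=(512, 1024))
A, B = factorize(W, rank=128)
B = project_semi_orthogonal(B)           # re-impose the semi-orthogonal constraint
print(np.allclose(B @ B.T, np.eye(128), atol=1e-8))
```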

speech-recognition Speech Recognition

Interactive Decomposition Multi-Objective Optimization via Progressively Learned Value Functions

no code implementations2 Jan 2018 Ke Li, Renzhi Chen, Dragan Savic, Xin Yao

In the preference elicitation session, the preference information learned in the consultation module is translated into a form that can be used in a decomposition-based EMO algorithm, i.e., a set of reference points biased toward the ROI.

Decision Making

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization

no code implementations21 Nov 2017 Ke Li, Renzhi Chen, Guangtao Fu, Xin Yao

When solving constrained multi-objective optimization problems, an important issue is how to balance convergence, diversity and feasibility simultaneously.

Vocal Bursts Valence Prediction

Evolutionary Many-Objective Optimization Based on Adversarial Decomposition

no code implementations7 Apr 2017 Mengyuan Wu, Ke Li, Sam Kwong, Qingfu Zhang

It decomposes a multi-objective optimization problem into several single-objective optimization subproblems, each of which is usually defined as a scalarizing function using a weight vector.
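
A standard choice of scalarising function, shown below, is the weighted Tchebycheff form $g(x \mid w, z^*) = \max_i w_i |f_i(x) - z^*_i|$; the weight vectors and ideal point in the usage snippet are made up, and the paper's adversarial decomposition combines two complementary scalarisations rather than this single one.

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Weighted Tchebycheff scalarising function g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
    return float(np.max(np.asarray(weight) * np.abs(np.asarray(f) - np.asarray(z_star))))

# a uniform spread of weight vectors for a bi-objective decomposition
weights = [np.array([i / 10, 1 - i / 10]) for i in range(11)]
z_star = np.array([0.0, 0.0])            # ideal point (assumed known for the example)
print(tchebycheff([0.4, 0.7], weights[3], z_star))
```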

Learning to Optimize Neural Nets

no code implementations ICLR 2018 Ke Li, Jitendra Malik

Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning.

reinforcement-learning Reinforcement Learning (RL) +1

Fast k-Nearest Neighbour Search via Prioritized DCI

2 code implementations ICML 2017 Ke Li, Jitendra Malik

Most exact methods for k-nearest neighbour search suffer from the curse of dimensionality; that is, their query times exhibit exponential dependence on either the ambient or the intrinsic dimensionality.

Effective face landmark localization via single deep network

no code implementations9 Feb 2017 Zongping Deng, Ke Li, Qijun Zhao, Yi Zhang, Hu Chen

In this paper, we propose a novel face alignment method using single deep network (SDN) on existing limited training data.

Data Augmentation Face Alignment

Integration of Preferences in Decomposition Multi-Objective Optimization

no code implementations20 Jan 2017 Ke Li, Kalyanmoy Deb, Xin Yao

Extensive experiments, both proof-of-principle and on a variety of problems with 3 to 10 objectives, fully demonstrate the effectiveness of our proposed method for approximating the preferred solutions in the region of interest.

Decision Making

Low-dose CT denoising with convolutional neural network

no code implementations2 Oct 2016 Hu Chen, Yi Zhang, Weihua Zhang, Peixi Liao, Ke Li, Jiliu Zhou, Ge Wang

To reduce the potential radiation risk, low-dose CT has attracted much attention.

Denoising

Low-Dose CT via Deep Neural Network

no code implementations27 Sep 2016 Hu Chen, Yi Zhang, Weihua Zhang, Peixi Liao, Ke Li, Jiliu Zhou, Ge Wang

In order to reduce the potential radiation risk, low-dose CT has attracted more and more attention.

Medical Physics

Matching-Based Selection with Incomplete Lists for Decomposition Multi-Objective Optimization

no code implementations30 Aug 2016 Mengyuan Wu, Ke Li, Sam Kwong, Yu Zhou, Qingfu Zhang

In particular, the stable matching between subproblems and solutions, which achieves an equilibrium between their mutual preferences, implicitly strikes a balance between the convergence and diversity.
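
The matching machinery is essentially deferred acceptance; the textbook Gale-Shapley routine below (with complete preference lists, unlike the incomplete lists this paper studies) shows how subproblems propose and each solution holds its best proposer.

```python
def gale_shapley(sub_prefs, sol_ranks):
    """Deferred acceptance between subproblems (proposers) and solutions (acceptors).
    sub_prefs[i]: solution indices in subproblem i's order of preference.
    sol_ranks[j][i]: how solution j ranks subproblem i (lower = better)."""
    n = len(sub_prefs)
    next_prop = [0] * n           # next solution each free subproblem will propose to
    holder = {}                   # solution j -> subproblem it currently holds
    free = list(range(n))
    while free:
        i = free.pop()
        j = sub_prefs[i][next_prop[i]]
        next_prop[i] += 1
        if j not in holder:
            holder[j] = i
        elif sol_ranks[j][i] < sol_ranks[j][holder[j]]:
            free.append(holder[j])    # incumbent is released and proposes again later
            holder[j] = i
        else:
            free.append(i)            # rejected; i will try its next choice
    return {i: j for j, i in holder.items()}
```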

Dynamic Multi-Objectives Optimization with a Changing Number of Objectives

no code implementations23 Aug 2016 Renzhi Chen, Ke Li, Xin Yao

Existing studies on dynamic multi-objective optimization focus on problems with time-dependent objective functions, while the ones with a changing number of objectives have rarely been considered in the literature.

Learning to Optimize

no code implementations 2016 Ke Li, Jitendra Malik

Algorithm design is a laborious process and often requires many iterations of ideation and validation.

reinforcement-learning Reinforcement Learning (RL)

Amodal Instance Segmentation

no code implementations27 Apr 2016 Ke Li, Jitendra Malik

We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object.

Amodal Instance Segmentation Segmentation +1

Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing

1 code implementation1 Dec 2015 Ke Li, Jitendra Malik

Existing methods for retrieving k-nearest neighbours suffer from the curse of dimensionality.
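
A much simplified flavour of projection-based indexing is sketched below: project all points onto a few random directions, gather candidates whose projections lie near the query's, and rank only those candidates by exact distance. The real index keeps sorted lists per direction and visits points adaptively, so this conveys only the intuition, not the algorithm; the direction count and candidate budget are arbitrary.

```python
import numpy as np

def build_index(data, n_dirs=10, seed=0):
    """Project every point onto a few random unit directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return dirs, data @ dirs.T                           # projections: (N, n_dirs)

def knn_query(q, data, dirs, proj, k=5, per_dir=20):
    """Candidates = points whose projections are close to the query's; rank them exactly."""
    qp = dirs @ q
    cand = set()
    for d in range(dirs.shape[0]):
        cand.update(np.argsort(np.abs(proj[:, d] - qp[d]))[:per_dir].tolist())
    cand = np.fromiter(cand, dtype=int)
    dist = np.linalg.norm(data[cand] - q, axis=1)        # exact distances on candidates only
    return cand[np.argsort(dist)[:k]]

data = np.random.default_rng(0).normal(size=(5000, 64))
dirs, proj = build_index(data)
print(knn_query(data[0], data, dirs, proj))
```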

Iterative Instance Segmentation

no code implementations CVPR 2016 Ke Li, Bharath Hariharan, Jitendra Malik

Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible.

Instance Segmentation Segmentation +2

Wood Species Recognition Based on SIFT Keypoint Histogram

no code implementations5 Nov 2015 Shuaiqi Hu, Ke Li, Xudong Bao

Using the clustering results, a SIFT keypoint histogram is calculated for each wood image.
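
The bag-of-visual-words construction behind such a histogram, assuming SIFT descriptors have already been extracted per image (e.g. with OpenCV) and using scikit-learn k-means for the vocabulary, can be sketched as follows; the vocabulary size is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def keypoint_histograms(descriptors_per_image, n_words=50, seed=0):
    """Cluster all SIFT descriptors into a visual vocabulary, then describe each image
    by a normalised histogram of its keypoints' cluster assignments."""
    all_desc = np.vstack(descriptors_per_image)
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(all_desc)
    hists = []
    for desc in descriptors_per_image:
        words = kmeans.predict(desc)
        h, _ = np.histogram(words, bins=np.arange(n_words + 1))
        hists.append(h / h.sum())
    return np.array(hists), kmeans
```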

Clustering

Bandit Label Inference for Weakly Supervised Learning

no code implementations22 Sep 2015 Ke Li, Jitendra Malik

The scarcity of data annotated at the desired level of granularity is a recurring issue in many applications.

Weakly-supervised Learning
