
no code implementations • 29 Mar 2023 • Hyunyoung Jung, Zhuo Hui, Lei Luo, Haitao Yang, Feng Liu, Sungjoo Yoo, Rakesh Ranjan, Denis Demandolx

To apply optical flow in practice, it is often necessary to resize the input to smaller dimensions in order to reduce computational costs.
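Resizing changes not only the image grid but also the magnitude of the flow vectors, which are measured in pixels. A minimal NumPy sketch (a hypothetical helper, not the paper's method) of downscaling a flow field consistently:

```python
import numpy as np

def resize_flow(flow, new_h, new_w):
    """Downscale a dense flow field and rescale its vectors.

    flow: (H, W, 2) array of (dx, dy) displacements in pixels.
    Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    h, w, _ = flow.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    small = flow[rows][:, cols].astype(np.float64).copy()
    # Displacements are in pixels, so they must be scaled by the
    # same factors as the image dimensions.
    small[..., 0] *= new_w / w   # dx scales with width
    small[..., 1] *= new_h / h   # dy scales with height
    return small
```

Halving the resolution halves every displacement, which is exactly why naive resizing degrades flow accuracy at object boundaries.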

no code implementations • 10 Mar 2023 • Zhipeng Yu, Jin Lin, Feng Liu, Jiarong Li, Yuxuan Zhao, Yonghua Song

However, multi-timescale electricity, hydrogen, and ammonia storages, minimum power supply for system safety, and the multi-year uncertainty of renewable generation lead to difficulties in planning.

1 code implementation • 9 Mar 2023 • Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han

It leads to a min-max learning scheme -- searching to synthesize OOD data that leads to worst judgments and learning from such OOD data for uniform performance in OOD detection.

Out-of-Distribution (OOD) Detection

no code implementations • 18 Feb 2023 • Sirui Wu, Jin Lin, Jiarong Li, Feng Liu, Yonghua Song, Yanhui Xu, Xiang Cheng, Zhipeng Yu

Hence, we develop a multi-timescale trading strategy for the RePtA VPP in the electricity, hydrogen, and ammonia markets.

no code implementations • 17 Feb 2023 • Marlon E. Bran Lorenzana, Shekhar S. Chandra, Feng Liu

Sparse reconstruction is an important aspect of modern medical imaging, reducing the acquisition time of relatively slow modalities such as magnetic resonance imaging (MRI).

no code implementations • 8 Feb 2023 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Adversarial contrastive learning (ACL) does not require expensive data annotations but outputs a robust representation that withstands adversarial attacks and also generalizes to a wide range of downstream tasks.

no code implementations • 29 Dec 2022 • Feng Liu, Xiaoming Liu

The objective of this paper is to learn dense 3D shape correspondence for topology-varying generic objects in an unsupervised manner.

no code implementations • 19 Dec 2022 • Michael R. Lindstrom, Xiaofu Ding, Feng Liu, Anand Somayajula, Deanna Needell

Nonnegative matrix factorization can be used to automatically detect topics within a corpus in an unsupervised fashion.

no code implementations • 9 Dec 2022 • Zhipeng Yu, Jin Lin, Feng Liu, Jiarong Li, Yuxuan Zhao, Yonghua Song, Yanhua Song, Xinzhen Zhang

This paper proposes an optimal sizing and pricing method for RePtA system planning.

no code implementations • 1 Dec 2022 • Zhifeng Chen, Congyu Liao, Xiaozhi Cao, Benedikt A. Poser, Zhongbiao Xu, Wei-Ching Lo, Manyi Wen, Jaejin Cho, Qiyuan Tian, Yaohui Wang, Yanqiu Feng, Ling Xia, Wufan Chen, Feng Liu, Berkin Bilgic

Purpose: This work aims to develop a novel distortion-free 3D-EPI acquisition and image reconstruction technique for fast and robust, high-resolution, whole-brain imaging as well as quantitative T2* mapping.

no code implementations • 25 Nov 2022 • Zhuang Xiong, Yang Gao, Feng Liu, Hongfu Sun

We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for QSM, which is robust against arbitrary acquisition orientation and spatial resolution as fine as 0.6 mm isotropic.

1 code implementation • 27 Oct 2022 • Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.

Out-of-Distribution (OOD) Detection
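As a point of reference for what an OOD score looks like, the maximum-softmax-probability baseline of Hendrycks and Gimpel fits in a few lines; this is a standard baseline, not the method proposed in the entry above:

```python
import numpy as np

def msp_ood_score(logits):
    """Maximum softmax probability (MSP) baseline for OOD detection.

    A confident prediction (one dominant logit) yields a low score;
    a flat, uncertain prediction yields a high score, flagging the
    input as more likely out-of-distribution.
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return 1.0 - p.max(axis=-1)   # higher score => more OOD
```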

no code implementations • 26 Oct 2022 • Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu

Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.

1 code implementation • 19 Oct 2022 • Minchul Kim, Feng Liu, Anil Jain, Xiaoming Liu

Advances in attention and recurrent modules have led to feature fusion that can model the relationship among the images in the input set.

Ranked #1 on Face Verification on IJB-B (TAR @ FAR=1e-4 metric)
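The TAR @ FAR=1e-4 metric in this ranking fixes the verification threshold so that at most the given fraction of impostor pairs is accepted, then reports the fraction of genuine pairs scoring above that threshold. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-4):
    """True accept rate at a fixed false accept rate.

    genuine/impostor: similarity scores for matching and
    non-matching face pairs. The threshold is the smallest score
    that keeps the impostor acceptance rate at or below `far`.
    """
    impostor = np.sort(np.asarray(impostor))
    k = int(np.floor(far * len(impostor)))   # impostors we may accept
    thresh = impostor[len(impostor) - k - 1]  # accept strictly above
    return float((np.asarray(genuine) > thresh).mean())
```

In practice IJB-B evaluations use millions of impostor pairs, so a FAR of 1e-4 still corresponds to hundreds of accepted impostors.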

no code implementations • 14 Oct 2022 • Chaoqi Chen, Luyao Tang, Feng Liu, Gangming Zhao, Yue Huang, Yizhou Yu

Domain generalization (DG) enables generalizing a learning machine from multiple seen source domains to an unseen target one.

1 code implementation • 4 Oct 2022 • Qiqi Hou, Abhijay Ghildyal, Feng Liu

In this paper, we present a dedicated perceptual quality metric for measuring video frame interpolation results.

no code implementations • 25 Sep 2022 • Wentian Zhang, Haozhe Liu, Feng Liu, Raghavendra Ramachandra

For reconstruction performance, our method achieves the best results, with 0.834 mIoU and 0.937 PA. Comparison with the recognition performance on surface 2D fingerprints further demonstrates the effectiveness of the proposed method for high-quality subsurface fingerprint reconstruction.
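For readers unfamiliar with the two reconstruction metrics, mean intersection-over-union (mIoU) and pixel accuracy (PA) can be computed from predicted and ground-truth label maps as follows (a generic sketch, not the paper's evaluation code):

```python
import numpy as np

def miou_and_pa(pred, gt, num_classes):
    """Mean intersection-over-union and pixel accuracy for label maps."""
    pa = float((pred == gt).mean())
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:               # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious)), pa
```

mIoU penalizes false positives through the union term, so it is typically lower and stricter than plain pixel accuracy.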

no code implementations • 20 Sep 2022 • Qinfei Long, Junhong Liu, Chenhao Ren, Wenqian Yin, Feng Liu, Yunhe Hou

From the perspectives of service life and the Braess paradox, it is important yet challenging to jointly optimize DTR placement and operation schedules for changing system states. This is a two-stage combinatorial problem with only discrete variables, for which traditional models offer no approximation guarantee and suffer from the curse of dimensionality.

1 code implementation • 5 Aug 2022 • Jichang Li, Guanbin Li, Feng Liu, Yizhou Yu

Specifically, our method is divided into two steps: 1) Neighborhood Collective Noise Verification to separate all training samples into a clean or noisy subset, 2) Neighborhood Collective Label Correction to relabel noisy samples, and then auxiliary techniques are used to assist further model optimization.

1 code implementation • 29 Jul 2022 • Ganlong Zhao, Guanbin Li, Yipeng Qin, Feng Liu, Yizhou Yu

In this paper, we propose a two-stage clean samples identification method to address the aforementioned challenge.

Ranked #2 on Image Classification on Clothing1M (using extra training data)

2 code implementations • 27 Jul 2022 • Abhijay Ghildyal, Feng Liu

This paper studies the effect of small misalignment, specifically a small shift between the input and reference image, on existing metrics, and accordingly develops a shift-tolerant similarity metric.
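The sensitivity the paper targets is easy to reproduce: a per-pixel metric such as MSE reports a large error for an image compared against its own one-pixel-shifted copy, even though the two are perceptually identical. A small self-contained demonstration:

```python
import numpy as np

# Synthetic "image": random values stand in for real pixel data.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 1, axis=1)      # one-pixel horizontal shift

mse_identical = float(np.mean((img - img) ** 2))      # exactly 0
mse_shifted = float(np.mean((img - shifted) ** 2))    # large error
```

A shift-tolerant metric should assign the shifted pair a similarity close to that of the identical pair.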

no code implementations • 20 Jul 2022 • Feng Liu, Xiaoming Liu

In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervisions from GAN-generated multi-view images and perform the single-view reconstruction of generic objects.

1 code implementation • 20 Jul 2022 • Feng Liu, Minchul Kim, Anil Jain, Xiaoming Liu

To address this problem, we propose a controllable face synthesis model (CFSM) that can mimic the distribution of target datasets in a style latent space.

Ranked #1 on Face Verification on IJB-S

1 code implementation • 7 Jul 2022 • Chengfeng Zhou, Songchang Chen, Chenming Xu, Jun Wang, Feng Liu, Chun Zhang, Juan Ye, Hefeng Huang, Dahong Qian

In this study, we present a novel normalization technique called window normalization (WIN) to improve the model generalization on heterogeneous medical images, which is a simple yet effective alternative to existing normalization methods.

no code implementations • 5 Jul 2022 • Thu Nguyen-Phuoc, Feng Liu, Lei Xiao

This paper presents a stylized novel view synthesis method.

1 code implementation • 15 Jun 2022 • Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng

The AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available.

1 code implementation • 11 Jun 2022 • Xiong Peng, Feng Liu, Jingfen Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han

To defend against MI attacks, previous work utilizes a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) during training the classifier.

1 code implementation • 9 Jun 2022 • Guangzhi Ma, Jie Lu, Feng Liu, Zhen Fang, Guangquan Zhang

Hence, in this paper, we propose a novel framework to address a new realistic problem called multi-class classification with imprecise observations (MCIMO), where we need to train a classifier with fuzzy-feature observations.

no code implementations • 28 May 2022 • Ye Liu, Chen Shen, Zhaojian Wang, Feng Liu

In multi-infeed hybrid AC-DC (MIDC) systems, the emergency frequency control (EFC) with LCC-HVDC systems participating is of vital importance for system frequency stability.

2 code implementations • 19 May 2022 • Feng Liu, Xiaosong Zhang, Zhiliang Peng, Zonghao Guo, Fang Wan, Xiangyang Ji, Qixiang Ye

Except for the backbone networks, however, other components such as the detector head and the feature pyramid network (FPN) remain trained from scratch, which hinders fully tapping the potential of representation models.

Ranked #1 on Few-Shot Object Detection on MS-COCO (30-shot)

1 code implementation • 16 May 2022 • Haozhe Liu, Haoqin Ji, Yuexiang Li, Nanjun He, Haoqian Wu, Feng Liu, Linlin Shen, Yefeng Zheng

With the regularization and orthogonal classifier, a more compact embedding space can be obtained, which accordingly improves the model robustness against adversarial attacks.

no code implementations • 11 Apr 2022 • Jiazhi Liu, Feng Liu

The new stereo matching pipeline has the following advantages: 1) it generalizes better than most current stereo matching methods; 2) it relaxes the limitation of a fixed disparity search range; and 3) it can handle scenes involving both positive and negative disparities, which opens more potential applications such as view synthesis in 3D multimedia and VR/AR.

1 code implementation • 6 Apr 2022 • Xuanyu Zhu, Yang Gao, Feng Liu, Stuart Crozier, Hongfu Sun

The BFRnet method is compared with three conventional BFR methods and one previous deep learning method using simulated and in vivo brains from 4 healthy and 2 hemorrhagic subjects.

no code implementations • 24 Mar 2022 • Xintao Zhao, Feng Liu, Changhe Song, Zhiyong Wu, Shiyin Kang, Deyi Tuo, Helen Meng

In this paper, we propose an any-to-one VC method using hybrid bottleneck features extracted from CTC-BNFs and CE-BNFs, so that the two complement each other's advantages.

Automatic Speech Recognition (ASR)

no code implementations • 7 Mar 2022 • Xinwen Liu, Jing Wang, Cheng Peng, Shekhar S. Chandra, Feng Liu, S. Kevin Zhou

In this paper, we investigate the use of such side information as normalisation parameters in a convolutional neural network (CNN) to improve undersampled MRI reconstruction.

no code implementations • 6 Mar 2022 • Xinyu Zhang, Vincent CS. Lee, Jia Rong, James C. Lee, Jiangning Song, Feng Liu

Therefore, this study proposed a novel multi-channel convolutional neural network (CNN) architecture to address the multi-class classification task of thyroid disease.

1 code implementation • 7 Feb 2022 • Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Furthermore, we theoretically find that the adversary can also degrade the lower bound of a TST's test power, which enables us to iteratively minimize the test criterion in order to search for adversarial pairs.

1 code implementation • 24 Jan 2022 • Weijun Chen, Yanze Wang, Chengshuo Du, Zhenglong Jia, Feng Liu, Ran Chen

However, current models do not incorporate the trade-off between efficiency and flexibility and lack the guidance of domain knowledge in the design of graph structure learning algorithms.

no code implementations • 9 Jan 2022 • Kien Nguyen, Clinton Fookes, Sridha Sridharan, YingLi Tian, Feng Liu, Xiaoming Liu, Arun Ross

The rapid emergence of airborne platforms and imaging sensors are enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment and covert observation capabilities.

no code implementations • CVPR 2022 • Long Mai, Feng Liu

The model is trained end-to-end on a video to jointly determine the phase-shift values at each time with the mapping from the phase-shifted sinusoidal functions to the corresponding frame, enabling an implicit video representation.

no code implementations • 3 Dec 2021 • Ziwang Fu, Feng Liu, HanYang Wang, Siyuan Shen, Jiahao Zhang, Jiayin Qi, Xiangling Fu, Aimin Zhou

Learning modality-fused representations and processing unaligned multimodal sequences are meaningful and challenging in multimodal emotion recognition.

no code implementations • 27 Nov 2021 • Yiwei Qiu, Jin Lin, Zhipeng Zhou, Ningyi Dai, Feng Liu, Yonghua Song

To fill this gap, this article finds that an accurate SDE model for PV power can be constructed by only using the cheap data from low-resolution public weather reports.

no code implementations • 22 Nov 2021 • Wentian Zhang, Haozhe Liu, Feng Liu, Raghavendra Ramachandra, Christoph Busch

The proposed method first introduces task-specific features from other face-related tasks; we then design a Cross-Modal Adapter using a Graph Attention Network (GAT) to re-map these features to the PAD task.

1 code implementation • 15 Nov 2021 • Feng Liu, Zhe Kong, Haozhe Liu, Wentian Zhang, Linlin Shen

The proposed method learns important features of fingerprint images by weighing the importance of each channel and identifying discriminative channels and "noise" channels.

2 code implementations • 15 Nov 2021 • Yang Gao, Zhuang Xiong, Amir Fazlollahi, Peter J Nestor, Viktor Vegh, Fatima Nasrallah, Craig Winter, G. Bruce Pike, Stuart Crozier, Feng Liu, Hongfu Sun

In addition, experiments on patients with intracranial hemorrhage and multiple sclerosis were also performed to test the generalization of the novel neural networks.

no code implementations • NeurIPS 2021 • Feng Liu, Xiaoming Liu

With complementary supervision from both 3D detection and reconstruction, the 3D voxel features become geometry- and context-preserving, benefiting both tasks. The effectiveness of our approach is demonstrated through 3D detection and reconstruction in single-object and multiple-object scenarios.

1 code implementation • 3 Nov 2021 • Ziwang Fu, Feng Liu, HanYang Wang, Jiayin Qi, Xiangling Fu, Aimin Zhou, Zhibin Li

Firstly, we perform representation learning for audio and video modalities to obtain the semantic features of the two modalities by efficient ResNeXt and 1D CNN, respectively.

1 code implementation • 22 Oct 2021 • Feng Liu, HanYang Wang, Jiahao Zhang, Ziwang Fu, Aimin Zhou, Jiayin Qi, Zhibin Li

Quantitative and Qualitative results are presented on several compound expressions, and the experimental results demonstrate the feasibility and the potential of EvoGAN.

no code implementations • 29 Sep 2021 • Abhijay Ghildyal, Feng Liu

Perceptual similarity metrics have progressively become more correlated with human judgments on perceptual similarity; however, despite recent advances, the addition of an imperceptible distortion can still compromise these metrics.

no code implementations • 29 Sep 2021 • Jeremy Vonderfecht, Feng Liu

Compared to previously studied models, SISR networks are a uniquely challenging class of image generation model from which to extract and analyze fingerprints, as they can often generate images that closely match the corresponding ground truth and thus likely leave little flexibility to embed signatures.

no code implementations • 26 Sep 2021 • Peng Yang, Feng Liu, Wei Wei, Zhaojian Wang

Estimating the stability boundary is a fundamental and challenging problem in transient stability studies.

no code implementations • 18 Sep 2021 • Haozhe Liu, Hanbang Liang, Xianxu Hou, Haoqian Wu, Feng Liu, Linlin Shen

Generative Adversarial Networks (GANs) have been widely adopted in various fields.

1 code implementation • 9 Sep 2021 • Zhe Kong, Wentian Zhang, Feng Liu, Wenhan Luo, Haozhe Liu, Linlin Shen, Raghavendra Ramachandra

Even though there are numerous Presentation Attack Detection (PAD) techniques based on both deep learning and hand-crafted features, the generalization of PAD for unknown PAI is still a challenging problem.

no code implementations • ICCV 2021 • Kai-En Lin, Guowei Yang, Lei Xiao, Feng Liu, Ravi Ramamoorthi

Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations.

1 code implementation • 16 Aug 2021 • Shulun Wang, Bin Liu, Feng Liu

Softmax is widely used in neural networks for multiclass classification, gate structure and attention mechanisms.
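The softmax itself is simple, but a naive implementation overflows for large logits; the standard fix is to subtract the per-row maximum, which leaves the output unchanged:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax.

    Subtracting the max along `axis` does not change the result
    (it cancels in the ratio) but keeps exp() from overflowing.
    """
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)
```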

no code implementations • ACL 2021 • Xuetao Tian, Liping Jing, Lu He, Feng Liu

Relational triple extraction is critical to understanding massive text corpora and constructing large-scale knowledge graph, which has attracted increasing research interest.

no code implementations • 30 Jun 2021 • Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng

However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).

1 code implementation • 30 Jun 2021 • Zhen Fang, Jie Lu, Anjin Liu, Feng Liu, Guangquan Zhang

In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where there exist test samples from the classes that are unseen during training.

no code implementations • 25 Jun 2021 • Weiwen Liu, Feng Liu, Ruiming Tang, Ben Liao, Guangyong Chen, Pheng Ann Heng

Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.

no code implementations • 24 Jun 2021 • Peng Yang, Feng Liu, Tao Liu, David J. Hill

Here, we formulate the empirical wisdom by the concept of augmented synchronization and aim to bridge such a theory-practice gap.

1 code implementation • 24 Jun 2021 • Qiqi Hou, Zhan Li, Carl S Marshall, Selvakumar Panneer, Feng Liu

Specifically, we formulate this fusion task as a super resolution problem that generates a high resolution rendering from a low resolution input (LRHS), assisted with the HRLS rendering.

1 code implementation • NeurIPS 2021 • Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

1 code implementation • NeurIPS 2021 • Feng Liu, Wenkai Xu, Jie Lu, Danica J. Sutherland

In realistic scenarios with very limited numbers of data samples, however, it can be challenging to identify a kernel powerful enough to distinguish complex distributions.

1 code implementation • NeurIPS 2021 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok

To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier.

1 code implementation • 11 Jun 2021 • Chenhong Zhou, Feng Liu, Chen Gong, Rongfei Zeng, Tongliang Liu, William K. Cheung, Bo Han

However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.

no code implementations • 1 Jun 2021 • Xuanyu Zhu, Yang Gao, Feng Liu, Stuart Crozier, Hongfu Sun

Method: A recently proposed deep learning-based QSM method, namely xQSM, is investigated to assess the accuracy of dipole inversion on reduced brain coverages.

no code implementations • 31 May 2021 • Fuxiang Tan, YuTing Kong, Yingying Fan, Feng Liu, Daxin Zhou, Hao Zhang, Long Chen, Liang Gao, Yurong Qian

The former implements the basic rain pattern feature extraction, while the latter fuses different features to further extract and process the image features.

1 code implementation • CVPR 2021 • Feng Liu, Luan Tran, Xiaoming Liu

That is, for a 2D image of a generic object, we decompose it into latent representations of category, shape and albedo, lighting and camera projection matrix, decode the representations to segmented 3D shape and albedo respectively, and fuse these components to render an image well approximating the input image.

no code implementations • 31 Mar 2021 • Xinwen Liu, Jing Wang, Fangfang Tang, Shekhar S. Chandra, Feng Liu, Stuart Crozier

MRI images of the same subject in different contrasts contain shared information, such as the anatomical structure.

2 code implementations • 17 Mar 2021 • Yang Gao, Martijn Cloos, Feng Liu, Stuart Crozier, G. Bruce Pike, Hongfu Sun

In this study, a learning-based Deep Complex Residual Network (DCRNet) is proposed to recover both the magnitude and phase images from incoherently undersampled data, enabling high acceleration of QSM acquisition.

no code implementations • 9 Mar 2021 • Xinwen Liu, Jing Wang, Feng Liu, S. Kevin Zhou

Simply mixing images from multiple anatomies to train a single network does not yield an ideal universal model, due to the statistical shift among datasets of various anatomies, the need to retrain from scratch on all datasets whenever a new dataset is added, and the difficulty of handling imbalanced sampling when the new dataset is much smaller.

2 code implementations • CVPR 2021 • Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, Qixiang Ye

Few-shot object detection has made substantial progress by representing novel class objects using the feature representation learned upon a set of base class objects.

Ranked #10 on Few-Shot Object Detection on MS-COCO (10-shot)

1 code implementation • ICCV 2021 • Haozhe Liu, Haoqian Wu, Weicheng Xie, Feng Liu, Linlin Shen

The convolutional neural network (CNN) is vulnerable to degraded images with even very small variations (e.g., corrupted and adversarial samples).

Ranked #29 on Domain Generalization on ImageNet-C

no code implementations • 23 Feb 2021 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, G. Y. Hou, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. 
Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, H. F. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. 
Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, A. Q. Zhang, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

Constraining our measurement to the Standard Model expectation of lepton universality ($R=9.75$), we find the more precise results $\cal B(D_s^+\to \tau^+\nu_\tau) = (5.22\pm0.10\pm0.14)\times10^{-2}$ and $A_{\it CP}(\tau^\pm\nu_\tau) = (-0.1\pm1.9\pm1.0)\%$.

High Energy Physics - Experiment

no code implementations • 8 Feb 2021 • M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N. Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. 
Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. 
Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

Based on $14.7~\textrm{fb}^{-1}$ of $e^+e^-$ annihilation data collected with the BESIII detector at the BEPCII collider at 17 different center-of-mass energies between $3.7730~\textrm{GeV}$ and $4.5995~\textrm{GeV}$, Born cross sections of the two processes $e^+e^- \to p\bar{p}\eta$ and $e^+e^- \to p\bar{p}\omega$ are measured for the first time.

High Energy Physics - Experiment

1 code implementation • ICLR 2022 • Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.

no code implementations • 26 Jan 2021 • Yunfan Zhang, Feng Liu, Zhaojian Wang, Yifan Su, Shengwei Mei

Virtual power plant (VPP) provides a flexible solution to distributed energy resources integration by aggregating renewable generation units, conventional power plants, energy storages, and flexible demands.

no code implementations • 30 Dec 2020 • Li Zhong, Zhen Fang, Feng Liu, Jie Lu, Bo Yuan, Guangquan Zhang

Experiments show that the proxy can effectively curb the increase of the combined risk when minimizing the source risk and distribution discrepancy.
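For context on the terms in this snippet (a reading aid, not taken from the paper itself): the "combined risk" wording echoes the classical domain-adaptation bound, in which the target risk is controlled by the source risk, a distribution discrepancy, and a combined-risk term. In the standard notation (assumed here, not necessarily the paper's own):

```latex
% Target risk bounded by source risk, a discrepancy term, and the
% combined risk \lambda^* of the best joint hypothesis:
\epsilon_T(h) \;\le\; \epsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda^*,
\qquad
\lambda^* = \min_{h' \in \mathcal{H}} \big[ \epsilon_S(h') + \epsilon_T(h') \big].
```

Minimizing only the first two terms can let the third grow, which is the failure mode a proxy for the combined risk is meant to curb.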

no code implementations • 29 Dec 2020 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, N Hüsken, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. 
Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. 
X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

During the 2016-17 and 2018-19 running periods, the BESIII experiment collected 7.5~fb$^{-1}$ of $e^+e^-$ collision data at center-of-mass energies ranging from 4.13 to 4.44 GeV.

High Energy Physics - Experiment

no code implementations • 24 Dec 2020 • Haomin Qiu, Feng Liu

In recent years there have been many successes in boosting the performance of Deep Q-Networks (DQN).

no code implementations • 16 Dec 2020 • Mehdi Bahri, Eimear O'Sullivan, Shunwang Gong, Feng Liu, Xiaoming Liu, Michael M. Bronstein, Stefanos Zafeiriou

Compared to the previous state-of-the-art learning algorithms for non-rigid registration of face scans, SMF only requires the raw data to be rigidly aligned (with scaling) with a pre-defined face template.

no code implementations • 4 Dec 2020 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, N. Hüsken, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. 
Li, Lei LI, P. L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. 
Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Y. J. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, X. A. Xiong, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. H. Zhang, H. Y. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou

We search for the process $e^{+}e^{-}\rightarrow \pi ^{+}\pi ^{-} \chi_{cJ}$ ($J=0, 1, 2$) and for a charged charmonium-like state in the $\pi ^{\pm} \chi_{cJ}$ subsystem.

High Energy Physics - Experiment

no code implementations • 20 Nov 2020 • Xiuqiang He, Changjun He, Sisi Pan, Hua Geng, Feng Liu

In contrast, both positive- and negative-sequence synchronizations should be of concern for inverter-based generation (IBG) under asymmetrical faults.

1 code implementation • NeurIPS 2020 • Feng Liu, Xiaoming Liu

The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner.

2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.
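The MMD statistic this snippet refers to can be sketched in a few lines of numpy. This is not the paper's method, just the standard biased (V-statistic) estimate of the squared maximum mean discrepancy with a Gaussian kernel; the bandwidth `sigma` and the toy perturbation are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
natural = rng.normal(size=(200, 10))
# A tiny, bounded perturbation standing in for adversarial noise.
perturbed = natural + 0.01 * np.sign(rng.normal(size=natural.shape))
# The statistic stays close to zero for the perturbed set, so a
# threshold calibrated on natural data may fail to flag it.
print(mmd2(natural, perturbed))
```

A small, bounded perturbation barely moves the kernel means, which illustrates why a vanilla MMD test can be unaware of adversarial data even when the perturbation fools a classifier.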

no code implementations • 1 Oct 2020 • Simon Niklaus, Xuaner Cecilia Zhang, Jonathan T. Barron, Neal Wadhwa, Rahul Garg, Feng Liu, Tianfan Xue

Traditional reflection removal algorithms either use a single image as input, which suffers from intrinsic ambiguities, or use multiple images from a moving camera, which is inconvenient for users.

1 code implementation • 4 Aug 2020 • Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu

We consider two cases of this setting, one is that the source domain only contains complementary-label data (completely complementary unsupervised domain adaptation, CC-UDA), and the other is that the source domain has plenty of complementary-label data and a small amount of true-label data (partly complementary unsupervised domain adaptation, PC-UDA).

1 code implementation • 29 Jul 2020 • Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu

To mitigate this problem, we consider a novel problem setting where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain named budget-friendly UDA (BFUDA).

no code implementations • 7 Jul 2020 • M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, Anita, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. B. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, N. Huesken, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, X. Y. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. 
L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. -B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang,