no code implementations • ECCV 2020 • Shuo Wang, Jun Yue, Jianzhuang Liu, Qi Tian, Meng Wang
It is a challenging problem since (1) the identification process is susceptible to over-fitting with only limited samples of an object, and (2) the sample imbalance between a base (known knowledge) category and a novel category easily biases the recognition results.
no code implementations • 25 Nov 2023 • Sixu Li, Yang Zhou, Xinyue Ye, Jiwan Jiang, Meng Wang
Subsequently, the lower-level control employs a longitudinal distributed model predictive control (MPC) supplemented by a virtual car-following (CF) concept to ensure asymptotic local stability, $\ell_2$-norm string stability, and safety.
1 code implementation • 25 Nov 2023 • Heng Tao Shen, Cheng Chen, Peng Wang, Lianli Gao, Meng Wang, Jingkuan Song
In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC, where a model is learning on a stream of incoming tasks.
no code implementations • 20 Nov 2023 • Yanyan Wei, Zhao Zhang, Jiahuan Ren, Xiaogang Xu, Richang Hong, Yi Yang, Shuicheng Yan, Meng Wang
The generalization capability of existing image restoration and enhancement (IRE) methods is constrained by the limited pre-trained datasets, making it difficult to handle agnostic inputs such as different degradation levels and scenarios beyond their design scopes.
no code implementations • 20 Nov 2023 • Zhichao Zuo, Zhao Zhang, Yan Luo, Yang Zhao, Haijun Zhang, Yi Yang, Meng Wang
This paper presents a novel framework termed Cut-and-Paste for real-world semantic video editing under the guidance of a text prompt and an additional reference image.
1 code implementation • 14 Nov 2023 • GuanYu Lin, Chen Gao, Yu Zheng, Jianxin Chang, Yanan Niu, Yang Song, Kun Gai, Zhiheng Li, Depeng Jin, Yong Li, Meng Wang
Recently proposed cross-domain sequential recommendation models such as PiNet and DASL share a common drawback: they rely heavily on users who overlap across domains, which limits their use in practical recommender systems.
1 code implementation • 11 Nov 2023 • Yimeng Shan, Xuerui Qiu, Rui-Jie Zhu, Ruike Li, Meng Wang, Haicheng Qu
Spiking Neural Networks (SNNs) have garnered substantial attention in brain-like computing for their biological fidelity and the capacity to execute energy-efficient spike-driven operations.
1 code implementation • 6 Nov 2023 • Shengkai Sun, Daizong Liu, Jianfeng Dong, Xiaoye Qu, Junyu Gao, Xun Yang, Xun Wang, Meng Wang
In this manner, our framework is able to learn the unified representations of uni-modal or multi-modal skeleton input, which is flexible to different kinds of modality input for robust action understanding in practical cases.
no code implementations • 2 Nov 2023 • Shijie Ma, Huayi Xu, Mengjian Li, Weidong Geng, Meng Wang, Yaxiong Wang
Motivated by this, we further design a semantic-preserving rewriter to enrich the text prompt, where a reference-guided rewriting is devised for reasonable details compensation, and a denoising with a hybrid semantics strategy is proposed to preserve the semantic consistency.
no code implementations • 24 Oct 2023 • Shuai Zhang, Hongkang Li, Meng Wang, Miao Liu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Keerthiram Murugesan, Subhajit Chaudhury
This paper provides the first theoretical convergence and sample complexity analysis of the practical setting of DQNs with $\epsilon$-greedy policy.
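As background for the entry above (which analyzes DQNs under an $\epsilon$-greedy policy), the policy itself is standard and can be sketched in a few lines. This is an illustrative sketch of the general concept, not the paper's analysis or code; the function name and toy Q-values are hypothetical.

```python
import random

def epsilon_greedy(q_values, epsilon, rng=None):
    """With probability epsilon explore (uniform random action),
    otherwise exploit the action with the highest Q-value."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))       # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```

With `epsilon=0` this reduces to the greedy policy; with `epsilon=1` it is uniform exploration. Practical DQN training typically anneals epsilon from high to low over the course of training.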
no code implementations • 18 Oct 2023 • Lei Wei, S. Travis Waller, Yu Mei, Yunpeng Wang, Meng Wang
The link transmission model (LTM) is an efficient and widely used macro-level approach for simulating traffic flow.
no code implementations • 13 Oct 2023 • Sheng Zhou, Dan Guo, Jia Li, Xun Yang, Meng Wang
The associations between these repetitive objects are superfluous for answer reasoning; (2) two spatially distant OCR tokens detected in the image frequently have weak semantic dependencies for answer reasoning; and (3) the co-existence of nearby objects and tokens may be indicative of important visual cues for predicting answers.
no code implementations • 9 Oct 2023 • Ziyang Zhang, Xiao Sun, Liuwei An, Meng Wang
First, the Adaptive Threshold Learning module generates two thresholds, namely the clean and noisy thresholds, for each category.
Facial Expression Recognition (FER)
no code implementations • 12 Sep 2023 • Jiaxiu Li, Kun Li, Jia Li, Guoliang Chen, Dan Guo, Meng Wang
Compared with the general video grounding task, MTVG focuses on meticulous actions and changes on the face.
no code implementations • 11 Sep 2023 • Yabing Wang, Shuhui Wang, Hao Luo, Jianfeng Dong, Fan Wang, Meng Han, Xun Wang, Meng Wang
Therefore, we propose Dual-view Curricular Optimal Transport (DCOT) to learn with noisy correspondence in CCR.
no code implementations • 5 Sep 2023 • Xintong Jiang, Yaxiong Wang, Yujiao Wu, Meng Wang, Xueming Qian
Unlike the general image-text retrieval problem, which involves only one alignment relation, i.e., image-text, we argue for the existence of two types of relations in composed image retrieval.
no code implementations • 25 Aug 2023 • Jia Li, Wei Qian, Kun Li, Qi Li, Dan Guo, Meng Wang
Specifically, we achieve the results of 0.8492 and 0.8439 for MuSe-Personalisation in terms of arousal and valence CCC.
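For context, the CCC reported above is the concordance correlation coefficient, a standard agreement measure in affective computing that penalizes both low correlation and mean/scale shifts. A minimal sketch of the standard formula (illustrative only, not the team's evaluation code):

```python
def ccc(x, y):
    """Concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

A CCC of 1 means perfect agreement; unlike Pearson correlation, predictions that are correlated but biased or rescaled score below 1.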
no code implementations • 20 Aug 2023 • Sicheng Zhou, Meng Wang, Jindou Jia, Kexin Guo, Xiang Yu, Youmin Zhang, Lei Guo
This paper presents an excitation operator based fault separation architecture for a quadrotor unmanned aerial vehicle (UAV) subject to loss of effectiveness (LoE) faults, actuator aging, and load uncertainty.
no code implementations • 15 Aug 2023 • Wei Qian, Dan Guo, Kun Li, Xilan Tian, Meng Wang
Specifically, the proposed Dual-TL uses a Spatial TokenLearner (S-TL) to explore associations among different facial ROIs, which keeps the rPPG prediction away from noisy ROI disturbances.
no code implementations • 15 Aug 2023 • Yi Liu, Hongrui Xuan, Bohan Li, Meng Wang, Tong Chen, Hongzhi Yin
However, the long-tail distribution of entities leads to sparsity in supervision signals, which weakens the quality of item representation when utilizing KG enhancement.
no code implementations • 11 Aug 2023 • Kun Li, Dan Guo, Meng Wang
First, we employed a shared feature encoder to project both video and query into a joint feature space before performing cross-modal co-attention (i.e., video-to-query attention and query-to-video attention) to highlight discriminative features in each modality.
no code implementations • 3 Aug 2023 • Kun Li, Dan Guo, Guoliang Chen, Feiyang Liu, Meng Wang
In this paper, we present the solution of our team HFUT-VUT for the MultiMediate Grand Challenge 2023 at ACM Multimedia 2023.
1 code implementation • 20 Jul 2023 • Kun Li, Dan Guo, Guoliang Chen, Xinge Peng, Meng Wang
In this paper, we briefly introduce the solution of our team HFUT-VUT for the Micro-gesture Classification track of the MiGA challenge at IJCAI 2023.
no code implementations • 17 Jul 2023 • Qi Mao, Tinghan Yang, Yinuo Zhang, Shuyin Pan, Meng Wang, Shiqi Wang, Siwei Ma
Recent advances in generative compression methods have demonstrated remarkable progress in enhancing the perceptual quality of compressed data, especially in scenarios with low bitrates.
1 code implementation • 11 Jul 2023 • Yonghui Yang, Zhengwei Wu, Le Wu, Kun Zhang, Richang Hong, Zhiqiang Zhang, Jun Zhou, Meng Wang
Second, feature augmentation imposes the same scale noise augmentation on each node, which neglects the unique characteristics of nodes on the graph.
no code implementations • 11 Jul 2023 • Guoyao Deng, Ke Zou, Kai Ren, Meng Wang, Xuedong Yuan, Sancong Ying, Huazhu Fu
Recently, Segment Anything has taken an important step towards general artificial intelligence.
no code implementations • 11 Jul 2023 • Guoyao Deng, Ke Zou, Meng Wang, Xuedong Yuan, Sancong Ying, Huazhu Fu
To achieve this, we employ multiple expert models to extract evidence from the abundant neural network information contained in fMRI images.
1 code implementation • 27 Jun 2023 • Anqi Li, Feng Li, Jiaxin Han, Huihui Bai, Runmin Cong, Chunjie Zhang, Meng Wang, Weisi Lin, Yao Zhao
Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications, at very low bitrates.
no code implementations • 26 Jun 2023 • Li Ding, Jack Terwilliger, Aishni Parab, Meng Wang, Lex Fridman, Bruce Mehler, Bryan Reimer
Non-intrusive, real-time analysis of the dynamics of the eye region allows us to monitor humans' visual attention allocation and estimate their mental state during the performance of real-world tasks, which can potentially benefit a wide range of human-computer interaction (HCI) applications.
1 code implementation • 15 Jun 2023 • Dongyi Zhang, Feng Li, Man Liu, Runmin Cong, Huihui Bai, Meng Wang, Yao Zhao
In this work, we explore the potential of resolution fields in scalable image compression and propose the reciprocal pyramid network (RPN) that fulfills the need for more adaptable and versatile compression.
1 code implementation • 15 Jun 2023 • Kun Zhang, Le Wu, Guangyi Lv, Enhong Chen, Shulan Ruan, Jing Liu, Zhiqiang Zhang, Jun Zhou, Meng Wang
Then, we propose a novel Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as optimization targets.
no code implementations • 13 Jun 2023 • Meng Liu, Liqiang Nie, Yunxiao Wang, Meng Wang, Yong Rui
Video moment localization, also known as video moment retrieval, aims to locate a target segment within a video described by a given natural language query.
1 code implementation • 7 Jun 2023 • Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen
In deep learning, mixture-of-experts (MoE) activates one or few experts (sub-networks) on a per-sample or per-token basis, resulting in significant computation reduction.
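The sparse routing described above is what makes MoE cheap: only the top-k experts chosen by a gate actually run per input. A minimal dense-math sketch of top-k gating (illustrative of the general MoE idea, not this paper's model; the linear gate and toy experts are assumptions):

```python
import math

def moe_layer(x, experts, gate_weights, k=1):
    """Route input x to the top-k experts chosen by a softmax gate and
    return the gate-weighted sum of their outputs."""
    # Gate logits: one linear score per expert.
    logits = [sum(xi * wi for xi, wi in zip(x, w)) for w in gate_weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]      # stable softmax
    probs = [e / sum(exps) for e in exps]
    top_k = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    out = [0.0] * len(x)
    for i in top_k:  # only k experts execute: this is the compute saving
        y = experts[i](x)
        for d in range(len(x)):
            out[d] += probs[i] * y[d]
    return out
```

With k=1 each sample activates a single sub-network, so total FLOPs stay roughly constant as the number of experts grows.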
1 code implementation • 16 May 2023 • Wan Jiang, Yunfeng Diao, He Wang, Jianxin Sun, Meng Wang, Richang Hong
Unfortunately, we find UEs provide a false sense of security, because they cannot stop unauthorized users from utilizing other unprotected data to remove the protection, by turning unlearnable data into learnable again.
1 code implementation • 12 May 2023 • Xuan He, Fan Yang, Kailun Yang, Jiacheng Lin, Haolong Fu, Meng Wang, Jin Yuan, Zhiyong Li
To tackle this problem, this paper proposes a novel "Supervised Scale-aware Deformable Attention" (SSDA) for monocular 3D object detection.
no code implementations • 2 May 2023 • Xinju Wu, Pingping Zhang, Meng Wang, Peilin Chen, Shiqi Wang, Sam Kwong
The emergence of digital avatars has driven an exponential increase in the demand for human point clouds with realistic and intricate details.
3 code implementations • 18 Apr 2023 • Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang, Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
1 code implementation • 14 Apr 2023 • Yixuan Li, Bolin Chen, Baoliang Chen, Meng Wang, Shiqi Wang, Weisi Lin
In this paper, we introduce the large-scale Compressed Face Video Quality Assessment (CFVQA) database, which is the first attempt to systematically understand the perceptual quality and diversified compression distortions in face videos.
no code implementations • 10 Apr 2023 • Zan Gao, Shenxun Wei, Weili Guan, Lei Zhu, Meng Wang, Shenyong Chen
Moreover, human semantic information and pedestrian identity information are not fully explored.
no code implementations • 8 Apr 2023 • Meng Wang, Tian Lin, Lianyu Wang, Aidi Lin, Ke Zou, Xinxing Xu, Yi Zhou, Yuanyuan Peng, Qingquan Meng, Yiming Qian, Guoyao Deng, Zhiqun Wu, Junhong Chen, Jianhong Lin, Mingzhi Zhang, Weifang Zhu, Changqing Zhang, Daoqiang Zhang, Rick Siow Mong Goh, Yong Liu, Chi Pui Pang, Xinjian Chen, Haoyu Chen, Huazhu Fu
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies.
1 code implementation • CVPR 2023 • Xuyang Shen, Dong Li, Jinxing Zhou, Zhen Qin, Bowen He, Xiaodong Han, Aixuan Li, Yuchao Dai, Lingpeng Kong, Meng Wang, Yu Qiao, Yiran Zhong
We explore a new task for audio-visual-language modeling called fine-grained audible video description (FAVD).
no code implementations • 23 Mar 2023 • Meng Wang, Lianyu Wang, Xinxing Xu, Ke Zou, Yiming Qian, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
Our TWEU employs an evidential deep layer to produce the uncertainty score with the DR staging results for client reliability evaluation.
1 code implementation • CVPR 2023 • Lianyu Wang, Meng Wang, Daoqiang Zhang, Huazhu Fu
As scientific and technological advancements result from human intellectual labor and computational costs, protecting model intellectual property (IP) has become increasingly important to encourage model creators and owners.
1 code implementation • 17 Mar 2023 • Ke Zou, Tian Lin, Xuedong Yuan, Haoyu Chen, Xiaojing Shen, Meng Wang, Huazhu Fu
To address this issue, we introduce a novel multimodality evidential fusion pipeline for eye disease screening, EyeMoSt, which provides a measure of confidence for unimodality and elegantly integrates the multimodality information from a multi-distribution fusion perspective.
1 code implementation • 17 Mar 2023 • Kai Ren, Ke Zou, Xianjie Liu, Yidi Chen, Xuedong Yuan, Xiaojing Shen, Meng Wang, Huazhu Fu
Our UML has the potential to explore the development of more reliable and explainable medical image analysis models.
no code implementations • 16 Mar 2023 • Ziyang Zhang, Liuwei An, Zishun Cui, Ao Xu, Tengteng Dong, Yueqi Jiang, Jingyi Shi, Xin Liu, Xiao Sun, Meng Wang
In this paper, we present our solutions for the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW), which includes four sub-challenges of Valence-Arousal (VA) Estimation, Expression (Expr) Classification, Action Unit (AU) Detection and Emotional Reaction Intensity (ERI) Estimation.
1 code implementation • 16 Mar 2023 • Jia Li, Yin Chen, Xuesong Zhang, Jiantao Nie, Ziqiang Li, Yangchen Yu, Yan Zhang, Richang Hong, Meng Wang
In this paper, we present our advanced solutions to the two sub-challenges of Affective Behavior Analysis in the wild (ABAW) 2023: the Emotional Reaction Intensity (ERI) Estimation Challenge and Expression (Expr) Classification Challenge.
1 code implementation • CVPR 2023 • Biao Qian, Yang Wang, Richang Hong, Meng Wang
Data-free quantization (DFQ) recovers the performance of a quantized network (Q) without the original data by generating fake samples via a generator (G) that learns from the full-precision network (P). However, G is trained totally independently of Q, overlooking whether the generated samples are informative for Q's learning process, which results in an overflow of generalization error.
no code implementations • 4 Mar 2023 • Jinxing Zhou, Dan Guo, Yiran Zhong, Meng Wang
We perform extensive experiments on the LLP dataset and demonstrate that our method can generate high-quality segment-level pseudo labels with the help of our newly proposed loss and the label denoising strategy.
1 code implementation • CVPR 2023 • Zhenwei Shao, Zhou Yu, Meng Wang, Jun Yu
Knowledge-based visual question answering (VQA) requires external knowledge beyond the image to answer the question.
Ranked #1 on Visual Question Answering (VQA) on A-OKVQA
1 code implementation • 28 Feb 2023 • Wen Li, Cheng Zou, Meng Wang, Furong Xu, Jianan Zhao, Ruobing Zheng, Yuan Cheng, Wei Chu
In this paper, we propose a Diverse and Compact Transformer (DC-Former) that can achieve a similar effect by splitting embedding space into multiple diverse and compact subspaces.
1 code implementation • 19 Feb 2023 • Biao Qian, Yang Wang, Richang Hong, Meng Wang
How to generate samples with desirable adaptability to benefit the quantized network?
no code implementations • 16 Feb 2023 • Ke Zou, Zhihao Chen, Xuedong Yuan, Xiaojing Shen, Meng Wang, Huazhu Fu
We further discuss how they can be estimated in medical imaging.
1 code implementation • 13 Feb 2023 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, Meng Wang
We augment imbalanced training data towards balanced data distribution to improve fairness.
no code implementations • 12 Feb 2023 • Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen
Based on a data model characterizing both label-relevant and label-irrelevant tokens, this paper provides the first theoretical analysis of training a shallow ViT, i.e., one self-attention layer followed by a two-layer perceptron, for a classification task.
no code implementations • 9 Feb 2023 • Jiangshe Zhang, Lizhen Ji, Meng Wang
In this paper, we propose an information theoretical importance sampling based approach for clustering problems (ITISC) which minimizes the worst case of expected distortions under the constraint of distribution deviation.
no code implementations • 6 Feb 2023 • Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, Miao Liu
Due to the significant computational challenge of training large-scale graph neural networks (GNNs), various sparse learning techniques have been exploited to reduce memory and storage costs.
no code implementations • 30 Jan 2023 • Meng Wang, Kai Yu, Chun-Mei Feng, Yiming Qian, Ke Zou, Lianyu Wang, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling, which enhances the performance and reliability of FL in non-IID domain features.
1 code implementation • 30 Jan 2023 • Jinxing Zhou, Xuyang Shen, Jianyuan Wang, Jiayi Zhang, Weixuan Sun, Jing Zhang, Stan Birchfield, Dan Guo, Lingpeng Kong, Meng Wang, Yiran Zhong
To deal with these problems, we propose a new baseline method that uses a temporal pixel-wise audio-visual interaction module to inject audio semantics as guidance for the visual segmentation process.
no code implementations • 30 Jan 2023 • Xintao Chu, Jianping Liu, Jian Wang, XiaoFeng Wang, Yingfei Wang, Meng Wang, Xunxun Gu
As the number of open and shared scientific datasets on the Internet increases under the open science movement, efficiently retrieving these datasets is a crucial task in information retrieval (IR) research.
no code implementations • 19 Jan 2023 • Yingfei Wang, Jianping Liu, Jian Wang, XiaoFeng Wang, Meng Wang, Xintao Chu
In this paper, we use a Transformer as the backbone network for feature extraction, innovatively add a filter layer, and propose a new Filter-Enhanced Transformer Click Model (FE-TCM) for web search.
1 code implementation • 17 Jan 2023 • Meng Wang, Xiaojie Guo, Wenjing Dai, Jiawan Zhang
Previous face inverse rendering methods often require synthetic data with ground truth and/or professional equipment like a lighting stage.
1 code implementation • 9 Jan 2023 • Meng Wang, Feng Gao, Junyu Dong, Heng-Chao Li, Qian Du
It is commonly nontrivial to build a robust self-supervised learning model for multisource data classification, because the semantic similarities of neighborhood regions are not exploited in existing contrastive learning frameworks.
1 code implementation • 3 Jan 2023 • Yanwei Fu, Xiaomei Wang, Hanze Dong, Yu-Gang Jiang, Meng Wang, Xiangyang Xue, Leonid Sigal
Despite significant progress in object categorization, in recent years, a number of important challenges remain; mainly, the ability to learn from limited labeled data and to recognize object classes within large, potentially open, set of labels.
no code implementations • CVPR 2023 • Tianyu Chang, Xun Yang, Tianzhu Zhang, Meng Wang
In this way, we can prevent the model from exploiting the artifacts of synthetic stereo images as shortcut features, thereby estimating the disparity maps more effectively based on the learned robust and shortcut-invariant representation.
no code implementations • CVPR 2023 • Meng Wang, Yu-Shen Liu, Yue Gao, Kanle Shi, Yi Fang, Zhizhong Han
To capture geometry details, current mainstream methods divide 3D shapes into local regions and then learn each one with a local latent code via a decoder, where the decoder shares the geometric similarities among different local regions.
2 code implementations • 1 Jan 2023 • Ke Zou, Xuedong Yuan, Xiaojing Shen, Yidi Chen, Meng Wang, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment.
no code implementations • 27 Dec 2022 • Shimin Gong, Meng Wang, Bo Gu, Wenjie Zhang, Dinh Thai Hoang, Dusit Niyato
In this paper, we employ multiple UAVs coordinated by a base station (BS) to help the ground users (GUs) to offload their sensing data.
1 code implementation • 1 Dec 2022 • Meng Wang, Kai Yu, Chun-Mei Feng, Ke Zou, Yanyu Xu, Qingquan Meng, Rick Siow Mong Goh, Yong Liu, Huazhu Fu
Specifically, aiming at improving the model's ability to learn the complex pathological features of retinal edema lesions in OCT images, we develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and our newly designed multi-scale transformer module.
1 code implementation • 27 Nov 2022 • Zhengjie Huang, Zhenguang Liu, Jianhai Chen, Qinming He, Shuang Wu, Lei Zhu, Meng Wang
Meanwhile, decentralized applications have also attracted intense attention from the online gambling community, with more and more decentralized gambling platforms created through the help of smart contracts.
1 code implementation • 18 Nov 2022 • Jinxing Zhou, Dan Guo, Meng Wang
Visual and audio signals often coexist in natural environments, forming audio-visual events (AVEs).
no code implementations • 13 Nov 2022 • Yan Luo, Yangcheng Gao, Zhao Zhang, Haijun Zhang, Mingliang Xu, Meng Wang
We find this is because: 1) it is hard for a normal generator to obtain high diversity in the synthetic data, since it lacks long-range information to allocate attention to global features; 2) the synthetic images aim to simulate the statistics of real data, which leads to weak intra-class heterogeneity and limited feature richness.
no code implementations • 4 Nov 2022 • Bo Wang, Zhao Zhang, Mingbo Zhao, Xiaojie Jin, Mingliang Xu, Meng Wang
To obtain rich features, we use the Swin Transformer to calculate multi-level features, and then feed them into a novel dynamic multi-sight embedding module to exploit both global structure and local texture of input images.
no code implementations • 13 Oct 2022 • Ehsan Hallaji, Roozbeh Razavi-Far, Meng Wang, Mehrdad Saif, Bruce Fardanesh
Using this information, the signal retrieval module can easily recover the original control signal and remove the injected false data.
no code implementations • 24 Sep 2022 • Haojie Xu, Weifeng Liu, Jingwei Liu, Mingzheng Li, Yu Feng, Yasi Peng, Yunwei Shi, Xiao Sun, Meng Wang
Our experiments demonstrate the effectiveness of our proposed model and hybrid fusion strategy on multimodal fusion, and the AUC of our proposed model on the test set is 0.8972.
1 code implementation • 17 Sep 2022 • Haipeng Liu, Yang Wang, Meng Wang, Yong Rui
Our model is orthogonal to the fashionable arts, such as Convolutional Neural Networks (CNNs), Attention and Transformer model, from the perspective of texture and structure information for image inpainting.
1 code implementation • 12 Sep 2022 • Biao Qian, Yang Wang, Hongzhi Yin, Richang Hong, Meng Wang
Instead of focusing on the accuracy gap at the test phase as existing arts do, the core idea of SwitOKD is to adaptively calibrate the gap at the training phase, namely the distillation gap, via a switching strategy between two modes: expert mode (pause the teacher while keeping the student learning) and learning mode (restart the teacher).
no code implementations • 5 Sep 2022 • Lianyu Wang, Meng Wang, Daoqiang Zhang, Huazhu Fu
Specifically, we propose a novel learning strategy of SSID, which selects samples from both source and target domains as anchors, and then randomly fuses the object and style features of these anchors to generate labeled and style-rich intermediate auxiliary features for knowledge transfer.
1 code implementation • COLING 2022 • Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, Yefeng Zheng
Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities.
Ranked #2 on Multi-modal Entity Alignment on UMVM-oea-d-w-v1 (using extra training data)
no code implementations • 16 Aug 2022 • Zan Gao, Xinglei Cui, Tao Zhuo, Zhiyong Cheng, An-An Liu, Meng Wang, Shenyong Chen
However, the temporal features of a low-level scale lack enough semantics for action classification while a high-level scale cannot provide rich details of the action boundaries.
1 code implementation • 10 Aug 2022 • Yu Zheng, Chen Gao, Jingtao Ding, Lingling Yi, Depeng Jin, Yong Li, Meng Wang
Recommender systems are prone to be misled by biases in the data.
no code implementations • 6 Aug 2022 • Meng Wang, Chuqi Lei, Jianxin Wang, Yaohang Li, Min Li
In conclusion, TripHLApan is a powerful tool for predicting the binding of HLA-I and HLA-II molecular peptides for the synthesis of tumor vaccines.
no code implementations • 6 Aug 2022 • Fangzhou Gao, Meng Wang, Lianghao Zhang, Li Wang, Jiawan Zhang
This paper presents a new method for deep uncalibrated photometric stereo, which efficiently utilizes the inter-image representation to guide the normal estimation.
1 code implementation • 5 Aug 2022 • Jia Li, Ziyang Zhang, Junjie Lang, Yueqi Jiang, Liuwei An, Peng Zou, Yangyang Xu, Sheng Gao, Jie Lin, Chunxiao Fan, Xiao Sun, Meng Wang
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.
no code implementations • 22 Jul 2022 • Jia Li, Jiantao Nie, Dan Guo, Richang Hong, Meng Wang
Here, we regard an expressive face as the comprehensive result of a set of facial muscle movements on one's poker face (i.e., emotionless face), inspired by the Facial Action Coding System.
Ranked #1 on Facial Expression Recognition on FER+
no code implementations • 18 Jul 2022 • Zan Gao, Hongwei Wei, Weili Guan, Jie Nie, Meng Wang, Shenyong Chen
In addition, a visual clothes shielding module (VCS) is also designed to extract a more robust feature representation for the cloth-changing task by covering the clothing regions and focusing the model on the visual semantic information unrelated to the clothes.
1 code implementation • 11 Jul 2022 • Jinxing Zhou, Jianyuan Wang, Jiayi Zhang, Weixuan Sun, Jing Zhang, Stan Birchfield, Dan Guo, Lingpeng Kong, Meng Wang, Yiran Zhong
To deal with the AVS problem, we propose a novel method that uses a temporal pixel-wise audio-visual interaction module to inject audio semantics as guidance for the visual segmentation process.
no code implementations • 7 Jul 2022 • Ming Yi, Meng Wang
Compared with deterministic dictionary learning, the Bayesian dictionary learning-based approach provides the uncertainty measure for the disaggregation results, at the cost of increased computational complexity.
no code implementations • 7 Jul 2022 • Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.
no code implementations • 7 Jul 2022 • Hongkang Li, Shuai Zhang, Meng Wang
In addition, for the first time, this paper characterizes the impact of the input distributions on the sample complexity and the learning rate.
no code implementations • 4 Jul 2022 • Cheng Zou, Furong Xu, Meng Wang, Wen Li, Yuan Cheng
Automatic snake species recognition is important because it has vast potential to help lower deaths and disabilities caused by snakebites.
1 code implementation • 21 Jun 2022 • Xuanhan Wang, Lianli Gao, Yixuan Zhou, Jingkuan Song, Meng Wang
Human densepose estimation, which aims at establishing dense correspondences between 2D pixels of the human body and a 3D human body template, is a key technique for enabling machines to understand people in images.
3 code implementations • 19 Jun 2022 • Ke Zou, Xuedong Yuan, Xiaojing Shen, Meng Wang, Huazhu Fu
In our method, uncertainty is modeled explicitly using subjective logic theory, which treats the predictions of the backbone neural network as subjective opinions by parameterizing the class probabilities of the segmentation as a Dirichlet distribution.
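As background on the subjective-logic formulation named above: non-negative per-class evidence $e_k$ parameterizes a Dirichlet with $\alpha_k = e_k + 1$, and the Dirichlet strength yields both class beliefs and an explicit uncertainty (vacuity) mass. A minimal sketch of this standard mapping (illustrative, not the paper's implementation):

```python
def subjective_opinion(evidence):
    """Turn non-negative per-class evidence into a subjective opinion:
    Dirichlet parameters alpha = evidence + 1, strength S = sum(alpha),
    beliefs b_k = e_k / S, uncertainty (vacuity) u = K / S."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)                          # Dirichlet strength
    beliefs = [e / s for e in evidence]
    uncertainty = k / s                     # high when evidence is scarce
    expected_probs = [a / s for a in alpha] # mean of the Dirichlet
    return beliefs, uncertainty, expected_probs
```

By construction the beliefs and the uncertainty mass sum to one: zero evidence gives maximal uncertainty, and uncertainty shrinks as total evidence grows.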
no code implementations • 3 May 2022 • Xinwei Wang, Zirui Li, Javier Alonso-Mora, Meng Wang
Real-time safety systems are crucial components of intelligent vehicles.
1 code implementation • 26 Apr 2022 • Jie Shuai, Kun Zhang, Le Wu, Peijie Sun, Richang Hong, Meng Wang, Yong Li
Second, while most current models suffer from limited user behaviors, can we exploit the unique self-supervised signals in the review-aware graph to guide two recommendation components better?
no code implementations • 25 Apr 2022 • Chengxin Chen, Meng Wang, Pengyuan Zhang
Recently, audio-visual scene classification (AVSC) has attracted increasing attention from multidisciplinary communities.
no code implementations • 16 Apr 2022 • Suiyi Zhao, Zhao Zhang, Richang Hong, Mingliang Xu, Yi Yang, Meng Wang
Blind image deblurring (BID) remains a challenging and significant task.
1 code implementation • 6 Apr 2022 • Yiyang Shen, Mingqiang Wei, Sen Deng, Wenhan Yang, Yongzhen Wang, Xiao-Ping Zhang, Meng Wang, Jing Qin
To bridge the two domain gaps, we propose a semi-supervised detail-recovery image deraining network (Semi-DRDNet) with dual sample-augmented contrastive learning.
1 code implementation • 6 Apr 2022 • Tong Chen, Hongzhi Yin, Jing Long, Quoc Viet Hung Nguyen, Yang Wang, Meng Wang
Such user and group preferences are commonly represented as points in the vector space (i.e., embeddings), where multiple user embeddings are compressed into one to facilitate ranking for group-item pairs.
no code implementations • 25 Mar 2022 • Haiyang Sun, Zheng Lian, Bin Liu, Ying Li, Licai Sun, Cong Cai, JianHua Tao, Meng Wang, Yuan Cheng
Speech emotion recognition (SER) is an important research topic in human-computer interaction.
no code implementations • 24 Mar 2022 • Liyu Meng, Yuchen Liu, Xiaolong Liu, Zhaopei Huang, Yuan Cheng, Meng Wang, Chuanhe Liu, Qin Jin
In this paper, we briefly introduce our submission to the Valence-Arousal Estimation Challenge of the 3rd Affective Behavior Analysis in-the-wild (ABAW) competition.
no code implementations • 21 Jan 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
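The self-training loop described above has a simple general shape: fit on the labeled pool, pseudo-label the unlabeled points the current model is confident about, absorb them, and repeat. A toy sketch with an assumed nearest-class-mean classifier on scalar features (illustrative of the general algorithm only, not this paper's setting; the margin-based confidence rule is an assumption):

```python
def class_means(labeled):
    """Mean feature value per class for a list of (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def self_train(labeled, unlabeled, margin=2.0, rounds=3):
    """Fit a nearest-class-mean classifier, pseudo-label unlabeled points
    whose gap between the two closest class means exceeds the margin,
    move them into the labeled pool, and repeat."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        means = class_means(labeled)
        keep = []
        for x in pool:
            d = sorted((abs(x - m), y) for y, m in means.items())
            if len(d) > 1 and d[1][0] - d[0][0] >= margin:
                labeled.append((x, d[0][1]))  # confident: accept pseudo-label
            else:
                keep.append(x)                # uncertain: leave unlabeled
        pool = keep
    return labeled, pool
```

Points near a decision boundary never clear the confidence margin and stay unlabeled, which is exactly the failure mode theoretical analyses of self-training try to quantify.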
1 code implementation • 6 Jan 2022 • Yuanen Zhou, Zhenzhen Hu, Daqing Liu, Huixia Ben, Meng Wang
In this paper, we introduce a Compact Bidirectional Transformer model for image captioning that can leverage bidirectional context implicitly and explicitly while the decoder can be executed in parallel.
1 code implementation • CVPR 2022 • Zhao Zhang, Huan Zheng, Richang Hong, Mingliang Xu, Shuicheng Yan, Meng Wang
Current low-light image enhancement methods can well improve the illumination.
no code implementations • 30 Dec 2021 • Jiayuan Chen, Boyu Zhang, Yinfei Xu, Meng Wang
Recently, text classification models based on graph neural networks (GNNs) have attracted more and more attention.
no code implementations • 9 Dec 2021 • Wen Li, Furong Xu, Jianan Zhao, Ruobing Zheng, Cheng Zou, Meng Wang, Yuan Cheng
Triplet loss is a widely adopted loss function in ReID task which pulls the hardest positive pairs close and pushes the hardest negative pairs far away.
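The hardest-pair mining described above is the classic batch-hard variant of triplet loss. A minimal sketch of that standard formulation (background only, not this paper's code; real ReID pipelines compute it on GPU over embedding batches):

```python
def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """For each anchor take the hardest positive (farthest same-label
    sample) and hardest negative (closest different-label sample), then
    apply a hinge: max(0, d_pos - d_neg + margin), averaged over anchors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    losses = []
    for i, (fa, la) in enumerate(zip(feats, labels)):
        pos = [dist(fa, fb) for j, (fb, lb) in enumerate(zip(feats, labels))
               if lb == la and j != i]
        neg = [dist(fa, fb) for fb, lb in zip(feats, labels) if lb != la]
        if pos and neg:
            losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses) if losses else 0.0
```

Once every same-identity cluster is tighter than its distance to other clusters by more than the margin, the hinge zeroes out and gradients vanish, which is why mining the hardest pairs matters.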
no code implementations • 8 Dec 2021 • Meng Wang, Boyu Li, Kun He, John E. Hopcroft
We theoretically show that our method avoids situations in which a broken community and the local community are treated as a single community in the subgraph, a source of detection inaccuracy in global hidden community detection methods.
1 code implementation • 29 Nov 2021 • Shijie Hao, Xu Han, Yanrong Guo, Meng Wang
On the other hand, since the parameter matrix learned from the first stage is aware of the lightness distribution and the scene structure, it can be incorporated into the second stage as the complementary information.
no code implementations • 20 Nov 2021 • Xipei Wang, Haoyu Zhang, Yuanbo Zhang, Meng Wang, Jiarui Song, Tin Lai, Matloob Khushi
Our results show that our model can predict 4-hour future trends with high accuracy on the Forex dataset, which is crucial for assisting foreign-exchange trading decisions in realistic scenarios.
no code implementations • 12 Oct 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.
no code implementations • 29 Sep 2021 • Xiao Liu, Meng Wang, Zhaorong Wang, Yingfeng Chen, Yujing Hu, Changjie Fan, Chongjie Zhang
Imitation learning reproduces expert demonstrations adaptively by learning a mapping from observations to actions.
no code implementations • 29 Sep 2021 • Hengtong Hu, Lingxi Xie, Yinquan Wang, Richang Hong, Meng Wang, Qi Tian
We investigate the problem of estimating uncertainty for training data, so that deep neural networks can make use of the results for learning from limited supervision.
no code implementations • ICLR 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
1 code implementation • 15 Sep 2021 • Zichuan Liu, Zhaoyang Wu, Meng Wang, Rui Zhang
Specifically, we use graph2vec to model the spatial view, dual-channel temporal module to model the trajectory view, and structural embedding to model traffic semantics.
1 code implementation • 12 Sep 2021 • Yongrui Chen, Xinnan Guo, Chaojie Wang, Jian Qiu, Guilin Qi, Meng Wang, Huiying Li
Our approach remains competitive with both larger pre-trained models and tabular-specific pre-trained models.
1 code implementation • 16 Aug 2021 • Yuhao Cui, Zhou Yu, Chunqi Wang, Zhongzhou Zhao, Ji Zhang, Meng Wang, Jun Yu
Nevertheless, most existing VLP approaches have not fully utilized the intrinsic knowledge within the image-text pairs, which limits the effectiveness of the learned alignments and further restricts the performance of their models.
no code implementations • 10 Aug 2021 • Zan Gao, Chao Sun, Zhiyong Cheng, Weili Guan, AnAn Liu, Meng Wang
In this work, a novel end-to-end two-stream boundary-aware network (abbreviated as TBNet) is proposed for generic image manipulation localization in which the RGB stream, the frequency stream, and the boundary artifact location are explored in a unified framework.
no code implementations • 10 Aug 2021 • Zan Gao, Hongwei Wei, Weili Guan, Weizhi Nie, Meng Liu, Meng Wang
To solve these issues, we propose a novel multigranular visual-semantic embedding algorithm (MVSE) for cloth-changing person ReID, which embeds visual semantic information and human attributes into the network so that generalized features of human appearance can be learned to effectively handle clothing changes.
no code implementations • 6 Aug 2021 • Kun Zhang, Guangyi Lv, Le Wu, Enhong Chen, Qi Liu, Meng Wang
To overcome this problem and boost the performance of the attention mechanism, we propose a novel dynamic re-read attention, which pays close attention to one small region of a sentence at each step and re-reads the important parts for better sentence representations.
1 code implementation • 22 Jul 2021 • Chaoran Cui, Xiaojie Li, Juan Du, Chunyun Zhang, Xiushan Nie, Meng Wang, Yilong Yin
Extensive experiments on real-world data demonstrate the effectiveness of our approach.
no code implementations • 12 Jul 2021 • Yuan Zhou, Yanrong Guo, Shijie Hao, Richang Hong, ZhengJun Zha, Meng Wang
To overcome these problems, we propose a new Global Relatedness Decoupled-Distillation (GRDD) method using the global category knowledge and the Relatedness Decoupled-Distillation (RDD) strategy.
no code implementations • CVPR 2021 • Furong Xu, Meng Wang, Wei zhang, Yuan Cheng, Wei Chu
Therefore, there is a need for a training mechanism that enforces the discriminativeness of all the elements in the feature to capture more of the subtle visual cues.
no code implementations • ICCV 2021 • Xiaohan Fei, Henry Wang, Xiangyu Zeng, Lin Lee Cheong, Meng Wang, Joseph Tighe
We propose a fully automated system that simultaneously estimates the camera intrinsics, the ground plane, and physical distances between people from a single RGB image or video captured by a camera viewing a 3-D scene from a fixed vantage point.
1 code implementation • 17 Jun 2021 • Yuanen Zhou, Yong Zhang, Zhenzhen Hu, Meng Wang
To tackle this issue, non-autoregressive image captioning models have recently been proposed to significantly accelerate the speed of inference by generating all words in parallel.
no code implementations • 9 Jun 2021 • Kun Zhang, Guangyi Lv, Meng Wang, Enhong Chen
Then, we develop a Dynamic Gaussian Attention (DGA) to dynamically capture the important parts and corresponding local contexts from a detailed perspective.
no code implementations • 8 Jun 2021 • Siqi Deng, Yuanjun Xiong, Meng Wang, Wei Xia, Stefano Soatto
The common implementation of face recognition systems as a cascade of a detection stage and a recognition or verification stage can cause problems beyond failures of the detector.
no code implementations • 4 Jun 2021 • Tong Chen, Hongzhi Yin, Yujia Zheng, Zi Huang, Yang Wang, Meng Wang
The core idea is to compose elastic embeddings for each item, where an elastic embedding is the concatenation of a set of embedding blocks that are carefully chosen by an automated search function.
1 code implementation • 3 Jun 2021 • Xun Yang, Fuli Feng, Wei Ji, Meng Wang, Tat-Seng Chua
To fill this research gap, we propose a causality-inspired VMR framework that builds a structural causal model to capture the true effect of query and video content on the prediction.
no code implementations • 31 May 2021 • Shuai Wang, Kun Zhang, Le Wu, Haiping Ma, Richang Hong, Meng Wang
The teacher model is composed of a heterogeneous graph structure for warm users and items with privileged CF links.
1 code implementation • 16 May 2021 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Meng Wang
Despite the performance gain of these implicit feedback based models, the recommendation results are still far from satisfactory due to the sparsity of the observed item set for each user.
1 code implementation • 27 Apr 2021 • Le Wu, Xiangnan He, Xiang Wang, Kun Zhang, Meng Wang
Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks.
no code implementations • 5 Apr 2021 • Tong Chen, Hongzhi Yin, Xiangliang Zhang, Zi Huang, Yang Wang, Meng Wang
As a well-established approach, factorization machine (FM) is capable of automatically learning high-order interactions among features to make predictions without the need for manual feature engineering.
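The automatic interaction learning of a factorization machine comes from the standard second-order FM model, which scores all pairwise feature interactions through factor vectors. A minimal sketch (the paper may extend FM beyond this second-order form):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine (FM) prediction.

    y = w0 + <w, x>
          + 0.5 * sum_f [(sum_i V[i,f]*x_i)^2 - sum_i V[i,f]^2 * x_i^2]

    The (sum)^2 - sum(squares) identity scores all pairwise
    interactions <V_i, V_j> x_i x_j in O(n*k) instead of O(n^2*k).
    """
    linear = w0 + w @ x
    s = V.T @ x                  # (k,) per-factor weighted sums
    s2 = (V ** 2).T @ (x ** 2)   # (k,) per-factor sums of squares
    return float(linear + 0.5 * (s ** 2 - s2).sum())

# Toy example: 3 features, 2 latent factors.
x = np.array([1.0, 2.0, 0.0])
w0, w = 0.1, np.array([0.5, -0.2, 0.3])
V = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.3]])
print(fm_predict(x, w0, w, V))
```

For this toy input the linear part contributes 0.2 and the only active interaction, <V_1, V_2> x_1 x_2 = 0.5 * 1 * 2 = 1.0, brings the prediction to 1.2.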
2 code implementations • CVPR 2021 • Jinxing Zhou, Liang Zheng, Yiran Zhong, Shijie Hao, Meng Wang
To encourage the network to extract high correlated features for positive samples, a new audio-visual pair similarity loss is proposed.
no code implementations • 28 Mar 2021 • Tianyi Chen, Meng Wang, Siyuan Gong, Yang Zhou, Bin Ran
In this study, we propose a rotation-based connected automated vehicle (CAV) distributed cooperative control strategy for an on-ramp merging scenario.
no code implementations • 11 Mar 2021 • Jianwei Huang, Zhicai Wang, Hongsheng Pang, Han Wu, Huibo Cao, Sung-Kwan Mo, Avinash Rustagi, A. F. Kemper, Meng Wang, Ming Yi, R. J. Birgeneau
$A$Co$_2$Se$_2$ ($A$=K, Rb, Cs) is a homologue of the iron-based superconductor, $A$Fe$_2$Se$_2$.
Superconductivity Materials Science
1 code implementation • 26 Feb 2021 • Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger
For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps.
no code implementations • 23 Feb 2021 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, G. Y. Hou, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. 
Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, H. F. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. 
Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, A. Q. Zhang, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
Constraining our measurement to the Standard Model expectation of lepton universality ($R=9.75$), we find the more precise results $\cal B(D_s^+\to \tau^+\nu_\tau) = (5.22\pm0.10\pm0.14)\times10^{-2}$ and $A_{\it CP}(\tau^\pm\nu_\tau) = (-0.1\pm1.9\pm1.0)\%$.
High Energy Physics - Experiment
1 code implementation • ICLR 2021 • Ren Wang, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Chuang Gan, Meng Wang
Despite the generalization power of the meta-model, it remains elusive how adversarial robustness can be maintained by MAML in few-shot learning.
1 code implementation • 18 Feb 2021 • Le Wu, Lei Chen, Pengyang Shao, Richang Hong, Xiting Wang, Meng Wang
For each user, this transformation is achieved under the adversarial learning of a user-centric graph, in order to obfuscate each sensitive feature in both the filtered user embedding and this user's subgraph structures.
no code implementations • 8 Feb 2021 • M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N. Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. 
Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian,