1 code implementation • ECCV 2020 • Bo Li, Evgeniy Martyushev, Gim Hee Lee
In this paper, we present a comprehensive study of the relative pose estimation problem for a calibrated camera constrained by a known $\mathrm{SE}(3)$ invariant, which involves 5 minimal problems in total.
no code implementations • IWSLT (EMNLP) 2018 • Nguyen Bach, Hongjie Chen, Kai Fan, Cheung-Chi Leung, Bo Li, Chongjia Ni, Rong Tong, Pei Zhang, Boxing Chen, Bin Ma, Fei Huang
This work describes the En→De Alibaba speech translation system developed for the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2018.
1 code implementation • ECCV 2020 • Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li
Deep neural networks (DNNs) have achieved great successes in various vision applications due to their strong expressive power.
no code implementations • EMNLP 2021 • Hengtong Zhang, Tianhang Zheng, Yaliang Li, Jing Gao, Lu Su, Bo Li
To address this problem, we propose a training framework with certified robustness to eliminate the causes that trigger the generation of profanity.
1 code implementation • 30 Mar 2023 • Bo Li
We show that symmetrically padded convolution can be analytically inverted via DFT.
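As a rough illustration of why padded convolutions admit analytic frequency-domain inverses, the following minimal numpy sketch treats the simpler circular-padding case, where convolution is diagonalized by the DFT and inverted by pointwise division (an assumption for illustration; the paper's result concerns symmetric padding, which involves a related analysis):

```python
import numpy as np

# With circular (periodic) padding, 1-D convolution is diagonalized by the
# DFT, so it can be inverted by pointwise spectral division, assuming the
# kernel's DFT has no zeros on the unit circle.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)            # original signal
k = np.array([0.5, 1.0, 0.25])         # convolution kernel
K = np.fft.fft(k, n=x.size)            # kernel spectrum at the signal length

y = np.fft.ifft(np.fft.fft(x) * K).real      # circular convolution
x_rec = np.fft.ifft(np.fft.fft(y) / K).real  # analytic inverse

print(np.allclose(x, x_rec))
```

The same diagonalization argument is what makes a closed-form inverse possible at all; only the transform basis changes with the padding scheme.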
no code implementations • 21 Mar 2023 • Kaixun Jiang, Zhaoyu Chen, Tony Huang, Jiafeng Wang, Dingkang Yang, Bo Li, Yan Wang, Wenqiang Zhang
First, STDE introduces target videos as patch textures and only adds patches on keyframes that are adaptively selected by temporal difference.
no code implementations • 14 Mar 2023 • Hao Tang, Zhenyu Zhang, Humphrey Shi, Bo Li, Ling Shao, Nicu Sebe, Radu Timofte, Luc van Gool
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
1 code implementation • 10 Mar 2023 • Weixin Chen, Dawn Song, Bo Li
To answer these questions, we propose an effective Trojan attack against diffusion models, TrojDiff, which optimizes the Trojan diffusion and generative processes during training.
no code implementations • 2 Mar 2023 • Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, Zhong Meng, Ke Hu, Andrew Rosenberg, Rohit Prabhavalkar, Daniel S. Park, Parisa Haghani, Jason Riesa, Ginger Perng, Hagen Soltau, Trevor Strohman, Bhuvana Ramabhadran, Tara Sainath, Pedro Moreno, Chung-Cheng Chiu, Johan Schalkwyk, Françoise Beaufays, Yonghui Wu
We introduce the Universal Speech Model (USM), a single large model that performs automatic speech recognition (ASR) across 100+ languages.
Automatic Speech Recognition
no code implementations • 24 Feb 2023 • Bin Liu, Xiaolin Wei, Bo Li, Junjie Cao, Yu-Kun Lai
In this paper, we propose a novel pose-controllable 3D facial animation synthesis method utilizing hierarchical audio-vertex attention.
no code implementations • 24 Feb 2023 • Lin Zhang, Shaohuai Shi, Xiaowen Chu, Wei Wang, Bo Li, Chengjian Liu
Communication scheduling has been shown to be effective in accelerating distributed training, which enables all-reduce communications to be overlapped with backpropagation computations.
no code implementations • 22 Feb 2023 • Chao Zhang, Bo Li, Tara N. Sainath, Trevor Strohman, Shuo-Yiin Chang
Consequently, the UML enables switching the interpretation of each output node depending on the language of the input speech.
Automatic Speech Recognition
no code implementations • 19 Feb 2023 • Jie Zhang, Bo Li, Chen Chen, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chao Wu
In this work, we propose a novel algorithm called Decision Boundary based Federated Adversarial Training (DBFAT), which consists of two components (local re-weighting and global regularization) to improve both accuracy and robustness of FL systems.
no code implementations • 17 Feb 2023 • Ke Hu, Tara N. Sainath, Bo Li, Nan Du, Yanping Huang, Andrew M. Dai, Yu Zhang, Rodrigo Cabrera, Zhifeng Chen, Trevor Strohman
In this work, we propose to train a single multilingual language model (LM) for shallow fusion in multiple languages.
Automatic Speech Recognition
no code implementations • 16 Feb 2023 • Zhong Meng, Weiran Wang, Rohit Prabhavalkar, Tara N. Sainath, Tongzhou Chen, Ehsan Variani, Yu Zhang, Bo Li, Andrew Rosenberg, Bhuvana Ramabhadran
We propose JEIT, a joint end-to-end (E2E) model and internal language model (ILM) training method to inject large-scale unpaired text into ILM during E2E training which improves rare-word speech recognition.
no code implementations • 13 Feb 2023 • Chulin Xie, De-An Huang, Wenda Chu, Daguang Xu, Chaowei Xiao, Bo Li, Anima Anandkumar
Personalized Federated Learning (pFL) has emerged as a promising solution to tackle data heterogeneity across clients in FL.
no code implementations • 11 Feb 2023 • Bo Li, Xiaolin Wei, Fengwei Chen, Bin Liu
In the shape prediction module, the reference RGB image is first encoded into a high-level shape feature, which is then used as a condition to predict the reverse geometric noise in the diffusion model.
no code implementations • 9 Feb 2023 • Zhuolin Yang, Wei Ping, Zihan Liu, Vijay Korthikanti, Weili Nie, De-An Huang, Linxi Fan, Zhiding Yu, Shiyi Lan, Bo Li, Ming-Yu Liu, Yuke Zhu, Mohammad Shoeybi, Bryan Catanzaro, Chaowei Xiao, Anima Anandkumar
Augmenting pretrained language models (LMs) with a vision encoder (e.g., Flamingo) has obtained state-of-the-art results in image-to-text generation.
no code implementations • 4 Feb 2023 • Jiacheng Zhu, JieLin Qiu, Aritra Guha, Zhuolin Yang, XuanLong Nguyen, Bo Li, Ding Zhao
Our work provides a new perspective of model robustness through the lens of Wasserstein geodesic-based interpolation with a practical off-the-shelf strategy that can be combined with existing robust training methods.
no code implementations • 3 Feb 2023 • Hyoungwook Nam, Raghavendra Pradyumna Pothukuchi, Bo Li, Nam Sung Kim, Josep Torrellas
To address this problem, this paper explores using Adversarial Machine Learning (AML) methods as a defense at the computer architecture layer to obfuscate side channels.
no code implementations • 3 Feb 2023 • Bo Li, Dongseong Hwang, Zhouyuan Huo, Junwen Bai, Guru Prakash, Tara N. Sainath, Khe Chai Sim, Yu Zhang, Wei Han, Trevor Strohman, Francoise Beaufays
The FM encoder adapter and decoder are then finetuned to the target domain with a small amount of supervised in-domain data.
no code implementations • 26 Jan 2023 • Mingqi Yuan, Bo Li, Xin Jin, Wenjun Zeng
We present AIRS: Automatic Intrinsic Reward Shaping that intelligently and adaptively provides high-quality intrinsic rewards to enhance exploration in reinforcement learning (RL).
no code implementations • 19 Jan 2023 • Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Rohit Prabhavalkar, Tara N. Sainath, Trevor Strohman
In this work, we propose a new parameter-efficient learning framework based on neural model reprogramming for cross-lingual speech recognition, which can re-purpose well-trained English automatic speech recognition (ASR) models to recognize the other languages.
Automatic Speech Recognition
no code implementations • 11 Jan 2023 • Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani, Toby Walsh
On the other hand, in the randomized setting, we identify proportionally fair and strategyproof mechanisms that give an expected welfare within a constant factor of the optimal welfare.
no code implementations • 3 Jan 2023 • Xiaoliang Li, Bo Li
In this paper, we explore a dynamic Bertrand duopoly game with differentiated products, where firms are boundedly rational and consumers are assumed to possess an underlying CES utility function.
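A toy simulation can make the boundedly rational price dynamics concrete. The sketch below is illustrative only: it substitutes a simple linear demand `q_i = a - p_i + b*p_j` (a hypothetical stand-in for the paper's CES utility) and applies the standard gradient-style adjustment rule in which each firm nudges its price along its marginal profit:

```python
import numpy as np

# Hypothetical linear-demand Bertrand duopoly with boundedly rational firms.
# Each firm adjusts its price proportionally to its marginal profit
# (zero marginal cost assumed for simplicity).
a, b, alpha = 10.0, 0.5, 0.005   # demand intercept, substitutability, speed

def step(p1, p2):
    g1 = a - 2 * p1 + b * p2     # d(profit_1)/d(p1) for profit p1*q1
    g2 = a - 2 * p2 + b * p1
    return p1 + alpha * p1 * g1, p2 + alpha * p2 * g2

p1, p2 = 4.0, 5.0
for _ in range(5000):
    p1, p2 = step(p1, p2)

p_star = a / (2 - b)             # symmetric Bertrand-Nash price here
print(round(p1, 3), round(p_star, 3))
```

For small adjustment speeds the dynamics converge to the Nash price; raising `alpha` is the usual route to the cycles and chaos such duopoly models are studied for.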
no code implementations • 29 Dec 2022 • Bo Li, Wei Ye, Jinglei Zhang, Shikun Zhang
Specifically, for a given sample, we build a label graph to review candidate labels in the Top-k prediction set and learn the connections between them.
Ranked #1 on Relation Extraction on TACRED-Revisited
1 code implementation • 29 Dec 2022 • Bo Li, Dingyao Yu, Wei Ye, Jinglei Zhang, Shikun Zhang
Sequence generation demonstrates promising performance in recent information extraction efforts, by incorporating large-scale pre-trained Seq2Seq models.
Ranked #1 on Relation Extraction on sciERC-sent
no code implementations • 27 Dec 2022 • Xiaojun Xu, Yue Yu, Hanzhang Wang, Alok Lal, Carl A. Gunter, Bo Li
In this paper, we propose a general adversarial edge detection pipeline EDoG without requiring knowledge of the attack strategies based on graph generation.
no code implementations • 21 Dec 2022 • Adam Tonks, Trevor Harris, Bo Li, William Brown, Rebecca Smith
Machine learning methods have seen increased application to geospatial environmental problems, such as precipitation nowcasting, haze forecasting, and crop yield prediction.
no code implementations • 15 Dec 2022 • JieLin Qiu, Yi Zhu, Xingjian Shi, Florian Wenzel, Zhiqiang Tang, Ding Zhao, Bo Li, Mu Li
Multimodal image-text models have shown remarkable performance in the past few years.
no code implementations • 5 Dec 2022 • Bo Li, Mikkel N. Schmidt, Tommy S. Alstrøm, Sebastian U. Stich
In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers.
1 code implementation • 30 Nov 2022 • Guanglin Niu, Bo Li
To address these challenges, we propose a Logic and Commonsense-Guided Embedding model (LCGE) to jointly learn the time-sensitive representation involving timeliness and causality of events, together with the time-independent representation of events from the perspective of commonsense.
1 code implementation • 30 Nov 2022 • Qingsong Yan, Qiang Wang, Kaiyong Zhao, Bo Li, Xiaowen Chu, Fei Deng
Existing learning-based multi-view stereo (MVS) methods rely on the depth range to build the 3D cost volume and may fail when the range is too large or unreliable.
no code implementations • 28 Nov 2022 • Mingqi Yuan, Xin Jin, Bo Li, Wenjun Zeng
We present MEM: Multi-view Exploration Maximization for tackling complex visual control tasks.
no code implementations • 18 Nov 2022 • Anpeng Wu, Kun Kuang, Ruoxuan Xiong, Bo Li, Fei Wu
This paper studies the confounding effects from the unmeasured confounders and the imbalance of observed confounders in IV regression and aims at unbiased causal effect estimation.
1 code implementation • 10 Nov 2022 • Li SiYao, Yuhang Li, Bo Li, Chao Dong, Ziwei Liu, Chen Change Loy
Existing correspondence datasets for two-dimensional (2D) cartoons suffer from simple frame composition and monotonic movements, making them insufficient to simulate real animations.
1 code implementation • 7 Nov 2022 • Shenglai Zeng, Zonghang Li, Hongfang Yu, Zhihao Zhang, Long Luo, Bo Li, Dusit Niyato
Federated Learning (FL), as a rapidly evolving privacy-preserving collaborative machine learning paradigm, is a promising approach to enable edge intelligence in the emerging Industrial Metaverse.
no code implementations • 4 Nov 2022 • Zhouyuan Huo, Khe Chai Sim, Bo Li, Dongseong Hwang, Tara N. Sainath, Trevor Strohman
Experimental results show that the proposed method achieves better performance on the speech recognition task than existing algorithms, with fewer trainable parameters, lower computational memory cost, and faster training speed.
Automatic Speech Recognition
no code implementations • 3 Nov 2022 • Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta
Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents.
no code implementations • 2 Nov 2022 • Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Tara N. Sainath, Sabato Marco Siniscalchi, Chin-Hui Lee
We propose a quantum kernel learning (QKL) framework to address the inherent data sparsity issues often encountered in training large-scale acoustic models in low-resource scenarios.
no code implementations • 1 Nov 2022 • Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song
By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process.
no code implementations • 1 Nov 2022 • Shaan Bijwadia, Shuo-Yiin Chang, Bo Li, Tara Sainath, Chao Zhang, Yanzhang He
In this work, we propose a method to jointly train the ASR and EP tasks in a single end-to-end (E2E) multitask model, improving EP quality by optionally leveraging information from the ASR audio encoder.
Automatic Speech Recognition
1 code implementation • European Conference on Computer Vision 2022 • Zhaoyu Chen, Bo Li, Shuang Wu, Jianghe Xu, Shouhong Ding, Wenqiang Zhang
Though deep neural networks (DNNs) have demonstrated excellent performance in computer vision, they are vulnerable to carefully crafted adversarial examples that can mislead them into incorrect outputs.
1 code implementation • 26 Oct 2022 • YuFei Wang, Yuchao Dai, Qi Liu, Peng Yang, Jiadai Sun, Bo Li
We find that existing depth-only methods can obtain satisfactory results in areas where the measurement points are almost accurate and evenly distributed (denoted as normal areas). Their performance is limited, however, in areas where foreground and background points overlap due to occlusion (denoted as overlap areas) and in areas with no measurement points nearby (denoted as blank areas), since the methods have no reliable input information there.
no code implementations • 21 Oct 2022 • Mengdi Xu, Peide Huang, Yaru Niu, Visak Kumar, JieLin Qiu, Chao Fang, Kuan-Hui Lee, Xuewei Qi, Henry Lam, Bo Li, Ding Zhao
One key challenge for multi-task Reinforcement learning (RL) in practice is the absence of task indicators.
1 code implementation • 20 Oct 2022 • Xiaojun Xu, Linyi Li, Bo Li
On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models.
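For context on what "Lipschitz-bounded" buys, the following sketch computes the standard global Lipschitz upper bound for an MLP with 1-Lipschitz activations (the product of per-layer spectral norms) and the margin-based certified radius it implies. The weights and margin are hypothetical; the `sqrt(2)` factor is the usual multi-class Lipschitz certificate, not this paper's specific bound:

```python
import numpy as np

# Global Lipschitz upper bound for an MLP with 1-Lipschitz activations
# (e.g. ReLU): the product of the layers' spectral norms.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8)) / 4.0   # hypothetical layer weights
W2 = rng.standard_normal((8, 4)) / 4.0
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# A logit margin m then certifies robustness within an L2 ball of
# radius m / (sqrt(2) * L) around the input.
margin = 1.0
radius = margin / (np.sqrt(2) * L)
print(L > 0 and radius > 0)
```

Training schemes that constrain this product directly are what make the certificate non-vacuous, which is why semi-supervised data that enlarges margins can tighten it further.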
no code implementations • 17 Oct 2022 • Zexian Yang, Dayan Wu, Bo Li, Weiping Wang
This is challenging as the new data only have local supervision in new cameras with no access to the old data due to privacy issues, and they may also contain persons seen by previous cameras.
1 code implementation • 15 Oct 2022 • Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui
In this paper, we assume that each consumer can purchase multiple products at will.
1 code implementation • 13 Oct 2022 • Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding, Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, Ziwei Liu
Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications and has thus been extensively studied, with a plethora of methods developed in the literature.
no code implementations • 13 Oct 2022 • Tara N. Sainath, Rohit Prabhavalkar, Ankur Bapna, Yu Zhang, Zhouyuan Huo, Zhehuai Chen, Bo Li, Weiran Wang, Trevor Strohman
In addition, we explore JOIST using a streaming E2E model with an order of magnitude more data, which are also novelties compared to previous works.
no code implementations • 13 Oct 2022 • Peng Ye, Zhifeng Jiang, Wei Wang, Bo Li, Baochun Li
To address this problem, we develop a novel feature protection scheme against the reconstruction attack that effectively misleads the search to some pre-specified random values.
no code implementations • 11 Oct 2022 • Ke Hu, Bo Li, Tara N. Sainath
In this work, we investigate second-pass deliberation for multilingual speech recognition.
Automatic Speech Recognition
1 code implementation • 11 Oct 2022 • Jingru Tan, Bo Li, Xin Lu, Yongqiang Yao, Fengwei Yu, Tong He, Wanli Ouyang
Long-tailed distributions are widespread in real-world applications.
1 code implementation • 11 Oct 2022 • Bo Li, Yongqiang Yao, Jingru Tan, Xin Lu, Fengwei Yu, Ye Luo, Jianwei Lu
Specifically, our framework contains an object detection task (consisting of an instance-classification task and a localization task) and an image-classification task, which are responsible for utilizing the two types of supervision.
no code implementations • 10 Oct 2022 • JieLin Qiu, Jiacheng Zhu, Mengdi Xu, Franck Dernoncourt, Trung Bui, Zhaowen Wang, Bo Li, Ding Zhao, Hailin Jin
Multimedia summarization with multimodal output (MSMO) is a recently explored application in language grounding.
1 code implementation • 27 Sep 2022 • Yiming Li, Yang Bai, Yong Jiang, Yong Yang, Shu-Tao Xia, Bo Li
In this paper, we revisit dataset ownership verification.
no code implementations • 25 Sep 2022 • Bo Li, Lv Tang, Senyun Kuang, Mofei Song, Shouhong Ding
In this paper, we present a novel model for simultaneous stable co-saliency detection (CoSOD) and object co-segmentation (CoSEG).
no code implementations • 20 Sep 2022 • Junwei Ma, Bo Li, Qingchun Li, Chao Fan, Ali Mostafavi
To this end, this study creates a network embedding model capturing cross-county visitation networks, as well as heterogeneous features to uncover clusters of counties in the United States based on their pandemic spread transmission trajectories.
no code implementations • 19 Sep 2022 • Mingqi Yuan, Bo Li, Xin Jin, Wenjun Zeng
Exploration is critical for deep reinforcement learning in complex environments with high-dimensional observations and sparse rewards.
no code implementations • 16 Sep 2022 • Mengdi Xu, Zuxin Liu, Peide Huang, Wenhao Ding, Zhepeng Cen, Bo Li, Ding Zhao
A trustworthy reinforcement learning algorithm should be competent in solving challenging real-world problems, including robustly handling uncertainties, satisfying safety constraints to avoid catastrophic failures, and generalizing to unseen scenarios during deployments.
no code implementations • 14 Sep 2022 • Xin Zhang, Qiaoyu Tan, Xiao Huang, Bo Li
Thus, blindly augmenting all graphs without considering their individual characteristics may undermine the performance of GCL arts. To deal with this, we propose the first principled framework, termed Graph contrastive learning with Personalized Augmentation (GPA), to advance conventional GCL by allowing each graph to choose its own suitable augmentation operations. In essence, GPA infers tailored augmentation strategies for each graph based on its topology and node attributes via a learnable augmentation selector, which is a plug-and-play module and can be effectively trained with downstream GCL models end-to-end.
no code implementations • 13 Sep 2022 • Chao Zhang, Bo Li, Tara Sainath, Trevor Strohman, Sepand Mavandadi, Shuo-Yiin Chang, Parisa Haghani
Language identification is critical for many downstream tasks in automatic speech recognition (ASR), and is beneficial to integrate into multilingual end-to-end ASR as an additional task.
Automatic Speech Recognition
1 code implementation • 12 Sep 2022 • Jiawei Zhang, Linyi Li, Ce Zhang, Bo Li
In particular, we propose a certifiably robust learning with reasoning pipeline (CARE), which consists of a learning component and a reasoning component.
no code implementations • 8 Sep 2022 • Chulin Xie, Zhong Cao, Yunhui Long, Diange Yang, Ding Zhao, Bo Li
However, training AVs usually requires a large amount of training data collected from different driving environments (e.g., cities) as well as different types of personal information (e.g., working hours and routes).
no code implementations • 8 Sep 2022 • Chulin Xie, Yunhui Long, Pin-Yu Chen, Bo Li
We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels.
no code implementations • 4 Sep 2022 • Ayoosh Bansal, Simon Yu, Hunmin Kim, Bo Li, Naira Hovakimyan, Marco Caccamo, Lui Sha
The synergistic safety layer uses only verifiable and logically analyzable software to fulfill its tasks.
1 code implementation • 1 Sep 2022 • Jie Zhang, Zhiqi Li, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Chao Wu
Extensive experiments on federated datasets and real-world datasets demonstrate that FedLC leads to a more accurate global model and much improved performance.
1 code implementation • 30 Aug 2022 • Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo Li, Naira Hovakimyan, Marco Caccamo, Lui Sha
Perception of obstacles remains a critical safety concern for autonomous vehicles.
no code implementations • 29 Aug 2022 • Qingsong Yan, Qiang Wang, Kaiyong Zhao, Bo Li, Xiaowen Chu, Fei Deng
A panorama image can simultaneously present complete information of the surrounding environment and has many advantages in virtual tourism, games, robotics, etc.
Ranked #17 on Depth Estimation on Stanford2D3D Panoramic
no code implementations • 29 Aug 2022 • Hao Xu, Bo Li, Fei Zhong
Fire-detection technology is of great importance for successful fire-prevention measures.
no code implementations • 29 Aug 2022 • Shuo-Yiin Chang, Guru Prakash, Zelin Wu, Qiao Liang, Tara N. Sainath, Bo Li, Adam Stambler, Shyam Upadhyay, Manaal Faruqui, Trevor Strohman
In voice-enabled applications, a predetermined hotword is usually used to activate a device in order to attend to the query. However, speaking queries followed by a hotword each time introduces a cognitive burden in continued conversations.
no code implementations • 29 Aug 2022 • Shuo-Yiin Chang, Bo Li, Tara N. Sainath, Chao Zhang, Trevor Strohman, Qiao Liang, Yanzhang He
This makes doing speech recognition with conversational speech, including one with multiple queries, a challenging task.
no code implementations • 29 Aug 2022 • Bo Li, Tara N. Sainath, Ruoming Pang, Shuo-Yiin Chang, Qiumin Xu, Trevor Strohman, Vince Chen, Qiao Liang, Heguang Liu, Yanzhang He, Parisa Haghani, Sameer Bidichandani
On-device end-to-end (E2E) models have shown improvements over a conventional model on English Voice Search tasks in both quality and latency.
Automatic Speech Recognition
1 code implementation • 23 Aug 2022 • Anpeng Wu, Kun Kuang, Ruoxuan Xiong, Minqing Zhu, Yuxuan Liu, Bo Li, Furui Liu, Zhihua Wang, Fei Wu
The advent of the big data era brought new opportunities and challenges for estimating treatment effects in data fusion, that is, from a mixed dataset collected from multiple sources (each source with an independent treatment assignment mechanism).
no code implementations • 15 Aug 2022 • Carole H. Sudre, Kimberlin Van Wijnen, Florian Dubost, Hieab Adams, David Atkinson, Frederik Barkhof, Mahlet A. Birhanu, Esther E. Bron, Robin Camarasa, Nish Chaturvedi, Yuan Chen, Zihao Chen, Shuai Chen, Qi Dou, Tavia Evans, Ivan Ezhov, Haojun Gao, Marta Girones Sanguesa, Juan Domingo Gispert, Beatriz Gomez Anson, Alun D. Hughes, M. Arfan Ikram, Silvia Ingala, H. Rolf Jaeger, Florian Kofler, Hugo J. Kuijf, Denis Kutnar, Minho Lee, Bo Li, Luigi Lorenzini, Bjoern Menze, Jose Luis Molinuevo, Yiwei Pan, Elodie Puybareau, Rafael Rehwald, Ruisheng Su, Pengcheng Shi, Lorna Smith, Therese Tillin, Guillaume Tochon, Helene Urien, Bas H. M. van der Velden, Isabelle F. van der Velpen, Benedikt Wiestler, Frank J. Wolters, Pinar Yilmaz, Marius de Groot, Meike W. Vernooij, Marleen de Bruijne
This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3) while leveraging weak and noisy labels.
2 code implementations • 11 Aug 2022 • Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter
Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.
no code implementations • 11 Aug 2022 • Bo Li, Chi Ho Yeung
In this paper, we employ techniques in statistical physics to analyze the MAB model, which facilitates characterizing the distribution of cumulative regrets at a finite short time, the central quantity of interest in an MAB algorithm, as well as the intricate dynamical behaviours of the model.
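The distribution in question can also be approximated empirically. A minimal Monte Carlo sketch (not the paper's analytic approach): run epsilon-greedy on a hypothetical two-armed Bernoulli bandit many times and collect the cumulative pseudo-regret at a fixed finite horizon:

```python
import numpy as np

# Monte Carlo estimate of the finite-time cumulative-regret distribution
# for epsilon-greedy on a two-armed Bernoulli bandit (illustrative setup).
def run_bandit(T=200, eps=0.1, means=(0.5, 0.7), rng=None):
    rng = rng or np.random.default_rng()
    counts = np.zeros(2)
    values = np.zeros(2)
    regret = 0.0
    best = max(means)
    for _ in range(T):
        if rng.random() < eps or counts.min() == 0:
            arm = int(rng.integers(2))          # explore (or force initial pulls)
        else:
            arm = int(np.argmax(values / counts))  # exploit empirical means
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        values[arm] += reward
        regret += best - means[arm]             # pseudo-regret increment
    return regret

rng = np.random.default_rng(1)
regrets = np.array([run_bandit(rng=rng) for _ in range(500)])
print(regrets.mean(), regrets.std())
```

Histogramming `regrets` exposes exactly the finite-time shape (and heavy tail) that the statistical-physics analysis characterizes in closed form.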
1 code implementation • 10 Aug 2022 • William Han, JieLin Qiu, Jiacheng Zhu, Mengdi Xu, Douglas Weber, Bo Li, Ding Zhao
In addition, we provide interpretations of the performance improvement by: (1) visualizing the original feature distribution and the transformed feature distribution, showing the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language; (2) visualizing word-level and sentence-level EEG-language alignment weights, showing the influence of different language semantics as well as EEG frequency features; and (3) visualizing brain topographical maps to provide an intuitive demonstration of the connectivity of EEG and language response in the brain regions.
no code implementations • 5 Aug 2022 • Sandy Ritchie, You-Chi Cheng, Mingqing Chen, Rajiv Mathews, Daan van Esch, Bo Li, Khe Chai Sim
Almost none of the 2,000+ languages spoken in Africa have widely available automatic speech recognition systems, and the required data is also only available for a few languages.
Automatic Speech Recognition
no code implementations • 2 Aug 2022 • Jiacheng Zhu, JieLin Qiu, Zhuolin Yang, Douglas Weber, Michael A. Rosenberg, Emerson Liu, Bo Li, Ding Zhao
In this paper, we propose a physiologically-inspired data augmentation method to improve performance and increase the robustness of heart disease detection based on ECG signals.
1 code implementation • 25 Jul 2022 • Zhuowen Yuan, Fan Wu, Yunhui Long, Chaowei Xiao, Bo Li
We first explore different statistical information which can discriminate the private training distribution from other distributions.
no code implementations • 21 Jul 2022 • Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Han Zhao, Bo Li
However, due to the heterogeneous nature of local data, it is challenging to optimize or even define fairness of the trained global model for the agents.
1 code implementation • 21 Jul 2022 • Xiaoyuan Liu, Tianneng Shi, Chulin Xie, Qinbin Li, Kangping Hu, Haoyu Kim, Xiaojun Xu, Bo Li, Dawn Song
Federated Learning (FL) has become a practical and popular paradigm in machine learning.
no code implementations • 20 Jul 2022 • Chulin Xie, Pin-Yu Chen, Ce Zhang, Bo Li
Moreover, we show that a byproduct of our framework is that the weights of learned linear heads reflect the importance of local clients.
1 code implementation • 19 Jul 2022 • Wenhao Ding, Haohong Lin, Bo Li, Ding Zhao
As a pivotal component to attaining generalizable solutions in human intelligence, reasoning provides great potential for reinforcement learning (RL) agents' generalization towards varied goals by summarizing part-to-whole arguments and discovering cause-and-effect relations.
no code implementations • 19 Jul 2022 • Yuzheng Hu, Tianle Cai, Jinyong Shan, Shange Tang, Chaochao Cai, Ethan Song, Bo Li, Dawn Song
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks, where the protocols might differ between one another, yet a procedure of obtaining local gradients is implicitly shared.
no code implementations • 13 Jul 2022 • Dinuka Sahabandu, Arezoo Rajabi, Luyao Niu, Bo Li, Bhaskar Ramasubramanian, Radha Poovendran
The results show that (i) with Submodular Trojan algorithm, the adversary needs to embed a Trojan trigger into a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability 1.
1 code implementation • 30 Jun 2022 • Lin Zhang, Shaohuai Shi, Wei Wang, Bo Li
The second-order optimization methods, notably the D-KFAC (Distributed Kronecker Factored Approximate Curvature) algorithms, have gained traction on accelerating deep neural network (DNN) training on GPU clusters.
1 code implementation • 28 Jun 2022 • Mantas Mazeika, Bo Li, David Forsyth
To meet these challenges, we present a new approach to model stealing defenses called gradient redirection.
no code implementations • 26 Jun 2022 • Yezhen Wang, Tong Che, Bo Li, Kaitao Song, Hengzhi Pei, Yoshua Bengio, Dongsheng Li
Autoregressive generative models are commonly used, especially for those tasks involving sequential data.
no code implementations • 18 Jun 2022 • Zhifeng Jiang, Wei Wang, Baochun Li, Bo Li
Current FL systems employ a participant selection strategy to select fast clients with quality data in each iteration.
1 code implementation • 16 Jun 2022 • Linyi Li, Jiawei Zhang, Tao Xie, Bo Li
To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
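For reference, the single-distribution baseline that DSRS tightens looks roughly like the following sketch: query a base classifier under Gaussian noise and certify an L2 radius of `sigma * Phi^{-1}(pA)` (the standard Cohen et al. certificate; the toy classifier and parameters here are hypothetical):

```python
import numpy as np
from statistics import NormalDist

# Standard (single-distribution) randomized smoothing: Monte Carlo estimate
# of the top-class probability under Gaussian noise, then the Cohen et al.
# certified L2 radius sigma * Phi^{-1}(pA).
def smoothed_certify(base_classifier, x, sigma=0.5, n=1000, rng=None):
    rng = rng or np.random.default_rng()
    noisy = x + sigma * rng.standard_normal((n,) + x.shape)
    preds = np.array([base_classifier(z) for z in noisy])
    top = int(np.bincount(preds).argmax())
    p_a = (preds == top).mean()
    p_a = min(p_a, 1.0 - 1e-3)        # keep inv_cdf finite when all agree
    if p_a <= 0.5:
        return top, 0.0               # abstain: nothing certified
    return top, sigma * NormalDist().inv_cdf(p_a)

clf = lambda z: int(z[0] > 0.0)       # toy 1-D base classifier
label, radius = smoothed_certify(clf, np.array([1.5]),
                                 rng=np.random.default_rng(0))
print(label, radius)
```

DSRS's contribution is to add samples from a second smoothing distribution, whose extra probability information provably enlarges this radius; a rigorous implementation would also replace the plug-in `p_a` with a lower confidence bound.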
1 code implementation • 16 Jun 2022 • Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Bo Li, Mu Li
Data augmentation is a necessity to enhance data efficiency in deep learning.
1 code implementation • 15 Jun 2022 • Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang
Given the fact that neural networks are often over-parameterized, one effective way to reduce such computational overhead is neural network pruning, by removing redundant parameters from trained neural networks.
1 code implementation • 8 Jun 2022 • Bo Li, Yifei Shen, Jingkang Yang, Yezhen Wang, Jiawei Ren, Tong Che, Jun Zhang, Ziwei Liu
It is motivated by an empirical finding that transformer-based models trained with empirical risk minimization (ERM) outperform CNN-based models employing state-of-the-art (SOTA) DG algorithms on multiple DG datasets.
Ranked #7 on
Domain Generalization
on DomainNet
(using extra training data)
no code implementations • 7 Jun 2022 • Jiashuo Liu, Jiayun Wu, Jie Peng, Zheyan Shen, Bo Li, Peng Cui
We reformulate the invariant learning problem under latent heterogeneity into a relaxed form that pursues the distributional invariance, based on which we propose our novel Distributionally Invariant Learning (DIL) framework as well as two implementations named DIL-MMD and DIL-KL.
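Since one implementation is named DIL-MMD, a natural building block is the kernel Maximum Mean Discrepancy between environments. The sketch below is a generic RBF-kernel MMD estimator (an assumption about the discrepancy being penalized, not the paper's exact objective):

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    # Biased estimator of squared Maximum Mean Discrepancy with an RBF kernel:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.standard_normal((200, 2)), rng.standard_normal((200, 2)))
diff = mmd2_rbf(rng.standard_normal((200, 2)),
                rng.standard_normal((200, 2)) + 2.0)
print(same < diff)   # shifted distribution yields a larger discrepancy
```

Penalizing such a discrepancy between representations of heterogeneous environments is the standard route to the "distributional invariance" the relaxed objective pursues.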
no code implementations • 6 Jun 2022 • Yuzhe Li, Yong liu, Bo Li, Weiping Wang, Nan Liu
In this paper, we focus our attention on private Empirical Risk Minimization (ERM), which is one of the most commonly used data analysis methods.
1 code implementation • 31 May 2022 • Mintong Kang, Linyi Li, Maurice Weber, Yang Liu, Ce Zhang, Bo Li
In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model performance loss bound on a fairness constrained distribution, which is within bounded distributional distance with the training distribution.
1 code implementation • 29 May 2022 • Zuxin Liu, Zijian Guo, Zhepeng Cen, Huan Zhang, Jie Tan, Bo Li, Ding Zhao
One interesting and counter-intuitive finding is that the maximum reward attack is strong, as it can both induce unsafe behaviors and make the attack stealthy by maintaining the reward.
no code implementations • 28 May 2022 • Kan Xie, Zhe Zhang, Bo Li, Jiawen Kang, Dusit Niyato, Shengli Xie, Yi Wu
However, for machine learning-based traffic sign recognition on the Internet of Vehicles (IoV), a large amount of traffic sign data from distributed vehicles needs to be gathered in a centralized server for model training, which poses a serious privacy leakage risk because traffic sign data contains a great deal of location information.
no code implementations • 25 May 2022 • Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen
One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request that its private data be deleted from the global model.
1 code implementation • 16 May 2022 • Bo Li, Kaitao Xue, Bin Liu, Yu-Kun Lai
In this paper, a novel image-to-image translation method based on the Brownian Bridge Diffusion Model (BBDM) is proposed, which models image-to-image translation as a stochastic Brownian bridge process, and learns the translation between two domains directly through the bidirectional diffusion process rather than a conditional generation process.
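The Brownian bridge at the heart of BBDM can be sketched in a few lines: a process pinned at a sample from one domain at t = 0 and a sample from the other at t = 1, whose variance vanishes at both endpoints. The scalar values and the variance scale s below are illustrative:

```python
import math
import random

def bridge_sample(x0, y, t, s=1.0, rng=None):
    """Sample x_t of a Brownian bridge pinned at x0 (t = 0) and y (t = 1).
    The mean interpolates linearly and the variance s*t*(1 - t) vanishes at
    both endpoints, so the process starts in one domain and is guaranteed
    to end in the other -- unlike a conditional generation process."""
    rng = rng or random.Random(0)
    mean = (1.0 - t) * x0 + t * y
    std = math.sqrt(s * t * (1.0 - t))
    return mean + std * rng.gauss(0.0, 1.0)
```

At t = 0 this returns exactly x0 and at t = 1 exactly y; intermediate t gives noisy interpolants, which is the forward process the model learns to reverse.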
1 code implementation • Findings (NAACL) 2022 • Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li
In particular, SemAttack optimizes the generated perturbations constrained on generic semantic spaces, including typo space, knowledge space (e.g., WordNet), contextualized semantic space (e.g., the embedding space of BERT clusterings), or the combination of these spaces.
1 code implementation • CVPR 2022 • Mengzhe He, Yali Wang, Jiaxi Wu, Yiru Wang, Hanqing Li, Bo Li, Weihao Gan, Wei Wu, Yu Qiao
It can adaptively enhance source detector to perceive objects in a target image, by leveraging target proposal contexts from iterative cross-attention.
1 code implementation • 23 Apr 2022 • Bojan Karlaš, David Dao, Matteo Interlandi, Bo Li, Sebastian Schelter, Wentao Wu, Ce Zhang
We present DataScope (ease.ml/datascope), the first system that efficiently computes Shapley values of training examples over an end-to-end ML pipeline, and illustrate its applications in data debugging for ML training.
1 code implementation • 20 Apr 2022 • Zhuangwei Shi, Bo Li
This paper proposes three models for protein classification.
1 code implementation • 18 Apr 2022 • Haoxiang Wang, Bo Li, Han Zhao
Gradual domain adaptation (GDA), on the other hand, assumes a path of $(T-1)$ unlabeled intermediate domains bridging the source and target, and aims to provide better generalization in the target domain by leveraging the intermediate ones.
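The gradual self-training recipe behind GDA can be sketched with a toy 1-D threshold classifier carried across the intermediate domains; the data and threshold values below are illustrative, and the paper analyzes general classifiers:

```python
def gradual_self_train(theta, domains):
    """Gradual self-training sketch for a 1-D threshold classifier: on each
    unlabeled intermediate domain, pseudo-label the points with the current
    threshold, then refit the threshold to the midpoint of the two
    pseudo-class means."""
    for xs in domains:
        pos = [x for x in xs if x >= theta]
        neg = [x for x in xs if x < theta]
        if pos and neg:  # refit only when both pseudo-classes are present
            theta = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0
    return theta

# Two clusters drifting by +0.4 per intermediate domain:
# the threshold tracks the drift instead of failing on the final domain.
path = [[0.0, 1.0], [0.4, 1.4], [0.8, 1.8]]
final_theta = gradual_self_train(0.5, path)
```

Adapting directly to the last domain with the initial threshold would misclassify the shifted lower cluster; walking the path keeps the classifier between the clusters at every step.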
no code implementations • CVPR 2022 • Jiaxi Wu, Jiaxin Chen, Mengzhe He, Yiru Wang, Bo Li, Bingqi Ma, Weihao Gan, Wei Wu, Yali Wang, Di Huang
Specifically, TRKP adopts the teacher-student framework, where the multi-head teacher network is built to extract knowledge from labeled source domains and guide the student network to learn detectors in unlabeled target domain.
no code implementations • 7 Apr 2022 • JieLin Qiu, Jiacheng Zhu, Mengdi Xu, Franck Dernoncourt, Trung Bui, Zhaowen Wang, Bo Li, Ding Zhao, Hailin Jin
Multimedia summarization with multimodal output can play an essential role in real-world applications, e.g., automatically generating cover images and titles for news articles or providing introductions to online videos.
1 code implementation • CVPR 2022 • Qiuhong Shen, Lei Qiao, Jinyang Guo, Peixia Li, Xin Li, Bo Li, Weitao Feng, Weihao Gan, Wei Wu, Wanli Ouyang
As unlimited self-supervision signals can be obtained by tracking a video along a cycle in time, we investigate evolving a Siamese tracker by tracking videos forward-backward.
no code implementations • 4 Apr 2022 • Mansur Arief, Zhepeng Cen, Zhenyuan Liu, Zhiyuang Huang, Henry Lam, Bo Li, Ding Zhao
In this work, we present the Deep Importance Sampling (Deep IS) framework, which utilizes a deep neural network to obtain an efficient IS distribution on par with the state-of-the-art: it requires a sample size 43 times smaller than naive sampling to achieve 10% relative error, while producing a much less conservative estimate.
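For intuition, a generic importance-sampling estimator (not the paper's learned, deep-network proposal) for a Gaussian tail probability looks like this; the shifted proposal makes the rare event common and the likelihood ratio corrects the bias:

```python
import math
import random

def importance_estimate(c, n, seed=0):
    """Estimate the tail probability p = P(X > c) for X ~ N(0, 1) by sampling
    from the shifted proposal N(c, 1) and reweighting by the likelihood
    ratio phi(x) / phi(x - c) = exp(-c*x + c*c/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(c, 1.0)                        # proposal draw
        if x > c:                                    # rare-event indicator
            total += math.exp(-c * x + c * c / 2.0)  # importance weight
    return total / n

# For c = 3 the true p is about 1.35e-3; naive Monte Carlo would need
# on the order of 1/p samples to even observe the event, while the
# shifted proposal hits it roughly half the time.
estimate = importance_estimate(3.0, 20000)
```

The sample-size savings the entry reports come from exactly this variance-reduction effect, with the proposal chosen by a neural network instead of a hand-picked shift.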
no code implementations • 29 Mar 2022 • Bo Li, Xiao-Liang Guo, Yu-Tao Li
This work describes a thermoelectric generation system with an optimized heat sink for wireless monitoring sensor power supply at high temperature.
1 code implementation • 19 Mar 2022 • Jiacheng Zhu, Gregory Darnell, Agni Kumar, Ding Zhao, Bo Li, XuanLong Nguyen, Shirley You Ren
The proposed method learns an individual-specific predictive model from heterogeneous observations, and enables estimation of an optimal transport map that yields a push forward operation onto the demographic features for each task.
1 code implementation • CVPR 2022 • Haoxiang Wang, Yite Wang, Ruoyu Sun, Bo Li
We show that the performance of MetaNTK-NAS is comparable to or better than the state-of-the-art NAS method designed for few-shot learning, while enjoying more than 100x speedup.
1 code implementation • ICLR 2022 • Fan Wu, Linyi Li, Chejian Xu, huan zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li
We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications for both per-state action stability and the cumulative reward bound are efficient and tight; (3) the certifications for different training algorithms and environments differ, implying their intrinsic robustness properties.
no code implementations • CVPR 2022 • Zhaoyu Chen, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Wenqiang Zhang
To move towards a practical certifiable patch defense, we introduce Vision Transformer (ViT) into the framework of Derandomized Smoothing (DS).
no code implementations • 14 Mar 2022 • Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, Bo Li
Existing machine unlearning techniques focus on centralized training, where access to all holders' training data is a must for the server to conduct the unlearning process.
1 code implementation • 10 Mar 2022 • BoYu Chen, Peixia Li, Lei Bai, Lei Qiao, Qiuhong Shen, Bo Li, Weihao Gan, Wei Wu, Wanli Ouyang
Exploiting a general-purpose neural architecture to replace hand-wired designs or inductive biases has recently drawn extensive interest.
no code implementations • 9 Mar 2022 • Ziyue Li, Kan Ren, Xinyang Jiang, Bo Li, Haipeng Zhang, Dongsheng Li
Fine-tuning pretrained models is a common practice in domain generalization (DG) tasks.
Ranked #3 on Domain Generalization on VLCS
no code implementations • 28 Feb 2022 • Bo Li, Ting Wang, Peng Yang, Mingsong Chen, Shui Yu, Mounir Hamdi
To support the needs of ever-growing cloud-based services, the number of servers and network devices in data centers is increasing exponentially, which in turn results in high complexities and difficulties in network optimization.
no code implementations • 25 Feb 2022 • Bo Li, Mikkel N. Schmidt, Tommy S. Alstrøm
We propose a new machine learning technique for Raman spectrum matching, based on contrastive representation learning, that requires no preprocessing and works with as little as a single reference spectrum from each class.
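The matching step can be sketched as nearest-reference classification in the learned embedding space; the embeddings and class names below are made up for illustration:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match(query_embedding, references):
    """Nearest-reference matching: with a contrastively trained embedding,
    a single labeled reference embedding per class suffices; classify the
    query by its most cosine-similar reference."""
    return max(references, key=lambda lbl: cosine(query_embedding, references[lbl]))

refs = {"polystyrene": [1.0, 0.0], "glucose": [0.0, 1.0]}  # illustrative
label = match([0.9, 0.1], refs)
```

The contrastive objective is what makes same-class spectra cluster tightly enough that this one-reference-per-class lookup works without preprocessing.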
1 code implementation • ACL 2022 • Guanglin Niu, Bo Li, Yongfei Zhang, ShiLiang Pu
The previous knowledge graph embedding (KGE) techniques suffer from invalid negative sampling and the uncertainty of fact-view link prediction, limiting KGC's performance.
1 code implementation • 9 Feb 2022 • Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors.
no code implementations • 8 Feb 2022 • Trevor Harris, Bo Li, Ryan Sriver
Multi-model ensemble analysis integrates information from multiple climate models into a unified projection.
1 code implementation • 8 Feb 2022 • Boxin Wang, Wei Ping, Chaowei Xiao, Peng Xu, Mostofa Patwary, Mohammad Shoeybi, Bo Li, Anima Anandkumar, Bryan Catanzaro
In this work, we systematically explore domain-adaptive training to reduce the toxicity of language models.
1 code implementation • 3 Feb 2022 • Maurice Weber, Linyi Li, Boxin Wang, Zhikuan Zhao, Bo Li, Ce Zhang
As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- these techniques often do not scale to large-scale datasets with modern deep neural networks or cannot handle loss functions that may be non-smooth, such as the 0-1 loss.
no code implementations • 3 Feb 2022 • Xiaojun Xu, Jacky Yibo Zhang, Evelyn Ma, Danny Son, Oluwasanmi Koyejo, Bo Li
We propose a general theoretical framework proving that factors involving the model function class regularization are sufficient conditions for relative domain transferability.
1 code implementation • 30 Jan 2022 • Wenda Chu, Linyi Li, Bo Li
In this paper, we propose a transformation-specific smoothing framework TPC, which provides tight and scalable robustness guarantees for point cloud models against semantic transformation attacks.
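The flavor of transformation-specific smoothing can be sketched for rotations: the smoothed classifier majority-votes the base classifier over randomly rotated inputs. The 2-D points and toy base classifier are illustrative; TPC's actual certificates are derived analytically per transformation:

```python
import math
import random

def smoothed_predict(classify, points, sigma=0.1, n=200, seed=0):
    """Transformation-specific smoothing sketch: return the majority vote
    of the base classifier over point clouds perturbed by random z-axis
    rotations with angle theta ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n):
        th = rng.gauss(0.0, sigma)
        c, s = math.cos(th), math.sin(th)
        rotated = [(c * x - s * y, s * x + c * y) for x, y in points]
        lbl = classify(rotated)
        votes[lbl] = votes.get(lbl, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: label depends on the sign of the first point's x.
base = lambda pts: "right" if pts[0][0] > 0.0 else "left"
vote = smoothed_predict(base, [(1.0, 0.0)])
```

A sufficiently large vote margin under this smoothing is what a certificate converts into a guaranteed radius of rotations the prediction withstands.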
1 code implementation • 30 Jan 2022 • Haoxiang Wang, Haozhe Si, Bo Li, Han Zhao
Our first algorithm, ISR-Mean, can identify the subspace spanned by invariant features from the first-order moments of the class-conditional distributions, and achieve provable domain generalization with $d_s+1$ training environments under the data model of Rosenfeld et al. (2021).
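An axis-aligned toy version of the ISR-Mean idea: dimensions whose class-conditional mean is stable across environments are candidates for the invariant subspace. ISR-Mean itself recovers a general subspace from the first-order moments, not just coordinate axes:

```python
def stable_dims(class_means, keep=1):
    """Given one class's mean feature vector per training environment,
    rank dimensions by how much that class-conditional mean varies across
    environments and keep the most stable ones (invariant candidates)."""
    def spread(j):
        col = [m[j] for m in class_means]
        mu = sum(col) / len(col)
        return sum((v - mu) ** 2 for v in col)
    return sorted(range(len(class_means[0])), key=spread)[:keep]

# Dim 0 is stable across three environments; dim 1 is spurious and drifts.
dims = stable_dims([[1.0, 3.0], [1.0, -2.0], [1.0, 5.0]])
```

Projecting onto the stable dimensions and training only on them is the step that yields the provable generalization with d_s + 1 environments.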
1 code implementation • 28 Jan 2022 • Zuxin Liu, Zhepeng Cen, Vladislav Isenbaev, Wei Liu, Zhiwei Steven Wu, Bo Li, Ding Zhao
Safe reinforcement learning (RL) aims to learn policies that satisfy certain constraints before deploying them to safety-critical applications.
no code implementations • 25 Jan 2022 • Chao Zhang, Bo Li, Zhiyun Lu, Tara N. Sainath, Shuo-Yiin Chang
The recurrent neural network transducer (RNN-T) has recently become the mainstream end-to-end approach for streaming automatic speech recognition (ASR).
1 code implementation • 24 Jan 2022 • Bo Li, Qiulin Wang, JiQuan Pei, Yu Yang, Xiangyang Ji
First, we propose a novel approach to disentangle latent subspace semantics by exploiting existing face analysis models, e.g., face parsers and face landmark detectors.
no code implementations • 17 Jan 2022 • Jie Song, Huawei Yi, Wenqian Xu, Xiaohui Li, Bo Li, Yuanyuan Liu
Perceptual loss addresses the problem that per-pixel difference losses cause reconstructed images to be overly smooth, and it has driven significant progress in the field of single image super-resolution reconstruction.
1 code implementation • CVPR 2022 • Bo Li, Yongqiang Yao, Jingru Tan, Gang Zhang, Fengwei Yu, Jianwei Lu, Ye Luo
The conventional focal loss balances the training process with the same modulating factor for all categories, thus failing to handle the long-tailed problem.
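For reference, the standard focal loss and a category-dependent modulating factor can be sketched as follows; the per-category gamma values are illustrative, not the paper's actual schedule:

```python
import math

def focal_loss(p, gamma):
    """Binary focal loss for a positive example with predicted probability p:
    -(1 - p)^gamma * log(p). Setting gamma = 0 recovers plain cross-entropy;
    larger gamma down-weights well-classified (high-p) examples more."""
    return -((1.0 - p) ** gamma) * math.log(p)

# A category-dependent modulating factor (hypothetical values): easy, head
# categories can be down-weighted more aggressively than tail categories,
# instead of sharing one gamma for all classes.
gamma_by_category = {"head": 2.0, "tail": 1.0}
loss_head = focal_loss(0.9, gamma_by_category["head"])
loss_tail = focal_loss(0.9, gamma_by_category["tail"])
```

With the same confident prediction (p = 0.9), the head category's larger gamma shrinks its loss more, letting gradients concentrate on tail categories.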
1 code implementation • CVPR 2022 • Jie Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, Chao Wu
The proposed method can efficiently imitate the target model through a small number of queries and achieve high attack success rate.
1 code implementation • CVPR 2022 • Shaoqian Wang, Bo Li, Yuchao Dai
Specifically, a lightweight 3D CNN is utilized to generate the coarsest initial depth map which is essential to launch the GRU and guarantee a fast convergence.
1 code implementation • CVPR 2022 • Yijie Zhong, Bo Li, Lv Tang, Senyun Kuang, Shuang Wu, Shouhong Ding
We first design a novel frequency enhancement module (FEM) to dig clues of camouflaged objects in the frequency domain.
1 code implementation • 23 Dec 2021 • Jie Zhang, Chen Chen, Bo Li, Lingjuan Lyu, Shuang Wu, Shouhong Ding, Chunhua Shen, Chao Wu
One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round.
1 code implementation • 15 Dec 2021 • Xiaohua Chen, Yucan Zhou, Dayan Wu, Wanqian Zhang, Yu Zhou, Bo Li, Weiping Wang
Since the covariance matrix of each category represents the feature transformation directions, we can sample new directions from similar categories to generate definitely different instances.
Ranked #47 on Long-tail Learning on CIFAR-10-LT (ρ=10)
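A 1-D caricature of the covariance-transfer idea above: sample variation for a tail class using a similar head class's spread around the tail class's own mean. All values are illustrative:

```python
import random

def augment_tail(tail_mean, donor_std, n=2000, seed=0):
    """Generate new tail-class features by borrowing a similar (donor) head
    class's variation: offsets are drawn with the donor's standard deviation
    but centered on the tail class's own mean, yielding genuinely different
    instances the tail class's few samples could not provide."""
    rng = random.Random(seed)
    return [tail_mean + rng.gauss(0.0, donor_std) for _ in range(n)]

samples = augment_tail(tail_mean=4.0, donor_std=1.0)
sample_mean = sum(samples) / len(samples)
```

In the paper the transferred quantity is the full covariance matrix, i.e., multi-dimensional transformation directions rather than a single scalar spread.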
2 code implementations • CVPR 2022 • Dan Hendrycks, Andy Zou, Mantas Mazeika, Leonard Tang, Bo Li, Dawn Song, Jacob Steinhardt
In real-world applications of machine learning, reliable and safe systems must consider measures of performance beyond standard test set accuracy.
no code implementations • COLING 2022 • Guanglin Niu, Bo Li, Yongfei Zhang, ShiLiang Pu
Knowledge graph (KG) inference aims to address the natural incompleteness of KGs, including rule learning-based and KG embedding (KGE) models.
no code implementations • NeurIPS 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
no code implementations • NeurIPS 2021 • Bo Li, Minming Li, Ruilong Zhang
We study a fair resource scheduling problem, where a set of interval jobs are to be allocated to heterogeneous machines controlled by intellectual agents. Each job is associated with a release time, deadline, and processing time such that it can be processed only if its complete processing period lies between its release time and deadline.
no code implementations • 19 Nov 2021 • Tao Bai, Jun Zhao, Jinlin Zhu, Shoudong Han, Jiefeng Chen, Bo Li, Alex Kot
Through extensive experiments, AI-GAN achieves high attack success rates, outperforming existing methods, and reduces generation time significantly.
no code implementations • 15 Nov 2021 • Junwen Bai, Bo Li, Yu Zhang, Ankur Bapna, Nikhil Siddhartha, Khe Chai Sim, Tara N. Sainath
Our average WER across all languages outperforms the average monolingual baseline by 33.3%, and the state-of-the-art 2-stage XLSR by 32%.
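Figures like these are usually relative WER reductions; under that assumption the arithmetic is simply:

```python
def relative_wer_reduction(baseline_wer, new_wer):
    """Relative (not absolute) word-error-rate reduction, in percent."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# E.g. a hypothetical baseline WER of 30% cut to 20% is a 33.3% relative
# gain, even though the absolute improvement is only 10 points.
gain = relative_wer_reduction(30.0, 20.0)
```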
1 code implementation • 9 Nov 2021 • Qiulin Wang, Lu Zhang, Bo Li
On the other hand, some area of the generated image might be occluded in the source image, which makes it difficult for GAN to generate realistic appearance.
no code implementations • 4 Nov 2021 • Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li
In this paper, we present Adversarial GLUE (AdvGLUE), a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks.
Ranked #1 on Adversarial Robustness on AdvGLUE
no code implementations • 26 Oct 2021 • Wenhao Ding, Haohong Lin, Bo Li, Ding Zhao
Generating safety-critical scenarios, which are crucial yet difficult to collect, provides an effective way to evaluate the robustness of autonomous driving systems.
1 code implementation • 25 Oct 2021 • Dan Hendrycks, Mantas Mazeika, Andy Zou, Sahil Patel, Christine Zhu, Jesus Navarro, Dawn Song, Bo Li, Jacob Steinhardt
When making everyday decisions, people are guided by their conscience, an internal sense of right and wrong.
no code implementations • 25 Oct 2021 • Yijie Zhong, Bo Li, Lv Tang, Hao Tang, Shouhong Ding
With a lightweight basic convolution block, we build a two-stages framework: Segmentation Network (SN) is designed to capture sufficient semantics and classify the pixels into unknown, foreground and background regions; Matting Refine Network (MRN) aims at capturing detailed texture information and regressing accurate alpha values.
1 code implementation • 24 Oct 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
1 code implementation • NeurIPS 2021 • Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
From this view, we identify two inherent characteristics of backdoor attacks as their weaknesses: 1) the models learn backdoored data much faster than learning with clean data, and the stronger the attack the faster the model converges on backdoored data; 2) the backdoor task is tied to a specific class (the backdoor target class).
no code implementations • WACV 2021 • Sam Leroux, Bo Li, Pieter Simoens
Automated anomaly detection in surveillance videos has attracted much interest as it provides a scalable alternative to manual monitoring.
no code implementations • 4 Oct 2021 • Bo Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, JiQuan Pei, JinFeng Yi, BoWen Zhou
In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems.
no code implementations • 29 Sep 2021 • Jiawei Zhang, Linyi Li, Bo Li
In particular, we express the domain knowledge as first-order logic rules and embed these logic rules in a probabilistic graphical model.
no code implementations • 29 Sep 2021 • Xiaosen Wang, Bhavya Kailkhura, Krishnaram Kenthapadi, Bo Li
Finally, to demonstrate the generality of I-PGD-AT, we integrate it into PGD adversarial training and show that it can even further improve the robustness.
no code implementations • 29 Sep 2021 • Chulin Xie, Yunhui Long, Pin-Yu Chen, Krishnaram Kenthapadi, Bo Li
Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users.
no code implementations • 29 Sep 2021 • Jing Liu, Chulin Xie, Krishnaram Kenthapadi, Oluwasanmi O Koyejo, Bo Li
Vertical Federated Learning (VFL) is a distributed learning paradigm that allows multiple agents to jointly train a global model when each agent holds a different subset of features for the same sample(s).
no code implementations • 27 Sep 2021 • Yu Zhang, Daniel S. Park, Wei Han, James Qin, Anmol Gulati, Joel Shor, Aren Jansen, Yuanzhong Xu, Yanping Huang, Shibo Wang, Zongwei Zhou, Bo Li, Min Ma, William Chan, Jiahui Yu, Yongqiang Wang, Liangliang Cao, Khe Chai Sim, Bhuvana Ramabhadran, Tara N. Sainath, Françoise Beaufays, Zhifeng Chen, Quoc V. Le, Chung-Cheng Chiu, Ruoming Pang, Yonghui Wu
We summarize the results of a host of efforts using giant automatic speech recognition (ASR) models pre-trained using large, diverse unlabeled datasets containing approximately a million hours of audio.
no code implementations • 27 Sep 2021 • Jonathan S. Kent, Bo Li
Deep Learning models possess two key traits that, in combination, make their use in the real world a risky prospect.
no code implementations • 9 Sep 2021 • Zhifeng Jiang, Wei Wang, Bo Li, Qiang Yang
The increasing demand for privacy-preserving collaborative learning has given rise to a new computing paradigm called federated learning (FL), in which clients collaboratively train a machine learning (ML) model without revealing their private training data.
no code implementations • 9 Sep 2021 • Yutong Wu, Ali Khodabakhsh, Bo Li, Evdokia Nikolova, Emmanouil Pountourakis
Namely, besides charging the player the reported value, the mechanism charges a penalty proportional to her inconsistent reports.
1 code implementation • 1 Sep 2021 • Wennan Chang, Pengtao Dang, Changlin Wan, Xiaoyu Lu, Yue Fang, Tong Zhao, Yong Zang, Bo Li, Chi Zhang, Sha Cao
Compared with existing spatial regression models, our proposed model assumes the existence of a few distinct regression models that are estimated based on observations exhibiting similar response-predictor relationships.
no code implementations • 30 Aug 2021 • Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.