no code implementations • EMNLP 2020 • Mi Zhang, Tieyun Qian
Moreover, we build a concept hierarchy on both the syntactic and lexical graphs for differentiating various types of dependency relations or lexical word pairs.
no code implementations • 7 Sep 2023 • Yifan Lu, Wenxuan Li, Mi Zhang, Xudong Pan, Min Yang
In this paper, we propose a novel Model Inversion-based Removal Attack (Mira), which is watermark-agnostic and effective against most mainstream black-box DNN watermarking schemes.
no code implementations • 6 Sep 2023 • Che Liu, Zhongwei Wan, Sibo Cheng, Mi Zhang, Rossella Arcucci
In the domain of cardiovascular healthcare, the Electrocardiogram (ECG) serves as a critical, non-invasive diagnostic tool.
no code implementations • 15 Jun 2023 • Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan
In order to facilitate the research in multimodal FL, we introduce FedMultimodal, the first FL benchmark for multimodal learning covering five representative multimodal applications from ten commonly used datasets with a total of eight unique modalities.
no code implementations • 3 Jun 2023 • Tuo Zhang, Tiantian Feng, Samiul Alam, Dimitrios Dimitriadis, Mi Zhang, Shrikanth S. Narayanan, Salman Avestimehr
Through comprehensive ablation analysis, we discover that the downstream model generated by synthetic data plays a crucial role in controlling the direction of gradient diversity during FL training, which enhances convergence speed and contributes to the notable accuracy boost observed with GPT-FL.
1 code implementation • 31 May 2023 • Zhongwei Wan, Che Liu, Mi Zhang, Jie Fu, Benyou Wang, Sibo Cheng, Lei Ma, César Quilodrán-Casas, Rossella Arcucci
Med-UniC reaches superior performance across 5 medical image tasks and 10 datasets encompassing over 30 diseases, offering a versatile framework for unifying multi-modal medical data within diverse linguistic communities.
1 code implementation • 20 Apr 2023 • Jialuo Du, Yidong Ren, Mi Zhang, Yunhao Liu, Zhichao Cao
The dataset shows that NELoRa can achieve a 1.84-2.35 dB SNR gain over the standard LoRa decoder.
no code implementations • 18 Apr 2023 • Yang Liu, Shen Yan, Yuge Zhang, Kan Ren, Quanlu Zhang, Zebin Ren, Deng Cai, Mi Zhang
Vision Transformers have shown great performance in single tasks such as classification and segmentation.
no code implementations • 14 Apr 2023 • Tuo Zhang, Lei Gao, Sunwoo Lee, Mi Zhang, Salman Avestimehr
However, we show empirically that this method can lead to a substantial drop in training accuracy as well as a slower convergence rate.
no code implementations • 17 Mar 2023 • Yifan Yan, Xudong Pan, Mi Zhang, Min Yang
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
no code implementations • 17 Mar 2023 • Qifan Xiao, Xudong Pan, Yifan Lu, Mi Zhang, Jiarun Dai, Min Yang
In this paper, we propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector to eliminate forged obstacles in which a major proportion of local parts have low objectness, i.e., a low degree of belonging to a real object.
no code implementations • CVPR 2023 • Daizong Ding, Erling Jiang, Yuanmin Huang, Mi Zhang, Wenxuan Li, Min Yang
Recently, deep neural networks have shown great success on 3D point cloud classification tasks, which simultaneously raises the concern of adversarial attacks that cause severe damage to real-world applications.
no code implementations • CVPR 2023 • Bingnan Yang, Mi Zhang, Zhan Zhang, Zhili Zhang, Xiangyun Hu
In this work, we propose an innovative class-agnostic model, namely TopDiG, to directly extract topological directional graphs from remote sensing images and solve these issues.
no code implementations • 9 Dec 2022 • Shen Yan, Tao Zhu, ZiRui Wang, Yuan Cao, Mi Zhang, Soham Ghosh, Yonghui Wu, Jiahui Yu
We explore an efficient approach to establish a foundational video-text model.
Ranked #1 on Video Question Answering on ActivityNet-QA (using extra training data)
1 code implementation • 6 Dec 2022 • Zhimeng Jiang, Kaixiong Zhou, Mi Zhang, Rui Chen, Xia Hu, Soo-Hyun Choi
In this work, we explicitly factor in the uncertainty of estimated ad impression values and model the risk preference of a DSP under a specific state and market environment via a sequential decision process.
2 code implementations • 3 Dec 2022 • Samiul Alam, Luyang Liu, Ming Yan, Mi Zhang
Most cross-device federated learning (FL) studies focus on the model-homogeneous setting where the global server model and local client models are identical.
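The model-homogeneous setting means the server can aggregate client updates by plain coordinate-wise averaging, since every client model has exactly the same shape as the global model. A minimal FedAvg-style sketch of that assumption in plain Python (function and variable names are illustrative, not taken from this paper):

```python
# Minimal FedAvg-style aggregation under the model-homogeneous assumption:
# every client model has the same shape as the server model, so parameters
# can be averaged coordinate-wise, weighted by local dataset size.

def fedavg(client_models, client_sizes):
    """Weighted average of identically-shaped client parameter lists."""
    total = sum(client_sizes)
    n_params = len(client_models[0])
    averaged = []
    for j in range(n_params):
        averaged.append(
            sum(m[j] * n for m, n in zip(client_models, client_sizes)) / total
        )
    return averaged

# Three clients holding the same 2-parameter model, with 10/30/60 samples.
clients = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
sizes = [10, 30, 60]
global_model = fedavg(clients, sizes)
# weighted means: (1*10+2*30+3*60)/100 = 2.5 and (2*10+4*30+6*60)/100 = 5.0
```

Model-heterogeneous FL breaks exactly this step: when client models have different shapes, coordinate-wise averaging is no longer well-defined.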
1 code implementation • 24 Nov 2022 • Huanle Zhang, Lei Fu, Mi Zhang, Pengfei Hu, Xiuzhen Cheng, Prasant Mohapatra, Xin Liu
In this paper, we propose FedTune, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements in FL training.
no code implementations • 3 Nov 2022 • Lei Fu, Huanle Zhang, Ge Gao, Mi Zhang, Xin Liu
As a privacy-preserving paradigm for training Machine Learning (ML) models, Federated Learning (FL) has received tremendous attention from both industry and academia.
no code implementations • 18 Jul 2022 • Xudong Pan, Qifan Xiao, Mi Zhang, Min Yang
To address this design flaw, we propose a simple yet effective security patch for KF-based MOT. Its core is an adaptive strategy that balances the KF's focus between observations and predictions according to the anomaly index of the observation-prediction deviation, and it has certified effectiveness against a generalized hijacking attack model.
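The idea of down-weighting anomalous observations can be sketched with a 1-D Kalman filter: when the observation deviates anomalously from the prediction, the effective measurement noise is inflated, which shrinks the Kalman gain and keeps the track closer to the prediction. The anomaly rule and constants below are illustrative assumptions, not the paper's certified construction:

```python
# 1-D Kalman filter step with an illustrative "distrust anomalous
# observations" patch: a large normalized innovation inflates R, which
# shrinks the gain K so a hijack-like observation barely moves the track.

def kf_step(x, P, z, Q=0.01, R=0.1, threshold=1.0, inflate=100.0):
    # Predict (constant-position motion model).
    x_pred, P_pred = x, P + Q
    # Anomaly index: squared innovation normalized by its variance.
    innovation = z - x_pred
    anomaly = innovation ** 2 / (P_pred + R)
    # Adaptive patch: inflate measurement noise for anomalous observations.
    R_eff = R * inflate if anomaly > threshold else R
    # Update.
    K = P_pred / (P_pred + R_eff)
    x_new = x_pred + K * innovation
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
x, P = kf_step(x, P, z=0.1)    # benign observation: track follows it
x_benign = x
x, P = kf_step(x, P, z=50.0)   # hijack-like jump: gain is suppressed
```

Without the patch the second update would drag the state most of the way to 50; with it, the state moves by well under one unit.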
no code implementations • 29 Jun 2022 • Xudong Pan, Yifan Yan, Shengyao Zhang, Mi Zhang, Min Yang
In this paper, we present a novel insider attack called Matryoshka, which employs an irrelevant scheduled-to-publish DNN model as a carrier model for covert transmission of multiple secret models which memorize the functionality of private ML data stored in local data centers.
no code implementations • 30 Apr 2022 • Yifan Yan, Xudong Pan, Yining Wang, Mi Zhang, Min Yang
On 9 state-of-the-art white-box watermarking schemes and a broad set of industry-level DNN architectures, our attack for the first time reduces the embedded identity message in the protected models to near-randomness.
1 code implementation • 11 Mar 2022 • Yu Zheng, Zhi Zhang, Shen Yan, Mi Zhang
In this work, instead of fixing a set of hand-picked default augmentations alongside the searched data augmentations, we propose a fully automated approach for data augmentation search named Deep AutoAugment (DeepAA).
Ranked #1 on Data Augmentation on ImageNet
no code implementations • 22 Feb 2022 • Mi Zhang, Tieyun Qian, Ting Zhang
In this paper, we formulate the problem of automatically generating CAD for RC tasks from an entity-centric viewpoint, and develop a novel approach to derive contextual counterfactuals for entities.
1 code implementation • CVPR 2022 • Shen Yan, Xuehan Xiong, Anurag Arnab, Zhichao Lu, Mi Zhang, Chen Sun, Cordelia Schmid
Video understanding requires reasoning at multiple spatiotemporal resolutions -- from short fine-grained motions to events taking place over longer durations.
Ranked #4 on Action Recognition on EPIC-KITCHENS-100 (using extra training data)
no code implementations • 15 Nov 2021 • Tuo Zhang, Lei Gao, Chaoyang He, Mi Zhang, Bhaskar Krishnamachari, Salman Avestimehr
In this paper, we will discuss the opportunities and challenges of FL in IoT platforms, as well as how it can enable diverse IoT applications.
1 code implementation • 6 Oct 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated learning (FL) hyper-parameters significantly affect the training overheads in terms of computation time, transmission time, computation load, and transmission load.
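One simple way to trade off the four overhead dimensions when tuning is a preference-weighted sum; the sketch below is a toy illustration of that idea, not FedTune's actual algorithm, and all weights are hypothetical:

```python
# Toy preference-weighted cost over the four FL overhead dimensions
# (computation time, transmission time, computation load, transmission
# load). The weighting scheme is illustrative, not FedTune's method.

def overall_cost(comp_time, trans_time, comp_load, trans_load, prefs):
    """prefs: weights for (CompT, TransT, CompL, TransL), summing to 1."""
    metrics = (comp_time, trans_time, comp_load, trans_load)
    return sum(w * m for w, m in zip(prefs, metrics))

# A transmission-constrained application weights TransT/TransL heavily,
# so a hyper-parameter config with cheap transmission scores better.
prefs = (0.1, 0.4, 0.1, 0.4)
config_a = overall_cost(10.0, 2.0, 5.0, 1.0, prefs)  # 1.0+0.8+0.5+0.4 = 2.7
config_b = overall_cost(4.0, 8.0, 2.0, 6.0, prefs)   # 0.4+3.2+0.2+2.4 = 6.2
```

Under these preferences config A wins despite its higher computation time, which is the point of making the tuning objective application-specific.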
no code implementations • 29 Sep 2021 • Huanle Zhang, Mi Zhang, Xin Liu, Prasant Mohapatra, Michael DeLucia
Federated Learning (FL) is a distributed model training paradigm that preserves clients' data privacy.
no code implementations • 28 Jul 2021 • Mi Zhang, Tieyun Qian
Specifically, we first develop a multi-scale convolutional neural network to aggregate the non-successive mainstays in the lexical sequence.
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
1 code implementation • 14 Feb 2021 • Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang
Our experiments show that CATE is beneficial to the downstream search, especially in the large search space.
no code implementations • 26 Oct 2020 • Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Min Yang
Among existing privacy attacks on the gradients of neural networks, the data reconstruction attack, which reverse-engineers the training batch from the gradient, poses a severe threat to the private training data.
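Even a single linear layer makes the threat concrete: for y = Wx + b, the per-sample gradients satisfy dL/dW[i][j] = dL/db[i] * x[j], so the input x can be read off by dividing any nonzero row of the weight gradient by the matching bias gradient. A minimal pure-Python illustration of this well-known observation (not this paper's attack, which concerns batches and deeper networks):

```python
# For a linear layer y = W x + b, the gradients of any loss L satisfy
# dL/dW[i][j] = dL/db[i] * x[j], so a single per-sample gradient leaks
# the training input x exactly.

def linear_grads(x, dLdy):
    """Gradients w.r.t. W and b for y = Wx + b, given upstream dL/dy."""
    dW = [[g * xj for xj in x] for g in dLdy]
    db = list(dLdy)
    return dW, db

def reconstruct_input(dW, db):
    """Divide a nonzero row of dW by the matching entry of db."""
    i = next(i for i, g in enumerate(db) if g != 0)
    return [w / db[i] for w in dW[i]]

x_private = [0.5, -1.0, 3.0]          # the "secret" training input
dW, db = linear_grads(x_private, dLdy=[2.0, -0.7])
recovered = reconstruct_input(dW, db)  # recovers [0.5, -1.0, 3.0] exactly
```

In this toy case recovery is exact; batched gradients average the per-sample terms, which is why practical reconstruction attacks need optimization rather than a closed-form division.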
no code implementations • 17 Oct 2020 • Mi Zhang, Faen Zhang, Nicholas D. Lane, Yuanchao Shu, Xiao Zeng, Biyi Fang, Shen Yan, Hui Xu
The era of edge computing has arrived.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
1 code implementation • NeurIPS 2020 • Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, Mi Zhang
Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search on such representations, which incurs search bias.
no code implementations • ICLR 2021 • Ruozi Huang, Huang Hu, Wei Wu, Kei Sawada, Mi Zhang, Daxin Jiang
In this paper, we formalize the music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture to efficiently process long sequences of music features and capture the fine-grained correspondence between music and dance.
Ranked #1 on Motion Synthesis on BRACE
1 code implementation • 15 Apr 2020 • Mi Zhang, Ali Siahkoohi, Felix J. Herrmann
Because different frequency slices share information, we propose the use of transfer training to make our approach computationally more efficient by warm-starting the training with CNN weights obtained from a neighboring frequency slice.
2 code implementations • ECCV 2020 • Taojiannan Yang, Sijie Zhu, Chen Chen, Shen Yan, Mi Zhang, Andrew Willis
We propose the width-resolution mutual learning method (MutualNet) to train a network that is executable at dynamic resource constraints to achieve adaptive accuracy-efficiency trade-offs at runtime.
no code implementations • 25 Sep 2019 • Taojiannan Yang, Sijie Zhu, Yan Shen, Mi Zhang, Andrew Willis, Chen Chen
We propose a framework to mutually learn from different input resolutions and network widths.
no code implementations • 31 Aug 2019 • Shen Yan, Biyi Fang, Faen Zhang, Yu Zheng, Xiao Zeng, Hui Xu, Mi Zhang
Without the constraint imposed by the hand-designed heuristics, our searched networks contain more flexible and meaningful architectures that existing weight sharing based NAS approaches are not able to discover.
no code implementations • 16 Aug 2019 • Ruozi Huang, Mi Zhang, Xudong Pan, Beina Sheng
Style is ubiquitous in our daily language use, but what is language style to learning machines?
no code implementations • 23 Oct 2018 • Biyi Fang, Xiao Zeng, Mi Zhang
These systems usually run multiple applications concurrently and their available resources at runtime are dynamic due to events such as starting new applications, closing existing applications, and application priority changes.
no code implementations • ICML 2018 • Xudong Pan, Mi Zhang, Daizong Ding
Recently, a unified model for image-to-image translation tasks within the adversarial learning framework has attracted widespread research interest among computer vision practitioners.
no code implementations • 21 Feb 2018 • Biyi Fang, Jillian Co, Mi Zhang
There is an undeniable communication barrier between deaf people and hearing people.
no code implementations • CVPR 2015 • Mi Zhang, Jian Yao, Menghan Xia, Kai Li, Yi Zhang, Yaping Liu
Fisheye image rectification and estimation of intrinsic parameters for real scenes have been addressed in the literature by using line information on the distorted images.