Search Results for author: Jianyu Wang

Found 40 papers, 15 papers with code

Semantic Part Segmentation using Compositional Model combining Shape and Appearance

no code implementations • CVPR 2015 • Jianyu Wang, Alan Yuille

This is more challenging than standard object detection, object segmentation and pose estimation tasks because semantic parts of animals often have similar appearance and highly varying shapes.

Object · Object Detection +4

Unsupervised learning of object semantic parts from internal states of CNNs by population encoding

1 code implementation • 21 Nov 2015 • Jianyu Wang, Zhishuai Zhang, Cihang Xie, Vittal Premachandran, Alan Yuille

We address the key question of how object part representations can be found from the internal states of CNNs that are trained for high-level tasks, such as object classification.

Clustering · Keypoint Detection +1

Visual Concepts and Compositional Voting

no code implementations • 13 Nov 2017 • Jianyu Wang, Zhishuai Zhang, Cihang Xie, Yuyin Zhou, Vittal Premachandran, Jun Zhu, Lingxi Xie, Alan Yuille

We use clustering algorithms to study the population activities of the features and extract a set of visual concepts which we show are visually tight and correspond to semantic parts of vehicles.

Clustering · Semantic Part Detection

Cooperative SGD: A Unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms

no code implementations • 22 Aug 2018 • Jianyu Wang, Gauri Joshi

Communication-efficient SGD algorithms, which allow nodes to perform local updates and periodically synchronize local models, are highly effective in improving the speed and scalability of distributed SGD.
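
For illustration, a minimal single-process sketch of the local-update/periodic-averaging pattern this framework unifies; the synchronization period `tau`, the learning rate, and the simulated worker list are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def cooperative_sgd(workers, loaders, loss_fn, lr=0.01, tau=10, rounds=100):
    """Local-update SGD with periodic model averaging (single-process sketch).

    Each simulated worker takes `tau` local SGD steps on its own data shard,
    then all worker models are averaged. Assumes loaders yield enough batches.
    """
    iters = [iter(dl) for dl in loaders]
    for _ in range(rounds):
        for model, it in zip(workers, iters):
            for _ in range(tau):                    # tau local updates
                x, y = next(it)
                loss = loss_fn(model(x), y)
                model.zero_grad()
                loss.backward()
                with torch.no_grad():
                    for p in model.parameters():
                        p -= lr * p.grad
        with torch.no_grad():                       # periodic synchronization
            for group in zip(*[m.parameters() for m in workers]):
                avg = torch.stack([p.data for p in group]).mean(dim=0)
                for p in group:
                    p.data.copy_(avg)
```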

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD

no code implementations • 19 Oct 2018 • Jianyu Wang, Gauri Joshi

Large-scale machine learning training, in particular distributed stochastic gradient descent, needs to be robust to inherent system variability such as node straggling and random communication delays.

Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks

1 code implementation • ICCV 2019 • Jianyu Wang, Haichao Zhang

To generate the adversarial image, we use one-step targeted attack with the target label being the most confusing class.
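
A sketch of the quoted one-step targeted attack, where the target is the highest-scoring wrong class; the step size `eps` and the pixel clamp are assumptions, and the paper's full bilateral scheme (which also perturbs labels) is not shown.

```python
import torch
import torch.nn.functional as F

def one_step_most_confusing_attack(model, x, y, eps=8 / 255):
    """One-step targeted attack toward the most confusing (wrong) class."""
    with torch.no_grad():
        logits = model(x)
        logits.scatter_(1, y.unsqueeze(1), float('-inf'))  # mask the true class
        target = logits.argmax(dim=1)                      # most confusing class
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), target).backward()
    # descend the targeted loss: move the image toward the target class
    return (x_adv - eps * x_adv.grad.sign()).clamp(0, 1).detach()
```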

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

4 code implementations • 23 May 2019 • Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, Soummya Kar

This paper studies the error-runtime trade-off typically encountered in decentralized training based on stochastic gradient descent (SGD) over a given network topology.
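
A toy sketch of the matching decomposition sampling named in the title: the base topology is pre-decomposed into matchings, and each matching is activated independently per round. The single uniform activation probability is a simplification; MATCHA optimizes per-matching probabilities under a communication budget.

```python
import random

def sample_round_topology(matchings, p=0.5):
    """Return this round's communication edges: the union of the matchings
    that were independently activated with probability p."""
    return [edge for m in matchings if random.random() < p for edge in m]

# e.g. a 4-node ring decomposed into two perfect matchings
ring_matchings = [[(0, 1), (2, 3)], [(1, 2), (3, 0)]]
print(sample_round_topology(ring_matchings))
```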

SlowMo: Improving Communication-Efficient Distributed SGD with Slow Momentum

1 code implementation • ICLR 2020 • Jianyu Wang, Vinayak Tantia, Nicolas Ballas, Michael Rabbat

We provide theoretical convergence guarantees showing that SlowMo converges to a stationary point of smooth non-convex losses.

Blocking · Distributed Optimization +3
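
A sketch of SlowMo's outer update on a single parameter tensor: after the base optimizer (e.g., local SGD with averaging) produces `y` from `x`, a slow momentum buffer is updated and applied. The defaults for `beta` and the slow learning rate are illustrative.

```python
def slowmo_outer_step(x, y, u, base_lr, slow_lr=1.0, beta=0.7):
    """SlowMo-style slow momentum update (per-parameter tensors, sketch).

    x: parameters before the inner phase; y: parameters after the base
    optimizer's inner steps (and averaging); u: slow momentum buffer.
    """
    u = beta * u + (x - y) / base_lr   # accumulate the (rescaled) inner progress
    x = x - slow_lr * base_lr * u      # take the slow momentum step
    return x, u
```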

Deep topic modeling by multilayer bootstrap network and lasso

no code implementations • 24 Oct 2019 • Jianyu Wang, Xiao-Lei Zhang

Specifically, we first apply multilayer bootstrap network (MBN), which is an unsupervised deep model, to reduce the dimension of documents, and then use the low-dimensional data representations or their clustering results as the target of supervised Lasso for topic word discovery.

Clustering · Dimensionality Reduction +1
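
A sketch of the second stage described above, with k-means standing in for the clustering of MBN's low-dimensional representations: a one-vs-rest Lasso per topic is fit on the document-term matrix, and the largest positive coefficients are read off as topic words. All hyperparameters here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

def lasso_topic_words(doc_term, low_dim, vocab, n_topics=10, alpha=0.01, top_k=10):
    """Cluster low-dimensional document representations, then use a
    supervised Lasso per topic to pick discriminative topic words."""
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(low_dim)
    topics = []
    for t in range(n_topics):
        target = (labels == t).astype(float)           # one-vs-rest topic indicator
        coef = Lasso(alpha=alpha).fit(doc_term, target).coef_
        top = np.argsort(coef)[::-1][:top_k]           # largest positive weights
        topics.append([vocab[i] for i in top])
    return topics
```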

Overlap Local-SGD: An Algorithmic Approach to Hide Communication Delays in Distributed SGD

1 code implementation • 21 Feb 2020 • Jianyu Wang, Hao Liang, Gauri Joshi

In this paper, we propose an algorithmic approach named Overlap-Local-SGD (and its momentum variant) to overlap communication with computation so as to speed up the distributed training procedure.
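
A sketch of the overlap idea using a non-blocking all-reduce: a parameter snapshot is averaged in the background while local steps continue, and the finished average is then mixed back in. The 0.5 mixing weight and the round structure are assumptions, not the paper's exact update rule.

```python
import torch
import torch.distributed as dist

def overlap_round(model, data_iter, loss_fn, lr=0.01, tau=10, mix=0.5):
    """One round of Overlap-Local-SGD-style training (sketch)."""
    snapshot = [p.detach().clone() for p in model.parameters()]
    handles = [dist.all_reduce(s, async_op=True) for s in snapshot]

    for _ in range(tau):                      # computation overlaps communication
        x, y = next(data_iter)
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                p -= lr * p.grad

    for h in handles:                         # communication should be done by now
        h.wait()
    with torch.no_grad():
        world = dist.get_world_size()
        for p, s in zip(model.parameters(), snapshot):
            p.mul_(1 - mix).add_(s / world, alpha=mix)  # pull toward the average
```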

Slow and Stale Gradients Can Win the Race

no code implementations • 23 Mar 2020 • Sanghamitra Dutta, Jianyu Wang, Gauri Joshi

Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in runtime as it waits for the slowest workers (stragglers).

Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization

1 code implementation • NeurIPS 2020 • Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor

In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round.
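
The fix this paper proposes (FedNova), sketched: each client's cumulative update is normalized by its number of local steps before aggregation, so fast clients no longer skew the implicit objective. Momentum and proximal variants from the paper are omitted.

```python
def fednova_aggregate(global_params, client_params, client_steps, weights):
    """Normalized averaging over per-parameter tensors (sketch).

    global_params: list of tensors; client_params: per-client lists of tensors;
    client_steps: local step counts tau_i; weights: client data fractions p_i.
    """
    tau_eff = sum(p * t for p, t in zip(weights, client_steps))  # effective steps
    updated = []
    for j, g in enumerate(global_params):
        # weighted average of per-step (normalized) client update directions
        d = sum(p * (g - c[j]) / t
                for p, c, t in zip(weights, client_params, client_steps))
        updated.append(g - tau_eff * d)
    return updated
```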

Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies

no code implementations • 3 Oct 2020 • Yae Jee Cho, Jianyu Wang, Gauri Joshi

Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.

Distributed Optimization · Federated Learning +1
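
The Power-of-Choice idea named in the title, sketched in two lines: sample a candidate set of clients, then keep those with the highest current local loss. Uniform candidate sampling is a simplification; the paper samples candidates according to data fraction. `local_loss` is assumed to be a callable mapping a client to its loss.

```python
import random

def power_of_choice(clients, local_loss, m=3, d=10):
    """Pick the m highest-loss clients out of d sampled candidates."""
    candidates = random.sample(clients, d)
    return sorted(candidates, key=local_loss, reverse=True)[:m]
```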

Deep NMF Topic Modeling

no code implementations • 24 Feb 2021 • Jianyu Wang, Xiao-Lei Zhang

In this paper, we propose a deep NMF (DNMF) topic modeling framework to alleviate the aforementioned problems.

Local Adaptivity in Federated Learning: Convergence and Consistency

no code implementations • 4 Jun 2021 • Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu, Gauri Joshi

Popular optimization algorithms of FL use vanilla (stochastic) gradient descent for both local updates at clients and global updates at the aggregating server.

Federated Learning

DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering

1 code implementation • 10 Jul 2021 • Jianyu Wang, Bing-Kun Bao, Changsheng Xu

However, existing graph-based methods fail to perform multi-step reasoning well, neglecting two properties of VideoQA: (1) even for the same video, different questions may require different numbers of video clips or objects to infer the answer through relational reasoning; (2) during reasoning, appearance and motion features exhibit a complicated interdependence, being both correlated with and complementary to each other.

Graph Attention · Question Answering +3

Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer

no code implementations • 16 Sep 2021 • Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi

Personalized federated learning (FL) aims to train model(s) that can perform well for individual clients with highly heterogeneous data and systems.

Personalized Federated Learning · Transfer Learning

FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients

no code implementations • 28 Jan 2022 • Jianyu Wang, Hang Qi, Ankit Singh Rawat, Sashank Reddi, Sagar Waghmare, Felix X. Yu, Gauri Joshi

In classical federated learning, the clients contribute to the overall training by communicating local updates for the underlying model on their private data to a coordinating server.

Federated Learning

On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data

no code implementations • 9 Jun 2022 • Jianyu Wang, Rudrajit Das, Gauri Joshi, Satyen Kale, Zheng Xu, Tong Zhang

Motivated by this observation, we propose a new quantity, average drift at optimum, to measure the effects of data heterogeneity, and explicitly use it to present a new theoretical analysis of FedAvg.

Federated Learning

Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning

2 code implementations • 30 Jun 2022 • John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, Michael Rabbat

Surprisingly, we also find that starting federated learning from a pre-trained initialization reduces the effect of both data and system heterogeneity.

Federated Learning

FedFM: Anchor-based Feature Matching for Data Heterogeneity in Federated Learning

no code implementations • 14 Oct 2022 • Rui Ye, Zhenyang Ni, Chenxin Xu, Jianyu Wang, Siheng Chen, Yonina C. Eldar

This method attempts to mitigate the negative effects of data heterogeneity in FL by aligning each client's feature space.

Federated Learning

Non-line-of-sight imaging with arbitrary illumination and detection pattern

no code implementations • 1 Nov 2022 • Xintong Liu, Jianyu Wang, Leping Xiao, Zuoqiang Shi, Xing Fu, Lingyun Qiu

Non-line-of-sight (NLOS) imaging aims at reconstructing targets obscured from the direct line of sight.

Autonomous Driving

Robust Manifold Nonnegative Tucker Factorization for Tensor Data Representation

no code implementations • 8 Nov 2022 • Jianyu Wang, Linruize Tang, Jie Chen, Jingdong Chen

Nonnegative Tucker Factorization (NTF) minimizes the Euclidean distance or Kullback-Leibler divergence between the original data and its low-rank approximation, an objective that often suffers from gross corruption or outliers and neglects the manifold structure of the data.
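
For reference, the standard (Euclidean) nonnegative Tucker objective the sentence refers to; the paper augments it with robust and manifold-regularization terms, which are not shown here.

```latex
\min_{\mathcal{G},\,U^{(1)},\,U^{(2)},\,U^{(3)} \,\ge\, 0}
\left\| \mathcal{X} - \mathcal{G} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 U^{(3)} \right\|_F^2
```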

Few-shot Non-line-of-sight Imaging with Signal-surface Collaborative Regularization

no code implementations • CVPR 2023 • Xintong Liu, Jianyu Wang, Leping Xiao, Xing Fu, Lingyun Qiu, Zuoqiang Shi

In this work, we propose a signal-surface collaborative regularization (SSCR) framework that provides noise-robust reconstructions with a minimal number of measurements.

Autonomous Driving · Bayesian Inference

Non-Line-of-Sight Imaging With Signal Superresolution Network

no code implementations • CVPR 2023 • Jianyu Wang, Xintong Liu, Leping Xiao, Zuoqiang Shi, Lingyun Qiu, Xing Fu

This paper proposes a general learning-based pipeline for increasing imaging quality with only a few scanning points.

FedDisco: Federated Learning with Discrepancy-Aware Collaboration

1 code implementation • 30 May 2023 • Rui Ye, Mingkai Xu, Jianyu Wang, Chenxin Xu, Siheng Chen, Yanfeng Wang

However, based on our empirical observations and theoretical analysis, we find that weighting by dataset size alone is not optimal, and that the discrepancy between local and global category distributions can be a beneficial and complementary indicator for determining aggregation weights.

Federated Learning
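
A sketch of discrepancy-aware aggregation weights in the spirit of FedDisco: each client's normalized dataset size is combined with the distance between its category distribution and the global one. The constants `a` and `b` and the clipped linear form are illustrative assumptions, not the paper's exact weighting.

```python
import numpy as np

def disco_weights(sizes, label_dists, a=0.5, b=0.1):
    """sizes: per-client sample counts; label_dists: (K, C) per-client
    category distributions. Returns normalized aggregation weights."""
    sizes = np.asarray(sizes, dtype=float)
    sizes /= sizes.sum()
    global_dist = sizes @ np.asarray(label_dists)          # size-weighted average
    disc = np.linalg.norm(label_dists - global_dist, axis=1)
    w = np.maximum(sizes - a * disc + b, 0.0)              # penalize discrepancy
    return w / w.sum()
```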

FedHyper: A Universal and Robust Learning Rate Scheduler for Federated Learning with Hypergradient Descent

no code implementations • 4 Oct 2023 • Ziyao Wang, Jianyu Wang, Ang Li

The theoretical landscape of federated learning (FL) is evolving rapidly, but its practical application still faces a series of intricate challenges, among which hyperparameter optimization is one of the most critical.

Federated Learning · Hyperparameter Optimization
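
FedHyper builds on hypergradient descent; shown below is the classic single-sequence hypergradient rule for a scalar learning rate, not the paper's federated scheduler: the rate grows when consecutive gradients align and shrinks when they oppose.

```python
import numpy as np

def hypergradient_lr_update(alpha, grad, prev_grad, beta=1e-3):
    """One hypergradient-descent step on a scalar learning rate (sketch)."""
    return alpha + beta * float(np.dot(grad, prev_grad))
```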

PLGSLAM: Progressive Neural Scene Representation with Local-to-Global Bundle Adjustment

no code implementations • 15 Dec 2023 • Tianchen Deng, Guole Shen, Tong Qin, Jianyu Wang, Wentao Zhao, Jingchuan Wang, Danwei Wang, Weidong Chen

To this end, we introduce PLGSLAM, a neural visual SLAM system capable of high-fidelity surface reconstruction and robust camera tracking in real-time.

Surface Reconstruction

Wasserstein Nonnegative Tensor Factorization with Manifold Regularization

no code implementations • 3 Jan 2024 • Jianyu Wang, Linruize Tang

Nonnegative tensor factorization (NTF) has become an important tool for feature extraction and part-based representation with preserved intrinsic structure information from nonnegative high-order data.

Momentum Approximation in Asynchronous Private Federated Learning

no code implementations • 14 Feb 2024 • Tao Yu, Congzheng Song, Jianyu Wang, Mona Chitnis

Asynchronous protocols have been shown to improve the scalability of federated learning (FL) with a massive number of clients.

Federated Learning

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following

no code implementations • 5 Mar 2024 • Kaiyan Zhang, Jianyu Wang, Ermo Hua, Biqing Qi, Ning Ding, BoWen Zhou

With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially that of smaller models) on personal devices such as PCs and smartphones has become a prevailing trend.

Instruction Following
