no code implementations • 6 Sep 2023 • Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
By leveraging EdgeFL, software engineers can harness the benefits of federated learning while overcoming the challenges associated with existing FL platforms/frameworks.
no code implementations • 4 Feb 2022 • Hongyi Zhang, Zhiqiang Qi, Jingya Li, Anders Aronsson, Jan Bosch, Helena Holmström Olsson
Fast and reliable wireless communication has become a critical demand in modern society.
no code implementations • 14 Dec 2021 • Hongyi Zhang, Jingya Li, Zhiqiang Qi, Xingqin Lin, Anders Aronsson, Jan Bosch, Helena Holmström Olsson
A deep reinforcement learning algorithm is designed to jointly optimize the access and backhaul antenna tilt as well as the three-dimensional location of the UAV-BS in order to best serve the on-ground MC users while maintaining a good backhaul connection.
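To make the joint action space concrete, here is a hedged sketch of how a DRL agent's action (3-D location update plus access and backhaul tilt deltas) might be encoded and applied. All field names, step bounds, and ranges are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class UavBsAction:
    """One joint action: move the UAV-BS and adjust both antenna tilts."""
    dx: float            # location update along x (metres) -- illustrative
    dy: float            # location update along y (metres)
    dz: float            # altitude update (metres)
    d_access_tilt: float    # access antenna tilt delta (degrees)
    d_backhaul_tilt: float  # backhaul antenna tilt delta (degrees)

def step(state, action, z_bounds=(50.0, 300.0), tilt_bounds=(-15.0, 15.0)):
    """Apply one joint action, clipping altitude and tilts to assumed bounds."""
    x, y, z, at, bt = state
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    return (x + action.dx,
            y + action.dy,
            clip(z + action.dz, *z_bounds),
            clip(at + action.d_access_tilt, *tilt_bounds),
            clip(bt + action.d_backhaul_tilt, *tilt_bounds))

s = step((0.0, 0.0, 100.0, 0.0, 0.0), UavBsAction(10, 0, -80, 5, -20))
print(s)  # (10.0, 0.0, 50.0, 5.0, -15.0) -- altitude and backhaul tilt clipped
```

In a full agent, a policy network would emit such an action from the observed channel-quality state; the reward would trade off on-ground MC user service against backhaul link quality.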
1 code implementation • 20 Jun 2021 • Zihan Huang, Charles Low, Mengqiu Teng, Hongyi Zhang, Daniel E. Ho, Mark S. Krass, Matthias Grabmair
Lawyers and judges spend a large amount of time researching the proper legal authority to cite while drafting decisions.
no code implementations • 27 Apr 2021 • Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu
Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments.
no code implementations • 22 Mar 2021 • Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
With the rapid development of and growing interest in ML/DL, companies are eager to apply Machine Learning/Deep Learning approaches to improve service quality and customer experience.
2 code implementations • ICLR 2022 • Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, Chong Wang
Two-party split learning is a popular technique for learning a model across feature-partitioned data.
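Two-party split learning over feature-partitioned (vertical) data can be sketched in a few lines: each party runs a bottom model on its own feature columns and shares only the cut-layer embeddings, which the label-holding party fuses in a top model. The layer sizes and the fusion-by-concatenation choice below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Features are vertically partitioned: each party sees only its own columns.
x_a = rng.normal(size=(4, 5))   # party A's feature columns
x_b = rng.normal(size=(4, 3))   # party B's feature columns (B also holds labels)

W_a = rng.normal(size=(5, 8)) * 0.1   # party A's bottom model
W_b = rng.normal(size=(3, 8)) * 0.1   # party B's bottom model

# Each party transmits only its cut-layer embedding, never raw features.
h_a = np.maximum(x_a @ W_a, 0.0)
h_b = np.maximum(x_b @ W_b, 0.0)

# The label party concatenates the embeddings and runs the top model.
W_top = rng.normal(size=(16, 1)) * 0.1
logits = np.concatenate([h_a, h_b], axis=1) @ W_top
print(logits.shape)  # (4, 1)
```

Gradients flow back through the cut layer in the reverse direction; what those exchanged embeddings and gradients leak about the labels is exactly the kind of question such work studies.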
no code implementations • 1 Jan 2021 • Hongyi Zhang, Jan Bosch, Helena Holmström Olsson
Because of characteristics such as model-only exchange and parallel training, the technique not only preserves user data privacy but also accelerates model training.
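The model-only exchange described above is the core of federated averaging (FedAvg): clients train locally and the server aggregates parameters weighted by local dataset size, so raw data never leaves the device. A minimal sketch with made-up toy client weights:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size-weighted average.

    Only parameters cross the network; each client's raw data stays local,
    which is the privacy property noted above.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with toy 4-parameter models and different dataset sizes.
clients = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]
sizes = [10, 10, 20]
global_model = fedavg(clients, sizes)
print(global_model)  # [2.75 2.75 2.75 2.75]
```

Because every client trains on its own shard in parallel between aggregation rounds, wall-clock training can also be faster than shipping all data to one machine.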
7 code implementations • ICLR 2019 • Hongyi Zhang, Yann N. Dauphin, Tengyu Ma
Normalization layers are a staple in state-of-the-art deep neural network architectures.
Ranked #9 on Image Classification on SVHN
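The entry above studies training without normalization layers; as background, the most common such layer, batch normalization, standardizes each feature over the mini-batch and then rescales it. A minimal NumPy forward pass (training-mode statistics only; inference-time running averages omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standardize each feature over the batch, then rescale and shift."""
    mean = x.mean(axis=0)               # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta         # learnable scale and shift

x = np.array([[1.0, 10.0],
              [3.0, 30.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
print(y.round(3))  # each column standardized to roughly [-1, 1]
```

Removing this layer while keeping training stable is what motivates initialization-based alternatives such as the one the paper proposes.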
no code implementations • 10 Nov 2018 • Jingzhao Zhang, Hongyi Zhang, Suvrit Sra
We study smooth stochastic optimization problems on Riemannian manifolds.
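Optimization on a Riemannian manifold replaces the Euclidean gradient step with a projection onto the tangent space followed by a retraction back to the manifold. A deterministic sketch on the unit sphere, where the objective and step size are illustrative (this is not the paper's algorithm):

```python
import numpy as np

def riemannian_gd_sphere(grad_f, x, lr=0.1, steps=200):
    """Riemannian gradient descent on the unit sphere S^{n-1}."""
    for _ in range(steps):
        g = grad_f(x)
        # Project the Euclidean gradient onto the tangent space at x.
        g_tan = g - np.dot(g, x) * x
        # Retraction: take the tangent step, then renormalize to the sphere.
        x = x - lr * g_tan
        x = x / np.linalg.norm(x)
    return x

# Toy objective f(x) = x^T A x; its minimizer on the sphere is the
# eigenvector of A with the smallest eigenvalue.
A = np.diag([3.0, 2.0, 1.0])
x0 = np.ones(3) / np.sqrt(3)
x_star = riemannian_gd_sphere(lambda x: 2 * A @ x, x0)
print(np.abs(x_star).round(3))  # ≈ [0. 0. 1.]
```

The stochastic variants analyzed in such work replace `grad_f` with noisy gradient estimates and study the resulting convergence rates under curvature assumptions.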
no code implementations • 7 Jun 2018 • Hongyi Zhang, Suvrit Sra
We propose a Riemannian version of Nesterov's Accelerated Gradient algorithm (RAGD) and show that, for geodesically smooth and strongly convex problems, RAGD converges to the minimizer with acceleration within a neighborhood of the minimizer whose radius depends on the condition number and the sectional curvature of the manifold.
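For reference, geodesic strong convexity replaces the Euclidean inner product with the inverse exponential map; the standard definition (not quoted from the paper) reads:

```latex
% f is geodesically \mu-strongly convex if, for all x, y on the manifold M,
f(y) \;\ge\; f(x)
  + \bigl\langle \operatorname{grad} f(x),\, \operatorname{Exp}_x^{-1}(y) \bigr\rangle_x
  + \frac{\mu}{2}\, d(x, y)^2,
% where Exp_x^{-1}(y) is the tangent vector at x pointing along the geodesic
% from x to y, and d(x, y) is the geodesic distance on M.
```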
no code implementations • 20 Apr 2018 • Zicheng Liao, Kevin Karsch, Hongyi Zhang, David Forsyth
We present an object relighting system that allows an artist to select an object from an image and insert it into a target scene.
71 code implementations • ICLR 2018 • Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
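mixup itself is essentially a two-line data augmentation: each training pair is a convex combination of two random examples, with the mixing weight drawn from a Beta distribution. A minimal sketch (the value α = 0.2 is a common choice, not prescribed here):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Return mixed inputs/labels: x~ = lam*x_i + (1-lam)*x_j, same for y."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))        # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]  # y as one-hot / soft labels
    return x_mix, y_mix, lam

x = np.eye(4)   # 4 toy inputs
y = np.eye(4)   # one-hot labels
x_mix, y_mix, lam = mixup_batch(x, y, rng=np.random.default_rng(0))
print(y_mix.sum(axis=1))  # mixed label rows still sum to 1
```

Training on these soft, interpolated targets is what drives the regularization effects listed above: less memorization of corrupt labels and smoother behavior between classes.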
no code implementations • 8 Feb 2017 • David Gamarnik, Quan Li, Hongyi Zhang
Under a certain incoherence assumption on $M$ and for the case when both the rank and the condition number of $M$ are bounded, it was shown in \cite{CandesRecht2009, CandesTao2010, keshavan2010, Recht2011, Jain2012, Hardt2014} that $M$ can be recovered exactly or approximately (depending on some trade-off between accuracy and computational complexity) using $O(n \, \text{poly}(\log n))$ samples in super-linear time $O(n^{a} \, \text{poly}(\log n))$ for some constant $a \geq 1$.
no code implementations • NeurIPS 2016 • Hongyi Zhang, Sashank J. Reddi, Suvrit Sra
We study optimization of finite sums of geodesically smooth functions on Riemannian manifolds.
no code implementations • 19 Feb 2016 • Hongyi Zhang, Suvrit Sra
Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces.
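Concretely, a function is geodesically convex if it is convex along every geodesic of the metric space:

```latex
% f: M \to \mathbb{R} is geodesically convex if, for every geodesic
% \gamma: [0, 1] \to M,
f(\gamma(t)) \;\le\; (1 - t)\, f(\gamma(0)) + t\, f(\gamma(1)),
\qquad t \in [0, 1].
% In a vector space, geodesics are straight-line segments
% \gamma(t) = (1 - t)x + t y, recovering ordinary convexity.
```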