Search Results for author: Tao Luo

Found 27 papers, 7 papers with code

Privacy Budget Scheduling

1 code implementation29 Jun 2021 Tao Luo, Mingen Pan, Pierre Tholoniat, Asaf Cidon, Roxana Geambasu, Mathias Lécuyer

We describe PrivateKube, an extension to the popular Kubernetes datacenter orchestrator that adds privacy as a new type of resource to be managed alongside other traditional compute resources, such as CPU, GPU, and memory.

Fairness
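
The core idea, treating differential-privacy budget as a finite, schedulable resource alongside CPU and memory, can be illustrated with a toy sketch. This is an illustration of the concept only, not the PrivateKube API; all names below are hypothetical:

```python
# Toy illustration of "privacy as a schedulable resource" (NOT the
# PrivateKube API; all names here are hypothetical): each data block
# carries a differential-privacy budget that pipelines consume, and a
# request is admitted only if every block it touches has budget left.

class PrivacyBudgetScheduler:
    def __init__(self, blocks, epsilon_per_block):
        # remaining privacy budget per data block
        self.remaining = {b: epsilon_per_block for b in blocks}

    def request(self, blocks, epsilon):
        # admit only if all requested blocks can pay the epsilon cost
        if all(self.remaining[b] >= epsilon for b in blocks):
            for b in blocks:
                self.remaining[b] -= epsilon
            return True   # the pipeline may run with this budget
        return False      # budget exhausted: queue or reject the request

sched = PrivacyBudgetScheduler(["day-1", "day-2"], epsilon_per_block=1.0)
print(sched.request(["day-1"], 0.4))   # True
print(sched.request(["day-1"], 0.7))   # False: only 0.6 remains on day-1
```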

Embedding Principle of Loss Landscape of Deep Neural Networks

no code implementations30 May 2021 Yaoyu Zhang, Zhongwang Zhang, Tao Luo, Zhi-Qin John Xu

Understanding the structure of the loss landscape of deep neural networks (DNNs) is obviously important.

Protein Folding

DTNN: Energy-efficient Inference with Dendrite Tree Inspired Neural Networks for Edge Vision Applications

no code implementations25 May 2021 Tao Luo, Wai Teng Tang, Matthew Kay Fei Lee, Chuping Qu, Weng-Fai Wong, Rick Goh

DTNN achieved significant energy savings (19.4X and 64.9X improvements on ResNet-18 and VGG-11 with ImageNet, respectively) with negligible loss of accuracy.

Quantization

Towards Understanding the Condensation of Two-layer Neural Networks at Initial Training

no code implementations25 May 2021 Zhi-Qin John Xu, Hanxu Zhou, Tao Luo, Yaoyu Zhang

This work makes a step towards understanding how small initialization implicitly leads NNs to condensation at the initial training stage, which lays a foundation for the future study of the nonlinear dynamics of NNs and their implicit regularization effect at a later stage of training.
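
For readers who want to see what condensation looks like, here is an exploratory numpy sketch: with a tiny Gaussian initialisation, the input-layer weight vectors of a two-layer tanh network tend to cluster into a few directions during early training. The setup (activation, sizes, learning rate, data) is an arbitrary choice for illustration, not the paper's experiments, and the strength of the effect varies by run:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 8)[:, None]
y = np.sin(2 * x)
m = 20
W1 = rng.normal(0, 1e-3, (1, m))      # tiny initialisation scale
b1 = rng.normal(0, 1e-3, m)
W2 = rng.normal(0, 1e-3, (m, 1))

for _ in range(20000):
    h = np.tanh(x @ W1 + b1)
    g = 2 * ((h @ W2) - y) / len(x)   # dLoss/dpred for mean squared error
    gW2 = h.T @ g
    gh = g @ W2.T * (1 - h ** 2)
    W1 -= 0.05 * (x.T @ gh)
    b1 -= 0.05 * gh.sum(0)
    W2 -= 0.05 * gW2

# pairwise cosine similarity of the (weight, bias) direction of each neuron
V = np.vstack([W1.ravel(), b1]).T
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(np.round(V @ V.T, 2))   # many entries near +/-1 suggest condensation
```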

Efficient Spiking Neural Networks with Radix Encoding

no code implementations14 May 2021 Zhehui Wang, Xiaozhe Gu, Rick Goh, Joey Tianyi Zhou, Tao Luo

Traditionally, a spike train needs around one thousand time steps to approach accuracy similar to that of its ANN counterpart.
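
To make the contrast concrete, here is a minimal sketch of the general idea of radix (base-2) spike encoding versus conventional rate coding; the paper's actual encoding and training scheme may differ in detail:

```python
# Sketch of the contrast between rate coding and radix (base-2) coding of
# a spike train; the paper's exact encoding scheme may differ in detail.

def rate_decode(spikes):
    # conventional rate code: value = fraction of time steps that spike
    return sum(spikes) / len(spikes)

def radix_decode(spikes):
    # radix code: step t carries weight 2^-(t+1), so T steps give T bits
    return sum(s * 2 ** -(t + 1) for t, s in enumerate(spikes))

print(radix_decode([1, 0, 1]))             # 0.625 from just 3 time steps
print(rate_decode([1] * 625 + [0] * 375))  # 0.625 needs ~1000 steps here
```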

Nonlinear Weighted Directed Acyclic Graph and A Priori Estimates for Neural Networks

no code implementations30 Mar 2021 Yuqing Li, Tao Luo, Chao Ma

In an attempt to better understand the structural benefits and generalization power of deep neural networks, we first present a novel graph-theoretical formulation of neural network models, including fully connected networks, residual networks (ResNet), and densely connected networks (DenseNet).

RCT: Resource Constrained Training for Edge AI

no code implementations26 Mar 2021 Tian Huang, Tao Luo, Ming Yan, Joey Tianyi Zhou, Rick Goh

For example, the quantisation-aware training (QAT) method maintains two copies of the model parameters, which is usually beyond the capacity of on-chip memory in edge devices.
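
A minimal sketch of where the two copies come from (a generic straight-through QAT loop on a toy loss, not the RCT method itself):

```python
import numpy as np

def quantise(w, bits=8):
    # uniform symmetric quantisation to the given bit width
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

w_master = np.random.default_rng(0).normal(size=4)  # copy 1: full precision
for _ in range(500):
    w_q = quantise(w_master)  # copy 2: quantised weights for the forward pass
    grad = 2 * (w_q - 1.0)    # toy loss ||w - 1||^2; straight-through trick:
    w_master -= 0.01 * grad   # ...the gradient at w_q updates the master copy
print(quantise(w_master))     # approximately [1. 1. 1. 1.]
```

Both the full-precision master copy and the quantised copy must be resident during training, which is exactly the memory cost the abstract refers to.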

QROSS: QUBO Relaxation Parameter Optimisation via Learning Solver Surrogates

no code implementations19 Mar 2021 Tian Huang, Siong Thye Goh, Sabrish Gopalakrishnan, Tao Luo, Qianxiao Li, Hoong Chuin Lau

In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters with fewer calls to the QUBO solver.

Traveling Salesman Problem

Linear Frequency Principle Model to Understand the Absence of Overfitting in Neural Networks

no code implementations30 Jan 2021 Yaoyu Zhang, Tao Luo, Zheng Ma, Zhi-Qin John Xu

Why heavily parameterized neural networks (NNs) do not overfit the data is an important long-standing open question.

Meta-Reinforcement Learning for Reliable Communication in THz/VLC Wireless VR Networks

1 code implementation29 Jan 2021 Yining Wang, Mingzhe Chen, Zhaohui Yang, Walid Saad, Tao Luo, Shuguang Cui, H. Vincent Poor

To control the energy consumption of the studied THz/VLC wireless VR network, VLC access points (VAPs) must be selectively turned on so as to ensure accurate and extensive positioning for VR users.

Meta Reinforcement Learning, Virtual Reality

Adaptive Precision Training for Resource Constrained Devices

no code implementations23 Dec 2020 Tian Huang, Tao Luo, Joey Tianyi Zhou

We use a model of the same precision for both the forward and backward passes in order to reduce memory usage during training.

A comprehensive study on the semileptonic decay of heavy flavor mesons

no code implementations8 Dec 2020 Lu Zhang, Xian-Wei Kang, Xin-Heng Guo, Ling-Yun Dai, Tao Luo, Chao Wang

The semileptonic decay of heavy flavor mesons offers a clean environment for extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, which describe CP-violating and flavor-changing processes in the Standard Model.

High Energy Physics - Phenomenology, High Energy Physics - Experiment

Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning

no code implementations6 Dec 2020 Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, Yaoyu Zhang

A supervised learning problem is to find a function in a hypothesis function space given values on isolated data points.

On the exact computation of linear frequency principle dynamics and its generalization

1 code implementation15 Oct 2020 Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang

Recent works show an intriguing phenomenon of the Frequency Principle (F-Principle): deep neural networks (DNNs) fit the target function from low to high frequency during training, which provides insight into the training and generalization behavior of DNNs in complex tasks.
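
The phenomenon is easy to reproduce in miniature. The sketch below (an arbitrary random-feature setup in the linear regime, not the paper's exact model) fits sin(x) + sin(5x) and tracks the residual at the two target frequencies; on a typical run the k=1 error vanishes long before the k=5 error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256, endpoint=False)[:, None]
y = np.sin(x) + np.sin(5 * x)

# random tanh features; only the output layer w is trained (linear regime)
H = np.tanh(x @ rng.normal(0, 2, (1, 300)) + rng.normal(0, 2, 300))
w = np.zeros((300, 1))

tgt = np.fft.rfft(y.ravel())              # target Fourier modes (k=1, k=5)
for step in range(30001):
    resid = H @ w - y
    if step % 10000 == 0:
        err = np.fft.rfft(resid.ravel())
        print(f"step {step:5d}: rel. error k=1 {abs(err[1])/abs(tgt[1]):.2f},"
              f" k=5 {abs(err[5])/abs(tgt[5]):.2f}")
    w -= 1e-3 * (H.T @ resid) / len(x)    # gradient descent on the MSE loss
```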

A regularized deep matrix factorized model of matrix completion for image restoration

1 code implementation29 Jul 2020 Zhemin Li, Zhi-Qin John Xu, Tao Luo, Hongxia Wang

In this work, we propose a Regularized Deep Matrix Factorized (RDMF) model for image restoration, which utilizes the implicit low-rank bias of deep neural networks and the explicit bias of total variation.

Image Restoration, Matrix Completion
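
A rough numpy sketch of this kind of objective, a deep factorisation X = A·B·C fit to observed pixels plus a total-variation term, is given below. It uses a smoothed (quadratic) TV penalty so plain gradient descent applies, and all sizes, rates, and the test image are arbitrary choices; it is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
Y = np.add.outer(np.sin(np.arange(n) / 4), np.cos(np.arange(n) / 5))
M = rng.random((n, n)) < 0.3                # observe 30% of the pixels

A, B, C = (rng.normal(0, 0.1, (n, n)) for _ in range(3))
lam, lr = 1e-3, 0.005
for _ in range(30000):
    X = A @ B @ C
    E = M * (X - Y)                         # misfit on observed pixels only
    # gradient of the smoothed TV penalty: a discrete Laplacian of X
    G = np.zeros_like(X)
    dx = np.diff(X, axis=0); G[:-1] -= dx; G[1:] += dx
    dy = np.diff(X, axis=1); G[:, :-1] -= dy; G[:, 1:] += dy
    R = E + lam * G                         # total gradient w.r.t. X
    gA, gB, gC = R @ (B @ C).T, A.T @ R @ C.T, (A @ B).T @ R
    A -= lr * gA; B -= lr * gB; C -= lr * gC
print(np.abs((A @ B @ C - Y)[~M]).mean())   # error on the held-out pixels
```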

Phase diagram for two-layer ReLU neural networks at infinite-width limit

no code implementations15 Jul 2020 Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang

In this work, inspired by the phase diagram in statistical mechanics, we draw the phase diagram for the two-layer ReLU neural network at the infinite-width limit for a complete characterization of its dynamical regimes and their dependence on hyperparameters related to initialization.

Towards an Understanding of Residual Networks Using Neural Tangent Hierarchy (NTH)

no code implementations7 Jul 2020 Yuqing Li, Tao Luo, Nung Kwan Yip

Gradient descent yields zero training loss in polynomial time for deep neural networks despite the non-convex nature of the objective function.

Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory

no code implementations28 Jun 2020 Tao Luo, Haizhao Yang

The problem of solving partial differential equations (PDEs) can be formulated into a least-squares minimization problem, where neural networks are used to parametrize PDE solutions.
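
As a small illustration of this formulation (kept linear by freezing the hidden layer of a two-layer tanh network at random values, whereas the paper analyses genuinely trained networks), the sketch below solves u''(x) = -π² sin(πx) with u(0) = u(1) = 0 by a single least-squares solve over collocation points:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100                                     # hidden neurons (random, frozen)
a, b = rng.normal(0, 5, m), rng.normal(0, 5, m)

x = np.linspace(0, 1, 50)                   # interior collocation points
t = np.tanh(np.outer(x, a) + b)
phi2 = a**2 * (-2 * t * (1 - t**2))         # (tanh)'' of each feature at x

# stack PDE-residual rows and (heavily weighted) boundary-condition rows
xb = np.array([0.0, 1.0])
A = np.vstack([phi2, 100 * np.tanh(np.outer(xb, a) + b)])
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * x), [0.0, 0.0]])
w, *_ = np.linalg.lstsq(A, rhs, rcond=None) # the least-squares solve

u = t @ w                                   # network solution on the grid
print(np.abs(u - np.sin(np.pi * x)).max())  # max error vs exact sin(pi x)
```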

EDCompress: Energy-Aware Model Compression for Dataflows

no code implementations8 Jun 2020 Zhehui Wang, Tao Luo, Joey Tianyi Zhou, Rick Siow Mong Goh

EDCompress could also find the optimal dataflow type for specific neural networks in terms of energy consumption, which can guide the deployment of CNN models on hardware systems.

Model Compression

Deep Learning for Optimal Deployment of UAVs with Visible Light Communications

no code implementations28 Nov 2019 Yining Wang, Mingzhe Chen, Zhaohui Yang, Tao Luo, Walid Saad

Using GRUs and CNNs, the UAVs can model the long-term historical illumination distribution and predict the future illumination distribution.

Gated Recurrent Units Learning for Optimal Deployment of Visible Light Communications Enabled UAVs

no code implementations17 Sep 2019 Yining Wang, Mingzhe Chen, Zhaohui Yang, Xue Hao, Tao Luo, Walid Saad

This problem is formulated as an optimization problem whose goal is to minimize the total transmit power while meeting the illumination and communication requirements of users.

Theory of the Frequency Principle for General Deep Neural Networks

1 code implementation21 Jun 2019 Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang

Alongside fruitful applications of Deep Neural Networks (DNNs) to realistic problems, recent empirical studies have reported a universal phenomenon of the Frequency Principle (F-Principle): a DNN tends to learn a target function from low to high frequencies during training.

Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks

1 code implementation24 May 2019 Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma

It remains a puzzle why deep neural networks (DNNs), with more parameters than samples, often generalize well.

A type of generalization error induced by initialization in deep neural networks

no code implementations19 May 2019 Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma

Overall, our work serves as a baseline for the further investigation of the impact of initialization and loss function on the generalization of DNNs, which can potentially guide and improve the training of DNNs in practice.

Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks

3 code implementations19 Jan 2019 Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, Zheng Ma

We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective.
