
no code implementations • 8 Jul 2021 • Lulu Zhang, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu, Zheng Ma

In this paper, we propose a model-operator-data network (MOD-Net) for solving PDEs.

1 code implementation • 29 Jun 2021 • Tao Luo, Mingen Pan, Pierre Tholoniat, Asaf Cidon, Roxana Geambasu, Mathias Lécuyer

We describe PrivateKube, an extension to the popular Kubernetes datacenter orchestrator that adds privacy as a new type of resource to be managed alongside other traditional compute resources, such as CPU, GPU, and memory.
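The key idea is treating privacy budget like a schedulable quantity. As a minimal stand-in sketch (class and method names are hypothetical, not the PrivateKube API), the defining difference from CPU or memory is that consumed budget is never returned to the pool:

```python
class PrivacyBudget:
    """Toy accountant treating differential-privacy budget (epsilon) as a
    schedulable resource. Unlike CPU or memory, granted budget is never
    released back when a workload finishes. Hypothetical sketch only."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def allocate(self, epsilon):
        # Grant the request only if enough budget is left; no release method.
        if epsilon <= self.remaining:
            self.remaining -= epsilon
            return True
        return False

pool = PrivacyBudget(total_epsilon=1.0)
ok1 = pool.allocate(0.6)   # granted
ok2 = pool.allocate(0.6)   # denied: only 0.4 epsilon remains
print(ok1, ok2, round(pool.remaining, 2))
```

Because the resource is non-replenishable, a scheduler built on it must ration allocations over the lifetime of the dataset rather than per job.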

no code implementations • 30 May 2021 • Yaoyu Zhang, Zhongwang Zhang, Tao Luo, Zhi-Qin John Xu

Understanding the structure of the loss landscape of deep neural networks (DNNs) is clearly important.

no code implementations • 25 May 2021 • Tao Luo, Wai Teng Tang, Matthew Kay Fei Lee, Chuping Qu, Weng-Fai Wong, Rick Goh

DTNN achieved significant energy savings (19.4X and 64.9X improvements on ResNet-18 and VGG-11 with ImageNet, respectively) with negligible loss of accuracy.

no code implementations • 25 May 2021 • Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, Yaoyu Zhang

frequency in DNN training.

no code implementations • 25 May 2021 • Zhi-Qin John Xu, Hanxu Zhou, Tao Luo, Yaoyu Zhang

This work takes a step towards understanding how small initialization implicitly leads NNs to condensation at the initial training stage, which lays a foundation for the future study of the nonlinear dynamics of NNs and their implicit regularization effect at a later stage of training.

no code implementations • 14 May 2021 • Zhehui Wang, Xiaozhe Gu, Rick Goh, Joey Tianyi Zhou, Tao Luo

Traditionally, a spike train needs around one thousand time steps to approach similar accuracy as its ANN counterpart.
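The time-step count matters because a rate-coded spiking network estimates an activation from the empirical firing rate. A toy sketch (a Bernoulli spike train standing in for a real SNN neuron) shows why the estimate only becomes precise at roughly a thousand steps, since the error shrinks like 1/sqrt(T):

```python
import random

def rate_code_error(p, T, trials=500, rng=random.Random(0)):
    """Average absolute error when an activation p in [0, 1] is approximated
    by the firing rate of a Bernoulli spike train of length T (toy model)."""
    total = 0.0
    for _ in range(trials):
        spikes = sum(1 for _ in range(T) if rng.random() < p)
        total += abs(spikes / T - p)
    return total / trials

# Longer spike trains give finer rate resolution.
err_short = rate_code_error(0.7, T=10)
err_long = rate_code_error(0.7, T=1000)
print(err_short, err_long)
```

At T=10 the firing rate can only take eleven values, so the error stays around 0.1; at T=1000 it falls by roughly an order of magnitude.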

no code implementations • 30 Mar 2021 • Yuqing Li, Tao Luo, Chao Ma

In an attempt to better understand structural benefits and generalization power of deep neural networks, we firstly present a novel graph theoretical formulation of neural network models, including fully connected, residual network~(ResNet) and densely connected networks~(DenseNet).

no code implementations • 26 Mar 2021 • Tian Huang, Tao Luo, Ming Yan, Joey Tianyi Zhou, Rick Goh

For example, the quantisation-aware training (QAT) method involves two copies of the model parameters, which is usually beyond the capacity of on-chip memory in edge devices.
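The two-copies point can be illustrated on a toy scalar model (this is a generic QAT sketch with the straight-through estimator, not the paper's method): a full-precision master weight accumulates gradients, while a quantised copy of it is what the forward pass actually uses.

```python
def quantize(w, step=0.25):
    # Uniform quantiser producing the low-precision copy for the forward pass.
    return round(w / step) * step

xs = [i / 20 for i in range(1, 21)]   # toy inputs
ys = [3.0 * x for x in xs]            # target: y = 3x

w_master = 0.0                        # full-precision copy (the extra memory)
lr = 0.5
for _ in range(100):
    w_q = quantize(w_master)          # quantised copy used in the forward pass
    # MSE gradient w.r.t. the quantised weight, applied to the master copy
    # (straight-through estimator).
    grad = sum(2 * (w_q * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w_master -= lr * grad

print(w_master, quantize(w_master))   # settles on the quantised value nearest 3.0
```

Both `w_master` and `w_q` must live in memory during training, which is exactly the overhead the snippet above says edge devices cannot afford.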

no code implementations • 19 Mar 2021 • Tian Huang, Siong Thye Goh, Sabrish Gopalakrishnan, Tao Luo, Qianxiao Li, Hoong Chuin Lau

In this way, we are able to capture the common structure of the instances and their interactions with the solver, and produce good choices of penalty parameters with fewer calls to the QUBO solver.
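Why the penalty parameter needs tuning at all can be seen on a tiny brute-force QUBO (a hypothetical example, not the paper's learned predictor): with the penalty too small, the unconstrained minimiser simply violates the constraint.

```python
from itertools import product

def qubo_energy(x, Q, const=0.0):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n)) + const

def build_qubo(costs, P):
    """QUBO for: minimise sum(c_i x_i) subject to exactly one x_i = 1,
    with the constraint folded in as a penalty P * (sum(x) - 1)^2."""
    n = len(costs)
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = costs[i] - P       # linear part: c_i + P - 2P
        for j in range(i + 1, n):
            Q[i][j] = 2 * P          # pairwise penalty terms
    return Q, P                      # constant offset +P

def brute_force_min(costs, P):
    Q, const = build_qubo(costs, P)
    return min(product([0, 1], repeat=len(costs)),
               key=lambda x: qubo_energy(x, Q, const))

costs = [3.0, 1.0, 2.0, 4.0]
weak = brute_force_min(costs, P=0.5)    # penalty too weak: all-zeros wins
strong = brute_force_min(costs, P=10.0) # constraint enforced: cheapest item
print(weak, strong)
```

With P=0.5 the empty selection costs only the penalty 0.5, beating the cheapest feasible solution; with P=10 the constrained optimum emerges. Predicting a good P per instance, rather than grid-searching it with repeated solver calls, is the point of the paper's approach.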

no code implementations • 30 Jan 2021 • Yaoyu Zhang, Tao Luo, Zheng Ma, Zhi-Qin John Xu

Why heavily parameterized neural networks (NNs) do not overfit the data is an important, long-standing open question.

1 code implementation • 29 Jan 2021 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Walid Saad, Tao Luo, Shuguang Cui, H. Vincent Poor

To control the energy consumption of the studied THz/VLC wireless VR network, VLC access points (VAPs) must be selectively turned on so as to ensure accurate and extensive positioning for VR users.

no code implementations • 23 Dec 2020 • Tian Huang, Tao Luo, Joey Tianyi Zhou

We use a model of the same precision for both the forward and backward passes in order to reduce memory usage during training.

no code implementations • 8 Dec 2020 • Lu Zhang, Xian-Wei Kang, Xin-Heng Guo, Ling-Yun Dai, Tao Luo, Chao Wang

The semileptonic decay of heavy-flavor mesons offers a clean environment for extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, which describe the CP-violating and flavor-changing processes in the Standard Model.

High Energy Physics - Phenomenology • High Energy Physics - Experiment

no code implementations • 6 Dec 2020 • Tao Luo, Zheng Ma, Zhiwei Wang, Zhi-Qin John Xu, Yaoyu Zhang

A supervised learning problem is to find a function in a hypothesis function space given its values on isolated data points.

1 code implementation • 15 Oct 2020 • Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang

Recent works show an intriguing phenomenon of Frequency Principle (F-Principle) that deep neural networks (DNNs) fit the target function from low to high frequency during the training, which provides insight into the training and generalization behavior of DNNs in complex tasks.
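The low-to-high-frequency ordering can be reproduced in a simple linear surrogate (a sketch of the phenomenon, not the paper's analysis): gradient descent on kernel regression with a smooth kernel fits the low-frequency component of a two-frequency target far faster, because the kernel's eigenvalues decay with frequency.

```python
import cmath
import math

n, sigma, lr, steps = 64, 0.1, 0.05, 300
xs = [j / n for j in range(n)]
# Target with one low- and one high-frequency component.
y = [math.sin(2 * math.pi * x) + math.sin(2 * math.pi * 20 * x) for x in xs]

# Gaussian kernel with periodic distance, so the Gram matrix is circulant:
# its eigenvectors are Fourier modes and eigenvalues decay with frequency.
def k(a, b):
    d = min(abs(a - b), 1 - abs(a - b))
    return math.exp(-d * d / (2 * sigma * sigma))

K = [[k(xi, xj) for xj in xs] for xi in xs]

# Gradient descent on the function values: f <- f + lr * K (y - f).
f = [0.0] * n
for _ in range(steps):
    r = [yi - fi for yi, fi in zip(y, f)]
    f = [fi + lr * sum(K[i][j] * r[j] for j in range(n))
         for i, fi in enumerate(f)]

def dft_mag(v, freq):
    return abs(sum(vj * cmath.exp(-2j * math.pi * freq * j / n)
                   for j, vj in enumerate(v)))

res = [yi - fi for yi, fi in zip(y, f)]
for freq in (1, 20):
    # Relative residual per frequency: low frequency is nearly eliminated,
    # high frequency is barely reduced after the same number of steps.
    print(freq, dft_mag(res, freq) / dft_mag(y, freq))
```

The per-mode error contracts by the factor (1 - lr * lambda_k) each step, and lambda_k for a Gaussian kernel falls off sharply in k, which is the linear-model caricature of the F-Principle.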

1 code implementation • 29 Jul 2020 • Zhemin Li, Zhi-Qin John Xu, Tao Luo, Hongxia Wang

In this work, we propose a Regularized Deep Matrix Factorized (RDMF) model for image restoration, which utilizes the implicit bias of the low rank of deep neural networks and the explicit bias of total variation.
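The explicit total-variation bias can be seen in a 1-D toy (plain subgradient descent on a denoising objective, not the paper's RDMF model): penalising the sum of absolute neighbour differences flattens noise while keeping the sharp jump.

```python
import random

def tv(u):
    # Total variation: sum of absolute differences of neighbours.
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

def sign(v):
    return (v > 0) - (v < 0)

rng = random.Random(0)
clean = [0.0] * 20 + [1.0] * 20                      # piecewise-constant signal
noisy = [c + rng.uniform(-0.2, 0.2) for c in clean]

# Minimise ||u - noisy||^2 + lam * TV(u) by subgradient descent.
u, lam, lr = list(noisy), 0.2, 0.02
for _ in range(1000):
    g = [2 * (ui - yi) for ui, yi in zip(u, noisy)]  # fidelity gradient
    for i in range(len(u)):                          # TV subgradient
        if i > 0:
            g[i] += lam * sign(u[i] - u[i - 1])
        if i < len(u) - 1:
            g[i] -= lam * sign(u[i + 1] - u[i])
    u = [ui - lr * gi for ui, gi in zip(u, g)]

print(tv(noisy), tv(u))   # total variation drops; the 0 -> 1 jump survives
```

RDMF combines this explicit bias with the implicit low-rank bias of a deep factorization; the sketch above isolates only the TV half on a signal instead of an image.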

no code implementations • 15 Jul 2020 • Tao Luo, Zhi-Qin John Xu, Zheng Ma, Yaoyu Zhang

In this work, inspired by the phase diagram in statistical mechanics, we draw the phase diagram for the two-layer ReLU neural network at the infinite-width limit for a complete characterization of its dynamical regimes and their dependence on hyperparameters related to initialization.

no code implementations • 7 Jul 2020 • Yuqing Li, Tao Luo, Nung Kwan Yip

Gradient descent yields zero training loss in polynomial time for deep neural networks despite the non-convex nature of the objective function.

no code implementations • 28 Jun 2020 • Tao Luo, Haizhao Yang

The problem of solving partial differential equations (PDEs) can be formulated as a least-squares minimization problem, where neural networks are used to parametrize PDE solutions.
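A minimal version of that formulation (with a sine-series ansatz standing in for the neural network, and a hypothetical target problem u'' = f with u(0) = u(1) = 0 chosen for illustration): gradient descent on the mean squared PDE residual recovers the solution.

```python
import math

# Solve u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0, where
# f(x) = -pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
n, K, lr, steps = 64, 3, 1e-4, 3000
xs = [j / n for j in range(1, n)]                     # interior grid points
f = [-math.pi ** 2 * math.sin(math.pi * x) for x in xs]

# Ansatz u(x) = sum_k c_k sin(k pi x); each basis term satisfies the
# boundary conditions and its second derivative is available in closed form.
c = [0.0] * K
A = [[-((k + 1) * math.pi) ** 2 * math.sin((k + 1) * math.pi * x)
      for k in range(K)] for x in xs]                 # A[j][k] = u_k''(x_j)

for _ in range(steps):
    # PDE residual u''(x_j) - f(x_j) and gradient of its mean square.
    r = [sum(A[j][k] * c[k] for k in range(K)) - f[j] for j in range(len(xs))]
    for k in range(K):
        c[k] -= lr * 2 * sum(r[j] * A[j][k] for j in range(len(xs))) / len(xs)

err = max(abs(sum(c[k] * math.sin((k + 1) * math.pi * x) for k in range(K))
              - math.sin(math.pi * x)) for x in xs)
print(err)   # close to zero: the ansatz recovers sin(pi x)
```

A neural network replaces the fixed sine basis in the papers' setting, and the derivatives in the residual come from automatic differentiation rather than a closed form, but the least-squares loss being minimised has the same shape.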

no code implementations • 8 Jun 2020 • Zhehui Wang, Tao Luo, Joey Tianyi Zhou, Rick Siow Mong Goh

EDCompress could also find the optimal dataflow type for specific neural networks in terms of energy consumption, which can guide the deployment of CNN models on hardware systems.

no code implementations • 28 Nov 2019 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Tao Luo, Walid Saad

Using GRUs and CNNs, the UAVs can model the long-term historical illumination distribution and predict the future illumination distribution.

no code implementations • 17 Sep 2019 • Yining Wang, Mingzhe Chen, Zhaohui Yang, Xue Hao, Tao Luo, Walid Saad

This problem is formulated as an optimization problem whose goal is to minimize the total transmit power while meeting the illumination and communication requirements of users.

1 code implementation • 21 Jun 2019 • Tao Luo, Zheng Ma, Zhi-Qin John Xu, Yaoyu Zhang

Along with fruitful applications of Deep Neural Networks (DNNs) to realistic problems, recently, some empirical studies of DNNs reported a universal phenomenon of Frequency Principle (F-Principle): a DNN tends to learn a target function from low to high frequencies during the training.

1 code implementation • 24 May 2019 • Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma

It remains a puzzle why deep neural networks (DNNs), with more parameters than samples, often generalize well.

no code implementations • 19 May 2019 • Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma

Overall, our work serves as a baseline for the further investigation of the impact of initialization and loss function on the generalization of DNNs, which can potentially guide and improve the training of DNNs in practice.

3 code implementations • 19 Jan 2019 • Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, Zheng Ma

We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective.

Papers With Code is a free resource with all data licensed under CC-BY-SA.