Search Results for author: Jikai Hou

Found 6 papers, 1 paper with code

Inferences on Mixing Probabilities and Ranking in Mixed-Membership Models

no code implementations29 Aug 2023 Sohom Bhattacharya, Jianqing Fan, Jikai Hou

Network data is prevalent in numerous big-data applications, including economic and health networks, where understanding the latent structure of the network is of prime importance.

Uncertainty Quantification

Uncertainty Quantification of MLE for Entity Ranking with Covariates

no code implementations20 Dec 2022 Jianqing Fan, Jikai Hou, Mengxin Yu

This paper concerns statistical estimation and inference for ranking problems based on pairwise comparisons with additional covariate information, such as the attributes of the compared items.

Uncertainty Quantification
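A common way to fold covariates into pairwise-comparison ranking is a Bradley-Terry-style model in which each item's latent score combines an intrinsic effect with a covariate effect. The sketch below is illustrative only, not necessarily the paper's exact model; the names `win_prob`, `score`, `alpha`, and `beta` are assumptions.

```python
import math

def win_prob(score_i, score_j):
    # Bradley-Terry-style probability that item i beats item j
    return 1.0 / (1.0 + math.exp(-(score_i - score_j)))

def score(alpha, x, beta):
    # latent score = intrinsic effect alpha + covariate effect x^T beta
    # (one standard way to use item attributes; illustrative, not the
    # paper's exact specification)
    return alpha + sum(b * xi for b, xi in zip(beta, x))

# two items with equal latent scores are equally likely to win
p = win_prob(score(0.5, [1.0, 2.0], [0.1, 0.2]), score(1.0, [0.0], [0.0]))
```

Under this kind of model, the MLE of `(alpha, beta)` from observed comparison outcomes is what uncertainty quantification would target.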

Distillation $\approx$ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network

1 code implementation2 Oct 2019 Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang

Assuming that the teacher network is overparameterized, we argue that it essentially harvests dark knowledge from the data via early stopping.

Information Retrieval
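"Dark knowledge" is usually transferred via temperature-scaled soft targets, as in standard Hinton-style distillation. The sketch below shows that generic soft-target loss; it is illustrative and not this paper's exact procedure, and the names `softmax` and `distill_loss` are assumptions.

```python
import math

def softmax(logits, T=1.0):
    # temperature-scaled softmax; larger T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=4.0):
    # cross-entropy between teacher soft targets and student predictions
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

The loss is minimized when the student's softened distribution matches the teacher's, so the student inherits the teacher's relative rankings of wrong classes as well as its top prediction.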

Distillation $\approx$ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized NN

no code implementations25 Sep 2019 Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang

Assuming that the teacher network is overparameterized, we argue that it essentially harvests dark knowledge from the data via early stopping.

Information Retrieval

Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems

no code implementations28 May 2019 Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Li-Wei Wang

First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithms for training deep neural networks.

Regression, Second-order Methods
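The first-order/second-order distinction can be seen on the simplest regression problem. The sketch below contrasts one SGD step with one Gauss-Newton step (which replaces the Hessian with J^T J) on 1-D linear least squares; it is a hypothetical illustration, not the paper's Gram-Gauss-Newton algorithm, and all function names are assumptions.

```python
def loss(w, xs, ys):
    # mean squared error of the linear model y = w * x
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * len(xs))

def sgd_step(w, xs, ys, lr=0.1):
    # first-order update: move against the gradient, scaled by a step size
    g = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    return w - lr * g

def gauss_newton_step(w, xs, ys):
    # second-order-style update: precondition the gradient with J^T J
    # (a scalar here); for a linear model this solves the problem exactly
    g = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    h = sum(x * x for x in xs) / len(xs)
    return w - g / h

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # data with true slope 2
w_sgd = sgd_step(0.0, xs, ys)                # partial progress toward 2
w_gn = gauss_newton_step(0.0, xs, ys)        # exact in one step
```

For nonlinear models the Gauss-Newton step is only approximate, but the example shows why curvature information can take much larger, better-aimed steps than a fixed learning rate.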

Distributionally Robust Optimization Leads to Better Generalization: on SGD and Beyond

no code implementations ICLR 2019 Jikai Hou, Kaixuan Huang, Zhihua Zhang

In this paper, we adopt distributionally robust optimization (DRO) (Ben-Tal et al., 2013) in the hope of achieving better generalization in deep learning tasks.
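The core DRO idea is to minimize the worst-case expected loss over an uncertainty set of distributions rather than the average loss under the empirical distribution. The sketch below shows that objective for a finite set of candidate reweightings; it is a hypothetical illustration of the general idea, not this paper's exact formulation or uncertainty set.

```python
def average_loss(losses):
    # standard empirical risk: uniform average over samples
    return sum(losses) / len(losses)

def dro_loss(losses, weight_sets):
    # each weight set is a candidate distribution over the samples;
    # DRO takes the worst (largest) weighted loss over the set
    return max(sum(w * l for w, l in zip(ws, losses)) for ws in weight_sets)

losses = [0.1, 0.2, 0.9]        # per-sample losses
uniform = [1/3, 1/3, 1/3]
tilted = [0.2, 0.2, 0.6]        # shifts mass toward the hard sample
worst_case = dro_loss(losses, [uniform, tilted])
```

Because the adversary can upweight hard samples, the DRO objective upper-bounds the empirical risk whenever the uniform distribution is in the uncertainty set, which is the mechanism by which DRO can trade training loss for robustness.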
