1 code implementation • 3 Nov 2022 • Ruicheng Xian, Lang Yin, Han Zhao
To mitigate the bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but this often comes at the expense of model performance.
no code implementations • ICLR 2022 • Ruicheng Xian, Heng Ji, Han Zhao
Recent advances in neural modeling have produced deep multilingual language models capable of extracting cross-lingual knowledge from non-parallel texts, as evidenced by their decent zero-shot transfer performance.
no code implementations • ICLR 2020 • Ziwei Ji, Matus Telgarsky, Ruicheng Xian
This paper establishes rates of universal approximation for the shallow neural tangent kernel (NTK): network weights are only allowed microscopic changes from random initialization, which entails that activations are mostly unchanged, and the network is nearly equivalent to its linearization.
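The linearization mentioned here is the first-order Taylor expansion of the network in its weights around initialization: when the weights move only microscopically, few ReLU activations change sign, so the network and its linearization nearly coincide. A minimal numpy sketch of this effect (a generic shallow ReLU network, not the paper's construction; the width `m` and perturbation scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shallow ReLU network: f(W, x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x),
# with random hidden weights W0 and fixed +/-1 outer weights a.
m, d = 512, 3
W0 = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], m)

def f(W, x):
    return (a * np.maximum(W @ x, 0.0)).sum() / np.sqrt(m)

def f_lin(W, x):
    # NTK linearization: f(W0, x) + <grad_W f(W0, x), W - W0>,
    # using relu'(z) = 1[z > 0] at the initialization W0.
    pre = W0 @ x
    grad = (a * (pre > 0.0))[:, None] * x[None, :] / np.sqrt(m)
    return f(W0, x) + (grad * (W - W0)).sum()

x = rng.standard_normal(d)
# A microscopic change from random initialization: almost no activation
# flips sign, so the network agrees with its linearization.
W = W0 + 1e-3 * rng.standard_normal((m, d))
err = abs(f(W, x) - f_lin(W, x))
```

Away from its kink the ReLU is exactly linear, so the linearization error comes only from the rare neurons whose preactivation sign flips, which is what keeps `err` tiny.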
no code implementations • 18 Jun 2019 • Bolton Bailey, Ziwei Ji, Matus Telgarsky, Ruicheng Xian
This paper investigates the approximation power of three types of random neural networks: (a) infinite width networks, with weights following an arbitrary distribution; (b) finite width networks obtained by subsampling the preceding infinite width networks; (c) finite width networks obtained by starting with standard Gaussian initialization, and then adding a vanishingly small correction to the weights.
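Case (b) can be illustrated with a Monte Carlo sketch: a finite-width network whose weights are i.i.d. samples from the infinite-width Gaussian distribution computes an empirical average that converges to an infinite-width expectation. The example below (an assumption for illustration, not the paper's setup) checks this against the known closed form for the first-order arc-cosine kernel, which is the infinite-width limit of ReLU features on unit-norm inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
x = np.array([1.0, 0.0])
y = np.array([0.6, 0.8])  # both unit norm

def empirical_kernel(m):
    # Finite-width network obtained by sampling m weights from the
    # infinite-width Gaussian distribution (the "subsampling" in (b)).
    W = rng.standard_normal((m, d))
    return np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ y, 0.0))

# Infinite-width limit for w ~ N(0, I) and unit-norm inputs:
# E[relu(w.x) relu(w.y)] = (sin t + (pi - t) cos t) / (2 pi),
# where t is the angle between x and y (Cho-Saul arc-cosine kernel).
t = np.arccos(np.clip(x @ y, -1.0, 1.0))
k_true = (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

err = abs(empirical_kernel(200_000) - k_true)
```

Growing the width `m` drives the empirical average toward the closed-form kernel at the usual Monte Carlo rate of `O(1/sqrt(m))`.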