no code implementations • WMT (EMNLP) 2020 • Jiayi Wang, Ke Wang, Kai Fan, Yuqi Zhang, Jun Lu, Xin Ge, Yangbin Shi, Yu Zhao
We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, potentially preventing the model from overfitting the limited real training data and boosting performance on held-out data.
no code implementations • WMT (EMNLP) 2020 • Jun Lu, Xin Ge, Yangbin Shi, Yuqi Zhang
In the filtering task, three main methods are applied to evaluate the quality of the parallel corpus: (a) a Dual Bilingual GPT-2 model, (b) a Dual Conditional Cross-Entropy Model, and (c) an IBM word alignment model.
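For reference, dual conditional cross-entropy filtering (Junczys-Dowmunt, 2018) scores a sentence pair by combining the per-token cross-entropies of a forward and a reverse translation model. A minimal sketch, assuming those two cross-entropies have already been computed by models trained elsewhere:

```python
def dual_xent_score(h_fwd: float, h_rev: float) -> float:
    """Dual conditional cross-entropy score; lower is better.

    h_fwd: per-token cross-entropy of the target given the source.
    h_rev: per-token cross-entropy of the source given the target.
    """
    agreement = abs(h_fwd - h_rev)    # penalize asymmetric pairs
    adequacy = 0.5 * (h_fwd + h_rev)  # penalize improbable pairs
    return agreement + adequacy

# A well-aligned pair scores lower than a noisy one.
print(dual_xent_score(2.1, 2.3))  # 2.4
print(dual_xent_score(1.0, 6.0))  # 8.5
```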
1 code implementation • 1 Mar 2023 • Kun Yang, Jun Lu
This paper proposes DMSA, an end-to-end unsupervised semantic segmentation architecture based on four loss functions.
no code implementations • 18 Feb 2023 • Jun Lu
The sole aim of this book is to give a self-contained introduction to concepts and mathematical tools in Bayesian matrix decomposition in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections.
no code implementations • 29 Sep 2022 • Jun Lu, Joerg Osterrieder
In this paper, we propose a probabilistic model for computing an interpolative decomposition (ID) in which each column of the observed matrix has its own priority or importance. The decomposition then yields a set of features that are representative of the entire feature set, and the selected features have higher priority than the rest.
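The paper's formulation is Bayesian, but a deterministic baseline ID can be obtained with column-pivoted QR; the sketch below uses that classical heuristic (not the paper's probabilistic model) to pick representative columns:

```python
import numpy as np
from scipy.linalg import qr

def column_id(A: np.ndarray, k: int):
    """Rank-k column ID: pick k columns C = A[:, idx] and coefficients X
    with A ~= C @ X, using column-pivoted QR for the selection."""
    _, _, piv = qr(A, pivoting=True)           # greedy column ranking
    idx = piv[:k]                              # k most important columns
    C = A[:, idx]
    X, *_ = np.linalg.lstsq(C, A, rcond=None)  # coefficients for all columns
    return idx, C, X

A = np.random.randn(50, 20) @ np.random.randn(20, 30)  # rank <= 20
idx, C, X = column_id(A, 20)
print(np.allclose(A, C @ X, atol=1e-6))  # True up to rounding
```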
no code implementations • 26 Aug 2022 • Lingsheng Kong, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Xiaofeng Liu
Deep learning is usually data hungry, and unsupervised domain adaptation (UDA) has been developed to transfer knowledge from a labeled source domain to an unlabeled target domain.
no code implementations • 22 Aug 2022 • Jun Lu, Christine P. Chai
We introduce a probabilistic model with implicit norm regularization for learning nonnegative matrix factorization (NMF), a technique commonly used for predicting missing values and finding hidden patterns in data, in which the matrix factors are latent variables associated with each data dimension.
no code implementations • 16 Aug 2022 • Xiaofeng Liu, Fangxu Xing, Jia You, Jun Lu, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
In TPN, while the closeness of class centers between source and target domains is explicitly enforced in a latent space, the underlying fine-grained subtype structure and the cross-domain within-class compactness have not been fully investigated.
no code implementations • 13 Jul 2022 • Jun Lu, Danny Ding
The limitation of the CGAN or ACGAN framework lies in putting too much emphasis on generating series and capturing their internal trends rather than on predicting future trends.
no code implementations • 8 Jul 2022 • Jun Lu, Minhui Wu
In this note, we describe how to use the Volatility Index (VIX) to postprocess quantitative strategies so as to increase the Sharpe ratio and reduce trading risks.
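As an illustration of such postprocessing, the hypothetical rule below drops exposure whenever the previous day's VIX exceeds a cutoff; the function names and the 25.0 threshold are illustrative assumptions, not taken from the note:

```python
import numpy as np

def vix_filtered_returns(strategy_returns, vix, threshold=25.0):
    """Zero out exposure on days when yesterday's VIX exceeds threshold."""
    vix_lag = np.roll(vix, 1)   # yesterday's VIX, avoiding lookahead bias
    vix_lag[0] = vix[0]
    return strategy_returns * (vix_lag <= threshold)

def sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio (risk-free rate taken as zero)."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std()
```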
no code implementations • 29 Jun 2022 • Jun Lu
In this paper, we propose a probabilistic model with automatic relevance determination (ARD) for learning interpolative decomposition (ID), which is commonly used for low-rank approximation, feature selection, and identifying hidden patterns in data, where the matrix factors are latent variables associated with each data dimension.
no code implementations • 17 Jun 2022 • Jun Lu, Shao Yi
Over the decades, the Markowitz framework has been used extensively in portfolio analysis, though it puts too much emphasis on analyzing market uncertainty rather than on predicting trends.
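For context, the standard Markowitz mean-variance problem the abstract refers to is

```latex
\min_{\mathbf{w}} \; \mathbf{w}^\top \Sigma \, \mathbf{w} - \lambda \, \boldsymbol{\mu}^\top \mathbf{w}
\quad \text{subject to} \quad \mathbf{1}^\top \mathbf{w} = 1,
```

where \Sigma is the covariance of asset returns, \boldsymbol{\mu} the expected returns, and \lambda a risk-tolerance parameter; only the uncertainty estimate \Sigma and a static \boldsymbol{\mu} enter the objective, with no explicit trend forecast.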
no code implementations • 30 May 2022 • Jun Lu
In this paper, we introduce a probabilistic model for learning interpolative decomposition (ID), which is commonly used for feature selection, low-rank approximation, and identifying hidden patterns in data, where the matrix factors are latent variables associated with each data dimension.
no code implementations • 23 May 2022 • Jun Lu, Xuanyu Ye
In this paper, we introduce a probabilistic model for learning nonnegative matrix factorization (NMF), a technique commonly used for predicting missing values and finding hidden patterns in data, in which the matrix factors are latent variables associated with each data dimension.
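For comparison with the probabilistic treatment, a deterministic NMF with missing entries can be computed via masked multiplicative updates; the sketch below is a point-estimate stand-in, not the paper's Bayesian inference:

```python
import numpy as np

def masked_nmf(A, mask, k, n_iter=500, eps=1e-9):
    """Frobenius NMF with multiplicative updates on observed entries only.
    mask[i, j] = 1 where A[i, j] is observed, 0 where it is missing."""
    rng = np.random.default_rng(0)
    W = rng.random((A.shape[0], k))
    H = rng.random((k, A.shape[1]))
    for _ in range(n_iter):
        W *= ((mask * A) @ H.T) / ((mask * (W @ H)) @ H.T + eps)
        H *= (W.T @ (mask * A)) / (W.T @ (mask * (W @ H)) + eps)
    return W, H  # missing entries are predicted by (W @ H)[mask == 0]
```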
no code implementations • 2 May 2022 • Jun Lu
In deep neural networks, the gradient computed from a single sample or a batch of samples is employed to save computational resources and to escape saddle points.
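A minimal sketch of the idea (plain mini-batch SGD; grad_fn and data are hypothetical placeholders for a model's gradient function and training set):

```python
import numpy as np

def minibatch_sgd(grad_fn, theta, data, lr=0.01, batch_size=32, epochs=10):
    """Each step follows the gradient of a random batch: cheaper than the
    full gradient, and the batch noise helps escape saddle points."""
    rng = np.random.default_rng(0)
    n = len(data)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = data[perm[start:start + batch_size]]
            theta = theta - lr * grad_fn(theta, batch)
    return theta
```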
no code implementations • 2 Apr 2022 • Jun Lu
It is well known that we need to choose the hyper-parameters in Momentum, AdaGrad, AdaDelta, and other alternative stochastic optimizers.
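For concreteness, the standard update rules for two of the optimizers named, with their hyper-parameters spelled out (textbook formulations, not specific to this paper):

```latex
\text{Momentum:}\quad v_t = \gamma v_{t-1} + \eta\, g_t, \quad \theta_t = \theta_{t-1} - v_t;
\qquad
\text{AdaGrad:}\quad G_t = G_{t-1} + g_t^2, \quad \theta_t = \theta_{t-1} - \frac{\eta}{\sqrt{G_t + \epsilon}}\, g_t,
```

where g_t is the gradient at step t, and the learning rate \eta, momentum coefficient \gamma, and stabilizer \epsilon must all be chosen by hand.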
no code implementations • 15 Mar 2022 • Jun Lu, Shao Yi
The SVR-GARCH model tends to "backward eavesdrop" when forecasting financial time series volatility: it often produces a prediction that deviates only slightly from the previous volatility.
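One simple diagnostic for this behavior is to compare the model's forecast errors against a naive lag-1 persistence baseline; near-identical errors suggest the forecasts are mostly echoing yesterday's volatility. A minimal sketch (the diagnostic is illustrative, not from the paper):

```python
import numpy as np

def persistence_check(vol_true, vol_pred):
    """Return (model RMSE, lag-1 baseline RMSE) over aligned days."""
    naive = vol_true[:-1]  # yesterday's volatility as today's forecast
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    return rmse(vol_pred[1:], vol_true[1:]), rmse(naive, vol_true[1:])
```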
no code implementations • 23 Feb 2022 • Jun Lu
This work aims to build a solid foundation on how and why these techniques work.
no code implementations • 1 Jan 2022 • Jun Lu
In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition that favored the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices.
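A quick numerical illustration of that factorization, using SciPy's pivoted variant A = PLU:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
P, L, U = lu(A)                   # L lower-triangular, U upper-triangular
print(np.allclose(A, P @ L @ U))  # True
```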
no code implementations • 6 Sep 2021 • Hui Xie, Zhuang Zhao, Jing Han, Yi Zhang, Lianfa Bai, Jun Lu
Various CNN-based methods have been developed in recent years to reconstruct HSIs, but most supervised deep learning methods aim to fit a brute-force mapping between the captured compressed image and standard HSIs.
no code implementations • 20 Aug 2021 • Jun Lu
The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in Bayesian inference for finite and infinite Gaussian mixture models in order to seamlessly introduce their applications in subsequent sections.
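As a small companion example, the sketch below fits a finite two-component Gaussian mixture with scikit-learn; note that scikit-learn's estimator uses EM point estimation rather than the Bayesian inference the survey develops:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-3.0, 1.0, (200, 1)),
                    rng.normal(+3.0, 1.0, (200, 1))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_.ravel())  # approximately [-3, 3]
print(gmm.weights_)        # approximately [0.5, 0.5]
```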
no code implementations • 10 Aug 2021 • Jun Lu
This survey is meant to provide an introduction to the fundamental theorem of linear algebra and the theories behind it.
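For reference, the theorem in question states that for A \in \mathbb{R}^{m \times n} of rank r,

```latex
\mathcal{C}(A^\top) \perp \mathcal{N}(A), \quad \mathcal{C}(A^\top) \oplus \mathcal{N}(A) = \mathbb{R}^n, \qquad
\mathcal{C}(A) \perp \mathcal{N}(A^\top), \quad \mathcal{C}(A) \oplus \mathcal{N}(A^\top) = \mathbb{R}^m,
```

with \dim \mathcal{C}(A^\top) = \dim \mathcal{C}(A) = r, \dim \mathcal{N}(A) = n - r, and \dim \mathcal{N}(A^\top) = m - r.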
no code implementations • ICCV 2021 • Xiaofeng Liu, Site Li, Yubin Ge, Pengyi Ye, Jane You, Jun Lu
UDA for ordinal classification requires inducing a non-trivial ordinal distribution prior on the latent space.
no code implementations • 22 Jul 2021 • Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang, Jun Lu, Georges El Fakhri, Jonghye Woo
In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training.
no code implementations • 10 May 2021 • Jun Lu
We then describe linear models from different perspectives and derive the properties and theories behind them.
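One such property, the ordinary least squares solution at the core of linear models: when X has full column rank, the unique minimizer of \lVert \mathbf{y} - X\boldsymbol{\beta} \rVert^2 is

```latex
\hat{\boldsymbol{\beta}} = (X^\top X)^{-1} X^\top \mathbf{y}.
```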
no code implementations • 4 Jan 2021 • Heng Lu, Chen Yang, Ye Tian, Jun Lu, Fanqi Xu, FengNan Chen, Yan Ying, Kevin G. Schädler, Chinhua Wang, Frank H. L. Koppens, Antoine Reserbat-Plantey, Joel Moser
With it, we characterize the lowest-frequency mode of an FLG resonator by measuring its frequency response as a function of position on the membrane.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Linghao Jin, Xu Han, Jun Lu, Jane You, Lingsheng Kong
In the compressed domain (up to two orders of magnitude smaller), we can explicitly infer the expression from the residual frames, and identity factors can be extracted from the I-frame with a pre-trained face recognition network.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Xiongchang Liu, Bo Hu, Wenxuan Ji, Fangxu Xing, Jun Lu, Jane You, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Recent advances in unsupervised domain adaptation (UDA) show that transferable prototypical learning presents a powerful means for class conditional alignment, which encourages the closeness of cross-domain class centroids.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Lingsheng Kong
Unsupervised domain adaptation (UDA) aims to transfer the knowledge on a labeled source domain distribution to perform well on an unlabeled target domain.
no code implementations • WS 2018 • Yongchao Deng, Shanbo Cheng, Jun Lu, Kai Song, Jingang Wang, Shenglan Wu, Liang Yao, Guchun Zhang, Haibo Zhang, Pei Zhang, Changfeng Zhu, Boxing Chen
We participated in 5 translation directions: English ↔ Russian and English ↔ Turkish in both directions, and English → Chinese.
no code implementations • WS 2018 • Jun Lu, Xiaoyu Lv, Yangbin Shi, Boxing Chen
This paper describes the Alibaba Machine Translation Group submissions to the WMT 2018 Shared Task on Parallel Corpus Filtering.
no code implementations • 27 Apr 2018 • Jun Lu, Wei Ma, Boi Faltings
We explored $CompNet$, which morphs a well-trained neural network into a deeper one such that the network function is preserved and the added layer is compact.
no code implementations • 15 Feb 2018 • Jun Lu, Meng Li, David Dunson
Dirichlet process mixture (DPM) models tend to produce many small clusters regardless of whether they are needed to accurately characterize the data; this is particularly true for large data sets.
1 code implementation • 4 Dec 2017 • Wei Ma, Jun Lu
The article helps beginners in neural networks understand how the fully connected layer and the convolutional layer work under the hood.
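In the spirit of that walkthrough, a minimal NumPy sketch of a fully connected layer's forward and backward passes (an illustration, not the article's own code):

```python
import numpy as np

def fc_forward(x, W, b):
    """Fully connected layer: y = W @ x + b."""
    return W @ x + b

def fc_backward(x, W, dy):
    """Given dy = dL/dy, return the gradients backpropagation needs."""
    dW = np.outer(dy, x)  # dL/dW
    db = dy               # dL/db
    dx = W.T @ dy         # dL/dx, passed to the previous layer
    return dW, db, dx
```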
no code implementations • 28 Aug 2017 • Jun Lu
In this article, we introduce how to place a vague hyperprior on the Dirichlet distribution and update its parameter by adaptive rejection sampling (ARS).
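A common concrete instance of this setup (assumed here for illustration) places a Gamma hyperprior on the concentration of a symmetric Dirichlet:

```latex
\alpha \sim \mathrm{Gamma}(a, b), \qquad \boldsymbol{\theta} \mid \alpha \sim \mathrm{Dirichlet}(\alpha, \ldots, \alpha),
```

where the conditional posterior of \alpha (possibly after a log transform) is log-concave, which is exactly the condition ARS needs.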
1 code implementation • 19 May 2017 • Jun Lu
Based on data over a 103-day period, we trained our models; the best-performing model was an AdaBoost decision tree classifier.
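A minimal reconstruction of that model family with scikit-learn, on synthetic stand-in data rather than the paper's 103-day dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)  # stand-in data
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```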