no code implementations • 19 Sep 2024 • Peichao Lai, Zhengfeng Zhang, Wentao Zhang, Fangcheng Fu, Bin Cui
Recently, using large language models (LLMs) for data augmentation has led to considerable improvements in unsupervised sentence embedding models.
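The general recipe behind this kind of augmentation is to prompt an LLM to paraphrase each unlabeled sentence and then train the embedding model contrastively on (original, paraphrase) positive pairs. The snippet below is a minimal sketch of that idea, not this paper's method; `call_llm` is a hypothetical placeholder for any LLM API, and the InfoNCE-style loss is an illustrative assumption.

```python
# Minimal sketch (not the authors' method): LLM-based paraphrase augmentation
# for contrastive sentence-embedding training.
import torch
import torch.nn.functional as F

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: route to any chat/completion model of choice.
    raise NotImplementedError

def make_positive_pair(sentence: str) -> tuple[str, str]:
    # Ask the LLM for a meaning-preserving rewrite to use as a positive example.
    prompt = f"Rewrite the following sentence, preserving its meaning:\n{sentence}"
    return sentence, call_llm(prompt)

def info_nce_loss(anchor: torch.Tensor, positive: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    # anchor, positive: [batch, dim] embeddings of originals and their paraphrases.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / temperature   # pairwise cosine similarities
    labels = torch.arange(anchor.size(0))        # diagonal entries are the true positives
    return F.cross_entropy(logits, labels)
```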
no code implementations • 10 Oct 2021 • Jian Lin, Zhengfeng Zhang, Junping Zhang, Xiaopeng Li
Prime factorization is a difficult problem for classical computing, whose exponential hardness is the foundation of Rivest-Shamir-Adleman (RSA) cryptography.
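As a concrete illustration of why factoring hardness underpins RSA, the toy example below uses tiny textbook parameters (not taken from the paper): anyone who can factor the public modulus n immediately recovers the private key.

```python
# Toy RSA instance: security stands or falls with the hardness of factoring n.
from math import gcd

p, q = 61, 53                 # secret primes (tiny, for demonstration only)
n = p * q                     # public modulus, 3233
phi = (p - 1) * (q - 1)       # Euler's totient, 3120
e = 17                        # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent, 2753

m = 65                        # plaintext
c = pow(m, e, n)              # encryption: c = m^e mod n -> 2790
assert pow(c, d, n) == m      # decryption recovers m

# An attacker who can factor n recovers p and q, hence phi and the private key d:
d_recovered = pow(e, -1, (p - 1) * (q - 1))
assert d_recovered == d
```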
no code implementations • 27 Sep 2021 • Zhaorun Chen, Binhao Chen, Shenghan Xie, Liang Gong, Chengliang Liu, Zhengfeng Zhang, Junping Zhang
In complex, high-dimensional environments, training a reinforcement learning (RL) model from scratch often suffers from the lengthy and tedious collection of agent-environment interactions, as sketched below.
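For reference, the interaction-collection loop whose cost dominates training from scratch looks roughly like the following sketch; the Gymnasium environment and random policy are illustrative assumptions, not details of this paper.

```python
# Minimal sketch of the agent-environment interaction loop whose data
# collection becomes expensive in complex, high-dimensional environments.
import gymnasium as gym

env = gym.make("CartPole-v1")                  # illustrative environment choice
obs, info = env.reset(seed=0)
transitions = []                               # (s, a, r, s') tuples for the learner
for _ in range(1000):                          # every sample costs one environment step
    action = env.action_space.sample()         # stand-in for the agent's policy
    next_obs, reward, terminated, truncated, info = env.step(action)
    transitions.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```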