no code implementations • NAACL 2022 • Jiangang Bai, Yujing Wang, Hong Sun, Ruonan Wu, Tianmeng Yang, Pengfei Tang, Defu Cao, Mingliang Zhang, Yunhai Tong, Yaming Yang, Jing Bai, Ruofei Zhang, Hao Sun, Wei Shen
Large-scale pre-trained language models have attracted extensive attention in the research community and shown promising results on various natural language processing tasks.
1 code implementation • 25 May 2023 • Wei Wang, Yang Liu, Hao Sun
Note that the FT and SVD blocks are capable of learning global information, while the Conv blocks focus on learning local information.
no code implementations • 24 May 2023 • Hao Sun, Xiao Liu, Yeyun Gong, Yan Zhang, Nan Duan
However, there are cases where answering a question requires implicit knowledge that is not directly retrievable from the question itself.
no code implementations • 24 May 2023 • Yilong Xu, Yang Liu, Hao Sun
Binding these modules yields the state-of-the-art performance of RSRM in symbolic regression, as demonstrated by multiple sets of benchmark examples.
1 code implementation • 23 May 2023 • Rui Li, Xu Chen, Chaozhuo Li, Yanming Shen, Jianan Zhao, Yujing Wang, Weihao Han, Hao Sun, Weiwei Deng, Qi Zhang, Xing Xie
Embedding models have shown great power in knowledge graph completion (KGC) task.
no code implementations • 13 May 2023 • Han Fang, Zhifei Yang, Xianghao Zang, Chao Ban, Hao Sun
Specifically, after applying attention-based video masking to generate high-informed and low-informed masks, we propose Informed Semantics Completion to recover masked semantics information.
no code implementations • 9 May 2023 • Pu Ren, Chengping Rao, Hao Sun, Yang Liu
In this paper, we present a PINN framework for seismic wave inversion in a layered (1D) semi-infinite domain.
no code implementations • 24 Apr 2023 • Yan Zhou, Jie Guo, Hao Sun, Bin Song, Fei Richard Yu
The main idea of multimodal recommendation is the rational utilization of the item's multimodal information to improve the recommendation performance.
1 code implementation • 20 Apr 2023 • Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, Minlie Huang
To further promote the safe deployment of LLMs, we develop a Chinese LLM safety assessment benchmark.
no code implementations • 19 Apr 2023 • Bingxuan Xu, Rui Meng, Yue Chen, Xiaodong Xu, Chen Dong, Hao Sun
Building on the designed DNSC architecture, we further combine adversarial learning, a variational autoencoder, and a diffusion model to propose the Latent Diffusion DNSC (Latent-Diff DNSC) scheme to realize intelligent online de-noising.
no code implementations • 17 Mar 2023 • Yidan Zhang, Ting Zhang, Dong Chen, Yujing Wang, Qi Chen, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, Mao Yang, Qingmin Liao, Baining Guo
While generative modeling has been ubiquitous in natural language processing and computer vision, its application to image retrieval remains unexplored.
1 code implementation • 15 Mar 2023 • Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang
Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization.
no code implementations • 1 Mar 2023 • Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, DaCheng Tao
Integrating SAM with adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically for training large-scale deep neural networks, but without a theoretical guarantee, owing to the triple difficulty of analyzing the coupled perturbation step, adaptive learning rate, and momentum step.
2 code implementations • 24 Feb 2023 • Samuel Holt, Alihan Hüyük, Zhaozhi Qian, Hao Sun, Mihaela van der Schaar
Many real-world offline reinforcement learning (RL) problems involve continuous-time environments with delays.
1 code implementation • 24 Feb 2023 • Boris van Breugel, Hao Sun, Zhaozhi Qian, Mihaela van der Schaar
In this work we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
no code implementations • 18 Feb 2023 • Jiawen Deng, Hao Sun, Zhexin Zhang, Jiale Cheng, Minlie Huang
With the development of artificial intelligence, dialogue systems have been endowed with impressive chit-chat capabilities, and there is widespread interest and discussion about whether the generated content is socially beneficial.
no code implementations • 4 Feb 2023 • Nick Pears, Hang Dai, Will Smith, Hao Sun
We present a progressive 3D registration framework that is a highly-efficient variant of classical non-rigid Iterative Closest Points (N-ICP).
no code implementations • 30 Jan 2023 • Hao Sun, Nick Pears
Rather than regressing gaze direction directly from images, we show that adding a 3D shape model can: i) improve gaze estimation accuracy, ii) perform well with lower resolution inputs and iii) provide a richer understanding of the eye-region and its constituent gaze system.
no code implementations • 21 Dec 2022 • Hao Sun, Zhexin Zhang, Fei Mi, Yasheng Wang, Wei Liu, Jianwei Cui, Bin Wang, Qun Liu, Minlie Huang
In this paper, we propose a framework, MoralDial to train and evaluate moral dialogue systems.
no code implementations • 19 Dec 2022 • Jiale Cheng, Sahand Sabour, Hao Sun, Zhuang Chen, Minlie Huang
As previous studies have demonstrated that seekers' persona is an important factor for effective support, we investigate whether there are benefits to modeling such information in dialogue models for support.
1 code implementation • 10 Dec 2022 • Hao Sun, Xiao Liu, Yeyun Gong, Anlei Dong, Jian Jiao, Jingwen Lu, Yan Zhang, Daxin Jiang, Linjun Yang, Rangan Majumder, Nan Duan
Knowledge distillation is often used to transfer knowledge from a strong teacher model to a relatively weak student model.
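The mechanism this entry relies on can be sketched with a minimal temperature-scaled distillation loss (a generic illustration of knowledge distillation, not this paper's specific method; the temperature value is an assumption):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student reproduces the teacher's logits and grows as the two distributions diverge.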
no code implementations • 9 Dec 2022 • Qi Jiang, Hao Sun, Xi Zhang
LiDAR and camera are two essential sensors for 3D object detection in autonomous driving.
no code implementations • 6 Dec 2022 • R. Bailey Bond, Pu Ren, Jerome F. Hajjar, Hao Sun
Clustering analysis of sequence data continues to address many applications in engineering design, aided by the rapid growth of machine learning in applied science.
1 code implementation • 4 Dec 2022 • Zhexin Zhang, Jiale Cheng, Hao Sun, Jiawen Deng, Fei Mi, Yasheng Wang, Lifeng Shang, Minlie Huang
In order to detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourcing workers, or automatic generation to construct adversarial contexts that are likely to induce toxic generations.
1 code implementation • CVPR 2023 • Hongwei Xue, Peng Gao, Hongyang Li, Yu Qiao, Hao Sun, Houqiang Li, Jiebo Luo
However, unlike the low-level features such as pixel values, we argue the features extracted by powerful teacher models already encode rich semantic correlation across regions in an intact image. This raises one question: is reconstruction necessary in Masked Image Modeling (MIM) with a teacher model?
no code implementations • 25 Oct 2022 • Pu Ren, Chengping Rao, Su Chen, Jian-Xun Wang, Hao Sun, Yang Liu
In this paper, we present a novel physics-informed neural network (PINN) model for seismic wave modeling in a semi-infinite domain without the need for labeled data.
1 code implementation • 14 Oct 2022 • Luning Sun, Daniel Zhengyu Huang, Hao Sun, Jian-Xun Wang
The equation residuals are used to inform the spline learning in a Bayesian manner, where approximate Bayesian uncertainty calibration techniques are employed to approximate posterior distributions of the trainable parameters.
1 code implementation • 15 Sep 2022 • Hao Sun, Lei Han, Rui Yang, Xiaoteng Ma, Jian Guo, Bolei Zhou
We validate our insight on a range of RL tasks and show its improvement over baselines: (1) In offline RL, the conservative exploitation leads to improved performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to tackle the exploration-exploitation dilemma for better sample efficiency; (3) In discrete control tasks, a negative reward shifting yields an improvement over the curiosity-based exploration method.
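The core observation behind these results — a constant reward shift adds the same offset to every episode's return, so policy rankings are preserved while value estimates are biased — can be checked numerically (a generic sketch; the reward values and shift are made up):

```python
def discounted_return(rewards, gamma):
    # Backward accumulation of sum_t gamma^t * r_t.
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

rewards = [1.0, 0.0, 2.0]   # made-up episode rewards
gamma, c = 0.9, -0.5        # c < 0: a negative shift, as in the discrete-control case
shifted = [r + c for r in rewards]

# The shift adds c * (1 + gamma + ... + gamma^(T-1)) to every episode's
# return, so the ranking of policies (and hence the optimum) is unchanged.
offset = c * sum(gamma**t for t in range(len(rewards)))
assert abs(discounted_return(shifted, gamma)
           - (discounted_return(rewards, gamma) + offset)) < 1e-12
```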
1 code implementation • 12 Sep 2022 • Bing Su, Dazhao Du, Zhao Yang, Yujie Zhou, Jiangmeng Li, Anyi Rao, Hao Sun, Zhiwu Lu, Ji-Rong Wen
Although artificial intelligence (AI) has made significant progress in understanding molecules in a wide range of fields, existing models generally acquire a single cognitive ability from a single molecular modality.
no code implementations • 29 Aug 2022 • Qasim Ali, Sen Ma, Umar Farooq, Jiakuan Niu, Fen Li, Muhammad Abaidullah, Boshuai Liu, Shaokai La, Defeng Li, Zhichang Wang, Hao Sun, Yalei Cui, Yinghua Shi
In the gut microbiota analysis, meat geese supplemented with pasture demonstrated a significant reduction in microbial richness and diversity compared to IHF meat geese, demonstrating the antimicrobial, antioxidant, and anti-inflammatory ability of the AGF system.
1 code implementation • 17 Aug 2022 • Haoyu Lu, Qiongyi Zhou, Nanyi Fei, Zhiwu Lu, Mingyu Ding, Jingyuan Wen, Changde Du, Xin Zhao, Hao Sun, Huiguang He, Ji-Rong Wen
Further, from the perspective of neural encoding (based on our foundation model), we find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.
1 code implementation • 2 Aug 2022 • Pu Ren, Chengping Rao, Yang Liu, Zihan Ma, Qi Wang, Jian-Xun Wang, Hao Sun
High-fidelity simulation of complex physical systems is exorbitantly expensive and inaccessible across spatiotemporal scales.
1 code implementation • 28 Jul 2022 • Hao Sun, Hongyi Wang, Jiaqing Liu, Yen-Wei Chen, Lanfen Lin
Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states using multimodal data.
no code implementations • 27 Jul 2022 • Hao Sun, Junting Chen
This paper proposes to integrate interpolation with matrix completion to exploit both the spatial correlation and the potential low rank structure of the propagation map.
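The low-rank side of this idea can be illustrated with a hard-impute sketch (a generic iterative-SVD completion, not the paper's algorithm; the toy matrix and mask are made up):

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank=1, iters=500):
    # Iterative hard-impute: project the filled-in matrix onto rank-r via
    # truncated SVD, restore the observed entries, and repeat.
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M_obs, X_low)  # keep observed entries fixed
    return X

# A rank-1 "propagation-map"-like matrix with one unobserved entry.
M_true = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
mask = np.ones_like(M_true, dtype=bool)
mask[1, 2] = False
X = complete_low_rank(np.where(mask, M_true, 0.0), mask, rank=1)
assert abs(X[1, 2] - M_true[1, 2]) < 1e-4  # missing entry recovered
```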
no code implementations • 16 Jul 2022 • Weiqing Ren, Yuben Qu, Chao Dong, Yuqian Jing, Hao Sun, Qihui Wu, Song Guo
With the vigorous development of artificial intelligence (AI), intelligent applications based on deep neural networks (DNNs) are changing people's lifestyles and improving production efficiency.
no code implementations • 11 Jul 2022 • Hao Sun, Boris van Breugel, Jonathan Crabbe, Nabeel Seedat, Mihaela van der Schaar
Uncertainty quantification (UQ) is essential for creating trustworthy machine learning models.
1 code implementation • 6 Jun 2022 • Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Allen Sun, Weiwei Deng, Qi Zhang, Mao Yang
To this end, we propose Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates relevant document identifiers directly for a designated query.
no code implementations • 26 May 2022 • Fangzheng Sun, Yang Liu, Jian-Xun Wang, Hao Sun
The key concept is to interpret mathematical operations and system state variables by computational rules and symbols, establish symbolic reasoning of mathematical formulas via expression trees, and employ a Monte Carlo tree search (MCTS) agent to explore optimal expression trees based on measurement data.
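The expression-tree representation mentioned here can be made concrete with a tiny recursive evaluator (an illustrative sketch of the general idea, not the paper's implementation; the operator set and tree encoding are assumptions):

```python
import math

# A node is (operator, children), or a leaf holding a state variable
# or constant; MCTS would search over such trees against measurement data.
def evaluate(node, env):
    op, args = node
    if op == "var":
        return env[args]                  # symbol lookup, e.g. "x"
    if op == "const":
        return args
    if op == "+":
        return evaluate(args[0], env) + evaluate(args[1], env)
    if op == "*":
        return evaluate(args[0], env) * evaluate(args[1], env)
    if op == "sin":
        return math.sin(evaluate(args[0], env))
    raise ValueError(op)

# Tree for x * sin(x) + 2, evaluated at x = 0: 0 * sin(0) + 2 = 2.
tree = ("+", (("*", (("var", "x"), ("sin", (("var", "x"),)))), ("const", 2.0)))
assert evaluate(tree, {"x": 0.0}) == 2.0
```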
1 code implementation • 9 May 2022 • Xin-Yang Liu, Hao Sun, Min Zhu, Lu Lu, Jian-Xun Wang
A more promising way is to leverage our prior physics knowledge in scientific deep learning models, known as physics-informed deep learning (PiDL).
1 code implementation • 3 May 2022 • Lele Luan, Yang Liu, Hao Sun
Distilling interpretable physical laws from videos has led to expanded interest in the computer vision community recently thanks to the advances in deep learning, but still remains a great challenge.
2 code implementations • 1 Apr 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
We perform comprehensive explorations for the optimal conduct of knowledge distillation, which may provide useful insights for the learning of VQ based ANN index.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • 17 Mar 2022 • Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Xiaoyan Zhu, Jie Tang, Minlie Huang
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems.
1 code implementation • 16 Feb 2022 • Rui Li, Jianan Zhao, Chaozhuo Li, Di He, Yiqi Wang, Yuming Liu, Hao Sun, Senzhang Wang, Weiwei Deng, Yanming Shen, Xing Xie, Qi Zhang
The effectiveness of knowledge graph embedding (KGE) largely depends on the ability to model intrinsic relation patterns and mapping properties.
no code implementations • 13 Feb 2022 • Jianjin Zhang, Zheng Liu, Weihao Han, Shitao Xiao, Ruicheng Zheng, Yingxia Shao, Hao Sun, Hanqing Zhu, Premkumar Srinivasan, Denvy Deng, Qi Zhang, Xing Xie
On the other hand, the capability of making high-CTR retrieval is optimized by learning to discriminate user's clicked ads from the entire corpus.
1 code implementation • ICLR 2022 • Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, Chongjie Zhang
In this paper, we revisit the theoretical property of GCSL -- optimizing a lower bound of the goal reaching objective, and extend GCSL as a novel offline goal-conditioned RL algorithm.
no code implementations • ICLR 2022 • Chengping Rao, Pu Ren, Yang Liu, Hao Sun
There has been growing interest in leveraging experimental measurements to discover the underlying partial differential equations (PDEs) that govern complex physical phenomena.
1 code implementation • 16 Jan 2022 • Jiawen Deng, Jingyan Zhou, Hao Sun, Chujie Zheng, Fei Mi, Helen Meng, Minlie Huang
To this end, we propose a benchmark, COLD, for Chinese offensive language analysis, including a Chinese Offensive Language Dataset (COLDATASET) and a baseline detector (COLDETECTOR) trained on the dataset.
2 code implementations • 14 Jan 2022 • Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Yingxia Shao, Defu Lian, Chaozhuo Li, Hao Sun, Denvy Deng, Liangjie Zhang, Qi Zhang, Xing Xie
In this work, we tackle this problem with Bi-Granular Document Representation, where the lightweight sparse embeddings are indexed and standby in memory for coarse-grained candidate search, and the heavyweight dense embeddings are hosted in disk for fine-grained post verification.
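The coarse-then-fine retrieval flow described here can be sketched as follows (a generic two-stage illustration, not the paper's system; the toy corpus and embeddings are made up, and the "sparse" view is simulated by pooling dense dimensions):

```python
import numpy as np

def search_bi_granular(q_sparse, q_dense, sparse_idx, dense_idx, k_coarse=2, k=1):
    # Coarse stage: score every document with the lightweight embeddings
    # (kept in memory) and shortlist k_coarse candidates.
    cand = np.argsort(-(sparse_idx @ q_sparse))[:k_coarse]
    # Fine stage: re-score only the shortlist with the heavyweight embeddings
    # (hosted on disk in the paper's setting) and return the top k.
    fine = dense_idx[cand] @ q_dense
    return cand[np.argsort(-fine)[:k]]

# Toy corpus: 8 docs with orthogonal dense embeddings; the coarse embedding
# pools pairs of dense dimensions, so docs 0 and 4 collide at coarse granularity.
dense_idx = np.eye(8)
sparse_idx = dense_idx[:, :4] + dense_idx[:, 4:]
q_dense, q_sparse = dense_idx[4], sparse_idx[4]
hit = search_bi_granular(q_sparse, q_dense, sparse_idx, dense_idx)
assert hit[0] == 4  # coarse stage shortlists {0, 4}; fine stage disambiguates
```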
1 code implementation • 2 Jan 2022 • Hao Sun, Taiyi Wang
Although exploration is well known to play a key role in Reinforcement Learning (RL), prevailing exploration strategies for continuous control tasks in RL are mainly based on naive isotropic Gaussian noise, regardless of the causal relationship between the action space and the task, and treat all action dimensions as equally important.
1 code implementation • 27 Oct 2021 • Nanyi Fei, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu, Ruihua Song, Xin Gao, Tao Xiang, Hao Sun, Ji-Rong Wen
To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted for various downstream cognitive tasks.
no code implementations • 25 Oct 2021 • Jianan Zhao, Chaozhuo Li, Qianlong Wen, Yiqi Wang, Yuming Liu, Hao Sun, Xing Xie, Yanfang Ye
Existing graph transformer models typically adopt a fully-connected attention mechanism on the whole input graph, and thus suffer from severe scalability issues and are intractable to train in data-insufficient cases.
1 code implementation • Findings (ACL) 2022 • Hao Sun, Guangxuan Xu, Jiawen Deng, Jiale Cheng, Chujie Zheng, Hao Zhou, Nanyun Peng, Xiaoyan Zhu, Minlie Huang
We propose a taxonomy for dialogue safety specifically designed to capture unsafe behaviors in human-bot dialogue settings, with a focus on context-sensitive unsafety, which is under-explored in prior work.
no code implementations • 29 Sep 2021 • Hao Sun, Lei Han, Jian Guo, Bolei Zhou
We verify our insight on a range of tasks: (1) In offline RL, the conservative exploitation leads to improved learning performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to trade off between exploration and exploitation, thus improving learning efficiency; (3) In online RL with a discrete action space, a negative reward shifting brings an improvement over the previous curiosity-based exploration method.
no code implementations • 29 Sep 2021 • Zhihan Liu, Hao Sun, Bolei Zhou
To this end, we propose a novel meta-algorithm Self-Imitation Policy Learning through Iterative Distillation (SPLID) which relies on the concept of $\delta$-distilled policy to iteratively level up the quality of the target data and agent mimics from the relabeled target data.
no code implementations • 10 Aug 2021 • Yiqi Wang, Chaozhuo Li, Mingzheng Li, Wei Jin, Yuming Liu, Hao Sun, Xing Xie, Jiliang Tang
These methods often make recommendations based on the learned user and item embeddings.
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.
no code implementations • 9 Jul 2021 • Hao Sun, Ziping Xu, Meng Fang, Zhenghao Peng, Jiadong Guo, Bo Dai, Bolei Zhou
Safe exploration is crucial for the real-world application of reinforcement learning (RL).
2 code implementations • 26 Jun 2021 • Pu Ren, Chengping Rao, Yang Liu, Jian-Xun Wang, Hao Sun
Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines.
no code implementations • 9 Jun 2021 • Lele Luan, Yang Liu, Hao Sun
Distilling analytical models from data has the potential to advance our understanding and prediction of nonlinear dynamics.
2 code implementations • 9 Jun 2021 • Chengping Rao, Pu Ren, Qi Wang, Oral Buyukozturk, Hao Sun, Yang Liu
Modeling complex spatiotemporal dynamical systems, such as the reaction-diffusion processes, have largely relied on partial differential equations (PDEs).
2 code implementations • Findings (ACL) 2021 • Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, Minlie Huang
In this paper, we propose PsyQA, a Chinese dataset of psychological health support in the form of question-and-answer pairs.
1 code implementation • 5 May 2021 • Fangzheng Sun, Yang Liu, Hao Sun
Dynamical systems are typically governed by a set of linear/nonlinear differential equations.
no code implementations • 2 May 2021 • Chengping Rao, Hao Sun, Yang Liu
Modeling nonlinear spatiotemporal dynamical systems has primarily relied on partial differential equations (PDEs).
1 code implementation • 25 Apr 2021 • Chaozhuo Li, Bochen Pang, Yuming Liu, Hao Sun, Zheng Liu, Xing Xie, Tianqi Yang, Yanling Cui, Liangjie Zhang, Qi Zhang
Our motivation lies in incorporating the tremendous amount of unsupervised user behavior data from the historical search logs as the complementary graph to facilitate relevance modeling.
no code implementations • 4 Mar 2021 • Zhiqun Zhao, Hengyou Wang, Hao Sun, Zhihai He
In this work, we propose to develop a structure-preserving progressive low-rank image completion (SPLIC) method to remove unneeded texture details from the input images and shift the bias of deep neural networks towards global object structures and semantic cues.
no code implementations • IEEE Transactions on Image Processing 2021 • Hao Sun, Xiangtao Zheng, Xiaoqiang Lu
To explore the spatial information for HSI classification, pixels together with their adjacent pixels are usually cropped directly from hyperspectral data to form HSI cubes in CNN-based methods.
2 code implementations • 15 Jan 2021 • Jason Yue Zhu, Yanling Cui, Yuming Liu, Hao Sun, Xue Li, Markus Pelger, Tianqi Yang, Liangjie Zhang, Ruofei Zhang, Huasha Zhao
Text encoders based on C-DSSM or transformers have demonstrated strong performance in many Natural Language Processing (NLP) tasks.
no code implementations • 6 Jan 2021 • Shankha Banerjee, Fawzi Boudjema, Nabarun Chakrabarty, Hao Sun
One-loop electroweak corrections to the annihilation cross-sections of dark matter in the Higgs resonance region of the inert doublet model (IDM) are investigated.
High Energy Physics - Phenomenology • High Energy Physics - Experiment
no code implementations • 6 Jan 2021 • Shankha Banerjee, Fawzi Boudjema, Nabarun Chakrabarty, Hao Sun
We examine the relic density of the light mass dark matter region in the inert doublet model (IDM) when the dominant process is due to co-annihilation between the lightest neutral scalars of the model.
High Energy Physics - Phenomenology • High Energy Physics - Experiment
no code implementations • 6 Jan 2021 • Shankha Banerjee, Fawzi Boudjema, Nabarun Chakrabarty, Hao Sun
These are the dominant processes that enter the computation of the relic density for the low mass region of the inert doublet model (IDM) when annihilations to two on-shell vector bosons are closed.
High Energy Physics - Phenomenology • High Energy Physics - Experiment
no code implementations • 6 Jan 2021 • Shankha Banerjee, Fawzi Boudjema, Nabarun Chakrabarty, Hao Sun
The theoretical uncertainty brought by the scale dependence leads us to introduce a new criterion on the perturbativity of the IDM.
High Energy Physics - Phenomenology • High Energy Physics - Experiment
no code implementations • 1 Jan 2021 • Hao Sun, Ziping Xu, Meng Fang, Yuhang Song, Jiechao Xiong, Bo Dai, Zhengyou Zhang, Bolei Zhou
Despite the remarkable progress made by the policy gradient algorithms in reinforcement learning (RL), sub-optimal policies usually result from the local exploration property of the policy gradient update.
no code implementations • 16 Oct 2020 • Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, DaCheng Tao
By contrast, this paper proves that LfO is almost equivalent to LfD in the deterministic robot environment, and more generally even in the robot environment with bounded randomness.
no code implementations • 7 Oct 2020 • Hao Sun, Nick Pears, Hang Dai
The ear, as an important part of the human head, has received much less attention compared to the human face in the area of computer vision.
no code implementations • 31 Aug 2020 • Lele Luan, Jingwei Zheng, Yongchao Yang, Ming L. Wang, Hao Sun
This paper develops a deep learning framework based on convolutional neural networks (CNNs) that enables real-time extraction of full-field subpixel structural displacements from videos.
no code implementations • 1 Jul 2020 • Pu Ren, Xinyu Chen, Lijun Sun, Hao Sun
To address this fundamental issue, this paper presents an incremental Bayesian tensor learning method for reconstruction of spatiotemporal missing data in SHM and forecasting of structural response.
no code implementations • 14 Jun 2020 • Zhenghao Peng, Hao Sun, Bolei Zhou
Conventional Reinforcement Learning (RL) algorithms usually have one single agent learning to solve the task independently.
no code implementations • 11 Jun 2020 • Hao Sun, Ziping Xu, Yuhang Song, Meng Fang, Jiechao Xiong, Bo Dai, Bolei Zhou
However, PG algorithms rely on exploiting the value function being learned with the first-order update locally, which results in limited sample efficiency.
1 code implementation • 10 Jun 2020 • Chengping Rao, Hao Sun, Yang Liu
In this paper, we present a physics-informed neural network (PINN) with mixed-variable output to model elastodynamics problems without resorting to labeled data, in which the I/BCs are imposed in a hard manner.
1 code implementation • 8 Jun 2020 • Mingjian Chen, Xu Tan, Yi Ren, Jin Xu, Hao Sun, Sheng Zhao, Tao Qin, Tie-Yan Liu
Transformer-based text to speech (TTS) models (e.g., Transformer TTS~\cite{li2019neural}, FastSpeech~\cite{ren2019fastspeech}) have shown the advantages of training and inference efficiency over RNN-based models (e.g., Tacotron~\cite{shen2018natural}) due to their parallel computation in training and/or inference.
no code implementations • 6 Jun 2020 • Zhao Chen, Hao Sun
Specifically, an $\ell_2$ Bayesian learning method is first developed for updating the intact model and quantifying uncertainty, so as to establish a baseline for damage detection.
1 code implementation • 21 May 2020 • Hao Sun, Zhenghao Peng, Bo Dai, Jian Guo, Dahua Lin, Bolei Zhou
In problem-solving, we humans can come up with multiple novel solutions to the same problem.
1 code implementation • 5 May 2020 • Zhao Chen, Yang Liu, Hao Sun
Harnessing data to discover the underlying governing laws or equations that describe the behavior of complex physical systems can significantly advance our modeling, simulation and understanding of such systems in various science and engineering disciplines.
no code implementations • 27 Apr 2020 • Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu
While pre-training and fine-tuning, e.g., BERT~\citep{devlin2018bert}, GPT-2~\citep{radford2019language}, have achieved great success in language understanding and generation tasks, the pre-trained models are usually too big for online deployment in terms of both memory cost and inference speed, which hinders them from practical online usage.
1 code implementation • 27 Apr 2020 • Hao Sun, Xinyu Pan, Bo Dai, Dahua Lin, Bolei Zhou
Solving the Goal-Conditioned Reward Sparse (GCRS) task is a challenging reinforcement learning problem due to the sparsity of reward signals.
no code implementations • CVPR 2020 • Hao Sun, Zhiqun Zhao, Zhihai He
Based on this unique property, we develop a new approach, called reciprocal learning, for human trajectory prediction.
no code implementations • 19 Mar 2020 • Xia Zhang, Hao Sun, Xuanzhu Jin, Moses Olabhele Esangbedo
We set up a new fuzzy binary relation on the consumption set to evaluate the fuzzy preferences.
Computer Science and Game Theory
1 code implementation • 24 Feb 2020 • Chengping Rao, Hao Sun, Yang Liu
Physics-informed deep learning has drawn tremendous interest in recent years for solving computational physics problems, whose basic concept is to embed physical laws that constrain/inform neural networks, requiring less data to train a reliable model.
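The basic concept — penalizing a governing-equation residual alongside the data misfit — can be sketched on a toy ODE (a generic physics-informed loss illustration, not this paper's model; the ansatz, equation, collocation points, and learning rate are all made up):

```python
import numpy as np

# Toy setting: governing law du/dt + u = 0, one data point u(0) = 1,
# and a linear ansatz u(t) = w0 + w1 * t standing in for a network.
def pinn_loss(w, t_col):
    w0, w1 = w
    data = (w0 - 1.0) ** 2                 # misfit against u(0) = 1
    residual = w1 + (w0 + w1 * t_col)      # du/dt + u at collocation points
    return data + np.mean(residual ** 2)

def num_grad(w, t_col, eps=1e-6):
    # Numerical gradient keeps the sketch dependency-free.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (pinn_loss(w + e, t_col) - pinn_loss(w - e, t_col)) / (2 * eps)
    return g

t_col = np.linspace(0.0, 1.0, 10)          # collocation points, no labels needed
w = np.zeros(2)
for _ in range(500):
    w -= 0.1 * num_grad(w, t_col)          # plain gradient descent
assert pinn_loss(w, t_col) < pinn_loss(np.zeros(2), t_col)
```

The physics term lets the fit trade off agreement with the single measurement against consistency with the known law, which is what allows such models to train with little labeled data.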
1 code implementation • NeurIPS 2019 • Hao Sun, Zhizhong Li, Xiaotong Liu, Dahua Lin, Bolei Zhou
This approach learns from Hindsight Inverse Dynamics based on Hindsight Experience Replay, enabling the learning process in a self-imitated manner and thus can be trained with supervised learning.
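The hindsight-relabeling step that makes such self-imitated supervised learning possible can be sketched in a few lines (a generic HER-style illustration, not the paper's exact Hindsight Inverse Dynamics; the toy trajectory is made up):

```python
# A failed trajectory becomes a supervised example by pretending the state
# actually reached was the intended goal all along.
def relabel(trajectory):
    # trajectory: list of (state, action) pairs; the final state becomes the goal
    goal = trajectory[-1][0]
    return [(state, goal, action) for state, action in trajectory]

traj = [(0, "right"), (1, "right"), (2, "stop")]
data = relabel(traj)
assert data[0] == (0, 2, "right")  # from state 0, to reach goal 2, act "right"
```

Each relabeled tuple (state, goal, action) can then be fed to an ordinary supervised learner, which is what enables the self-imitated training described above.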
no code implementations • 30 Oct 2019 • Hao Sun, Jiadong Guo, Edward J. Kim, Robert J. Brunner
The increasing amount of data in astronomy provides great challenges for machine learning research.
no code implementations • 19 Oct 2019 • Henry H. Yu, Xue Feng, Hao Sun, Ziwen Wang
Convolutional neural networks (CNNs) have been successfully applied to medical image classification, segmentation, and related tasks.
no code implementations • 8 Oct 2019 • Henry H. Yu, Jiang Liu, Hao Sun, Ziwen Wang, Haotian Zhang
Image pairing is an important research task in the field of computer vision.
no code implementations • 25 Sep 2019 • Hao Sun, Bo Dai, Jiankai Sun, Zhenghao Peng, Guodong Xu, Dahua Lin, Bolei Zhou
In this work we model the social influence into the scheme of reinforcement learning, enabling the agents to learn both from the environment and from their peers.
no code implementations • 4 Sep 2019 • Yang Li, Jianhe Yuan, Zhiqun Zhao, Hao Sun, Zhihai He
In this work, we develop a joint sample discovery and iterative model evolution method for semi-supervised learning on very small labeled training sets.
no code implementations • 15 Aug 2019 • Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Shu-Tao Xia
In addition, label regularization techniques such as label smoothing and label disturbance have also been proposed with the motivation of adding a stochastic perturbation to labels.
no code implementations • 30 Apr 2019 • Xiaolong Ma, Geng Yuan, Sheng Lin, Zhengang Li, Hao Sun, Yanzhi Wang
The state-of-the-art DNN structures involve high computation and a great demand for memory storage, which pose an intensive challenge on DNN framework resources.
1 code implementation • 24 Apr 2019 • Hao Sun, Xianxu Zeng, Tao Xu, Gang Peng, Yutao Ma
In the ten-fold cross-validation process, the CADx approach, HIENet, achieved a 76.91 $\pm$ 1.17% (mean $\pm$ s.d.) classification accuracy for four classes of endometrial tissue, namely normal endometrium, endometrial polyp, endometrial hyperplasia, and endometrial adenocarcinoma.
no code implementations • 6 Apr 2019 • Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, Tie-Yan Liu
Recently, G2P conversion has been viewed as a sequence-to-sequence task and modeled by RNN- or CNN-based encoder-decoder frameworks.
Ranked #1 on Text-To-Speech Synthesis on CMUDict 0.7b
no code implementations • 4 Apr 2019 • Tengfei Zhang, Yue Zhang, Xian Sun, Hao Sun, Menglong Yan, Xue Yang, Kun Fu
A two-stage detector for OSCD is introduced to compare the extracted query and target features with a learnable metric to approximate the optimized non-linear conditional probability.
no code implementations • 30 Jan 2019 • Xue Li, Zhipeng Luo, Hao Sun, Jianjin Zhang, Weihao Han, Xianqi Chu, Liangjie Zhang, Qi Zhang
The proposed training framework targets on mitigating both issues, by treating the stronger but undeployable models as annotators, and learning a deployable model from both human provided relevance labels and weakly annotated search log data.
2 code implementations • 12 Jul 2018 • Paul Muntean, Martin Monperrus, Hao Sun, Jens Grossklags, Claudia Eckert
Thus, in this paper, we propose a novel technique to provide automatic repairs of integer overflows in C source code.
Software Engineering
3 code implementations • 13 Jun 2018 • Xue Yang, Hao Sun, Xian Sun, Menglong Yan, Zhi Guo, Kun Fu
The complexity of application scenarios, the redundancy of detection region, and the difficulty of dense ship detection are all the main obstacles that limit the successful operation of traditional methods in ship detection.
6 code implementations • 12 Jun 2018 • Xue Yang, Hao Sun, Kun Fu, Jirui Yang, Xian Sun, Menglong Yan, Zhi Guo
Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall.
no code implementations • ICLR 2019 • Liangbo He, Hao Sun
But how can we train the neural net model with different input sizes?
no code implementations • 25 Apr 2018 • Zhi Zhang, Guanghan Ning, Yigang Cen, Yang Li, Zhiqun Zhao, Hao Sun, Zhihai He
The inference structures and computational complexity of existing deep neural networks, once trained, are fixed and remain the same for all test images.
no code implementations • 17 Mar 2017 • Sheng Zou, Hao Sun, Alina Zare
A semi-supervised Partial Membership Latent Dirichlet Allocation approach is developed for hyperspectral unmixing and endmember estimation while accounting for spectral variability and spatial information.
no code implementations • 8 Mar 2017 • Tianran Hu, Han Guo, Hao Sun, Thuy-vy Thi Nguyen, Jiebo Luo
Second, from a perspective of message recipients, we further study the sentiment effects of emojis, as well as their duplications, on verbal messages.
no code implementations • 6 Jan 2017 • Hao Sun, Alina Zare
A map-guided superpixel segmentation method for hyperspectral imagery is developed and introduced.
no code implementations • 20 Oct 2016 • Hao Sun
We prove that W([n]) can be written as the sum of n!
Combinatorics