Search Results for author: Hao Tian

Found 55 papers, 27 papers with code

Smooth and Stepwise Self-Distillation for Object Detection

no code implementations • 9 Mar 2023 • Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar

Distilling the structured information captured in feature maps has contributed to improved results for object detection tasks, but requires careful selection of baseline architectures and substantial pre-training.

object-detection Object Detection

ERNIE-Music: Text-to-Waveform Music Generation with Diffusion Models

no code implementations • 9 Feb 2023 • Pengfei Zhu, Chao Pang, Shuohuan Wang, Yekun Chai, Yu Sun, Hao Tian, Hua Wu

In recent years, there has been an increased popularity in image and speech generation using diffusion models.

Music Generation

PASSerRank: Prediction of Allosteric Sites with Learning to Rank

1 code implementation • 2 Feb 2023 • Hao Tian, Sian Xiao, Xi Jiang, Peng Tao

Allostery plays a crucial role in regulating protein activity, making it a highly sought-after target in drug development.

Drug Discovery Learning-To-Rank

ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

1 code implementation • 9 Jan 2023 • Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu

Experimental results demonstrate that our method yields a student with much better generalization, significantly outperforms existing baselines, and establishes a new state-of-the-art result on in-domain, out-domain, and low-resource datasets in the setting of task-agnostic distillation.

Knowledge Distillation Language Modelling +1

Attention-Aware Anime Line Drawing Colorization

1 code implementation • 21 Dec 2022 • Yu Cao, Hao Tian, P. Y. Mok

Automatic colorization of anime line drawing has attracted much attention in recent years since it can substantially benefit the animation industry.

Colorization Semantic correspondence

ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages

no code implementations • 13 Dec 2022 • Yekun Chai, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu

Extensive results show that ERNIE-Code outperforms previous multilingual LLMs for PL or NL across a wide range of end tasks of code intelligence, including multilingual code-to-text, text-to-code, code-to-code, and text-to-text generation.

Code Summarization Language Modelling +2

BEVFormer v2: Adapting Modern Image Backbones to Bird's-Eye-View Recognition via Perspective Supervision

2 code implementations • 18 Nov 2022 • Chenyu Yang, Yuntao Chen, Hao Tian, Chenxin Tao, Xizhou Zhu, Zhaoxiang Zhang, Gao Huang, Hongyang Li, Yu Qiao, Lewei Lu, Jie Zhou, Jifeng Dai

The proposed method is verified with a wide spectrum of traditional and modern image backbones and achieves new SoTA results on the large-scale nuScenes dataset.

3D Object Detection

Arbitrary Style Guidance for Enhanced Diffusion-Based Text-to-Image Generation

no code implementations • 14 Nov 2022 • Zhihong Pan, Xin Zhou, Hao Tian

Diffusion-based text-to-image generation models like GLIDE and DALLE-2 have gained wide success recently for their superior performance in turning complex text inputs into images of high quality and wide diversity.

Style Transfer Text-to-Image Generation

Extreme Generative Image Compression by Learning Text Embedding from Diffusion Models

no code implementations • 14 Nov 2022 • Zhihong Pan, Xin Zhou, Hao Tian

With the recent success of diffusion models for text-to-image generation, we propose a generative image compression method that demonstrates the potential of saving an image as a short text embedding, which can in turn be used to generate high-fidelity images that are perceptually equivalent to the original.

Image Compression Text-to-Image Generation

ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation

no code implementations • 9 Nov 2022 • Bin Shan, Yaqian Han, Weichong Yin, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance.

Contrastive Learning Language Modelling +5

Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards

no code implementations • 21 Oct 2022 • Yekun Chai, Shuohuan Wang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Derivative-free prompt learning has emerged as a lightweight alternative to prompt tuning, which only requires model inference to optimize the prompts.

ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training

1 code implementation • 30 Sep 2022 • Bin Shan, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

They attempt to learn cross-modal representations using contrastive learning on image-text pairs; however, the inter-modal correlations they build rely on only a single view of each modality.

Contrastive Learning Cross-Modal Retrieval +4
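ERNIE-ViL 2.0's multi-view objective builds on the standard single-view image-text contrastive loss (InfoNCE over a batch similarity matrix). As a rough, dependency-free sketch of that base objective with toy 2-D embeddings (illustrative only, not the paper's implementation):

```python
import math

def info_nce_loss(img_embs, txt_embs, temperature=0.1):
    """Symmetric InfoNCE over an image-text similarity matrix: matched pairs
    share an index; every other pair in the batch acts as a negative."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    n = len(img_embs)
    sims = [[dot(i, t) / temperature for t in txt_embs] for i in img_embs]

    def row_nll(row, target):
        denom = sum(math.exp(s) for s in row)
        return -math.log(math.exp(row[target]) / denom)

    # Average the image-to-text and text-to-image directions.
    i2t = sum(row_nll(sims[k], k) for k in range(n)) / n
    t2i = sum(row_nll([sims[r][k] for r in range(n)], k) for k in range(n)) / n
    return (i2t + t2i) / 2

# Toy batch: matched image/text embeddings are aligned, mismatched orthogonal.
imgs = [[1.0, 0.0], [0.0, 1.0]]
txts = [[1.0, 0.0], [0.0, 1.0]]
loss = info_nce_loss(imgs, txts)  # near zero, since pairs are well aligned
```

The paper's contribution is to build such contrastive correlations from multiple views per modality rather than the single view sketched here.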

Graph Kernels Based on Multi-scale Graph Embeddings

1 code implementation • 2 Jun 2022 • Wei Ye, Hao Tian, Qijun Chen

To mitigate the two challenges, we propose a novel graph kernel called the Multi-scale Path-pattern Graph kernel (MPG), at the heart of which is the multi-scale path-pattern node feature map.

ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

no code implementations • 18 May 2022 • Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang

Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to a vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.

Knowledge Distillation Open-Domain Question Answering +2
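The two retrieval architectures being bridged here can be summarized by their scoring functions. The sketch below is illustrative (toy token vectors, not the paper's code): a dual-encoder compresses each side to one pooled vector and scores with a single dot product, while ColBERT-style late interaction keeps per-token vectors and sums each query token's maximum similarity over the document tokens (MaxSim):

```python
def dual_encoder_score(query_vec, doc_vec):
    """Vanilla dual-encoder: one pooled vector per side, a single dot product."""
    return sum(q * d for q, d in zip(query_vec, doc_vec))

def late_interaction_score(query_token_vecs, doc_token_vecs):
    """ColBERT-style late interaction: for each query token, take the maximum
    similarity over all document tokens, then sum over query tokens (MaxSim)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return sum(max(dot(q, d) for d in doc_token_vecs) for q in query_token_vecs)

# Two query tokens, three document tokens (toy 2-D vectors).
score = late_interaction_score([[1.0, 0.0], [0.0, 1.0]],
                               [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])  # 2.0
```

ERNIE-Search's self on-the-fly distillation trains the cheaper dual-encoder to reproduce the richer late-interaction scores within a single model.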

LAST: Latent Space Assisted Adaptive Sampling for Protein Trajectories

1 code implementation • 27 Apr 2022 • Hao Tian, Xi Jiang, Sian Xiao, Hunter La Force, Eric C. Larson, Peng Tao

Based on this characteristic, we proposed a new adaptive sampling method, latent space assisted adaptive sampling for protein trajectories (LAST), to accelerate the exploration of protein conformational space.

Accurate ADMET Prediction with XGBoost

1 code implementation • 15 Apr 2022 • Hao Tian, Rajas Ketkar, Peng Tao

The absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties are important in drug discovery as they define efficacy and safety.

Drug Discovery

ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

no code implementations • 23 Mar 2022 • Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.

Sparse Learning text-classification +1

Indicative Image Retrieval: Turning Blackbox Learning into Grey

no code implementations • 28 Jan 2022 • Xulu Zhang, Zhenqun Yang, Hao Tian, Qing Li, XiaoYong Wei

In many applications, we need the matching evidence to be indicated rather than just have the ranked list (e.g., the locations of the target proteins/cells/lesions in medical images).

Image Retrieval Representation Learning +1

ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation

2 code implementations • 31 Dec 2021 • Han Zhang, Weichong Yin, Yewei Fang, Lanxin Li, Boqiang Duan, Zhihua Wu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

To explore the landscape of large-scale pre-training for bidirectional text-image generation, we train a 10-billion parameter ERNIE-ViLG model on a large-scale dataset of 145 million (Chinese) image-text pairs which achieves state-of-the-art performance for both text-to-image and image-to-text tasks, obtaining an FID of 7.9 on MS-COCO for text-to-image synthesis and best results on COCO-CN and AIC-ICC for image captioning.

Image Captioning Quantization +2

ERNIE-SPARSE: Robust Efficient Transformer Through Hierarchically Unifying Isolated Information

no code implementations • 29 Sep 2021 • Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang

The first factor is information bottleneck sensitivity, which is caused by the key feature of the Sparse Transformer: only a small number of global tokens can attend to all other tokens.

text-classification Text Classification

Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks

no code implementations • 29 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Yang Cao, Yu Kang, Haifeng Wang

While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly interested in the gaps between ANNs and natural neural networks (NNNs).


Reinforcement Learning with Evolutionary Trajectory Generator: A General Approach for Quadrupedal Locomotion

1 code implementation • 14 Sep 2021 • Haojie Shi, Bo Zhou, Hongsheng Zeng, Fan Wang, Yueqiang Dong, Jiangyong Li, Kang Wang, Hao Tian, Max Q.-H. Meng

However, due to the complex nonlinear dynamics in quadrupedal robots and reward sparsity, it is still difficult for RL to learn effective gaits from scratch, especially in challenging tasks such as walking over the balance beam.

reinforcement-learning Reinforcement Learning (RL)

ADER: Adapting between Exploration and Robustness for Actor-Critic Methods

no code implementations • 8 Sep 2021 • Bo Zhou, Kejiao Li, Hongsheng Zeng, Fan Wang, Hao Tian

Combining off-policy reinforcement learning methods with function approximators such as neural networks has been found to lead to overestimation of the value function and sub-optimal solutions.

Continuous Control

Evolving Decomposed Plasticity Rules for Information-Bottlenecked Meta-Learning

2 code implementations • 8 Sep 2021 • Fan Wang, Hao Tian, Haoyi Xiong, Hua Wu, Jie Fu, Yang Cao, Yu Kang, Haifeng Wang

In contrast, biological neural networks (BNNs) can adapt to various new tasks by continually updating the neural connections based on the inputs, which is aligned with the paradigm of learning effective learning rules in addition to static parameters, e.g., meta-learning.

Memorization Meta-Learning

ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression

1 code implementation • 4 Jun 2021 • Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and a latent distillation loss.

Knowledge Distillation

The Flare and Warp of the Young Stellar Disk traced with LAMOST DR5 OB-type stars

no code implementations • 1 Feb 2021 • Yang Yu, Hai-Feng Wang, Wen-Yuan Cui, Lin-Lin Li, Chao Liu, Bo Zhang, Hao Tian, Zhen-Yan Huo, Jie Ju, Zhi-Cun Liu, Fang Wen, Shuai Feng

We present an analysis of the spatial density structure of the outer disk from 8$-$14\,kpc using 13534 LAMOST DR5 OB-type stars, and observe similar flaring on the north and south sides of the disk, implying that the flaring structure is symmetrical about the Galactic plane; the scale height at different Galactocentric distances ranges from 0.14 to 0.5\,kpc.

Astrophysics of Galaxies

Exploring the Galactic Anticenter substructure with LAMOST & Gaia DR2

no code implementations • 7 Jan 2021 • Jing Li, Xiang-Xiang Xue, Chao Liu, Bo Zhang, Hans-Walter Rix, Jeffrey L. Carlin, Chengqun Yang, Rene A. Mendez, Jing Zhong, Hao Tian, Lan Zhang, Yan Xu, Yaqian Wu, Gang Zhao, Ruixiang Chang

In [$\alpha$/M] vs. [M/H] space they are more metal-poor than typical thin-disk stars, with [$\alpha$/M] lower than the thick disk.

Astrophysics of Galaxies

ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora

2 code implementations • EMNLP 2021 • Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance.


Unsupervised Object Detection with LiDAR Clues

no code implementations • CVPR 2021 • Hao Tian, Yuntao Chen, Jifeng Dai, Zhaoxiang Zhang, Xizhou Zhu

We further identify another major issue, seldom noticed by the community, that the long-tailed and open-ended (sub-)category distribution should be accommodated.

object-detection Object Detection

SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis

6 code implementations • ACL 2020 • Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng Wang, Feng Wu

In particular, the prediction of aspect-sentiment pairs is converted into multi-label classification, aiming to capture the dependency between words in a pair.

Multi-Label Classification Sentiment Analysis
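The conversion described above amounts to giving the model one independent sigmoid output per (aspect, sentiment) pair, so several pairs can be predicted for one sentence. A toy sketch with a hypothetical four-label space (the labels and logits below are made up for illustration, not SKEP's actual label set):

```python
import math

# Hypothetical label space: one label per (aspect, sentiment) pair.
LABELS = [("food", "positive"), ("food", "negative"),
          ("service", "positive"), ("service", "negative")]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_pairs(logits, threshold=0.5):
    """Multi-label decoding: emit every pair whose independent sigmoid
    probability clears the threshold, so pairs can co-occur freely."""
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# Toy logits for a sentence like "great food, terrible service".
pairs = predict_pairs([2.3, -1.8, -2.0, 1.5])
```

Unlike single-label softmax classification, this decoding can return zero, one, or many aspect-sentiment pairs per sentence.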

ivis Dimensionality Reduction Framework for Biomacromolecular Simulations

1 code implementation • 22 Apr 2020 • Hao Tian, Peng Tao

Molecular dynamics (MD) simulations have been widely applied to study macromolecules including proteins.

Dimensionality Reduction

ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation

4 code implementations • 26 Jan 2020 • Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Current pre-training works in natural language generation pay little attention to the problem of exposure bias on downstream tasks.

Ranked #1 on Question Generation on SQuAD1.1 (using extra training data)

Abstractive Text Summarization Dialogue Generation +3

Efficient and Robust Reinforcement Learning with Uncertainty-based Value Expansion

no code implementations • 10 Dec 2019 • Bo Zhou, Hongsheng Zeng, Fan Wang, Yunxiang Li, Hao Tian

By integrating dynamics models into model-free reinforcement learning (RL) methods, model-based value expansion (MVE) algorithms have shown a significant advantage in sample efficiency as well as value estimation.

reinforcement-learning Reinforcement Learning (RL)
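Model-based value expansion, as mentioned above, improves sample efficiency by unrolling a learned dynamics model for k steps and bootstrapping the tail with the value function. A minimal sketch of the k-step target, assuming a model rollout has already produced the reward sequence (illustrative only, not the paper's uncertainty-based variant):

```python
def mve_target(rewards, terminal_value, gamma=0.99):
    """k-step model-based value expansion target:
    sum_{i<k} gamma^i * r_i + gamma^k * V(s_k), where the k rewards come from
    unrolling the learned dynamics model and V(s_k) bootstraps the tail."""
    k = len(rewards)
    expanded = sum((gamma ** i) * r for i, r in enumerate(rewards))
    return expanded + (gamma ** k) * terminal_value

# With gamma=0.5: 1.0 + 0.5*1.0 + 0.25*10.0 = 4.0
target = mve_target([1.0, 1.0], 10.0, gamma=0.5)
```

With k = 0 the target degrades gracefully to the ordinary model-free bootstrap V(s_0), which is what makes the expansion a drop-in replacement for the one-step TD target.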

Learning to Recommend via Meta Parameter Partition

no code implementations • 4 Dec 2019 • Liang Zhao, Yang Wang, Daxiang Dong, Hao Tian

The fixed part, capturing user invariant features, is shared by all users and is learned during offline meta learning stage.


MBCAL: Sample Efficient and Variance Reduced Reinforcement Learning for Recommender Systems

no code implementations • 6 Nov 2019 • Fan Wang, Xiaomin Fang, Lihang Liu, Hao Tian, Zhiming Peng

The proposed method takes advantage of the characteristics of recommender systems and draws ideas from the model-based reinforcement learning method for higher sample efficiency.

Model-based Reinforcement Learning Recommendation Systems +2

Risk Averse Value Expansion for Sample Efficient and Robust Policy Learning

no code implementations • 25 Sep 2019 • Bo Zhou, Fan Wang, Hongsheng Zeng, Hao Tian

A promising direction is to combine model-based reinforcement learning with model-free reinforcement learning, such as model-based value expansion (MVE).

Model-based Reinforcement Learning reinforcement-learning +1

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

3 code implementations • 29 Jul 2019 • Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, Haifeng Wang

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.

Chinese Named Entity Recognition Chinese Reading Comprehension +8

Sequential Evaluation and Generation Framework for Combinatorial Recommender System

1 code implementation • 1 Feb 2019 • Fan Wang, Xiaomin Fang, Lihang Liu, Yaxue Chen, Jiucheng Tao, Zhiming Peng, Cihang Jin, Hao Tian

In one part of this framework, an evaluation model is trained to estimate the expected overall utility by fully considering the user and item information and the correlations among the co-exposed items.

Recommendation Systems

Transferring Grasp Configurations using Active Learning and Local Replanning

no code implementations • 22 Jul 2018 • Hao Tian, Changbo Wang, Dinesh Manocha, Xin-Yu Zhang

We compute a grasp space for each part of the example object using active learning.

