no code implementations • EMNLP 2021 • Zheng Li, Danqing Zhang, Tianyu Cao, Ying Wei, Yiwei Song, Bing Yin
In this work, we explore multilingual sequence labeling with minimal supervision using a single unified model for multiple languages.
no code implementations • EMNLP 2020 • Zheng Li, Mukul Kumar, William Headden, Bing Yin, Ying Wei, Yu Zhang, Qiang Yang
The recent emergence of multilingual pre-trained language models (mPLMs) has enabled breakthroughs on various downstream cross-lingual transfer (CLT) tasks.
1 code implementation • 5 Jun 2023 • Zhaoyi Li, Ying Wei, Defu Lian
Despite the rising prevalence of neural sequence models, recent empirical evidence suggests their deficiency in compositional generalization.
1 code implementation • CVPR 2023 • Weixia Zhang, Guangtao Zhai, Ying Wei, Xiaokang Yang, Kede Ma
We aim at advancing blind image quality assessment (BIQA), which predicts the human perception of image quality without any reference information.
1 code implementation • 21 Mar 2023 • Shuailei Ma, Yuefeng Wang, Ying Wei, Peihao Chen, Zhixiang Ye, Jiaqi Fan, Enming Zhang, Thomas H. Li
We propose leveraging the VL as the "Brain" of the open-world detector by simply generating unknown labels.
1 code implementation • 8 Jan 2023 • Shuailei Ma, Yuefeng Wang, Shanze Wang, Ying Wei
HSAM and TAM semantically align and merge the extracted features and query embeddings, in turn, from the hierarchical spatial and task perspectives.
Ranked #2 on Human-Object Interaction Detection on HICO-DET
1 code implementation • CVPR 2023 • Shuailei Ma, Yuefeng Wang, Jiaqi Fan, Ying Wei, Thomas H. Li, Hongli Liu, Fanbing Lv
Open-world object detection (OWOD), as a more general and challenging goal, requires the model trained from data on known objects to detect both known and unknown objects and incrementally learn to identify these unknown objects.
no code implementations • 12 Dec 2022 • Huichen Zhu, Yifei Sun, Ying Wei
We propose a variable importance decomposition to measure the impact of a variable on the treatment effect function.
no code implementations • 16 Nov 2022 • Juan Zha, Zheng Li, Ying Wei, Yu Zhang
However, most prior works assume that all the tasks are sampled from a single data source, which cannot adapt to real-world scenarios where tasks are heterogeneous and lie in different distributions.
no code implementations • 12 Sep 2022 • Chengliang Tang, Nathan Lenssen, Ying Wei, Tian Zheng
To overcome this fundamental issue, we propose Wasserstein Distributional Learning (WDL), a flexible density-on-scalar regression modeling framework that starts with the Wasserstein distance $W_2$ as a proper metric for the space of density outcomes.
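For one-dimensional distributions, $W_2$ has a convenient closed form in terms of quantile functions, $W_2^2(u, v) = \int_0^1 (Q_u(t) - Q_v(t))^2\, dt$, which makes the metric behind WDL easy to illustrate. The sketch below (the function name and grid size are illustrative choices, not the paper's implementation) estimates $W_2$ between two empirical samples:

```python
import numpy as np

def w2_distance(u_samples, v_samples, n_grid=1000):
    """Approximate the 2-Wasserstein distance between two 1-D empirical
    distributions via their quantile functions on a midpoint grid."""
    t = (np.arange(n_grid) + 0.5) / n_grid   # midpoints of (0, 1)
    qu = np.quantile(u_samples, t)
    qv = np.quantile(v_samples, t)
    return np.sqrt(np.mean((qu - qv) ** 2))

rng = np.random.default_rng(0)
u = rng.normal(0.0, 1.0, size=5000)
v = rng.normal(2.0, 1.0, size=5000)
# For two Gaussians with equal variance, W_2 equals the gap between
# their means, so the estimate here should be close to 2.
d = w2_distance(u, v)
```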
no code implementations • 9 Jun 2022 • Yichen Wu, Long-Kai Huang, Ying Wei
The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers meta-testing tasks.
no code implementations • 28 Apr 2022 • Huadong Li, Yuefeng Wang, Ying Wei, Lin Wang, Li Ge
Finally, the distance between vehicle appearances is presented by the discriminative region features and multi-view features together.
no code implementations • 27 Apr 2022 • Gangwei Jiang, Shiyao Wang, Tiezheng Ge, Yuning Jiang, Ying Wei, Defu Lian
The synthetic training images with erasure ground-truth are then fed to train a coarse-to-fine erasing network.
no code implementations • NeurIPS 2021 • Huaxiu Yao, Ying Wei, Long-Kai Huang, Ding Xue, Junzhou Huang, Zhenhui (Jessie) Li
More recently, there has been a surge of interest in employing machine learning approaches to expedite the drug discovery process where virtual screening for hit discovery and ADMET prediction for lead optimization play essential roles.
1 code implementation • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler that decides which meta-training tasks to use next by predicting each candidate task's probability of being sampled, and we train the scheduler to optimize the meta-model's generalization to unseen tasks.
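The sampling step of such a scheduler can be sketched as follows: a scheduler network scores each candidate task, the scores are turned into a distribution, and the next batch of meta-training tasks is drawn from it. The scheduler itself is stubbed out here with fixed scores; names and the sampling details are illustrative, not ATS's exact design.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def schedule_tasks(task_scores, k, rng):
    """Turn scheduler scores into sampling probabilities and draw k
    distinct meta-training tasks according to them."""
    probs = softmax(task_scores)
    chosen = rng.choice(len(task_scores), size=k, replace=False, p=probs)
    return chosen, probs

rng = np.random.default_rng(0)
scores = np.array([2.0, 0.5, 0.1, 1.5])   # hypothetical scheduler outputs
chosen, probs = schedule_tasks(scores, k=2, rng=rng)
```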
no code implementations • 29 Sep 2021 • Yinjie Jiang, Zhengyu Chen, Luotian Yuan, Ying Wei, Kun Kuang, Xinhai Ye, Zhihua Wang, Fei Wu
Meta-learning has emerged as a potent paradigm for quick learning of few-shot tasks, by leveraging the meta-knowledge learned from meta-training tasks.
no code implementations • 8 Sep 2021 • Geng-Xin Xu, Chen Liu, Jun Liu, Zhongxiang Ding, Feng Shi, Man Guo, Wei Zhao, Xiaoming Li, Ying Wei, Yaozong Gao, Chuan-Xian Ren, Dinggang Shen
Particularly, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) in a hypersphere manifold.
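The core of prototype alignment on a hypersphere can be illustrated in a few lines: project features and class prototypes onto the unit sphere and match each sample to its nearest prototype by cosine similarity. This is a generic sketch of that geometric step, with hypothetical names, not the paper's domain translator.

```python
import numpy as np

def assign_to_prototypes(features, prototypes):
    """L2-normalize features and class prototypes onto the unit
    hypersphere, then assign each sample to the prototype with the
    highest cosine similarity."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T                 # cosine similarities on the sphere
    return sims.argmax(axis=1)

feats = np.array([[3.0, 0.1], [0.2, 2.0]])
protos = np.array([[1.0, 0.0], [0.0, 1.0]])   # one prototype per class
print(assign_to_prototypes(feats, protos))     # → [0 1]
```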
no code implementations • 17 Jun 2021 • Long-Kai Huang, Ying Wei, Yu Rong, Qiang Yang, Junzhou Huang
Transferability estimation has become an essential tool for selecting a pre-trained model, and the layers within it, for transfer learning, so as to maximize performance on a target task and prevent negative transfer.
no code implementations • 7 Feb 2021 • Zekun Li, Wei Zhao, Feng Shi, Lei Qi, Xingzhi Xie, Ying Wei, Zhongxiang Ding, Yang Gao, Shangjie Wu, Jun Liu, Yinghuan Shi, Dinggang Shen
How to quickly and accurately assess the severity level of COVID-19 is an essential problem when millions of people are suffering from the pandemic around the world.
1 code implementation • NeurIPS 2020 • Sifan Wu, Xi Xiao, Qianggang Ding, Peilin Zhao, Ying Wei, Junzhou Huang
Specifically, AST adopts a Sparse Transformer as the generator to learn a sparse attention map for time series forecasting, and uses a discriminator to improve the prediction performance from sequence level.
Multivariate Time Series Forecasting
Probabilistic Time Series Forecasting
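Sparse alternatives to softmax are what make such attention maps sparse: unlike softmax, they assign exact zeros to irrelevant positions. The sketch below implements sparsemax (Martins & Astudillo, 2016), the simplest such mapping, purely as an illustration of sparse attention weights; it is not AST's exact mechanism.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of a logit vector onto the
    probability simplex. Low-scoring entries receive exactly zero
    weight, yielding a sparse attention distribution."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # which entries stay active
    k_z = k[support][-1]
    tau = (cumsum[k_z - 1] - 1.0) / k_z      # threshold for the support
    return np.maximum(z - tau, 0.0)

attn = sparsemax(np.array([2.0, 1.0, -1.0]))
# attn sums to 1 and puts exactly zero weight on the weakest logit,
# whereas softmax would give it a small positive weight.
```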
1 code implementation • 26 Jul 2020 • Huaxiu Yao, Long-Kai Huang, Linjun Zhang, Ying Wei, Li Tian, James Zou, Junzhou Huang, Zhenhui Li
Moreover, both MetaMix and Channel Shuffle outperform state-of-the-art results by a large margin across many datasets and are compatible with existing meta-learning algorithms.
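The mixup idea underlying MetaMix can be sketched in a few lines: interpolate pairs of examples and their labels with a Beta-sampled coefficient to densify a task's data. The pairing of support and query sets below, and all names, are illustrative assumptions rather than the paper's precise recipe.

```python
import numpy as np

def metamix_batch(x_support, y_support, x_query, y_query, alpha, rng):
    """Mixup-style augmentation in the spirit of MetaMix: convexly
    combine examples (and labels) with lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    x_mixed = lam * x_support + (1 - lam) * x_query
    y_mixed = lam * y_support + (1 - lam) * y_query
    return x_mixed, y_mixed, lam

rng = np.random.default_rng(0)
xs = np.array([[0.0, 0.0]]); ys = np.array([0.0])
xq = np.array([[1.0, 1.0]]); yq = np.array([1.0])
xm, ym, lam = metamix_batch(xs, ys, xq, yq, alpha=2.0, rng=rng)
```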
1 code implementation • 5 Jul 2020 • Yifan Zhang, Ying Wei, Qingyao Wu, Peilin Zhao, Shuaicheng Niu, Junzhou Huang, Mingkui Tan
Deep learning based medical image diagnosis has shown great potential in clinical medicine.
no code implementations • 4 Jul 2020 • Yue Sun, Kun Gao, Zhengwang Wu, Zhihao Lei, Ying Wei, Jun Ma, Xiaoping Yang, Xue Feng, Li Zhao, Trung Le Phan, Jitae Shin, Tao Zhong, Yu Zhang, Lequan Yu, Caizi Li, Ramesh Basnet, M. Omair Ahmad, M. N. S. Swamy, Wenao Ma, Qi Dou, Toan Duc Bui, Camilo Bermudez Noguera, Bennett Landman, Ian H. Gotlib, Kathryn L. Humphreys, Sarah Shultz, Longchuan Li, Sijie Niu, Weili Lin, Valerie Jewells, Gang Li, Dinggang Shen, Li Wang
Deep learning-based methods have achieved state-of-the-art performance; however, one major limitation is that learning-based methods may suffer from the multi-site issue, that is, models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners.
3 code implementations • NeurIPS 2020 • Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing Huang, Junzhou Huang
We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning.
Ranked #4 on Molecular Property Prediction on QM7
no code implementations • 7 May 2020 • Liang Sun, Zhanhao Mo, Fuhua Yan, Liming Xia, Fei Shan, Zhongxiang Ding, Wei Shao, Feng Shi, Huan Yuan, Huiting Jiang, Dijia Wu, Ying Wei, Yaozong Gao, Wanchun Gao, He Sui, Daoqiang Zhang, Dinggang Shen
We evaluated our proposed AFS-DF on COVID-19 dataset with 1495 patients of COVID-19 and 1027 patients of community acquired pneumonia (CAP).
no code implementations • 7 May 2020 • Donglin Di, Feng Shi, Fuhua Yan, Liming Xia, Zhanhao Mo, Zhongxiang Ding, Fei Shan, Shengrui Li, Ying Wei, Ying Shao, Miaofei Han, Yaozong Gao, He Sui, Yue Gao, Dinggang Shen
The main challenge in early screening is how to model the confusing cases in the COVID-19 and CAP groups, with very similar clinical manifestations and imaging features.
no code implementations • 6 May 2020 • Xi Ouyang, Jiayu Huo, Liming Xia, Fei Shan, Jun Liu, Zhanhao Mo, Fuhua Yan, Zhongxiang Ding, Qi Yang, Bin Song, Feng Shi, Huan Yuan, Ying Wei, Xiaohuan Cao, Yaozong Gao, Dijia Wu, Qian Wang, Dinggang Shen
To this end, we develop a dual-sampling attention network to automatically diagnose COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT).
1 code implementation • 30 Apr 2020 • Yifan Zhang, Shuaicheng Niu, Zhen Qiu, Ying Wei, Peilin Zhao, Jianhua Yao, Junzhou Huang, Qingyao Wu, Mingkui Tan
There are two main challenges: 1) the discrepancy of data distributions between domains; 2) the task difference between the diagnosis of typical pneumonia and COVID-19.
no code implementations • 22 Mar 2020 • Feng Shi, Liming Xia, Fei Shan, Dijia Wu, Ying Wei, Huan Yuan, Huiting Jiang, Yaozong Gao, He Sui, Dinggang Shen
The worldwide spread of coronavirus disease (COVID-19) has become a threatening risk for global public health.
1 code implementation • 12 Mar 2020 • Yinghua Zhang, Yu Zhang, Ying Wei, Kun Bai, Yangqiu Song, Qiang Yang
Though the learned representations are separable in the source domain, they usually have a large variance and samples with different class labels tend to overlap in the target domain, which yields suboptimal adaptation performance.
no code implementations • 29 Dec 2019 • Zhihao Lei, Lin Qi, Ying Wei, Yunlong Zhou
In this paper, we propose a dual aggregation network to adaptively aggregate different information in infant brain MRI segmentation.
1 code implementation • 17 Nov 2019 • Yifan Zhang, Ying Wei, Peilin Zhao, Shuaicheng Niu, Qingyao Wu, Mingkui Tan, Junzhou Huang
In this paper, we seek to exploit rich labeled data from relevant domains to help the learning in the target task with unsupervised domain adaptation (UDA).
1 code implementation • IJCNLP 2019 • Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, Qiang Yang
Joint extraction of aspects and sentiments can be effectively formulated as a sequence labeling problem.
Aspect-Based Sentiment Analysis (ABSA)
Unsupervised Domain Adaptation
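When joint extraction is cast as sequence labeling, each token receives a tag that encodes both span boundaries and sentiment, and predictions are decoded back into (aspect, sentiment) pairs. The BIO-style tag inventory below (B-POS, I-POS, B-NEG, O) is an assumed scheme for illustration, not necessarily the paper's exact tag set.

```python
def decode_unified_tags(tokens, tags):
    """Decode a unified aspect-sentiment tag sequence into
    (aspect phrase, sentiment) pairs."""
    spans, current, sentiment = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):              # start of a new aspect span
            if current:
                spans.append((" ".join(current), sentiment))
            current, sentiment = [token], tag[2:]
        elif tag.startswith("I-") and current:  # continue the open span
            current.append(token)
        else:                                  # O tag closes any open span
            if current:
                spans.append((" ".join(current), sentiment))
            current, sentiment = [], None
    if current:
        spans.append((" ".join(current), sentiment))
    return spans

tokens = ["the", "battery", "life", "is", "great", "but", "screen", "dim"]
tags   = ["O", "B-POS", "I-POS", "O", "O", "O", "B-NEG", "O"]
print(decode_unified_tags(tokens, tags))
# → [('battery life', 'POS'), ('screen', 'NEG')]
```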
1 code implementation • 7 Oct 2019 • Huaxiu Yao, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V. Chawla, Zhenhui Li
Towards the challenging problem of semi-supervised node classification, there have been extensive studies.
no code implementations • 7 Sep 2019 • Ying Wei, Peilin Zhao, Huaxiu Yao, Junzhou Huang
Automated machine learning aims to automate the whole process of machine learning, including model configuration.
1 code implementation • 13 May 2019 • Huaxiu Yao, Ying Wei, Junzhou Huang, Zhenhui Li
In order to learn quickly with few samples, meta-learning utilizes prior knowledge learned from previous tasks.
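The core idea of learning an initialization that adapts quickly can be shown on a toy problem: tasks are 1-D regressions y = a·x with slope a drawn per task, and a first-order MAML-style loop learns an initial weight from which one gradient step fits any sampled task. This is a minimal sketch of the meta-learning principle under those assumptions, not the algorithm from the paper.

```python
import numpy as np

def fomaml_init(rng, meta_steps=500, inner_lr=0.1, outer_lr=0.05):
    """First-order MAML on toy tasks y = a * x with a ~ U(1, 3):
    meta-learn an initialization w0 that one inner gradient step
    adapts well to any sampled task."""
    w0 = 0.0
    x = np.linspace(-1.0, 1.0, 20)
    for _ in range(meta_steps):
        a = rng.uniform(1.0, 3.0)                  # sample a task
        y = a * x
        # inner step: adapt w0 on the task (squared-error gradient)
        grad = np.mean(2 * (w0 * x - y) * x)
        w = w0 - inner_lr * grad
        # first-order outer step: gradient of the post-adaptation loss
        outer_grad = np.mean(2 * (w * x - y) * x)
        w0 -= outer_lr * outer_grad
    return w0

rng = np.random.default_rng(0)
w0 = fomaml_init(rng)   # settles near the mean task slope (about 2)
```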
1 code implementation • 24 Jan 2019 • Huaxiu Yao, Yiding Liu, Ying Wei, Xianfeng Tang, Zhenhui Li
Specifically, our proposed model is designed as a spatial-temporal network with a meta-learning paradigm.
1 code implementation • AAAI 2019 • Zheng Li, Ying Wei, Yu Zhang, Xiang Zhang, Xin Li, Qiang Yang
Aspect-level sentiment classification (ASC) aims at identifying sentiment polarities towards aspects in a sentence, where the aspect can behave as a general Aspect Category (AC) or a specific Aspect Term (AT).
no code implementations • ICML 2018 • Ying Wei, Yu Zhang, Junzhou Huang, Qiang Yang
In transfer learning, what and how to transfer are two primary issues to be addressed, as different transfer learning algorithms applied between a source and a target domain result in different transferred knowledge and thereby different performance improvements in the target domain.
no code implementations • NeurIPS 2018 • Yu Zhang, Ying Wei, Qiang Yang
Based on such a training set, L2MT first uses a proposed layerwise graph neural network to learn task embeddings for all the tasks in a multitask problem, and then learns an estimation function to estimate the relative test error from the task embeddings and the representation of the multitask model, within a unified formulation.
1 code implementation • Thirty-Second AAAI Conference on Artificial Intelligence 2018 • Zheng Li, Ying Wei, Yu Zhang, Qiang Yang
Existing cross-domain sentiment classification methods cannot automatically capture non-pivots, i.e., the domain-specific sentiment words, and pivots, i.e., the domain-shared sentiment words, simultaneously.
no code implementations • 18 Aug 2017 • Ying Wei, Yu Zhang, Qiang Yang
We establish the L2T framework in two stages: 1) we first learn a reflection function encrypting transfer learning skills from experiences; and 2) we infer what and how to transfer for a newly arrived pair of domains by optimizing the reflection function.