no code implementations • EMNLP (sdp) 2020 • Meng Ling, Jian Chen
We present DeepPaperComposer, a simple solution for preparing highly accurate (100%) training data, without manual labeling, for extracting content from scholarly articles using convolutional neural networks (CNNs).
no code implementations • 14 Feb 2023 • Shuhao Shi, Kai Qiao, Jie Yang, Baojie Song, Jian Chen, Bin Yan
Specifically, node features are first mapped to a feature space through neighborhood aggregation, and samples for the minority class are then generated in that feature space.
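The aggregate-then-oversample idea described above can be sketched roughly as follows; the mean-aggregation scheme, the toy graph, and the SMOTE-style interpolation are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def aggregate_neighborhood(features, adj):
    """Map node features into a feature space by mean-aggregating
    each node together with its neighbors."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    return (features + adj @ features) / deg

def oversample_minority(agg, minority_idx, n_new, rng):
    """SMOTE-style interpolation between random minority-node pairs
    in the aggregated feature space."""
    samples = []
    for _ in range(n_new):
        i, j = rng.choice(minority_idx, size=2, replace=False)
        lam = rng.random()  # interpolation coefficient in [0, 1]
        samples.append(agg[i] + lam * (agg[j] - agg[i]))
    return np.stack(samples)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))  # 6 nodes, 4 features
A = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 0, 1, 0]], dtype=float)
Z = aggregate_neighborhood(X, A)
synthetic = oversample_minority(Z, np.array([3, 4, 5]), n_new=2, rng=rng)
```

Interpolating in the aggregated space (rather than on raw features) keeps synthetic samples consistent with local graph structure.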
1 code implementation • 3 Jan 2023 • Shuhao Shi, Kai Qiao, Jian Chen, Shuai Yang, Jie Yang, Baojie Song, Linyuan Wang, Bin Yan
However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research.
Ranked #1 on Stance Detection on MGTAB
no code implementations • 13 Dec 2022 • Qinyi Deng, Yong Guo, Zhibang Yang, Haolin Pan, Jian Chen
In this way, these data can also be very informative if we can effectively exploit these complementary labels, i.e., the classes that a sample does not belong to.
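One common way to exploit a complementary label (a class the sample does not belong to) is to penalize the probability mass the model assigns to that class; the loss form below is a generic sketch, not necessarily the paper's objective:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def complementary_loss(logits, comp_labels):
    """comp_labels[i] is a class that sample i does NOT belong to;
    penalize the probability the model places on that class."""
    p = softmax(logits)
    p_comp = p[np.arange(len(comp_labels)), comp_labels]
    return -np.mean(np.log(1.0 - p_comp + 1e-12))

logits = np.array([[5.0, 0.0, 0.0],
                   [0.0, 4.0, 1.0]])
loss_bad = complementary_loss(logits, np.array([0, 1]))   # mass sits on the forbidden classes
loss_good = complementary_loss(logits, np.array([2, 0]))  # little mass on the forbidden classes
```

The loss is small exactly when the model already avoids the forbidden class, so such labels act as a weak but cheap training signal.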
no code implementations • 19 Nov 2022 • Bingna Xu, Yong Guo, Luoqian Jiang, Mianjie Yu, Jian Chen
Inspired by this, we propose a Hierarchical Collaborative Downscaling (HCD) method that performs gradient descent in both HR and LR domains to improve the downscaled representations.
no code implementations • 17 Nov 2022 • Luoqian Jiang, Yifan He, Jian Chen
To address the above issues, we propose a Text-Aware Dual Routing Network (TDR) which simultaneously handles the VQA cases with and without understanding text information in the input images.
1 code implementation • 14 Oct 2022 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other and thus often yield limited search results.
1 code implementation • 30 Jul 2022 • Haolin Pan, Yong Guo, Qinyi Deng, Haomin Yang, Yiqun Chen, Jian Chen
Self-supervised learning (SSL) has achieved remarkable performance in pretraining the models that can be further used in downstream tasks via fine-tuning.
1 code implementation • 16 Jul 2022 • Yong Guo, Jingdong Wang, Qi Chen, JieZhang Cao, Zeshuai Deng, Yanwu Xu, Jian Chen, Mingkui Tan
Nevertheless, it is hard for existing model compression methods to accurately identify the redundant components due to the extremely large SR mapping space.
no code implementations • 8 May 2022 • Shuhao Shi, Jian Chen, Kai Qiao, Shuai Yang, Linyuan Wang, Bin Yan
Graph Convolutional Networks (GCNs) have achieved excellent results in node classification tasks, but their performance at low label rates is still unsatisfactory.
no code implementations • 2 Dec 2021 • Shujian Liao, Jian Chen, Hao Ni
In this paper, we investigate the problem of predicting the future volatility of Forex currency pairs using deep learning techniques.
1 code implementation • 25 Nov 2021 • Rui Wang, Jian Chen, Gang Yu, Li Sun, Changqian Yu, Changxin Gao, Nong Sang
Image manipulation with StyleGAN has been an increasing concern in recent years. Recent works have achieved tremendous success in analyzing several semantic latent spaces to edit the attributes of the generated images. However, due to the limited semantic and spatial manipulation precision in these latent spaces, existing endeavors fall short in fine-grained StyleGAN image manipulation, i.e., local attribute translation. To address this issue, we discover attribute-specific control units, which consist of multiple channels of feature maps and modulation styles.
no code implementations • 27 Oct 2021 • Wang Chen, Jian Chen, Weitian Wu, Xinmin Yang, Hui Li
For performance assessment, the proposed algorithm is compared with four existing state-of-the-art multiobjective evolutionary algorithms on benchmark test problems with various types of Pareto optimal fronts.
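The Pareto-dominance relation underlying such comparisons can be illustrated with a small sketch (minimization is assumed on all objectives; the example points are made up):

```python
import numpy as np

def pareto_front(points):
    """Indices of nondominated points when minimizing every objective:
    p is dominated if some q is <= p everywhere and < p somewhere."""
    points = np.asarray(points, dtype=float)
    front = []
    for i in range(len(points)):
        dominated = any(
            j != i
            and np.all(points[j] <= points[i])
            and np.any(points[j] < points[i])
            for j in range(len(points))
        )
        if not dominated:
            front.append(i)
    return front

front = pareto_front([[1, 4], [2, 2], [4, 1], [3, 3]])  # -> [0, 1, 2]
```

Here [3, 3] is dominated by [2, 2], so only the first three points form the nondominated front.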
no code implementations • 29 Sep 2021 • Yawen Chen, Zeyi Wen, Yile Chen, Jian Chen, Jin Huang
However, recomputing the Hessian matrix in second-order optimization imposes considerable extra computation and memory burden during training.
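The burden comes from the Newton-style update x <- x - H^{-1} g, which rebuilds and solves against the Hessian H at every iteration; the toy separable objective below is purely illustrative:

```python
import numpy as np

C = np.array([1.0, -2.0])  # target offsets in the toy objective

def grad(x):
    # gradient of f(x) = sum(x**4) + sum((x - C)**2)
    return 4 * x**3 + 2 * (x - C)

def hessian(x):
    # the objective is separable, so its Hessian is diagonal
    return np.diag(12 * x**2 + 2)

def newton_minimize(x0, steps=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        H = hessian(x)  # recomputed (and solved) every step: the costly part
        x = x - np.linalg.solve(H, grad(x))
    return x

x_star = newton_minimize([3.0, -3.0])
```

For a d-dimensional model the general case costs O(d^2) memory and O(d^3) per solve, which is why avoiding the recomputation matters.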
no code implementations • 29 Sep 2021 • Shuhao Shi, Pengfei Xie, Xu Luo, Kai Qiao, Linyuan Wang, Jian Chen, Bin Yan
AMC-GNN generates two graph views by data augmentation and compares different layers' output embeddings of Graph Neural Network encoders to obtain feature representations, which could be used for downstream tasks.
no code implementations • 24 Sep 2021 • Mingyang Zhang, Jie Jia, Jian Chen
A novel multi-scale temporal convolutional network (TCN) and long short-term memory network (LSTM) based magnetic localization approach is proposed.
1 code implementation • 30 Jun 2021 • Yong Guo, Yaofo Chen, Mingkui Tan, Kui Jia, Jian Chen, Jingdong Wang
In practice, the convolutional operation on some of the windows (e.g., smooth windows that contain very similar pixels) can be very redundant and may introduce noise into the computation.
no code implementations • 28 Jun 2021 • Qiqi Ren, Omid Abbasi, Gunes Karabulut Kurt, Halim Yanikomeroglu, Jian Chen
In addition, the caching technique is introduced for network edges to store some of the fundamental data from the HAPS so that large propagation delays can be reduced.
no code implementations • 20 May 2021 • Meng Ling, Jian Chen, Torsten Möller, Petra Isenberg, Tobias Isenberg, Michael Sedlmair, Robert S. Laramee, Han-Wei Shen, Jian Wu, C. Lee Giles
We present document domain randomization (DDR), the first successful transfer of convolutional neural networks (CNNs) trained only on graphically rendered pseudo-paper pages to real-world document segmentation.
no code implementations • 8 May 2021 • Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
The results demonstrate that De-Pois is effective and efficient at detecting poisoned data against all four types of poisoning attacks, with both accuracy and F1-score over 0.9 on average.
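Accuracy and F1-score for such a binary poisoned-vs-clean detector can be computed as in this generic sketch (the example labels are illustrative):

```python
def accuracy_and_f1(y_true, y_pred):
    """Binary detection metrics; label 1 = poisoned, 0 = clean."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1

acc, f1 = accuracy_and_f1([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])  # -> (0.8, 0.8)
```

F1 is the harmonic mean of precision and recall, which is why it is reported alongside accuracy when poisoned samples are a minority.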
no code implementations • 13 Mar 2021 • Jincheng Li, JieZhang Cao, Yifan Zhang, Jian Chen, Mingkui Tan
Relying on this, we learn a defense transformer to counterattack the adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs.
no code implementations • 28 Feb 2021 • Hanzi Huang, Yetian Huang, Haoshuo Chen, Qianwu Zhang, Jian Chen, Nicolas K. Fontaine, Mikael Mazur, Roland Ryf, Junho Cho, Yingxiong Song
We propose a digital interference mitigation scheme to reduce the impact of mode coupling in space division multiplexing self-homodyne coherent detection and experimentally verify its effectiveness in 240-Gbps mode-multiplexed transmission over 3-mode multimode fiber.
no code implementations • 27 Feb 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To this end, we propose a Pareto-Frontier-aware Neural Architecture Generator (NAG) which takes an arbitrary budget as input and produces the Pareto optimal architecture for the target budget.
no code implementations • 24 Feb 2021 • Jian Chen, Hu Cheng, Xuefeng Zhou, Xiaozhi Yan, Lingfei Wang, Yusheng Zhao, Shanmin Wang
However, the ruby scale can hardly be used for programmably controlled DAC devices, especially piezoelectric-driven cells, where continuous pressure calibration is required.
Applied Physics Materials Science
2 code implementations • 20 Feb 2021 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang
To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
no code implementations • 1 Jan 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary.
no code implementations • 22 Dec 2020 • Jian Chen, Meng Ling, Rui Li, Petra Isenberg, Tobias Isenberg, Michael Sedlmair, Torsten Möller, Robert S. Laramee, Han-Wei Shen, Katharina Wünsche, Qiru Wang
We present the VIS30K dataset, a collection of 29,689 images that represents 30 years of figures and tables from each track of the IEEE Visualization conference series (Vis, SciVis, InfoVis, VAST).
1 code implementation • 14 Dec 2020 • Jiachun Wang, Fajie Yuan, Jian Chen, Qingyao Wu, Min Yang, Yang Sun, Guoxiao Zhang
We validate the performance of StackRec by instantiating it with four state-of-the-art SR models in three practical scenarios with real-world datasets.
no code implementations • 22 Oct 2020 • Zifei Zhang, Kai Qiao, Jian Chen, Ningning Liang
Experimentally, we show that the average attack success rate (ASR) of our adversarial attack reaches 58.38%, outperforming the state-of-the-art method by 12.1% on normally trained models and by 11.13% on adversarially trained models.
no code implementations • 10 Oct 2020 • Yong Guo, Qingyao Wu, Chaorui Deng, Jian Chen, Mingkui Tan
Although the standard BN can significantly accelerate the training of DNNs and improve the generalization performance, it has several underlying limitations which may hamper the performance in both training and inference.
1 code implementation • 30 Sep 2020 • Han Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Bei Liu, Yayuan Gen, Xiangfei Chai, Jian Chen, Kunwei Li, Shaolin Li, Sumi Helal
The black-box nature of machine learning models hinders the deployment of some high-accuracy models in medical diagnosis.
2 code implementations • NeurIPS 2020 • Chaozheng Wu, Jian Chen, Qiaoyu Cao, Jianchi Zhang, Yunxin Tai, Lin Sun, Kui Jia
To test GPNet, we contribute a synthetic dataset of 6-DOF object grasps; evaluation is conducted using rule-based criteria, simulation test, and real test.
no code implementations • 21 Sep 2020 • Yixin Liu, Yong Guo, Zichang Liu, Haohua Liu, Jingjie Zhang, Zejun Chen, Jing Liu, Jian Chen
To address this issue, given a target compression rate for the whole model, one can search for the optimal compression rate for each layer.
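A minimal sketch of such a per-layer search: enumerate candidate per-layer rates, keep the assignments that meet the target overall rate, and pick the one minimizing a proxy for accuracy loss. The layer sizes, candidate rates, and sensitivity proxy below are all made-up assumptions, not the paper's method:

```python
import itertools

layer_params = [1000, 4000, 2000]     # hypothetical parameter counts per layer
candidate_rates = [0.25, 0.5, 0.75]   # fraction of parameters removed per layer
target_rate = 0.5                     # required compression for the whole model

def overall_rate(rates):
    removed = sum(r * p for r, p in zip(rates, layer_params))
    return removed / sum(layer_params)

def proxy_sensitivity(rates):
    # stand-in for a measured accuracy drop; assumes earlier layers are more sensitive
    weights = [3.0, 2.0, 1.0]
    return sum(w * r**2 for w, r in zip(weights, rates))

best = min(
    (r for r in itertools.product(candidate_rates, repeat=len(layer_params))
     if overall_rate(r) >= target_rate),
    key=proxy_sensitivity,
)  # -> (0.25, 0.5, 0.75): prune the sensitive first layer least
```

Exhaustive enumeration only works for toy settings; real methods replace it with a learned or evolutionary search over the same constraint.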
1 code implementation • ICML 2020 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
no code implementations • 26 Mar 2020 • Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan
Besides the deep network structure, the task and its corresponding large dataset are also important for deep network models, but this has been neglected by previous studies.
3 code implementations • CVPR 2020 • Yong Guo, Jian Chen, Jingdong Wang, Qi Chen, JieZhang Cao, Zeshuai Deng, Yanwu Xu, Mingkui Tan
Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.
no code implementations • 13 Mar 2020 • Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Li Tong, Bin Yan
In this study, we proposed a new GAN-based Bayesian visual reconstruction method (GAN-BVRM) that includes a classifier to decode categories from fMRI data, a pre-trained conditional generator to generate natural images of specified categories, and a set of encoding models and evaluator to evaluate generated images.
1 code implementation • 10 Mar 2020 • Yong Guo, Yongsheng Luo, Zhenhao He, Jin Huang, Jian Chen
To this end, we design a hierarchical SR search space and propose a hierarchical controller for architecture search.
no code implementations • 28 Nov 2019 • Skylar W. Wurster, Arkadiusz Sitek, Jian Chen, Karla Evans, Gaeun Kim, Jeremy M. Wolfe
Radiologists can classify a mammogram as normal or abnormal at better than chance levels after less than a second's exposure to the images.
1 code implementation • NeurIPS 2019 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, Peilin Zhao, Junzhou Huang
To verify the effectiveness of the proposed strategies, we apply NAT on both hand-crafted architectures and NAS based architectures.
no code implementations • 18 Oct 2019 • Risheng Liu, Pan Mu, Jian Chen, Xin Fan, Zhongxuan Luo
Properly modeling latent image distributions plays an important role in a variety of image-related vision problems.
1 code implementation • 27 Jul 2019 • Kai Qiao, Chi Zhang, Jian Chen, Linyuan Wang, Li Tong, Bin Yan
Recently, visual encoding based on functional magnetic resonance imaging (fMRI) has achieved many successes with the rapid development of deep network computation.
no code implementations • 12 Apr 2019 • Lingyun Jiang, Kai Qiao, Ruoxi Qin, Linyuan Wang, Jian Chen, Haibing Bu, Bin Yan
In deep-learning image classification, adversarial examples are inputs with small-magnitude perturbations that can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them.
1 code implementation • 27 Mar 2019 • Yong Guo, Qi Chen, Jian Chen, Qingyao Wu, Qinfeng Shi, Mingkui Tan
To address this issue, we develop a novel GAN called Auto-Embedding Generative Adversarial Network (AEGAN), which simultaneously encodes the global structure features and captures the fine-grained details.
no code implementations • 19 Mar 2019 • Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Lei Zeng, Li Tong, Bin Yan
Despite the hierarchically similar representations of deep networks and human vision, visual information flows from primary visual cortices to higher visual cortices and vice versa, in bottom-up and top-down manners, respectively.
Neurons and Cognition
5 code implementations • 16 Feb 2019 • Zhihao Wang, Jian Chen, Steven C. H. Hoi
Image Super-Resolution (SR) is an important class of image processing techniques to enhance the resolution of images and videos in computer vision.
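Reconstruction fidelity in SR is conventionally measured with PSNR; a minimal implementation of the standard formula:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)

ref = np.zeros((4, 4))
rec = np.ones((4, 4))   # every pixel off by one gray level, so MSE = 1
score = psnr(ref, rec)  # ~48.13 dB for 8-bit images
```

Higher is better; identical images give infinite PSNR, and typical SR results on benchmarks land in the 25–40 dB range.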
no code implementations • 19 Sep 2018 • Yong Guo, Qi Chen, Jian Chen, Junzhou Huang, Yanwu Xu, JieZhang Cao, Peilin Zhao, Mingkui Tan
However, most deep learning methods employ feed-forward architectures, and thus the dependencies between LR and HR images are not fully exploited, leading to limited learning performance.
no code implementations • 2 Jan 2018 • Kai Qiao, Chi Zhang, Linyuan Wang, Bin Yan, Jian Chen, Lei Zeng, Li Tong
We first employed the CapsNet to train the nonlinear mapping from image stimuli to high-level capsule features, and from high-level capsule features back to image stimuli, in an end-to-end manner.
no code implementations • 23 Nov 2016 • Zeyi Wen, Bin Li, Rao Kotagiri, Jian Chen, Yawen Chen, Rui Zhang
K-fold cross-validation is commonly used to evaluate the effectiveness of SVMs with the selected hyper-parameters.
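A minimal sketch of k-fold cross-validation for scoring one hyper-parameter setting; `train_and_score` is a hypothetical callback standing in for training an SVM on the training folds and scoring it on the held-out fold:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return [idx[i::k] for i in range(k)]

def cross_validate(n_samples, k, train_and_score):
    """Average validation score over k folds; train_and_score(train_idx, val_idx)
    trains a model (e.g. an SVM with fixed hyper-parameters) and returns a score."""
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(train_idx, val_idx))
    return float(np.mean(scores))

folds = k_fold_indices(10, 5)
score = cross_validate(10, 5, lambda tr, va: 1.0)  # dummy scorer -> 1.0
```

Each sample is used for validation exactly once, which is what makes repeating this for every hyper-parameter candidate expensive.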
1 code implementation • 6 Nov 2016 • Yong Guo, Jian Chen, Qing Du, Anton Van Den Hengel, Qinfeng Shi, Mingkui Tan
As a result, the representation power of intermediate layers can be very weak and the model becomes very redundant with limited performance.