no code implementations • ICML 2020 • Fangcheng Fu, Yuzheng Hu, Yihan He, Jiawei Jiang, Yingxia Shao, Ce Zhang, Bin Cui
Recent years have witnessed intensive research interest in training deep neural networks (DNNs) more efficiently via quantization-based compression methods, which facilitate DNN training in two ways: (1) activations are quantized to shrink memory consumption, and (2) gradients are quantized to decrease communication cost.
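To make the second mechanism concrete, here is a minimal, generic sketch of uniform gradient quantization: each value is mapped to a small integer code plus shared (scale, offset) metadata, so far fewer bits cross the network. This illustrates the general idea only, not the specific scheme studied in the paper, and all values are made up:

```python
def quantize(grad, num_bits=8):
    """Uniformly quantize a list of gradient values to integer codes in
    [0, 2**num_bits - 1]; only the codes plus (scale, lo) need be sent."""
    lo, hi = min(grad), max(grad)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((g - lo) / scale) for g in grad]
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return [c * scale + lo for c in codes]

grad = [0.31, -0.12, 0.07, -0.54, 0.9]
codes, scale, lo = quantize(grad, num_bits=4)  # 4 bits per value instead of 32
recon = dequantize(codes, scale, lo)
max_err = max(abs(g - r) for g, r in zip(grad, recon))
print(max_err <= scale / 2 + 1e-12)  # True: error bounded by half a quantization step
```

The same mechanism applies to activations for memory savings: store the codes, dequantize on the backward pass.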
no code implementations • 14 Dec 2024 • Anton Alexandrov, Veselin Raychev, Dimitar I. Dimitrov, Ce Zhang, Martin Vechev, Kristina Toutanova
We present BgGPT-Gemma-2-27B-Instruct and BgGPT-Gemma-2-9B-Instruct: continually pretrained and fine-tuned versions of Google's Gemma-2 models, specifically optimized for Bulgarian language understanding and generation.
no code implementations • 5 Dec 2024 • Zhu Han, Ce Zhang, Lianru Gao, Zhiqiang Zeng, Michael K. Ng, Bing Zhang, Jocelyn Chanussot
Cross-scene image classification aims to transfer prior knowledge of ground materials to annotate regions with different distributions and reduce hand-crafted annotation costs in the field of remote sensing.
no code implementations • 26 Nov 2024 • Chang Li, Yu Wang, Ce Zhang, Yongjun Zhang
Terraced fields are a significant engineering practice for soil and water conservation (SWC).
no code implementations • 19 Nov 2024 • Zhehan Kan, Ce Zhang, Zihan Liao, Yapeng Tian, Wenming Yang, Junyuan Xiao, Xu Li, Dongmei Jiang, YaoWei Wang, Qingmin Liao
Large Vision-Language Model (LVLM) systems have demonstrated impressive vision-language reasoning capabilities but suffer from pervasive and severe hallucination issues, posing significant risks in critical domains such as healthcare and autonomous systems.
1 code implementation • 19 Nov 2024 • Maurice Weber, Daniel Fu, Quentin Anthony, Yonatan Oren, Shane Adams, Anton Alexandrov, Xiaozhong Lyu, Huu Nguyen, Xiaozhe Yao, Virginia Adams, Ben Athiwaratkun, Rahul Chalamala, Kezhen Chen, Max Ryabinin, Tri Dao, Percy Liang, Christopher Ré, Irina Rish, Ce Zhang
In addition, we release RedPajama-V2, a massive web-only dataset consisting of raw, unfiltered text data together with quality signals and metadata.
1 code implementation • 16 Oct 2024 • Ce Zhang, Simon Stepputtis, Katia Sycara, Yaqi Xie
Test-time adaptation, which enables models to generalize to diverse data with unlabeled test samples, holds significant value in real-world scenarios.
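As a toy illustration of test-time adaptation in general (not the method proposed in this paper), the sketch below updates per-class prototypes online from pseudo-labels on unlabeled test samples, letting the classifier drift toward a shifted test distribution; all names and numbers are illustrative:

```python
def nearest(protos, x):
    # return the class whose prototype is closest to sample x
    dists = [(sum((xi - pi) ** 2 for xi, pi in zip(x, p)), c)
             for c, p in protos.items()]
    return min(dists)[1]

def adapt(protos, stream, momentum=0.9):
    # update the pseudo-labeled class prototype with an exponential moving average
    for x in stream:
        c = nearest(protos, x)
        protos[c] = tuple(momentum * p + (1 - momentum) * xi
                          for p, xi in zip(protos[c], x))
    return protos

# source-trained prototypes; the unlabeled test samples are the same two
# classes shifted by roughly (+1, +1)
protos = {"a": (0.0, 0.0), "b": (4.0, 4.0)}
stream = [(1.1, 0.9), (0.8, 1.2), (5.2, 4.9), (4.9, 5.1)]
adapt(protos, stream)
print(protos["a"][0] > 0.0 and protos["b"][0] > 4.0)  # True: prototypes track the shift
```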
1 code implementation • 16 Jul 2024 • Josh Veitch-Michaelis, Andrew Cottam, Daniella Schweizer, Eben N. Broadbent, David Dao, Ce Zhang, Angelica Almeyda Zambrano, Simeon Max
Using our dataset, we train reference instance and semantic segmentation models that compare favorably to existing state-of-the-art models.
no code implementations • 11 Jul 2024 • Anton Alexandrov, Veselin Raychev, Mark Niklas Müller, Ce Zhang, Martin Vechev, Kristina Toutanova
As open-weight large language models (LLMs) achieve ever more impressive performances across a wide range of tasks in English, practitioners aim to adapt these models to different languages.
no code implementations • 25 Jun 2024 • Ce Zhang, Azim Eskandarian
In this paper, we introduce a novel image-guided point cloud quality assessment algorithm for outdoor autonomous driving environments, named the Image-Guided Outdoor Point Cloud Quality Assessment (IGO-PQA) algorithm.
2 code implementations • 7 Jun 2024 • Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou
With the growing number of LLMs, how to harness the collective expertise of multiple LLMs is an exciting open direction.
1 code implementation • 13 May 2024 • Qi Chen, Xiubo Geng, Corby Rosset, Carolyn Buractaon, Jingwen Lu, Tao Shen, Kun Zhou, Chenyan Xiong, Yeyun Gong, Paul Bennett, Nick Craswell, Xing Xie, Fan Yang, Bryan Tower, Nikhil Rao, Anlei Dong, Wenqi Jiang, Zheng Liu, Mingqin Li, Chuanjie Liu, Zengzhong Li, Rangan Majumder, Jennifer Neville, Andy Oakley, Knut Magne Risvik, Harsha Vardhan Simhadri, Manik Varma, Yujing Wang, Linjun Yang, Mao Yang, Ce Zhang
Recent breakthroughs in large models have highlighted the critical significance of data scale, labels, and modalities.
1 code implementation • 10 Apr 2024 • Steve Rhyner, Haocong Luo, Juan Gómez-Luna, Mohammad Sadrosadati, Jiawei Jiang, Ataberk Olgun, Harshita Gupta, Ce Zhang, Onur Mutlu
Our results demonstrate three major findings: 1) the UPMEM PIM system can be a viable alternative to state-of-the-art CPUs and GPUs for many memory-bound ML training workloads, especially when operations and datatypes are natively supported by PIM hardware; 2) it is important to carefully choose the optimization algorithms that best fit PIM; and 3) the UPMEM PIM system does not scale approximately linearly with the number of nodes for many data-intensive ML training workloads.
1 code implementation • 26 Mar 2024 • Michael Poli, Armin W Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli
The development of deep learning architectures is a resource-demanding process, due to a vast design space, long prototyping times, and high compute costs associated with at-scale model training and evaluation.
1 code implementation • 23 Mar 2024 • Lijie Xu, Chulin Xie, Yiran Guo, Gustavo Alonso, Bo Li, Guoliang Li, Wei Wang, Wentao Wu, Ce Zhang
In this paper, we formalize this problem as relational federated learning (RFL).
2 code implementations • 19 Mar 2024 • Ce Zhang, Simon Stepputtis, Katia Sycara, Yaqi Xie
Large-scale pre-trained Vision-Language Models (VLMs) have exhibited impressive zero-shot performance and transferability, allowing them to adapt to downstream tasks in a data-efficient manner.
1 code implementation • CVPR 2024 • Ce Zhang, Simon Stepputtis, Joseph Campbell, Katia Sycara, Yaqi Xie
Being able to understand visual scenes is a precursor for many downstream tasks, including autonomous driving, robotics, and other vision-based approaches.
no code implementations • 10 Mar 2024 • Yi Zhang, Ce Zhang
Several approaches aim to adapt VLP models to downstream tasks efficiently with limited supervision, leveraging the knowledge acquired during pre-training.
no code implementations • 15 Jan 2024 • Yi Zhang, Ce Zhang, Ke Yu, Yushun Tang, Zhihai He
However, for generalization tasks, the current fine-tuning methods for CLIP, such as CoOp and CoCoOp, demonstrate relatively low performance on some fine-grained datasets.
1 code implementation • 28 Dec 2023 • Ce Zhang, Taixi Lu, Md Mohaiminul Islam, Ziyang Wang, Shoubin Yu, Mohit Bansal, Gedas Bertasius
Furthermore, we show that a specialized prompt that asks the LLM first to summarize the noisy short-term visual captions and then answer a given input question leads to a significant LVQA performance boost.
Ranked #2 on Zero-Shot Video Question Answer on NExT-GQA
1 code implementation • NeurIPS 2023 • Maurice Weber, Carlo Siebenschuh, Rory Butler, Anton Alexandrov, Valdemar Thanner, Georgios Tsolakis, Haris Jabbar, Ian Foster, Bo Li, Rick Stevens, Ce Zhang
Together with the pipeline, we will additionally release 9.5M URLs to Word documents, which can be processed using WordScape to create a dataset of over 40M pages.
no code implementations • 21 Nov 2023 • Luis Oala, Manil Maskey, Lilith Bat-Leah, Alicia Parrish, Nezihe Merve Gürel, Tzu-Sheng Kuo, Yang Liu, Rotem Dror, Danilo Brajovic, Xiaozhe Yao, Max Bartolo, William A Gaviria Rojas, Ryan Hileman, Rainier Aliment, Michael W. Mahoney, Meg Risdal, Matthew Lease, Wojciech Samek, Debojyoti Dutta, Curtis G Northcutt, Cody Coleman, Braden Hancock, Bernard Koch, Girmaw Abebe Tadesse, Bojan Karlaš, Ahmed Alaa, Adji Bousso Dieng, Natasha Noy, Vijay Janapa Reddi, James Zou, Praveen Paritosh, Mihaela van der Schaar, Kurt Bollacker, Lora Aroyo, Ce Zhang, Joaquin Vanschoren, Isabelle Guyon, Peter Mattson
Drawing from discussions at the inaugural DMLR workshop at ICML 2023 and meetings prior, in this report we outline the relevance of community engagement and infrastructure development for the creation of next-generation public datasets that will advance machine learning science.
no code implementations • 2 Nov 2023 • Xueting Hu, Ce Zhang, Yi Zhang, Bowen Hai, Ke Yu, Zhihai He
When CLIP is used for depth estimation tasks, the patches, divided from the input images, can be combined with a series of semantic descriptions of the depth information to obtain similarity results.
1 code implementation • 31 Oct 2023 • Ce Zhang, Changcheng Fu, Shijie Wang, Nakul Agarwal, Kwonjoon Lee, Chiho Choi, Chen Sun
To recognize and predict human-object interactions, we use a Transformer-based neural architecture which allows the "retrieval" of relevant objects for action anticipation at various time scales.
Ranked #4 on Long Term Action Anticipation on Ego4D
1 code implementation • 26 Oct 2023 • Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, Beidi Chen
We show that contextual sparsity exists, that it can be accurately predicted, and that we can exploit it to speed up LLM inference in wall-clock time without compromising LLM's quality or in-context learning ability.
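The payoff of exploiting sparsity can be illustrated with a toy ReLU MLP: neurons whose pre-activation is non-positive contribute nothing to the output, so if a predictor supplies the set of active neurons, only those need to be computed. This is a hand-rolled sketch of the general idea (the active set here comes from an oracle, not the paper's learned predictor):

```python
def relu(v):
    return v if v > 0 else 0.0

def dense_mlp(x, W1, W2):
    # full feed-forward pass: every hidden neuron is computed
    h = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(h[j] * W2[j][o] for j in range(len(h)))
            for o in range(len(W2[0]))]

def sparse_mlp(x, W1, W2, active):
    # compute only the neurons predicted to fire; skipped neurons act as 0
    out = [0.0] * len(W2[0])
    for j in active:
        hj = relu(sum(w * xi for w, xi in zip(W1[j], x)))
        for o in range(len(out)):
            out[o] += hj * W2[j][o]
    return out

W1 = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]  # 3 hidden neurons, input dim 2
W2 = [[1.0], [1.0], [1.0]]                   # hidden -> 1 output
x = [2.0, -1.0]
active = [j for j, row in enumerate(W1)
          if sum(w * xi for w, xi in zip(row, x)) > 0]  # oracle "predictor"
print(sparse_mlp(x, W1, W2, active) == dense_mlp(x, W1, W2))  # True: ReLU zeros the rest
```

With an exact active set the sparse pass is lossless; the interesting engineering question, which the paper addresses, is predicting that set cheaply enough to win wall-clock time.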
no code implementations • 18 Oct 2023 • Qinbin Li, Chulin Xie, Xiaojun Xu, Xiaoyuan Liu, Ce Zhang, Bo Li, Bingsheng He, Dawn Song
To address this, we propose HybridTree, a novel federated learning approach that enables federated tree learning on hybrid data.
1 code implementation • 17 Oct 2023 • Yilmazcan Ozyurt, Stefan Feuerriegel, Ce Zhang
Document-level relation extraction aims at inferring structured human knowledge from textual documents.
1 code implementation • 13 Oct 2023 • Johannes Rausch, Gentiana Rashiti, Maxim Gusev, Ce Zhang, Stefan Feuerriegel
To the best of our knowledge, our DSG system is the first end-to-end trainable system for hierarchical document parsing.
no code implementations • 5 Sep 2023 • Yang Li, Huaijun Jiang, Yu Shen, Yide Fang, Xiaofeng Yang, Danqing Huang, Xinyi Zhang, Wentao Zhang, Ce Zhang, Peng Chen, Bin Cui
The distributed data analytics system Spark is a common choice for processing massive volumes of heterogeneous data, yet tuning its parameters to achieve high performance remains challenging.
no code implementations • 3 Sep 2023 • Yi Zhang, Ce Zhang, Zihan Liao, Yushun Tang, Zhihai He
Large-scale pre-trained Vision-Language Models (VLMs), such as CLIP and ALIGN, have introduced a new paradigm for learning transferable visual representations.
1 code implementation • 31 Aug 2023 • Qiang Huang, Jiawei Jiang, Xi Susie Rao, Ce Zhang, Zhichao Han, Zitao Zhang, Xin Wang, Yongjun He, Quanqing Xu, Yang Zhao, Chuang Hu, Shuo Shang, Bo Du
To handle graphs in which features or connectivities are evolving over time, a series of temporal graph neural networks (TGNNs) have been proposed.
no code implementations • 22 Aug 2023 • Yi Zhang, Ce Zhang, Xueting Hu, Zhihai He
To leverage the valuable knowledge encoded within these models for downstream tasks, several fine-tuning approaches, including prompt tuning methods and adapter-based methods, have been developed to adapt vision-language models effectively with supervision.
1 code implementation • 31 Jul 2023 • Qi Zhao, Shijie Wang, Ce Zhang, Changcheng Fu, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, Chen Sun
We propose to formulate the LTA task from two perspectives: a bottom-up approach that predicts the next actions autoregressively by modeling temporal dynamics; and a top-down approach that infers the goal of the actor and plans the needed procedure to accomplish the goal.
Ranked #2 on Long Term Action Anticipation on Ego4D
no code implementations • 28 Jul 2023 • Yi Zhang, Ce Zhang, Yushun Tang, Zhihai He
Based on these visual concepts, we construct a discriminative representation of images and learn a concept inference network to perform downstream image classification tasks, such as few-shot learning and domain generalization.
1 code implementation • 6 Jul 2023 • Xiaozhong Lyu, Stefan Grafberger, Samantha Biegel, Shaopeng Wei, Meng Cao, Sebastian Schelter, Ce Zhang
There are exponentially many terms in the multilinear extension. One key contribution of this paper is a polynomial-time algorithm that, given a retrieval-augmented model with an additive utility function and a validation set, computes exactly the data importance of data points in the retrieval corpus using the multilinear extension of the model's utility function.
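For intuition about why additive utility functions are tractable, the snippet below computes exact Shapley values by brute-force subset enumeration on a toy three-point "corpus"; for an additive utility, each point's value collapses to its own contribution. This is an illustrative sketch with made-up numbers, not the paper's polynomial-time algorithm:

```python
from itertools import combinations
from math import factorial

def shapley(players, utility):
    """Exact Shapley values via subset enumeration (feasible only for tiny sets)."""
    n = len(players)
    vals = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (utility(set(S) | {p}) - utility(set(S)))
        vals[p] = total
    return vals

# additive utility: each corpus point contributes a fixed validation gain
gain = {"d1": 0.2, "d2": 0.5, "d3": -0.1}
utility = lambda S: sum(gain[p] for p in S)
phi = shapley(list(gain), utility)
print(all(abs(phi[p] - gain[p]) < 1e-9 for p in gain))  # True: Shapley(i) == gain[i]
```

The brute force here costs O(2^n) utility evaluations; the structure that additivity imposes is what makes an exact polynomial-time computation possible at corpus scale.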
no code implementations • 29 Jun 2023 • Ce Zhang, Chengjie Zhang, Yiluan Guo, Lingji Chen, Michael Happold
Multiple Object Tracking (MOT) is crucial to autonomous vehicle perception.
1 code implementation • 11 May 2023 • Ning Ding, Ce Zhang, Azim Eskandarian
On the other hand, unknown objects, which have not been seen in the training sample set, are one of the factors that hinder autonomous vehicles from driving beyond the operational domain.
1 code implementation • 26 Apr 2023 • Huaijun Jiang, Yu Shen, Yang Li, Beicheng Xu, Sixian Du, Wentao Zhang, Ce Zhang, Bin Cui
Black-box optimization (BBO) has a broad range of applications, including automatic machine learning, experimental design, and database knob tuning.
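As a baseline for what "black-box" means here, the simplest BBO method evaluates the objective at random points and keeps the best; practical systems replace the sampler with model-based strategies such as Bayesian optimization. A minimal sketch with an illustrative toy objective standing in for, say, a database knob-tuning cost:

```python
import random

def random_search(objective, bounds, budget=200, seed=0):
    """Minimal black-box optimizer: sample uniformly at random, keep the best."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y = objective(x)  # the only access we have: evaluate, no gradients
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# toy stand-in for an expensive tuning objective (optimum at (0.3, -0.7))
objective = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2
best_x, best_y = random_search(objective, bounds=[(-1, 1), (-1, 1)])
print(round(best_y, 4))
```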
no code implementations • 15 Apr 2023 • Ce Zhang, Kailiang Wu, Zhihai He
Given an unknown dynamical system, what is the minimum number of samples needed for effective learning of its governing laws and accurate prediction of its future evolution behavior, and how to select these critical samples?
no code implementations • CVPR 2023 • Zhehan Kan, Shuoshuo Chen, Ce Zhang, Yushun Tang, Zhihai He
This strong correlation suggests that we can use this error as feedback to guide the correction process.
1 code implementation • 13 Mar 2023 • Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Daniel Y. Fu, Zhiqiang Xie, Beidi Chen, Clark Barrett, Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang
As a result, when running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput compared to state-of-the-art offloading systems, reaching a generation throughput of 1 token/s for the first time with an effective batch size of 144.
no code implementations • CVPR 2023 • Yushun Tang, Ce Zhang, Heng Xu, Shuoshuo Chen, Jie Cheng, Luziwei Leng, Qinghai Guo, Zhihai He
We observe that the performance of this feed-forward Hebbian learning for fully test-time adaptation can be significantly improved by incorporating a feedback neuro-modulation layer.
no code implementations • 1 Feb 2023 • Susie Xi Rao, Peter H. Egger, Ce Zhang
This paper presents a hierarchical classification system that automatically categorizes a scholarly publication using its abstract into a three-tier hierarchical label set (discipline, field, subfield) in a multi-class setting.
1 code implementation • 16 Dec 2022 • Yujing Wang, Yaming Yang, Zhuo Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong
To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps.
3 code implementations • 16 Nov 2022 • Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda
We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models.
no code implementations • 7 Nov 2022 • Ning Ding, Ce Zhang, Azim Eskandarian
A lack of driver vigilance is the main cause of most vehicle crashes.
no code implementations • 18 Oct 2022 • Yangheng Zhao, Jun Wang, Xiaolong Li, Yue Hu, Ce Zhang, Yanfeng Wang, Siheng Chen
Instead of learning a single prototype for each class, in this paper, we propose to use an adaptive number of prototypes to dynamically describe the different point patterns within a semantic class.
Ranked #17 on 3D Semantic Segmentation on SemanticKITTI
1 code implementation • ICLR 2022 • Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, Ce Zhang
We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries.
1 code implementation • 12 Sep 2022 • Jiawei Zhang, Linyi Li, Ce Zhang, Bo Li
In particular, we propose a certifiably robust learning with reasoning pipeline (CARE), which consists of a learning component and a reasoning component.
1 code implementation • 20 Jul 2022 • Chulin Xie, Pin-Yu Chen, Qinbin Li, Arash Nourian, Ce Zhang, Bo Li
To address these challenges, in this paper, we introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account, and enables an efficient decomposition of the VFL optimization objective to sub-objectives that can be iteratively tackled by the server and the clients on their own.
1 code implementation • NeurIPS 2023 • Mark Mazumder, Colby Banbury, Xiaozhe Yao, Bojan Karlaš, William Gaviria Rojas, Sudnya Diamos, Greg Diamos, Lynn He, Alicia Parrish, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Douwe Kiela, David Jurado, David Kanter, Rafael Mosquera, Juan Ciro, Lora Aroyo, Bilge Acun, Lingjiao Chen, Mehul Smriti Raje, Max Bartolo, Sabri Eyuboglu, Amirata Ghorbani, Emmett Goodman, Oana Inel, Tariq Kane, Christine R. Kirkpatrick, Tzu-Sheng Kuo, Jonas Mueller, Tristan Thrush, Joaquin Vanschoren, Margaret Warren, Adina Williams, Serena Yeung, Newsha Ardalani, Praveen Paritosh, Lilith Bat-Leah, Ce Zhang, James Zou, Carole-Jean Wu, Cody Coleman, Andrew Ng, Peter Mattson, Vijay Janapa Reddi
Machine learning research has long focused on models rather than datasets, and prominent datasets are used for common ML tasks without regard to the breadth, difficulty, and faithfulness of the underlying problems.
no code implementations • 5 Jul 2022 • Susie Xi Rao, Piriyakorn Piriyatamwong, Parijat Ghoshal, Sara Nasirian, Emmanuel de Salis, Sandra Mitrović, Michael Wechner, Vanya Brucker, Peter Egger, Ce Zhang
Scientific publication output is growing exponentially.
1 code implementation • 20 Jun 2022 • Kenza Amara, Rex Ying, Zitao Zhang, Zhihao Han, Yinan Shan, Ulrik Brandes, Sebastian Schemm, Ce Zhang
As GNN models are deployed to more mission-critical applications, we are in dire need of a common evaluation protocol for explainability methods of GNNs.
1 code implementation • 19 Jun 2022 • Yang Li, Yu Shen, Wentao Zhang, Ce Zhang, Bin Cui
End-to-end AutoML, which automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning, has attracted intensive interest from both academia and industry.
1 code implementation • 13 Jun 2022 • Yilmazcan Ozyurt, Stefan Feuerriegel, Ce Zhang
To the best of our knowledge, ours is the first framework to learn domain-invariant, contextual representation for UDA of time series data.
1 code implementation • 12 Jun 2022 • Lijie Xu, Shuang Qiu, Binhang Yuan, Jiawei Jiang, Cedric Renggli, Shaoduo Gan, Kaan Kara, Guoliang Li, Ji Liu, Wentao Wu, Jieping Ye, Ce Zhang
In this paper, we first conduct a systematic empirical study on existing data shuffling strategies, which reveals that all existing strategies have room for improvement -- they all suffer in terms of I/O performance or convergence rate.
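One common point in the trade-off space such strategies occupy is buffer-based partial shuffling: a single sequential pass over the data (cheap I/O) at the cost of only window-local randomness (weaker convergence). This is a generic sketch of that family, not one of the paper's specific strategies:

```python
import random

def window_shuffle(stream, window=4, seed=0):
    """Partial shuffle via a fixed-size buffer: one sequential pass over the
    data (good I/O), but items can only move within a local window."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) == window:
            yield buf.pop(rng.randrange(window))  # emit a random buffered item
    rng.shuffle(buf)  # drain whatever is left in the buffer
    yield from buf

out = list(window_shuffle(range(10), window=4))
print(sorted(out) == list(range(10)))  # True: a permutation, shuffled only locally
```

A full in-memory shuffle corresponds to `window = len(data)`; shrinking the window trades convergence quality for I/O efficiency.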
1 code implementation • 8 Jun 2022 • Zhen Wang, Weirui Kuang, Ce Zhang, Bolin Ding, Yaliang Li
Due to this uniqueness, existing HPO benchmarks no longer satisfy the need to compare HPO methods in the FL setting.
no code implementations • 6 Jun 2022 • Yang Li, Yu Shen, Huaijun Jiang, Tianyi Bai, Wentao Zhang, Ce Zhang, Bin Cui
The extensive experiments show that our approach considerably boosts BO by designing a promising and compact search space instead of using the entire space, and outperforms state-of-the-art methods on a wide range of benchmarks, including machine learning and deep learning tuning tasks, and neural architecture search.
no code implementations • 6 Jun 2022 • Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Zhi Yang, Ce Zhang, Bin Cui
With the extensive applications of machine learning models, automatic hyperparameter optimization (HPO) has become increasingly important.
1 code implementation • 2 Jun 2022 • Binhang Yuan, Yongjun He, Jared Quincy Davis, Tianyi Zhang, Tri Dao, Beidi Chen, Percy Liang, Christopher Re, Ce Zhang
Our key technical contribution is a scheduling algorithm that allocates different computational "tasklets" in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network.
1 code implementation • 2 Jun 2022 • Jue Wang, Binhang Yuan, Luka Rimanic, Yongjun He, Tri Dao, Beidi Chen, Christopher Re, Ce Zhang
Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slower networks.
1 code implementation • 31 May 2022 • Mintong Kang, Linyi Li, Maurice Weber, Yang Liu, Ce Zhang, Bo Li
In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem, based on the model's performance loss bound on a fairness-constrained distribution that is within bounded distributional distance of the training distribution.
no code implementations • 25 May 2022 • Mingxuan Lu, Zhichao Han, Susie Xi Rao, Zitao Zhang, Yang Zhao, Yinan Shan, Ramesh Raghunathan, Ce Zhang, Jiawei Jiang
Apart from rule-based and machine learning filters that are already deployed in production, we want to enable efficient real-time inference with graph neural networks (GNNs), which is useful to catch multihop risk propagation in a transaction graph.
1 code implementation • 23 Apr 2022 • Bojan Karlaš, David Dao, Matteo Interlandi, Bo Li, Sebastian Schelter, Wentao Wu, Ce Zhang
We present DataScope (ease.ml/datascope), the first system that efficiently computes Shapley values of training examples over an end-to-end ML pipeline, and illustrate its applications in data debugging for ML training.
1 code implementation • 22 Apr 2022 • Susie Xi Rao, Clémence Lanfranchi, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang
At online retail platforms, detecting fraudulent accounts and transactions is crucial to improve customer experience, minimize loss, and avoid unauthorized transactions.
1 code implementation • 4 Apr 2022 • Cedric Renggli, Xiaozhe Yao, Luka Kolar, Luka Rimanic, Ana Klimovic, Ce Zhang
Transfer learning can be seen as a data- and compute-efficient alternative to training models from scratch.
no code implementations • 4 Mar 2022 • Ce Zhang, Azim Eskandarian
The results demonstrate that the proposed evaluation metric accurately assesses the detection quality of camera-based systems in autonomous driving environments.
1 code implementation • International Journal of Applied Earth Observation and Geoinformation 2022 • David John, Ce Zhang
In this paper, we implement and analyse an Attention U-Net deep network for semantic segmentation using Sentinel-2 satellite sensor imagery, for the purpose of detecting deforestation within two forest biomes in South America, the Amazon Rainforest and the Atlantic Forest.
1 code implementation • 3 Feb 2022 • Maurice Weber, Linyi Li, Boxin Wang, Zhikuan Zhao, Bo Li, Ce Zhang
As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- they often do not scale to large-scale datasets with modern deep neural networks, or cannot handle loss functions that may be non-smooth, such as the 0-1 loss.
1 code implementation • 26 Jan 2022 • Gyri Reiersen, David Dao, Björn Lütjens, Konstantin Klemmer, Kenza Amara, Attila Steinegger, Ce Zhang, Xiaoxiang Zhu
Leveraging advancements in machine learning and remote sensing technologies has promising potential for impact and scale, but the resulting estimates need to be of high quality in order to replace the current forest stock protocols for certifications.
no code implementations • 18 Jan 2022 • Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Jixiang Li, Ji Liu, Ce Zhang, Bin Cui
The ever-growing demand and complexity of machine learning are putting pressure on hyper-parameter tuning systems: while the evaluation cost of models continues to increase, the scalability of state-of-the-art systems is becoming a crucial bottleneck.
1 code implementation • 5 Jan 2022 • Susie Xi Rao, Johannes Rausch, Peter Egger, Ce Zhang
Tables have long been a ubiquitous structure for storing data.
no code implementations • 17 Dec 2021 • Qian Chen, Haoxin Bai, Bingchen Che, Tianyun Zhao, Ce Zhang, Kaige Wang, Jintao Bai, Wei Zhao
To date, live-cell imaging at the nanometer scale remains challenging.
1 code implementation • LREC 2022 • Thórhildur Thorleiksdóttir, Cedric Renggli, Nora Hollenstein, Ce Zhang
Collecting human judgements is currently the most reliable evaluation method for natural language generation systems.
1 code implementation • 10 Nov 2021 • Xiangru Lian, Binhang Yuan, XueFeng Zhu, Yulong Wang, Yongjun He, Honghuan Wu, Lei Sun, Haodong Lyu, Chengjun Liu, Xing Dong, Yiqiao Liao, Mingnan Luo, Congfei Zhang, Jingru Xie, Haonan Li, Lei Chen, Renjie Huang, Jianying Lin, Chengchun Shu, Xuezhong Qiu, Zhishan Liu, Dongying Kong, Lei Yuan, Hai Yu, Sen yang, Ce Zhang, Ji Liu
Specifically, to ensure both training efficiency and training accuracy, we design a novel hybrid training algorithm in which the embedding layer and the dense neural network are handled by different synchronization mechanisms; we then build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm.
no code implementations • ICLR 2022 • Yuexiang Xie, Zhen Wang, Yaliang Li, Ce Zhang, Jingren Zhou, Bolin Ding
However, our further studies uncover that the design of the loss function of Flooding can lead to a discrepancy between its objective and implementation, and cause the instability issue.
1 code implementation • Findings (EMNLP) 2021 • Daphna Keidar, Mian Zhong, Ce Zhang, Yash Raj Shrestha, Bibek Paudel
With the recent surge in social applications relying on knowledge graphs, the need for techniques to ensure fairness in KG based methods is becoming increasingly evident.
1 code implementation • 18 Sep 2021 • Libo Wang, Rui Li, Ce Zhang, Shenghui Fang, Chenxi Duan, Xiaoliang Meng, Peter M. Atkinson
In this paper, we propose a Transformer-based decoder and construct a UNet-like Transformer (UNetFormer) for real-time urban scene segmentation.
Ranked #1 on Scene Segmentation on UAVid
1 code implementation • 30 Aug 2021 • Cedric Renggli, Luka Rimanic, Nora Hollenstein, Ce Zhang
The Bayes error rate (BER) is a fundamental concept in machine learning that quantifies the best possible accuracy any classifier can achieve on a fixed probability distribution.
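On a fully known discrete distribution the BER can be computed directly: the Bayes classifier predicts argmax_y P(y|x), and its expected error is E_x[1 - max_y P(y|x)]. A small worked example with made-up numbers (the paper's contribution concerns estimating this quantity when the distribution is unknown):

```python
# joint distribution over (x, y): marginal P(x) and class posteriors P(y | x)
p_x = {"x1": 0.5, "x2": 0.3, "x3": 0.2}
p_y_given_x = {
    "x1": {"a": 0.9, "b": 0.1},
    "x2": {"a": 0.4, "b": 0.6},
    "x3": {"a": 0.5, "b": 0.5},
}

# the Bayes classifier predicts argmax_y P(y|x); its expected error is the BER
ber = sum(p_x[x] * (1 - max(p_y_given_x[x].values())) for x in p_x)
print(round(ber, 3))  # 0.5*0.1 + 0.3*0.4 + 0.2*0.5 = 0.27
```

No classifier, however expressive, can beat 0.27 accuracy-wise on this distribution, which is what makes the BER a useful yardstick.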
no code implementations • 14 Aug 2021 • Fan Wu, Yunhui Long, Ce Zhang, Bo Li
We show that these DP GCN mechanisms are not always resilient against LinkTeller empirically under mild privacy guarantees ($\varepsilon>5$).
no code implementations • 23 Jul 2021 • Gyri Reiersen, David Dao, Björn Lütjens, Konstantin Klemmer, Xiaoxiang Zhu, Ce Zhang
This proposal paper describes the first systematic comparison of forest carbon estimation from aerial imagery, satellite imagery, and ground-truth field measurements via deep learning-based algorithms for a tropical reforestation project.
3 code implementations • 19 Jul 2021 • Yang Li, Yu Shen, Wentao Zhang, Jiawei Jiang, Bolin Ding, Yaliang Li, Jingren Zhou, Zhi Yang, Wentao Wu, Ce Zhang, Bin Cui
End-to-end AutoML, which automatically searches for ML pipelines in a space induced by feature engineering, algorithm/model selection, and hyper-parameter tuning, has attracted intensive interest from both academia and industry.
1 code implementation • 3 Jul 2021 • Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, Hongmei Shi, Shengzhuo Zhang, Xianghong Li, Tengxu Sun, Jiawei Jiang, Binhang Yuan, Sen yang, Ji Liu, Ce Zhang
Recent years have witnessed a growing list of systems for distributed data-parallel training.
no code implementations • 21 Jun 2021 • Ce Zhang, Azim Eskandarian, Xuelai Du
The output from the proposed algorithm is a surrounding environment complexity parameter.
1 code implementation • 11 Jun 2021 • Nezihe Merve Gürel, Xiangyu Qi, Luka Rimanic, Ce Zhang, Bo Li
In particular, we develop KEMLP by integrating a diverse set of weak auxiliary models based on their logical relationships to the main DNN model that performs the target task.
6 code implementations • 1 Jun 2021 • Yang Li, Yu Shen, Wentao Zhang, Yuanwei Chen, Huaijun Jiang, Mingchao Liu, Jiawei Jiang, Jinyang Gao, Wentao Wu, Zhi Yang, Ce Zhang, Bin Cui
Black-box optimization (BBO) has a broad range of applications, including automatic machine learning, engineering, physics, and experimental design.
no code implementations • NAACL 2021 • Shuai Zhang, Xi Rao, Yi Tay, Ce Zhang
To this end, this paper proposes to learn disentangled representations of KG entities - a new method that disentangles the inner latent properties of KG entities.
no code implementations • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin I. P. Rubinstein, Ce Zhang, Bo Li
To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness.
1 code implementation • 17 May 2021 • Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang
The appeal of serverless (FaaS) has triggered growing interest in how to use it in data-intensive applications such as ETL, query processing, or machine learning (ML).
1 code implementation • IEEE Transactions on Geoscience and Remote Sensing 2021 • Rui Li, Shunyi Zheng, Ce Zhang, Chenxi Duan, Jianlin Su, Libo Wang, Peter M. Atkinson
A novel attention mechanism of kernel attention with linear complexity is proposed to alleviate the large computational demand in attention.
Ranked #7 on Semantic Segmentation on ISPRS Vaihingen
1 code implementation • 25 Apr 2021 • Libo Wang, Rui Li, Chenxi Duan, Ce Zhang, Xiaoliang Meng, Shenghui Fang
The fully convolutional network (FCN) with an encoder-decoder architecture has been the standard paradigm for semantic segmentation.
Ranked #3 on Semantic Segmentation on ISPRS Potsdam (using extra training data)
1 code implementation • NAACL 2021 • Nora Hollenstein, Federico Pirovano, Ce Zhang, Lena Jäger, Lisa Beinborn
We analyze if large language models are able to predict patterns of human reading behavior.
1 code implementation • 12 Apr 2021 • Ji Liu, Ce Zhang
Scalable and efficient distributed learning is one of the main driving forces behind the recent rapid advancement of machine learning and artificial intelligence.
1 code implementation • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li
To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness.
2 code implementations • 20 Mar 2021 • Boxin Wang, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo Li
In this paper, we aim to explore the power of generative models and gradient sparsity, and propose a scalable privacy-preserving generative model DATALENS.
no code implementations • 14 Mar 2021 • Libo Wang, Ce Zhang, Rui Li, Chenxi Duan, Xiaoliang Meng, Peter M. Atkinson
However, MSR images suffer from two critical issues: 1) increased scale variation of geo-objects and 2) loss of detailed information at coarse spatial resolutions.
3 code implementations • NeurIPS 2020 • Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, Qi Zhang
In this paper, we propose Spectral Temporal Graph Neural Network (StemGNN) to further improve the accuracy of multivariate time-series forecasting.
Ranked #8 on Traffic Prediction on EXPY-TKY
2 code implementations • 20 Feb 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong
In this paper, we propose a novel and generic mechanism based on evolving attention to improve the performance of transformers.
no code implementations • 17 Feb 2021 • Nora Hollenstein, Cedric Renggli, Benjamin Glaus, Maria Barrett, Marius Troendle, Nicolas Langer, Ce Zhang
In this paper, we present the first large-scale study of systematically analyzing the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial.
no code implementations • 17 Feb 2021 • Shuai Zhang, Yi Tay, Wenqi Jiang, Da-Cheng Juan, Ce Zhang
In order for learned representations to be effective and efficient, it is ideal that the geometric inductive bias aligns well with the underlying structure of the data.
2 code implementations • 16 Feb 2021 • Rui Li, Shunyi Zheng, Ce Zhang, Chenxi Duan, Libo Wang
Based on FPN and AAM, a novel framework named Attention Aggregation Feature Pyramid Network (A2-FPN) is developed for semantic segmentation of fine-resolution remotely sensed images.
no code implementations • 15 Feb 2021 • Cedric Renggli, Luka Rimanic, Nezihe Merve Gürel, Bojan Karlaš, Wentao Wu, Ce Zhang
Developing machine learning models can be seen as a process similar to the one established for traditional software development.
2 code implementations • 4 Feb 2021 • Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He
One of the most effective methods is error-compensated compression, which offers robust convergence speed even under 1-bit compression.
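The error-compensated compression referred to above can be sketched with a generic error-feedback loop: the residual lost to compression at one step is added back to the gradient at the next. The sign compressor, step size, and toy objective below are illustrative assumptions, not the paper's 1-bit Adam:

```python
import numpy as np

def sign_compress(g):
    """1-bit compression: keep only the sign, rescaled by the mean magnitude."""
    return np.sign(g) * np.mean(np.abs(g))

def ec_sgd_step(w, grad, error, lr=0.1):
    """Error-compensated step: fold the previous compression residual
    back into the gradient before compressing again."""
    corrected = grad + error
    compressed = sign_compress(corrected)
    new_error = corrected - compressed  # residual carried to the next step
    return w - lr * compressed, new_error

# toy quadratic objective f(w) = 0.5 * ||w||^2, whose gradient is w itself
w = np.array([3.0, -2.0, 1.0])
error = np.zeros_like(w)
for _ in range(200):
    w, error = ec_sgd_step(w, grad=w, error=error)
print(np.linalg.norm(w))  # the norm shrinks toward the optimum at 0
```

The error-feedback term is what preserves convergence: the long-run average of the compressed updates tracks the true gradient even though each individual update is 1-bit.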
no code implementations • 24 Jan 2021 • Ce Zhang, Young-Keun Kim, Azim Eskandarian
The proposed CNN model, namely EEG-Inception, is built on the backbone of the Inception-Time network, which has been shown to be highly efficient and accurate for time-series classification.
no code implementations • 1 Jan 2021 • Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Yunhai Tong
Instead, we model their dependencies via a chain of prediction models that take previous attention maps as input to predict the attention maps of a new layer through convolutional neural networks.
no code implementations • 20 Dec 2020 • Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Mo Cheng, Yinan Shan, Yang Zhao, Ce Zhang
Massive account registration has raised concerns on risk management in e-commerce companies, especially when registration increases rapidly within a short time frame.
no code implementations • 8 Dec 2020 • Yang Li, Jiawei Jiang, Jinyang Gao, Yingxia Shao, Ce Zhang, Bin Cui
In this framework, the BO methods are used to solve the HPO problem for each ML algorithm separately, incorporating a much smaller hyperparameter space for BO methods.
5 code implementations • 5 Dec 2020 • Yang Li, Yu Shen, Jiawei Jiang, Jinyang Gao, Ce Zhang, Bin Cui
Instead of sampling configurations randomly in HB, BOHB samples configurations based on a BO surrogate model, which is constructed with the high-fidelity measurements only.
no code implementations • NeurIPS 2020 • Zhiqiang Tao, Yaliang Li, Bolin Ding, Ce Zhang, Jingren Zhou, Yun Fu
Computing the gradient of model hyperparameters, i.e., the hypergradient, enables a promising and natural way to solve the hyperparameter optimization task.
no code implementations • COLING 2020 • Nora Hollenstein, Adrian van der Lek, Ce Zhang
We demonstrate the functionalities of the new user interface for CogniVal.
1 code implementation • 29 Nov 2020 • Rui Li, Shunyi Zheng, Chenxi Duan, Jianlin Su, Ce Zhang
The attention mechanism can refine the extracted feature maps and boost the classification performance of a deep network, and has become an essential technique in computer vision and natural language processing.
1 code implementation • 24 Nov 2020 • Susie Xi Rao, Shuai Zhang, Zhichao Han, Zitao Zhang, Wei Min, Zhiyao Chen, Yinan Shan, Yang Zhao, Ce Zhang
At online retail platforms, it is crucial to actively detect the risks of transactions to improve customer experience and minimize financial loss.
1 code implementation • 22 Nov 2020 • Xinzheng Zhang, Hang Su, Ce Zhang, Xiaowei Gu, Xiaoheng Tan, Peter M. Atkinson
In this paper, a robust unsupervised approach is proposed for small area change detection from multi-temporal SAR images using deep learning.
3 code implementations • 11 Nov 2020 • Shuai Zhang, Huoyu Liu, Aston Zhang, Yue Hu, Ce Zhang, Yumeng Li, Tanchao Zhu, Shaojian He, Wenwu Ou
Furthermore, we present two variants of hypercuboids to enhance the capability in capturing the diversities of user interests.
1 code implementation • 19 Oct 2020 • Mohammad Reza Karimi, Nezihe Merve Gürel, Bojan Karlaš, Johannes Rausch, Ce Zhang, Andreas Krause
Given $k$ pre-trained classifiers and a stream of unlabeled data examples, how can we actively decide when to query a label so that we can distinguish the best model from the rest while making a small number of queries?
2 code implementations • 16 Oct 2020 • Cedric Renggli, Luka Rimanic, Luka Kolar, Wentao Wu, Ce Zhang
In our experience of working with domain experts who are using today's AutoML systems, a common problem we encountered is what we call "unrealistic expectations" -- when users are facing a very challenging task with a noisy data acquisition process, while being expected to achieve startlingly high accuracy with machine learning (ML).
no code implementations • NeurIPS 2020 • Luka Rimanic, Cedric Renggli, Bo Li, Ce Zhang
This analysis requires in-depth understanding of the properties that connect both the transformed space and the raw feature space.
no code implementations • CVPR 2022 • Cedric Renggli, André Susano Pinto, Luka Rimanic, Joan Puigcerver, Carlos Riquelme, Ce Zhang, Mario Lucic
Transfer learning has been recently popularized as a data-efficient alternative to training models from scratch, in particular for computer vision tasks where it provides a remarkably solid baseline.
no code implementations • 12 Oct 2020 • Wenqi Jiang, Zhenhao He, Shuai Zhang, Thomas B. Preußer, Kai Zeng, Liang Feng, Jiansong Zhang, Tongxuan Liu, Yong Li, Jingren Zhou, Ce Zhang, Gustavo Alonso
MicroRec accelerates recommendation inference by (1) redesigning the data structures involved in the embeddings to reduce the number of lookups needed and (2) taking advantage of the availability of High-Bandwidth Memory (HBM) in FPGA accelerators to tackle the latency by enabling parallel lookups.
no code implementations • 21 Sep 2020 • Maurice Weber, Nana Liu, Bo Li, Ce Zhang, Zhikuan Zhao
This link leads to a tight robustness condition which puts constraints on the amount of noise a classifier can tolerate, independent of whether the noise source is natural or adversarial.
no code implementations • 14 Sep 2020 • Tianhao Wang, Johannes Rausch, Ce Zhang, Ruoxi Jia, Dawn Song
The federated SV preserves the desirable properties of the canonical SV while it can be calculated without incurring extra communication cost and is also able to capture the effect of participation order on data value.
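The canonical Shapley value that the federated SV extends can be computed exactly for small player sets by averaging marginal contributions over all orderings. A minimal sketch with a made-up utility function (not the paper's federated variant):

```python
from itertools import permutations

def shapley_values(players, utility):
    """Exact Shapley value: average marginal contribution of each player
    over all orderings of the players."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = utility(coalition)
            coalition = coalition | {p}
            values[p] += utility(coalition) - before
    return {p: v / len(perms) for p, v in values.items()}

# toy utility: data from parties 'a' and 'b' is complementary, 'c' adds nothing
def utility(S):
    return 1.0 if {'a', 'b'} <= set(S) else 0.0

print(shapley_values(['a', 'b', 'c'], utility))  # {'a': 0.5, 'b': 0.5, 'c': 0.0}
```

The exact computation is factorial in the number of players, which is one reason the federated setting calls for a variant that avoids extra communication.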
no code implementations • 3 Sep 2020 • Rui Li, Shunyi Zheng, Chenxi Duan, Ce Zhang, Jianlin Su, P. M. Atkinson
A novel attention mechanism of kernel attention with linear complexity is proposed to alleviate the large computational demand in attention.
no code implementations • 26 Aug 2020 • Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang
Adam is an important optimization algorithm for ensuring efficiency and accuracy when training many important tasks such as BERT and ImageNet.
no code implementations • 25 Aug 2020 • Ce Zhang, Azim Eskandarian
Then, the EEG signal preprocessing, feature extraction, and classification algorithms for driver state detection are reviewed.
no code implementations • 25 Aug 2020 • Ce Zhang, Azim Eskandarian
The experiment results show that the proposed algorithm's average computation time is 37.22% less than that of FBCSP (the winning method in BCI Competition IV) and 4.98% longer than the conventional CSP method.
1 code implementation • 1 Aug 2020 • Rui Li, Shunyi Zheng, Chenxi Duan, Ce Zhang
In this paper, a Multi-Scale Fully Convolutional Network (MSFCN) with multi-scale convolutional kernel is proposed to exploit discriminative representations from two-dimensional (2D) satellite images.
no code implementations • 29 Jul 2020 • Yuexiang Xie, Zhen Wang, Yaliang Li, Bolin Ding, Nezihe Merve Gürel, Ce Zhang, Minlie Huang, Wei Lin, Jingren Zhou
Then we instantiate this search strategy by optimizing both a dedicated graph neural network (GNN) and the adjacency tensor associated with the defined feature graph.
2 code implementations • 26 Jul 2020 • Rui Li, Chenxi Duan, Shunyi Zheng, Ce Zhang, Peter M. Atkinson
In this Letter, we incorporate multi-scale features generated by different layers of U-Net and design a multi-scale skip connected and asymmetric-convolution-based U-Net (MACU-Net), for segmentation using fine-resolution remotely sensed images.
no code implementations • 29 Jun 2020 • Mario Arduini, Lorenzo Noci, Federico Pirovano, Ce Zhang, Yash Raj Shrestha, Bibek Paudel
As a second step, we explore gender bias in KGE, and a careful examination of popular KGE algorithms suggests that a sensitive attribute such as the gender of a person can be predicted from the embedding.
1 code implementation • 11 May 2020 • Bojan Karlaš, Peng Li, Renzhi Wu, Nezihe Merve Gürel, Xu Chu, Wentao Wu, Ce Zhang
Machine learning (ML) applications have been thriving recently, largely attributed to the increasing availability of data.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Giuseppe Russo, Nora Hollenstein, Claudiu Musat, Ce Zhang
We introduce CGA, a conditional VAE architecture, to control, generate, and augment text.
no code implementations • 21 Apr 2020 • Simona Santamaria, David Dao, Björn Lütjens, Ce Zhang
Recent works propose low-cost and accurate MRV via automatically determining forest carbon from drone imagery, collected by the landowners.
1 code implementation • 19 Mar 2020 • Maurice Weber, Xiaojun Xu, Bojan Karlaš, Ce Zhang, Bo Li
In addition, we theoretically show that it is possible to train the robust smoothed models efficiently for simple models such as K-nearest neighbor classifiers, and we propose an exact smooth-training algorithm that eliminates the need to sample from a noise distribution for such models.
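The "smoothed models" in the snippet above refer to randomized smoothing, which is normally evaluated by Monte-Carlo majority voting over noisy copies of the input — exactly the sampling the paper's exact smooth-training algorithm avoids for models such as K-nearest neighbors. A generic sampling-based sketch, with a hypothetical stand-in base classifier:

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte-Carlo randomized smoothing: classify noisy copies of x and
    return the majority vote (the smoothed classifier g)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    votes = np.array([classifier(x + eps) for eps in noise])
    return int(np.argmax(np.bincount(votes)))

# hypothetical base classifier: a simple linear decision rule
def base_classifier(x):
    return int(x.sum() > 0)

x = np.array([0.4, 0.3])
print(smoothed_predict(base_classifier, x))  # 1: most noisy copies stay positive
```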
no code implementations • 3 Mar 2020 • Xinzheng Zhang, Hang Su, Ce Zhang, Peter M. Atkinson, Xiaoheng Tan, Xiaoping Zeng, Xin Jian
Parallel FCM is utilized on these two mapped DDIs to obtain three types of pseudo-label pixels, namely, changed pixels, unchanged pixels, and intermediate pixels.
1 code implementation • 28 Feb 2020 • Zhuolin Yang, Zhikuan Zhao, Boxin Wang, Jiawei Zhang, Linyi Li, Hengzhi Pei, Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Bo Li
Intensive algorithmic efforts have recently been made to enable rapid improvements in certified robustness for complex ML models.
1 code implementation • 27 Feb 2020 • Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li
Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
no code implementations • 17 Jan 2020 • Xinzheng Zhang, Guo Liu, Ce Zhang, Peter M. Atkinson, Xiaoheng Tan, Xin Jian, Xichuan Zhou, Yongming Li
The prediction of this phase is the set of changed and unchanged superpixels.
no code implementations • 23 Dec 2019 • Yujing Wang, Yaming Yang, Yiren Chen, Jing Bai, Ce Zhang, Guinan Su, Xiaoyu Kou, Yunhai Tong, Mao Yang, Lidong Zhou
Learning text representation is crucial for text classification and other language related tasks.
no code implementations • 19 Dec 2019 • Fotis Psallidas, Yiwen Zhu, Bojan Karlas, Matteo Interlandi, Avrilia Floratou, Konstantinos Karanasos, Wentao Wu, Ce Zhang, Subru Krishnan, Carlo Curino, Markus Weimer
The recent success of machine learning (ML) has led to an explosive growth both in terms of new systems and algorithms built in industry and academia, and new applications built by an ever-growing community of data science (DS) practitioners.
no code implementations • 16 Dec 2019 • Huajun Wang, Yuan-Hai Shao, Shenglong Zhou, Ce Zhang, Naihua Xiu
To distinguish all of them, in this paper, we introduce a new model equipped with an $L_{0/1}$ soft-margin loss (dubbed as $L_{0/1}$-SVM) which well captures the nature of the binary classification.
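The $L_{0/1}$ soft-margin loss mentioned above can be written out explicitly — our reconstruction of the standard 0/1 soft-margin formulation, which may differ from the paper's exact constants:

```latex
% 0/1 soft-margin loss: counts margin violations instead of penalizing them linearly
\[
\ell_{0/1}(t) =
\begin{cases}
1, & t > 0,\\
0, & t \le 0,
\end{cases}
\qquad
\min_{w,b}\ \frac{1}{2}\|w\|^2
  + C \sum_{i=1}^{n} \ell_{0/1}\bigl(1 - y_i(w^\top x_i + b)\bigr).
\]
```

Unlike the hinge loss, which grows linearly with the margin violation, this loss simply counts violations, which is what makes the problem nonconvex and nonsmooth.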
no code implementations • LREC 2020 • Nora Hollenstein, Marius Troendle, Ce Zhang, Nicolas Langer
We recorded and preprocessed ZuCo 2.0, a new dataset of simultaneous eye-tracking and electroencephalography during natural reading and during annotation.
1 code implementation • CVPR 2021 • Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song
Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning, and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation.
2 code implementations • 5 Nov 2019 • Johannes Rausch, Octavio Martinez, Fabian Bissig, Ce Zhang, Stefan Feuerriegel
Translating renderings (e.g., PDFs, scans) into hierarchical document structures is widely demanded in the daily routines of many real-world applications.
no code implementations • 10 Oct 2019 • Xupeng Miao, Nezihe Merve Gürel, Wentao Zhang, Zhichao Han, Bo Li, Wei Min, Xi Rao, Hansheng Ren, Yinan Shan, Yingxia Shao, Yujie Wang, Fan Wu, Hui Xue, Yaming Yang, Zitao Zhang, Yang Zhao, Shuai Zhang, Yujing Wang, Bin Cui, Ce Zhang
Despite the wide application of Graph Convolutional Network (GCN), one major limitation is that it does not benefit from the increasing depth and suffers from the oversmoothing problem.