1 code implementation • 29 May 2023 • Yifei Wang, Zhengyang Zhou, Liqin Wang, John Laurentiev, Peter Hou, Li Zhou, Pengyu Hong
Confounding factors, non-sensitive variables that nonetheless manifest systematic differences, can significantly affect fairness evaluation.
no code implementations • 16 May 2023 • Yifei Wang, Yiyang Zhou, Jihua Zhu, Xinyuan Liu, Wenbiao Yan, Zhiqiang Tian
Label distribution learning (LDL) is a new machine learning paradigm for addressing label ambiguity.
1 code implementation • CVPR 2023 • Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang
Adversarial training is widely acknowledged as the most effective method for improving the robustness of Deep Neural Networks (DNNs) against adversarial examples.
2 code implementations • 12 Mar 2023 • Xiaojun Guo, Yifei Wang, Tianqi Du, Yisen Wang
Instead of characterizing oversmoothing from the view of complete collapse in which representations converge to a single point, we dive into a more general perspective of dimensional collapse in which representations lie in a narrow cone.
Ranked #4 on Node Property Prediction on ogbn-arxiv
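The dimensional-collapse view in the entry above can be quantified by inspecting the singular-value spectrum of the learned representations. Below is a minimal sketch, assuming a generic `embeddings` matrix of shape (num_nodes, dim) rather than the paper's code, that computes an entropy-based effective rank; a spectrum dominated by a few directions signals collapse into a narrow cone.

```python
import numpy as np

def effective_rank(embeddings: np.ndarray) -> float:
    """Effective rank via the entropy of the normalized singular values."""
    # Center the representations so the spectrum reflects their spread.
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    p = s / s.sum()                               # normalized singular values
    entropy = -(p * np.log(p + 1e-12)).sum()
    return float(np.exp(entropy))                 # between 1 (collapsed) and dim

# Toy example (illustrative data, not the paper's): embeddings lying in a
# narrow cone have a much lower effective rank than spread-out embeddings.
rng = np.random.default_rng(0)
flat = rng.normal(size=(1000, 64))
collapsed = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 64))   # rank-2 cone
print(effective_rank(flat), effective_rank(collapsed))   # high (near 64) vs low (near 2)
```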
1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang
In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.
1 code implementation • 4 Mar 2023 • Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang
In this work, we propose a unified theoretical understanding for existing variants of non-contrastive learning.
1 code implementation • 2 Mar 2023 • Rundong Luo, Yifei Wang, Yisen Wang
Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: either strong or weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap.
no code implementations • 28 Feb 2023 • Wenbiao Yan, Jihua Zhu, Yiyang Zhou, Yifei Wang, Qinghai Zheng
In this way, the semantic consistency learned from multi-view data can improve the information bottleneck so that it more precisely distinguishes the consistent information and learns a unified feature representation with more discriminative consistent information for clustering.
no code implementations • 26 Feb 2023 • Yiyang Zhou, Qinghai Zheng, Wenbiao Yan, Yifei Wang, Pengcheng Shi, Jihua Zhu
Further, we design a multi-level consistency collaboration strategy, which uses the consistent information in the semantic space as a self-supervised signal to collaborate with the cluster assignments in the feature space.
Ranked #1 on Multiview Clustering on Fashion-MNIST
1 code implementation • 12 Feb 2023 • Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu
To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy.
no code implementations • 18 Dec 2022 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang
Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.
no code implementations • 16 Dec 2022 • Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina, Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage, Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel, Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin Van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang, Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis, Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat, Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng, Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan, Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara Bosticardo, Jack Breen, Mikael Brudfors, Raphael Brüngel, Mariano Cabezas, Alberto Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong Cho, Min-Kook Choi, Chuantao Xie Chuantao Xie, Dana Cobzas, Julien Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Di Fan, Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida, Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford, Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, Seungbum Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer, Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi, SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin, Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan, Suleiman A. 
Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao Li, Ling Li, Xingyu Li, Fuyuan Liao, Kuanlun Liao, Arlindo Limede Oliveira, Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan Lu, Imanol Luengo, Christina B. Lund, Huan Minh Luu, Yi Lv, Uzay Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed, Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira, David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park, Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak, Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik, Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen, Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid, Jiacheng Wang, Yifei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu, Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani, Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu, Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao, Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
Of these, 84% were based on standard architectures.
no code implementations • 1 Nov 2022 • Yifei Wang, Tavor Baharav, Yanjun Han, Jiantao Jiao, David Tse
In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm.
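To make the setting concrete, here is a toy simulation of the problem; the Beta reservoir distribution, Bernoulli rewards, and the naive draw-then-commit strategy are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class InfiniteArmedBandit:
    """Each new arm's mean reward is drawn from an unknown reservoir
    distribution; pulling an arm returns a noisy reward sample."""

    def __init__(self):
        self.means = []                              # means of arms drawn so far

    def new_arm(self) -> int:
        self.means.append(rng.beta(1.0, 3.0))        # reservoir distribution (assumed)
        return len(self.means) - 1

    def pull(self, arm: int) -> float:
        return float(rng.random() < self.means[arm])  # Bernoulli noise (assumed)

# Naive strategy: draw m fresh arms, pull each t times, commit to the best-looking one.
bandit, m, t = InfiniteArmedBandit(), 20, 50
arms = [bandit.new_arm() for _ in range(m)]
estimates = [np.mean([bandit.pull(a) for _ in range(t)]) for a in arms]
best = arms[int(np.argmax(estimates))]
print(f"best empirical mean {max(estimates):.2f}, true mean {bandit.means[best]:.2f}")
```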
1 code implementation • 15 Oct 2022 • Qi Zhang, Yifei Wang, Yisen Wang
Masked Autoencoders (MAE) based on a reconstruction task have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across different benchmark datasets.
1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang
We find that when gradients from some attention blocks, or perturbations on some patches, are randomly masked during adversarial training, the adversarial robustness of ViTs can be remarkably improved; this may open up a line of work exploring the architectural information inside newly designed models like ViTs.
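As a rough illustration of the perturbation-masking idea, the sketch below zeroes out an FGSM-style perturbation on a random subset of image patches; the single attack step, patch size, masking ratio, and stand-in classifier are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_adv_example(model, x, y, eps=8 / 255, patch=16, mask_ratio=0.5):
    """One FGSM-style step where the perturbation is kept only on a random
    subset of image patches (a stand-in for the masking idea above)."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    step = eps * delta.grad.sign()

    # Per-patch mask: keep the perturbation on roughly (1 - mask_ratio) of patches.
    b, _, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > mask_ratio).float()
    keep = F.interpolate(keep, scale_factor=patch, mode="nearest")

    return (x + step * keep).clamp(0, 1).detach()

# Usage with a stand-in classifier (any ViT-style model is called the same way).
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10))
x, y = torch.rand(2, 3, 224, 224), torch.tensor([0, 1])
x_adv = masked_adv_example(toy_model, x, y)
print(x_adv.shape)
```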
1 code implementation • 13 Oct 2022 • Qixun Wang, Yifei Wang, Hong Zhu, Yisen Wang
In this paper, we empirically show that sample-wise AT has limited improvement on OOD performance.
1 code implementation • 30 Sep 2022 • Yifei Wang, Yixuan Hua, Emmanuel Candés, Mert Pilanci
For randomly generated data, we show the existence of a phase transition in recovering planted neural network models, which is easy to describe: whenever the ratio between the number of samples and the dimension exceeds a numerical threshold, the recovery succeeds with high probability; otherwise, it fails with high probability.
1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong
MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptrons and/or message passing in graph neural networks.
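A heavily simplified sketch of the two stages, using 1-hop ego-subgraphs and Weisfeiler-Lehman graph hashes as stand-ins for MCM's motif vocabulary and motif matching (these stand-ins are assumptions for illustration, not the method itself):

```python
import networkx as nx
import numpy as np

def motif_features(G: nx.Graph, radius: int = 1) -> np.ndarray:
    """Build a motif 'vocabulary' from ego-subgraph hashes (unsupervised),
    then represent each node by a one-hot indicator of its local motif."""
    # WL hashes stand in for the actual motif extraction/matching in MCM.
    hashes = {
        n: nx.weisfeiler_lehman_graph_hash(nx.ego_graph(G, n, radius=radius))
        for n in G.nodes
    }
    vocab = sorted(set(hashes.values()))            # the motif vocabulary
    index = {h: i for i, h in enumerate(vocab)}
    feats = np.zeros((G.number_of_nodes(), len(vocab)))
    for row, n in enumerate(G.nodes):
        feats[row, index[hashes[n]]] = 1.0
    return feats                                    # fed to an MLP and/or a GNN

G = nx.karate_club_graph()
print(motif_features(G).shape)                      # (34, number of distinct local motifs)
```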
1 code implementation • 29 Jun 2022 • Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Moreover, we show that the optimization-induced variants of our models can boost the performance and improve training stability and efficiency as well.
no code implementations • 2 Jun 2022 • Yifei Wang, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
To address this issue, we present a new video watermarking scheme based on a joint Dual-Tree Complex Wavelet Transform (DTCWT) and Singular Value Decomposition (SVD), which is resistant to frame-rate conversion.
1 code implementation • 26 May 2022 • Yifei Wang, Peng Chen, Mert Pilanci, Wuchen Li
We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation.
1 code implementation • 19 Apr 2022 • Alex Leviyev, Joshua Chen, Yifei Wang, Omar Ghattas, Aaron Zimmerman
Meanwhile, Stein variational Newton (SVN), a Newton-like extension of SVGD, dramatically accelerates the convergence of SVGD by incorporating Hessian information into the dynamics, but also produces biased samples.
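For reference, the first-order SVGD update that SVN augments with Hessian information can be sketched as follows; the RBF kernel, fixed bandwidth, and Gaussian toy target are illustrative choices, not the paper's setup.

```python
import numpy as np

def svgd_step(x, grad_logp, step=0.05, h=1.0):
    """One SVGD update for particles x of shape (n, d), targeting a density
    with score function grad_logp, using an RBF kernel of bandwidth h."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]              # diff[j, i] = x_j - x_i
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h))        # k[j, i] = k(x_j, x_i)
    grad_k = -diff * k[..., None] / h                 # grad wrt x_j of k(x_j, x_i)
    phi = (k.T @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi

# Toy usage: a 2-D standard Gaussian target, for which grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, size=(200, 2))
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(axis=0), x.std(axis=0))   # mean moves toward 0, spread toward 1
```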
no code implementations • 15 Apr 2022 • Tong Yang, Yifei Wang, Long Sha, Jan Engelbrecht, Pengyu Hong
To the best of our knowledge, by applying abstract algebra to statistical learning, this work develops the first formal language for general knowledge graphs and also sheds light on the problem of neural-symbolic integration from an algebraic perspective.
no code implementations • ICLR 2022 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as importance sampling of CEM.
1 code implementation • 25 Mar 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapping augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.
no code implementations • 19 Nov 2021 • Zhirui Wang, Yifei Wang, Yisen Wang
Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attacks.
no code implementations • NeurIPS 2021 • Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.
no code implementations • 13 Oct 2021 • Yifei Wang, Tolga Ergen, Mert Pilanci
Recent work has proven that strong duality holds (i.e., there is zero duality gap) for regularized finite-width two-layer ReLU networks and has consequently provided an equivalent convex training problem.
no code implementations • ICLR 2022 • Yifei Wang, Mert Pilanci
We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem.
no code implementations • 12 Oct 2021 • Justin Li, Dakang Zhang, Yifei Wang, Christopher Ye, Hao Xu, Pengyu Hong
Since the late 1960s, there have been numerous successes in the exciting new frontier of asymmetric catalysis.
no code implementations • ICLR 2022 • Yifei Wang, Jonathan Lacotte, Mert Pilanci
As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.
no code implementations • 29 Sep 2021 • Zhirui Wang, Yifei Wang, Yisen Wang
Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attacks.
no code implementations • ICLR 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.
no code implementations • 29 Sep 2021 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang
Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.
1 code implementation • 1 Jul 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
no code implementations • ICML Workshop AML 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.
no code implementations • 15 May 2021 • Jonathan Lacotte, Yifei Wang, Mert Pilanci
Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix.
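A common way to make "effective dimension" concrete is the ridge effective dimension d_eff(mu) = tr(H (H + mu I)^{-1}). The sketch below is illustrative (the synthetic spectrum is an assumption, not the paper's setting) and shows it can be far smaller than the ambient dimension, which is what allows a small sketch size.

```python
import numpy as np

def effective_dimension(H: np.ndarray, mu: float) -> float:
    """d_eff(mu) = trace(H (H + mu I)^{-1}) = sum_i lambda_i / (lambda_i + mu)."""
    eigvals = np.linalg.eigvalsh(H)
    return float((eigvals / (eigvals + mu)).sum())

# A Hessian with a fast-decaying spectrum (illustrative): ambient dimension is
# 1000, but only a few dozen directions carry significant curvature.
d = 1000
eigvals = 1.0 / (np.arange(1, d + 1) ** 2)            # lambda_i = 1 / i^2
H = np.diag(eigvals)
print(d, effective_dimension(H, mu=1e-3))             # 1000 vs roughly 48
```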
1 code implementation • NeurIPS 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years.
1 code implementation • 12 Feb 2021 • Yifei Wang, Peng Chen, Wuchen Li
We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems.
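For orientation, a plain (unprojected) Wasserstein gradient descent step for particles can be sketched as below, with a Gaussian kernel density estimate standing in for the density term; the projection onto a low-dimensional subspace that gives pWGD its name is omitted, and the bandwidth and toy target are assumptions.

```python
import numpy as np

def wgd_step(x, grad_potential, step=0.05, h=0.1):
    """One Wasserstein gradient descent step for KL(rho || pi), pi ~ exp(-V):
    dx/dt = -grad V(x) - grad log rho(x), with rho estimated by a Gaussian KDE.
    The projection step of pWGD is intentionally omitted in this sketch."""
    diff = x[:, None, :] - x[None, :, :]                   # diff[i, j] = x_i - x_j
    w = np.exp(-(diff ** 2).sum(-1) / (2 * h))             # kernel weights
    w = w / w.sum(axis=1, keepdims=True)
    grad_log_rho = -(w[..., None] * diff).sum(axis=1) / h  # grad log KDE at each x_i
    return x - step * (grad_potential(x) + grad_log_rho)

# Toy target: 2-D standard Gaussian posterior, for which grad V(x) = x.
rng = np.random.default_rng(0)
x = rng.normal(loc=4.0, size=(300, 2))
for _ in range(400):
    x = wgd_step(x, lambda z: z)
print(x.mean(axis=0))   # particles drift toward the posterior mean at 0
```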
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #11 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
1 code implementation • ICLR 2021 • Peizhao Li, Yifei Wang, Han Zhao, Pengyu Hong, Hongfu Liu
Disparate impact has raised serious concerns about machine learning applications and their societal impacts.
no code implementations • 1 Jan 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
no code implementations • COLING 2020 • Chao Tian, Yifei Wang, Hao Cheng, Yijiang Lian, Zhihua Zhang
In this paper we propose a unified approach for supporting different generation manners of machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models.
no code implementations • 5 Aug 2020 • Yijiang Lian, Zhijie Chen, Xin Pei, Shuang Li, Yifei Wang, Yuefeng Qiu, Zhiheng Zhang, Zhipeng Tao, Liang Yuan, Hanju Guan, Kefeng Zhang, Zhigang Li, Xiaochun Liu
Industrial sponsored search system (SSS) can be logically divided into three modules: keywords matching, ad retrieving, and ranking.
no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input, which, however, inevitably leads to severe accuracy reduction as it discards the non-robust yet useful features.
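For reference, a minimal PGD-based adversarial training step looks like the sketch below; the L-infinity ball, step sizes, and the tiny stand-in model are generic assumptions rather than the paper's setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent attack inside an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)   # project back onto the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

# One adversarial training step: fit the model on adversarial examples only,
# which is the mechanism that trades natural accuracy for robustness.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))  # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
F.cross_entropy(model(pgd_attack(model, x, y)), y).backward()
opt.step()
```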
no code implementations • 10 Jun 2020 • Yifei Wang, Jonathan Lacotte, Mert Pilanci
As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.
no code implementations • 13 Jan 2020 • Yifei Wang, Wuchen Li
We introduce a framework for Newton's flows in probability space with information metrics, named information Newton's flows.
no code implementations • 1 Nov 2019 • Yifei Wang, Rui Liu, Yong Chen, Hui Zhangs, Zhiwen Ye
Spectral Clustering is a popular technique to split data points into groups, especially for complex datasets.
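For readers unfamiliar with the technique, a minimal normalized spectral clustering sketch follows (RBF affinity, the k smallest eigenvectors of the normalized Laplacian, then k-means; all parameter choices are illustrative).

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, gamma=1.0):
    """Cluster the rows of X into k groups via the normalized graph Laplacian."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq)                        # RBF affinity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d + 1e-12)
    L_sym = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)                # eigenvalues in ascending order
    U = vecs[:, :k]                                # k smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)

# Two concentric rings: a "complex" dataset where plain k-means struggles
# but spectral clustering separates the rings.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 400)
r = np.repeat([1.0, 3.0], 200)
X = np.c_[r * np.cos(t), r * np.sin(t)] + 0.05 * rng.normal(size=(400, 2))
print(np.bincount(spectral_clustering(X, k=2, gamma=2.0)))
```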
1 code implementation • 4 Sep 2019 • Yifei Wang, Wuchen Li
We present a framework for Nesterov's accelerated gradient flows in probability space to design efficient mean-field Markov chain Monte Carlo (MCMC) algorithms for Bayesian inverse problems.
no code implementations • 13 Jul 2017 • Yifei Wang, Wen Li, Dengxin Dai, Luc van Gool
Our work builds on the recently proposed Deep CORAL method, which trains a convolutional neural network while simultaneously minimizing the Euclidean distance between the covariance matrices of the source and target domains.
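The quantity being minimized is the CORAL loss; a compact sketch follows, with the 1/(4 d^2) scaling taken from the original CORAL formulation and random feature matrices used as stand-ins for real network activations.

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius distance between source/target feature covariances,
    scaled by 1 / (4 d^2) as in Deep CORAL."""
    d = source.size(1)

    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return f.t() @ f / (f.size(0) - 1)

    return ((covariance(source) - covariance(target)) ** 2).sum() / (4 * d * d)

# Typically added to the classification loss on a chosen feature layer;
# the tensors below are random stand-ins for source/target activations.
src, tgt = torch.randn(64, 256), torch.randn(64, 256) * 2 + 1
print(coral_loss(src, tgt))
```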