no code implementations • 17 Jul 2024 • Mohamad Amin Mohamadi, Zhiyuan Li, Lei Wu, Danica J. Sutherland

We present a theoretical explanation of the "grokking" phenomenon, where a model generalizes long after overfitting, for the originally-studied problem of modular addition.

no code implementations • 31 May 2024 • Mingze Wang, Haotian He, Jinbo Wang, Zilin Wang, Guanhua Huang, Feiyu Xiong, Zhiyu Li, Weinan E, Lei Wu

In this work, we propose an Implicit Regularization Enhancement (IRE) framework to accelerate the discovery of flat solutions in deep learning, thereby improving generalization and convergence.

no code implementations • 9 Apr 2024 • Zhanran Lin, Puheng Li, Lei Wu

One of the most intriguing findings about the structure of neural network loss landscapes is the phenomenon of mode connectivity: for two typical global minima, there exists a path connecting them without a barrier.

no code implementations • 24 Feb 2024 • Jihao Long, Xiaojun Peng, Lei Wu

In this paper, we conduct a comprehensive analysis of generalization properties of Kernel Ridge Regression (KRR) in the noiseless regime, a scenario crucial to scientific computing, where data are often generated via computer simulations.

no code implementations • 11 Feb 2024 • Liu Ziyin, Mingze Wang, Hongchao Li, Lei Wu

Symmetries exist abundantly in the loss function of neural networks.

3 code implementations • Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV-2024) 2023 • Fangchen Yu, Yina Xie, Lei Wu, Yafei Wen, Guozhi Wang, Shuai Ren, Xiaoxin Chen, Jianfeng Mao, Wenye Li

Document image dewarping is a crucial task in computer vision with numerous practical applications.

no code implementations • 26 Nov 2023 • Kaizhao Liu, ZiHao Wang, Lei Wu

We next consider the one-point strong convexity and show that as long as $n=\omega(d)$, with high probability, the landscape is one-point strongly convex in the local annulus: $\{w\in\mathbb{R}^d: o_d(1)\leqslant \|w-w^*\|\leqslant c\}$, where $w^*$ is the ground truth and $c$ is an absolute constant.

no code implementations • 24 Nov 2023 • Mingze Wang, Zeping Min, Lei Wu

Inspired by this analysis, we propose a novel algorithm called Progressive Rescaling Gradient Descent (PRGD) and show that PRGD can maximize the margin at an exponential rate.

no code implementations • 1 Oct 2023 • Mingze Wang, Lei Wu

In this paper, we provide a theoretical study of noise geometry for minibatch stochastic gradient descent (SGD), a phenomenon where noise aligns favorably with the geometry of the local landscape.

no code implementations • 5 Jun 2023 • Hongrui Chen, Jihao Long, Lei Wu

We prove that if $\beta$ is independent of the input dimension $d$, then functions in the RKHS can be learned efficiently under the $L^\infty$ norm, i.e., the sample complexity depends polynomially on $d$.

no code implementations • 1 Jun 2023 • Ahmed W. Moawad, Anastasia Janas, Ujjwal Baid, Divya Ramakrishnan, Rachit Saluja, Nader Ashraf, Leon Jekel, Raisa Amiruddin, Maruf Adewole, Jake Albrecht, Udunna Anazodo, Sanjay Aneja, Syed Muhammad Anwar, Timothy Bergquist, Evan Calabrese, Veronica Chiang, Verena Chung, Gian Marco Marco Conte, Farouk Dako, James Eddy, Ivan Ezhov, Ariana Familiar, Keyvan Farahani, Juan Eugenio Iglesias, Zhifan Jiang, Elaine Johanson, Anahita Fathi Kazerooni, Florian Kofler, Kiril Krantchev, Dominic LaBella, Koen van Leemput, Hongwei Bran Li, Marius George Linguraru, Katherine E. Link, Xinyang Liu, Nazanin Maleki, Zeke Meier, Bjoern H Menze, Harrison Moy, Klara Osenberg, Marie Piraud, Zachary Reitman, Russel Takeshi Shinohara, Nourel Hoda Tahon, Ayman Nada, Yuri S. Velichko, Chunhao Wang, Benedikt Wiestler, Walter Wiggins, Umber Shafique, Arman Avesta, Khaled Bousabarah, Satrajit Chakrabarty, Nicolo Gennaro, Wolfgang Holler, Manpreet Kaur, Pamela Lamontagne, MingDe Lin, Jan Lost, Daniel S. Marcus, Ryan Maresca, Sarah Merkaj, Ayaman Nada, Gabriel Cassinelli Pedersen, Marc von Reppert, Aristeidis Sotiras, Oleg Teytelboym, Niklas Tillmans, Malte Westerhoff, Ayda Youssef, Devon Godfrey, Scott Floyd, Andreas Rauschecker, Javier Villanueva-Meyer, Irada Pflüger, Jaeyoung Cho, Martin Bendszus, Gianluca Brugnara, Justin Cramer, Gloria J. Guzman Perez-Carillo, Derek R. Johnson, Anthony Kam, Benjamin Yin Ming Kwan, Lillian Lai, Neil U. Lall, Fatima Memon, Satya Narayana Patro, Bojan Petrovic, Tiffany Y. So, Gerard Thompson, Lei Wu, E. Brooke Schrickel, Anu Bansal, Frederik Barkhof, Cristina Besada, Sammy Chu, Jason Druzgal, Alexandru Dusoi, Luciano Farage, Fabricio Feltrin, Amy Fong, Steve H. Fung, R. Ian Gray, Ichiro Ikuta, Michael Iv, Alida A. Postma, Amit Mahajan, David Joyner, Chase Krumpelman, Laurent Letourneau-Guillon, Christie M. Lincoln, Mate E. Maros, Elka Miller, Fanny Morón, Esther A. 
Nimchinsky, Ozkan Ozsarlak, Uresh Patel, Saurabh Rohatgi, Atin Saha, Anousheh Sayah, Eric D. Schwartz, Robert Shih, Mark S. Shiroishi, Juan E. Small, Manoj Tanwar, Jewels Valerie, Brent D. Weinberg, Matthew L. White, Robert Young, Vahe M. Zohrabian, Aynur Azizova, Melanie Maria Theresa Brüßeler, Pascal Fehringer, Mohanad Ghonim, Mohamed Ghonim, Athanasios Gkampenis, Abdullah Okar, Luca Pasquini, Yasaman Sharifi, Gagandeep Singh, Nico Sollmann, Theodora Soumala, Mahsa Taherzadeh, Nikolay Yordanov, Philipp Vollmuth, Martha Foltyn-Dumitru, Ajay Malhotra, Aly H. Abayazeed, Francesco Dellepiane, Philipp Lohmann, Víctor M. Pérez-García, Hesham Elhalawani, Sanaria Al-Rubaiey, Rui Duarte Armindo, Kholod Ashraf, Moamen M. Asla, Mohamed Badawy, Jeroen Bisschop, Nima Broomand Lomer, Jan Bukatz, Jim Chen, Petra Cimflova, Felix Corr, Alexis Crawley, Lisa Deptula, Tasneem Elakhdar, Islam H. Shawali, Shahriar Faghani, Alexandra Frick, Vaibhav Gulati, Muhammad Ammar Haider, Fátima Hierro, Rasmus Holmboe Dahl, Sarah Maria Jacobs, Kuang-chun Jim Hsieh, Sedat G. Kandemirli, Katharina Kersting, Laura Kida, Sofia Kollia, Ioannis Koukoulithras, Xiao Li, Ahmed Abouelatta, Aya Mansour, Ruxandra-Catrinel Maria-Zamfirescu, Marcela Marsiglia, Yohana Sarahi Mateo-Camacho, Mark McArthur, Olivia McDonnell, Maire McHugh, Mana Moassefi, Samah Mostafa Morsi, Alexander Muntenu, Khanak K. Nandolia, Syed Raza Naqvi, Yalda Nikanpour, Mostafa Alnoury, Abdullah Mohamed Aly Nouh, Francesca Pappafava, Markand D. Patel, Samantha Petrucci, Eric Rawie, Scott Raymond, Borna Roohani, Sadeq Sabouhi, Laura M. Sanchez-Garcia, Zoe Shaked, Pokhraj P. Suthar, Talissa Altes, Edvin Isufi, Yaseen Dhermesh, Jaime Gass, Jonathan Thacker, Abdul Rahman Tarabishy, Benjamin Turner, Sebastiano Vacca, George K. Vilanilam, Daniel Warren, David Weiss, Klara Willms, Fikadu Worede, Sara Yousry, Wondwossen Lerebo, Alejandro Aristizabal, Alexandros Karargyris, Hasan Kassem, Sarthak Pati, Micah Sheller, Spyridon Bakas, Jeffrey D. 
Rudie, Mariam Aboian

Additionally, 31 studies (139 lesions) were held out for validation, and 59 studies (218 lesions) were used for testing.

no code implementations • 30 May 2023 • Lei Wu

To this end, researchers have introduced the Barron space $\mathcal{B}_s(\Omega)$ and the spectral Barron space $\mathcal{F}_s(\Omega)$, where the index $s\in [0,\infty)$ indicates the smoothness of functions within these spaces and $\Omega\subset\mathbb{R}^d$ denotes the input domain.
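For reference, the spectral Barron norm in this line of work is typically defined through the Fourier transform; a sketch of the usual convention (which may differ from the paper's in constants or in how extensions to $\mathbb{R}^d$ are handled) is

```latex
\|f\|_{\mathcal{F}_s(\Omega)}
  = \inf_{f_e|_{\Omega}=f}
    \int_{\mathbb{R}^d} \big(1+\|\xi\|\big)^{s}\,\big|\widehat{f_e}(\xi)\big|\,d\xi ,
```

where the infimum runs over all extensions $f_e$ of $f$ to $\mathbb{R}^d$; larger $s$ demands faster decay of the Fourier transform, i.e., more smoothness.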

no code implementations • 27 May 2023 • Lei Wu, Weijie J. Su

By contrast, for gradient descent (GD), the stability imposes a similar constraint but only on the largest eigenvalue of the Hessian.
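The GD constraint mentioned here is the classic linear-stability condition $\eta \leq 2/\lambda_{\max}$ on a quadratic loss. A minimal numerical sketch (illustrative only; the paper's SGD analysis adds a noise-dependent condition on top of this):

```python
import numpy as np

# Linear stability of gradient descent on L(w) = 0.5 * w^T H w:
# the iteration w <- (I - eta*H) w is stable iff every eigenvalue of
# (I - eta*H) has magnitude at most 1, i.e. eta <= 2 / lambda_max(H).

def gd_converges(H, eta, steps=200):
    """Run GD on 0.5 * w^T H w from a random start; report whether the iterate shrinks."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(H.shape[0])
    start = np.linalg.norm(w)
    for _ in range(steps):
        w = w - eta * (H @ w)
    return np.linalg.norm(w) < start

H = np.diag([1.0, 10.0])          # lambda_max = 10, so the threshold is eta = 0.2
print(gd_converges(H, 0.19))      # just below 2/lambda_max -> True
print(gd_converges(H, 0.21))      # just above 2/lambda_max -> False
```

The sharp eigenvalue, not the trace or the full spectrum, is what sets the stable step size, which is the contrast with SGD drawn in the snippet above.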

no code implementations • 15 May 2023 • ZiHao Wang, Lei Wu

To this end, we compare the performance of CNNs, locally-connected networks (LCNs), and fully-connected networks (FCNs) on a simple regression task, where LCNs can be viewed as CNNs without weight sharing.
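The three architecture families differ in how many weights they spend on the same input; a toy parameter count (single channel, 1D, sizes illustrative and not from the paper) makes the hierarchy concrete:

```python
# Weight counts only, single channel, 1D "valid" convolution.

def conv_params(kernel):                 # CNN: one kernel shared across positions
    return kernel

def lcn_params(kernel, out_positions):   # LCN: a separate kernel per output position
    return kernel * out_positions

def fcn_params(d_in, d_out):             # FCN: dense connections
    return d_in * d_out

d = 64                                   # input length
k = 5                                    # kernel size
out = d - k + 1                          # output length = 60
print(conv_params(k), lcn_params(k, out), fcn_params(d, out))  # 5 300 3840
```

Removing weight sharing (CNN to LCN) multiplies the count by the number of output positions, and removing locality (LCN to FCN) multiplies it again, which is exactly the axis of comparison in the snippet above.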

no code implementations • 9 May 2023 • Hongrui Chen, Jihao Long, Lei Wu

The first application is to study learning functions in $\mathcal{F}_{p,\pi}$ with RFMs.

1 code implementation • 7 Sep 2022 • Alex Fedorov, Eloy Geenjaar, Lei Wu, Tristan Sylvain, Thomas P. DeRamus, Margaux Luck, Maria Misiura, R Devon Hjelm, Sergey M. Plis, Vince D. Calhoun

Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which leads to a loss of model generalizability, making such models less useful in diagnostic settings.

2 code implementations • 27 Aug 2022 • Xianbang Chen, Yikui Liu, Lei Wu

Through this training, the tailor learns to customize the raw predictions into cost-oriented predictions.

no code implementations • 12 Jul 2022 • Xiaolei Diao, Daqian Shi, Hao Tang, Qiang Shen, Yanzeng Li, Lei Wu, Hao Xu

The long-tail effect is a common issue that limits the performance of deep learning models on real-world datasets.

no code implementations • 6 Jul 2022 • Lei Wu, Mingze Wang, Weijie Su

In this paper, we provide an explanation of this striking phenomenon by relating the particular noise structure of SGD to its linear stability (Wu et al., 2018).

no code implementations • 17 May 2022 • Yuhao Mo, Chu Han, Yu Liu, Min Liu, Zhenwei Shi, Jiatai Lin, Bingchao Zhao, Chunwang Huang, Bingjiang Qiu, Yanfen Cui, Lei Wu, Xipeng Pan, Zeyan Xu, Xiaomei Huang, Zaiyi Liu, Ying Wang, Changhong Liang

In this study, we propose a novel ROI-free model for breast cancer diagnosis in ultrasound images with interpretable feature representations.

no code implementations • 24 Apr 2022 • Chao Ma, Daniel Kunin, Lei Wu, Lexing Ying

Numerically, we observe that neural network loss functions possess a multiscale structure, manifested in two ways: (1) in a neighborhood of minima, the loss mixes a continuum of scales and grows subquadratically, and (2) in a larger region, the loss shows several clearly separated scales.

1 code implementation • 10 Mar 2022 • Yiqi Zhong, Lei Wu, Xianming Liu, Junjun Jiang

Robustness of deep neural networks (DNNs) to malicious perturbations is a hot topic in trustworthy AI.

no code implementations • 16 Feb 2022 • Lei Wu

Specifically, when the input distribution is the standard Gaussian, we show that mild conditions on $\sigma$ (e.g., $\sigma$ has a dominating linear part) are sufficient to guarantee learnability in polynomial time and polynomial samples.

1 code implementation • 23 Aug 2021 • Dian Qin, Jiajun Bu, Zhe Liu, Xin Shen, Sheng Zhou, Jingjun Gu, Zhijua Wang, Lei Wu, Huifen Dai

To deal with this problem, we propose an efficient architecture by distilling knowledge from well-trained medical image segmentation networks to train another lightweight network.
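The core of such a distillation pipeline is a loss that pulls the lightweight student's predictions toward the teacher's softened outputs; a minimal Hinton-style sketch (function names, temperature, and toy logits are illustrative, not the paper's exact segmentation loss):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened class distributions."""
    p = softmax(teacher_logits, T)            # soft targets from the well-trained teacher
    q = softmax(student_logits, T)            # lightweight student
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)           # T^2 keeps the gradient scale stable

t = np.array([[3.0, 0.5, -1.0]])
print(distill_loss(t, t))                     # identical logits -> 0.0
print(distill_loss(np.zeros((1, 3)), t) > 0)  # mismatched logits -> positive loss
```

In a segmentation setting the same loss is applied per pixel, typically combined with the ordinary supervised loss on the ground-truth masks.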

no code implementations • 10 Aug 2021 • Lei Wu, Jihao Long

We propose a spectral-based approach to analyze how two-layer neural networks separate from linear methods in terms of approximating high-dimensional functions.

1 code implementation • 29 Mar 2021 • Alex Fedorov, Eloy Geenjaar, Lei Wu, Thomas P. DeRamus, Vince D. Calhoun, Sergey M. Plis

We show that self-supervised models are not as robust as expected based on their results in natural imaging benchmarks and can be outperformed by supervised learning with dropout.

no code implementations • 26 Mar 2021 • Yangyang Qin, Hefei Ling, Zhenghai He, Yuxuan Shi, Lei Wu

Knowledge distillation can produce deployment-friendly networks that alleviate the computational complexity problem, but previous methods neglect the feature hierarchy in detectors.

no code implementations • 2 Feb 2021 • Daohan Wang, Lei Wu, Jin Min Yang, Mengchao Zhang

Axion-like particles (ALPs) are predicted by many extensions of the Standard Model (SM).

High Energy Physics - Phenomenology High Energy Physics - Experiment

1 code implementation • 25 Dec 2020 • Alex Fedorov, Tristan Sylvain, Eloy Geenjaar, Margaux Luck, Lei Wu, Thomas P. DeRamus, Alex Kirilin, Dmitry Bleklov, Vince D. Calhoun, Sergey M. Plis

Sensory input from multiple sources is crucial for robust and coherent human perception.

1 code implementation • 25 Dec 2020 • Alex Fedorov, Lei Wu, Tristan Sylvain, Margaux Luck, Thomas P. DeRamus, Dmitry Bleklov, Sergey M. Plis, Vince D. Calhoun

In this paper, we introduce a way to exhaustively consider multimodal architectures for contrastive self-supervised fusion of fMRI and MRI of AD patients and controls.

no code implementations • 17 Dec 2020 • Victor V. Flambaum, Liangliang Su, Lei Wu, Bin Zhu

Due to the small nuclear recoil energies it induces, sub-GeV dark matter (DM) is usually beyond the sensitivity of conventional DM direct detection experiments.

High Energy Physics - Phenomenology Cosmology and Nongalactic Astrophysics

no code implementations • 22 Sep 2020 • Weinan E, Chao Ma, Stephan Wojtowytsch, Lei Wu

The purpose of this article is to review the achievements made in the last few years towards the understanding of the reasons behind the success and subtleties of neural network-based machine learning.

no code implementations • 14 Sep 2020 • Zhong Li, Chao Ma, Lei Wu

The approach is motivated by approximating the general activation functions with one-dimensional ReLU networks, which reduces the problem to the complexity controls of ReLU networks.
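The reduction described here rests on the fact that any piecewise-linear interpolant of a smooth activation is exactly a one-dimensional ReLU network; a small sketch under that reading (tanh and the grid sizes are illustrative):

```python
import numpy as np

def relu_interpolant(f, a, b, n):
    """Return g(x) = y0 + s0*(x-a) + sum_i w_i * relu(x - x_i): the piecewise-linear
    interpolant of f on n+1 uniform knots in [a, b], written as a 1D ReLU network."""
    xs = np.linspace(a, b, n + 1)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    w = np.diff(slopes)                      # slope change at each interior knot

    def g(x):
        out = ys[0] + slopes[0] * (x - a)
        for xi, wi in zip(xs[1:-1], w):
            out = out + wi * np.maximum(x - xi, 0.0)
        return out

    return g

g = relu_interpolant(np.tanh, -3.0, 3.0, 64)
x = np.linspace(-3.0, 3.0, 1001)
err = np.max(np.abs(g(x) - np.tanh(x)))
print(err < 1e-2)                            # interpolation error is O(1/n^2) -> True
```

Since the interpolant is a sum of ReLUs, complexity bounds for ReLU networks transfer to networks with the original activation, up to the approximation error above.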

no code implementations • 14 Sep 2020 • Chao Ma, Lei Wu, Weinan E

The dynamic behavior of RMSprop and Adam algorithms is studied through a combination of careful numerical experiments and theoretical explanations.

no code implementations • 13 Aug 2020 • Chao Ma, Lei Wu, Weinan E

The random feature model exhibits a kind of resonance behavior when the number of parameters is close to the training sample size.
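One face of this resonance is already visible in a toy Gaussian proxy for the feature matrix: conditioning is worst when the number of features is close to the number of samples (sizes and the Gaussian stand-in are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                       # number of training samples

def feature_cond(m):
    """Condition number of an n x m random feature-like matrix."""
    return np.linalg.cond(rng.standard_normal((n, m)))

under, critical, over = feature_cond(n // 2), feature_cond(n), feature_cond(2 * n)
print(critical > under and critical > over)   # blow-up near m = n -> True
```

Away from the square regime the singular values stay bounded away from zero, while at m close to n the smallest singular value nearly vanishes, which is the source of the resonance-like behavior.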

1 code implementation • 25 Jun 2020 • Chao Ma, Lei Wu, Weinan E

A numerical and phenomenological study of the gradient descent (GD) algorithm for training two-layer neural network models is carried out for different parameter regimes when the target function can be accurately approximated by a relatively small number of neurons.

1 code implementation • 29 May 2020 • Ren He, Haoyu Wang, Pengcheng Xia, Liu Wang, Yuanchun Li, Lei Wu, Yajin Zhou, Xiapu Luo, Yao Guo, Guoai Xu

To facilitate future research, we have publicly released all the well-labelled COVID-19 themed apps (and malware) to the research community.

Cryptography and Security

1 code implementation • 24 Mar 2020 • Yunlei Liang, Song Gao, Yuxin Cai, Natasha Zhang Foutz, Lei Wu

In this research, we present a time-aware dynamic Huff model (T-Huff) for location-based market share analysis and calibrate this model using large-scale store visit patterns based on mobile phone location data across the ten most populated U.S. cities.
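For context, the classic Huff model that T-Huff extends trades off store attractiveness against travel distance; a sketch of the standard, time-independent form (alpha, beta, and the toy numbers are illustrative):

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """P_j = A_j^alpha / d_j^beta, normalized over all candidate stores."""
    utility = attractiveness ** alpha / distances ** beta
    return utility / utility.sum()

A = np.array([10.0, 10.0, 40.0])   # store attractiveness (e.g. floor area)
d = np.array([1.0, 2.0, 2.0])      # distance from the consumer to each store
p = huff_probabilities(A, d)       # roughly [0.444, 0.111, 0.444]
```

Here the third store's fourfold attractiveness exactly offsets its fourfold distance penalty relative to the first; the time-aware model additionally lets these probabilities vary by hour and day.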

Social and Information Networks H.1

no code implementations • 17 Mar 2020 • Waleed Abdallah, Shehu AbdusSalam, Azar Ahmadov, Amine Ahriche, Gaël Alguero, Benjamin C. Allanach, Jack Y. Araz, Alexandre Arbey, Chiara Arina, Peter Athron, Emanuele Bagnaschi, Yang Bai, Michael J. Baker, Csaba Balazs, Daniele Barducci, Philip Bechtle, Aoife Bharucha, Andy Buckley, Jonathan Butterworth, Haiying Cai, Claudio Campagnari, Cari Cesarotti, Marcin Chrzaszcz, Andrea Coccaro, Eric Conte, Jonathan M. Cornell, Louie Dartmoor Corpe, Matthias Danninger, Luc Darmé, Aldo Deandrea, Nishita Desai, Barry Dillon, Caterina Doglioni, Juhi Dutta, John R. Ellis, Sebastian Ellis, Farida Fassi, Matthew Feickert, Nicolas Fernandez, Sylvain Fichet, Jernej F. Kamenik, Thomas Flacke, Benjamin Fuks, Achim Geiser, Marie-Hélène Genest, Akshay Ghalsasi, Tomas Gonzalo, Mark Goodsell, Stefania Gori, Philippe Gras, Admir Greljo, Diego Guadagnoli, Sven Heinemeyer, Lukas A. Heinrich, Jan Heisig, Deog Ki Hong, Tetiana Hryn'ova, Katri Huitu, Philip Ilten, Ahmed Ismail, Adil Jueid, Felix Kahlhoefer, Jan Kalinowski, Deepak Kar, Yevgeny Kats, Charanjit K. Khosa, Valeri Khoze, Tobias Klingl, Pyungwon Ko, Kyoungchul Kong, Wojciech Kotlarski, Michael Krämer, Sabine Kraml, Suchita Kulkarni, Anders Kvellestad, Clemens Lange, Kati Lassila-Perini, Seung J. Lee, Andre Lessa, Zhen Liu, Lara Lloret Iglesias, Jeanette M. Lorenz, Danika MacDonell, Farvah Mahmoudi, Judita Mamuzic, Andrea C. Marini, Pete Markowitz, Pablo Martinez Ruiz del Arbol, David Miller, Vasiliki Mitsou, Stefano Moretti, Marco Nardecchia, Siavash Neshatpour, Dao Thi Nhung, Per Osland, Patrick H. Owen, Orlando Panella, Alexander Pankov, Myeonghun Park, Werner Porod, Darren Price, Harrison Prosper, Are Raklev, Jürgen Reuter, Humberto Reyes-González, Thomas Rizzo, Tania Robens, Juan Rojo, Janusz A. 
Rosiek, Oleg Ruchayskiy, Veronica Sanz, Kai Schmidt-Hoberg, Pat Scott, Sezen Sekmen, Dipan Sengupta, Elizabeth Sexton-Kennedy, Hua-Sheng Shao, Seodong Shin, Luca Silvestrini, Ritesh Singh, Sukanya Sinha, Jory Sonneveld, Yotam Soreq, Giordon H. Stark, Tim Stefaniak, Jesse Thaler, Riccardo Torre, Emilio Torrente-Lujan, Gokhan Unel, Natascia Vignaroli, Wolfgang Waltenberger, Nicholas Wardle, Graeme Watt, Georg Weiglein, Martin J. White, Sophie L. Williamson, Jonas Wittbrodt, Lei Wu, Stefan Wunsch, Tevong You, Yang Zhang, José Zurita

We report on the status of efforts to improve the reinterpretation of searches and measurements at the LHC in terms of models for new physics, in the context of the LHC Reinterpretation Forum.

High Energy Physics - Phenomenology High Energy Physics - Experiment

no code implementations • 7 Mar 2020 • Huan Lei, Lei Wu, Weinan E

We introduce a machine-learning-based framework for constructing continuum non-Newtonian fluid dynamics models directly from a micro-scale description.

no code implementations • 30 Dec 2019 • Weinan E, Chao Ma, Lei Wu

We demonstrate that conventional machine learning models and algorithms, such as the random feature model, the two-layer neural network model and the residual neural network model, can all be recovered (in a scaled form) as particular discretizations of different continuous formulations.

no code implementations • 15 Dec 2019 • Weinan E, Chao Ma, Lei Wu

We study the generalization properties of minimum-norm solutions for three over-parametrized machine learning models including the random feature model, the two-layer neural network model and the residual network model.

no code implementations • NeurIPS 2019 • Lei Wu, Qingcan Wang, Chao Ma

We analyze the global convergence of gradient descent for deep linear residual networks by proposing a new initialization: zero-asymmetric (ZAS) initialization.

1 code implementation • 25 Jun 2019 • Lijin Quan, Lei Wu, Haoyu Wang

Unfortunately, current tools are web-application oriented and cannot be applied to EOSIO WebAssembly code directly, which makes it more difficult to detect vulnerabilities from those smart contracts.

Cryptography and Security

no code implementations • 18 Jun 2019 • Weinan E, Chao Ma, Lei Wu

We define the Barron space and show that it is the right space for two-layer neural network models in the sense that optimal direct and inverse approximation theorems hold for functions in the Barron space.
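The direct theorem takes the familiar Monte Carlo form; a sketch of the standard statement (the precise norms and constants follow the paper's conventions, assumed here) is

```latex
\inf_{f_m \in \mathcal{H}_m} \|f - f_m\|_{L^2(\Omega)}
  \;\lesssim\; \frac{\|f\|_{\mathcal{B}}}{\sqrt{m}},
```

where $\mathcal{H}_m$ denotes two-layer networks with $m$ neurons. The inverse theorem gives the converse direction: functions approximable at this rate with uniformly bounded path norm must themselves lie in the Barron space.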

no code implementations • ICLR 2019 • Lei Wu, Chao Ma, Weinan E

These new estimates are a priori in nature in the sense that the bounds depend only on some norms of the underlying functions to be fitted, not the parameters in the model.

no code implementations • ICLR 2019 • Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, Jinwen Ma

Along this line, we theoretically study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics.

no code implementations • ICLR 2019 • Lei Wu, Zhanxing Zhu, Cheng Tai

State-of-the-art deep neural networks are vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs.

no code implementations • 10 Apr 2019 • Weinan E, Chao Ma, Qingcan Wang, Lei Wu

In addition, it is also shown that the GD path is uniformly close to the functions given by the related random feature model.

no code implementations • 8 Apr 2019 • Weinan E, Chao Ma, Lei Wu

In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels.

1 code implementation • NeurIPS 2018 • Lei Wu, Chao Ma, Weinan E

The question of which global minima are accessible by a stochastic gradient descent (SGD) algorithm with specific learning rate and batch size is studied from the perspective of dynamical stability.

no code implementations • ICLR 2019 • Weinan E, Chao Ma, Lei Wu

New estimates for the population risk are established for two-layer neural networks.

1 code implementation • ICLR 2019 • Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, Jinwen Ma

Along this line, we study a general form of gradient based optimization dynamics with unbiased noise, which unifies SGD and standard Langevin dynamics.

no code implementations • 27 Feb 2018 • Lei Wu, Zhanxing Zhu, Cheng Tai, Weinan E

State-of-the-art deep neural networks are known to be vulnerable to adversarial examples, formed by applying small but malicious perturbations to the original inputs.

no code implementations • ICLR 2018 • Lei Wu, Zhanxing Zhu, Cheng Tai, Weinan E

Deep neural networks provide state-of-the-art performance for many applications of interest.

1 code implementation • 23 Dec 2017 • Han He, Lei Wu, Xiaokun Yang, Hua Yan, Zhimin Gao, Yi Feng, George Townsend

To ground the study and substantiate the efficiency of our neural architecture, we take Chinese Word Segmentation as a case study.

1 code implementation • 7 Dec 2017 • Han He, Lei Wu, Hua Yan, Zhimin Gao, Yi Feng, George Townsend

We present a simple yet elegant solution to train a single joint model on multi-criteria corpora for Chinese Word Segmentation (CWS).

no code implementations • 30 Jun 2017 • Lei Wu, Zhanxing Zhu, Weinan E

It is widely observed that deep learning models with learned parameters generalize well, even with many more model parameters than training samples.

no code implementations • CVPR 2015 • Peng Zhang, Wengang Zhou, Lei Wu, Houqiang Li

We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristics.

Image Quality Estimation
No-Reference Image Quality Assessment

1 code implementation • International Joint Conferences on Artificial Intelligence 2014 • Min-Ling Zhang, Lei Wu

Existing approaches learn from multi-label data using an identical feature set, i.e., the same instance representation of each example is employed in the discrimination processes of all class labels.

no code implementations • NeurIPS 2009 • Lei Wu, Rong Jin, Steven C. Hoi, Jianke Zhu, Nenghai Yu

Learning distance functions with side information plays a key role in many machine learning and data mining applications.
