1 code implementation • 27 Nov 2023 • Amartya Bhattacharya, Debarshi Brahma, Suraj Nagaje Mahadev, Anmol Asati, Vikas Verma, Soma Biswas
Since training individual models for each domain is not practical, we propose a novel framework termed DPOD (Domain-specific Prompt tuning using Out-of-domain data), which can exploit out-of-domain data during training to improve fake news detection of all desired domains simultaneously.
1 code implementation • 27 Dec 2022 • Yingtian Zou, Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels.
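A rough Python sketch of the interpolation step described above is given below; the function name and the within-batch random pairing are illustrative assumptions, not the paper's reference code.

```python
import torch

def mixup_batch(x, y_onehot, alpha=1.0):
    """Linearly interpolate a batch of inputs and their one-hot labels.

    x:        (B, ...) input tensor
    y_onehot: (B, C) one-hot label tensor
    alpha:    Beta-distribution parameter controlling interpolation strength
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))          # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]   # interpolated inputs
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]  # interpolated labels
    return x_mix, y_mix
```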
no code implementations • 18 Oct 2022 • Alexia Jolicoeur-Martineau, Alex Lamb, Vikas Verma, Aniket Didolkar
We propose a novel regularizer for supervised learning called Conditioning on Noisy Targets (CNT).
no code implementations • 9 Nov 2020 • Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le
Despite recent success, most contrastive self-supervised learning methods are domain-specific, relying heavily on data augmentation techniques that require knowledge about a particular domain, such as image cropping and rotation.
no code implementations • ICML Workshop LifelongML 2020 • Simon Guiroy, Vikas Verma, Christopher Pal
The study of generalization of neural networks in gradient-based meta-learning has recently attracted great research interest.
1 code implementation • 14 Jun 2020 • Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar
Our approach improves the robustness of CNN models against the manifold intrusion problem that may occur in other state-of-the-art mixing approaches.
1 code implementation • CVPR 2021 • Jisoo Jeong, Vikas Verma, Minsung Hyun, Juho Kannala, Nojun Kwak
Although the data labeling cost for object detection tasks is substantially higher than for classification tasks, semi-supervised learning methods for object detection have not been studied much.
1 code implementation • 25 Dec 2019 • Alex Lamb, Sherjil Ozair, Vikas Verma, David Ha
In this work we focus on their ability to be invariant to the presence or absence of details.
no code implementations • 25 Sep 2019 • Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization technique for Graph Neural Network based semi-supervised object classification, leveraging the recent advances in the regularization of classical deep neural networks.
1 code implementation • 25 Sep 2019 • Vikas Verma, Meng Qu, Kenji Kawaguchi, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization.
Ranked #1 on Node Classification on Pubmed random partition
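The GraphMix idea sketched in this abstract — a fully-connected network sharing parameters with the graph neural network and regularized with interpolation — could look roughly like the toy Python sketch below. The class and function names are assumptions, and the published method's use of predictions on unlabeled nodes is omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedGraphMix(nn.Module):
    """Toy sketch: a GCN branch and an MLP branch sharing the same weight matrices."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def gnn_forward(self, x, adj):
        # Simple GCN-style propagation using the shared weights.
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)

    def mlp_forward(self, x):
        # Fully-connected branch: same weights, no graph structure.
        return self.w2(F.relu(self.w1(x)))

def interpolation_loss(model, x, y_onehot, alpha=1.0):
    """Mixup-style regularization applied to the fully-connected branch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    logits = model.mlp_forward(x_mix)
    return -(y_mix * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```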
5 code implementations • ICLR 2020 • Fan-Yun Sun, Jordan Hoffmann, Vikas Verma, Jian Tang
There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives.
Ranked #25 on Graph Classification on IMDb-M
no code implementations • 16 Jul 2019 • Simon Guiroy, Vikas Verma, Christopher Pal
We also show that the coherence of meta-test gradients, measured by the average inner product between task-specific gradient vectors evaluated at the meta-train solution, is correlated with generalization.
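The coherence measure mentioned above can be illustrated with a short Python sketch that averages pairwise inner products of per-task gradients; the exclusion of the diagonal and the lack of normalization are assumptions, not the paper's exact definition.

```python
import torch

def gradient_coherence(task_grads):
    """Average inner product between task-specific gradient vectors.

    task_grads: list of flattened gradient tensors, one per meta-test task,
                all evaluated at the same meta-train parameter vector.
    """
    g = torch.stack(task_grads)            # (T, P)
    inner = g @ g.t()                      # (T, T) pairwise inner products
    t = g.size(0)
    off_diag = inner.sum() - inner.diag().sum()
    return off_diag / (t * (t - 1))        # mean over distinct task pairs
```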
3 code implementations • 16 Jun 2019 • Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Juho Kannala, Yoshua Bengio
Adversarial robustness has become a central goal in deep learning, both in theory and in practice.
1 code implementation • ICLR 2019 • Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, Yoshua Bengio
Because the hidden states are learned, this has the important effect of encouraging the hidden states for a class to be concentrated in such a way that interpolations within the same class or between two different classes do not intersect with the real data points from other classes.
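A minimal sketch of interpolating hidden states at a randomly chosen layer, in the spirit of this abstract, is shown below; the network architecture, layer choice, and Beta parameter are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class ManifoldMixupMLP(nn.Module):
    """Sketch: interpolate hidden states at a randomly chosen layer."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU()),
            nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU()),
            nn.Linear(hid_dim, n_classes),
        ])

    def forward(self, x, y_onehot=None, alpha=2.0):
        if y_onehot is None:                       # plain forward pass
            for layer in self.layers:
                x = layer(x)
            return x
        mix_at = torch.randint(len(self.layers), (1,)).item()  # 0 mixes the inputs
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        perm = torch.randperm(x.size(0))
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        for i, layer in enumerate(self.layers):
            if i == mix_at:                        # interpolate hidden states here
                x = lam * x + (1 - lam) * x[perm]
            x = layer(x)
        return x, y_mix
```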
1 code implementation • ICLR Workshop DeepGenStruct 2019 • Christopher Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R Devon Hjelm, Christopher Pal
In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders.
4 code implementations • 9 Mar 2019 • Vikas Verma, Kenji Kawaguchi, Alex Lamb, Juho Kannala, Arno Solin, Yoshua Bengio, David Lopez-Paz
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm.
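A hedged sketch of the consistency term suggested by the method's name is given below: predictions at interpolated unlabeled inputs are pushed toward the interpolation of predictions at the original inputs. The use of a separate `teacher` network (typically an exponential-moving-average copy) and the mean-squared-error choice are assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def ict_consistency_loss(student, teacher, x_unlabeled, alpha=1.0):
    """Prediction at a mixed input should match the mixture of
    (teacher) predictions at the original unlabeled inputs."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x_unlabeled.size(0))
    with torch.no_grad():                                  # teacher targets, no gradient
        p = F.softmax(teacher(x_unlabeled), dim=-1)
        target = lam * p + (1 - lam) * p[perm]
    x_mix = lam * x_unlabeled + (1 - lam) * x_unlabeled[perm]
    pred = F.softmax(student(x_mix), dim=-1)
    return F.mse_loss(pred, target)
```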
1 code implementation • NeurIPS 2019 • Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R. Devon Hjelm, Yoshua Bengio, Christopher Pal
In this paper, we explore new approaches to combining information encoded within the learned representations of auto-encoders.
no code implementations • 18 Jun 2018 • Jason Jo, Vikas Verma, Yoshua Bengio
We focus on two supervised visual reasoning tasks whose labels encode a semantic relational rule between two or more objects in an image: the MNIST Parity task and the colorized Pentomino task.
12 code implementations • ICLR 2019 • Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples.
Ranked #18 on Image Classification on OmniBenchmark
2 code implementations • 21 Feb 2018 • Kenji Kawaguchi, Yoshua Bengio, Vikas Verma, Leslie Pack Kaelbling
This paper introduces a novel measure-theoretic theory for machine learning that does not require statistical assumptions.
no code implementations • ICLR 2018 • Stanisław Jastrzębski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, Yoshua Bengio
In general, a ResNet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features.
1 code implementation • 28 Feb 2017 • Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song
For deep models, with no unrealistic assumptions, we prove universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound.
no code implementations • 2 Sep 2014 • Vikas Verma
We also propose a Two-Step Matching process for reducing the response time of CBIR systems.