Search Results for author: Cenk Baykal

Found 15 papers, 4 papers with code

SLaM: Student-Label Mixing for Distillation with Unlabeled Examples

no code implementations • NeurIPS 2023 • Vasilis Kontonis, Fotis Iliopoulos, Khoa Trinh, Cenk Baykal, Gaurav Menghani, Erik Vee

Knowledge distillation with unlabeled examples is a powerful training paradigm for generating compact and lightweight student models in applications where the amount of labeled data is limited but one has access to a large pool of unlabeled data.

Knowledge Distillation
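
As context for the setting described above (not the SLaM student-label-mixing method itself), a minimal sketch of distillation on unlabeled examples might look as follows; PyTorch is assumed, and the `teacher`, `student`, and `unlabeled_loader` objects are placeholders:

```python
# Sketch only: generic knowledge distillation on unlabeled examples,
# not the SLaM method from the paper. Assumes PyTorch.
import torch
import torch.nn.functional as F

def distill_on_unlabeled(student, teacher, unlabeled_loader, optimizer, T=2.0):
    teacher.eval()
    student.train()
    for x in unlabeled_loader:                                   # unlabeled inputs only
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x) / T, dim=-1)     # teacher "pseudo-labels"
        log_probs = F.log_softmax(student(x) / T, dim=-1)
        loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```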

The Power of External Memory in Increasing Predictive Model Capacity

no code implementations • 31 Jan 2023 • Cenk Baykal, Dylan J Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, Xin Wang

One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network.

Language Modelling
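
A rough illustration of the idea in the abstract, attaching an external parameter table that is sparsely looked up inside a layer, is sketched below. This is an assumption-laden toy, not the paper's architecture; PyTorch, the top-k lookup rule, and all dimensions are choices made here for illustration.

```python
# Sketch only: a layer that sparsely reads from an external parameter table.
# Assumes PyTorch; dimensions and the top-k lookup rule are illustrative.
import torch
import torch.nn as nn

class ExternalMemoryLayer(nn.Module):
    def __init__(self, d_model, table_size, k=4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(table_size, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(table_size, d_model) * 0.02)
        self.k = k

    def forward(self, x):                       # x: (batch, d_model)
        scores = x @ self.keys.t()              # similarity to every table key
        topk = scores.topk(self.k, dim=-1)      # only k table rows are touched per input
        weights = torch.softmax(topk.values, dim=-1)
        retrieved = (weights.unsqueeze(-1) * self.values[topk.indices]).sum(dim=1)
        return x + retrieved                    # inject the retrieved parameters
```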

Weighted Distillation with Unlabeled Examples

no code implementations • 13 Oct 2022 • Fotis Iliopoulos, Vasilis Kontonis, Cenk Baykal, Gaurav Menghani, Khoa Trinh, Erik Vee

Our method is hyper-parameter free, data-agnostic, and simple to implement.

Robust Active Distillation

no code implementations • 3 Oct 2022 • Cenk Baykal, Khoa Trinh, Fotis Iliopoulos, Gaurav Menghani, Erik Vee

Distilling knowledge from a large teacher model to a lightweight one is a widely successful approach for generating compact, powerful models in the semi-supervised learning setting where a limited amount of labeled data is available.

Active Learning • Informativeness +1

A Theoretical View on Sparsely Activated Networks

no code implementations • 8 Aug 2022 • Cenk Baykal, Nishanth Dikkala, Rina Panigrahy, Cyrus Rashtchian, Xin Wang

After representing LSH-based sparse networks with our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions.
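
To picture what an LSH-based sparse network is, the toy sketch below routes each input to a single hash bucket of units via random-hyperplane (SimHash-style) hashing, so only that bucket's units are evaluated. The bucket assignment and layer shape are illustrative assumptions, not the construction analyzed in the paper.

```python
# Sketch only: sparse activation via random-hyperplane (SimHash-style) LSH.
# Each input activates only the units assigned to its hash bucket.
import numpy as np

rng = np.random.default_rng(0)
d, n_units, n_bits = 16, 64, 3                 # 2**n_bits hash buckets

hyperplanes = rng.normal(size=(n_bits, d))     # LSH: sign pattern of random projections
W = rng.normal(size=(n_units, d))              # one linear unit per row
unit_bucket = rng.integers(0, 2 ** n_bits, size=n_units)   # static unit-to-bucket map

def hash_bucket(x):
    bits = (hyperplanes @ x > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def sparse_forward(x):
    active = np.flatnonzero(unit_bucket == hash_bucket(x))  # only this bucket's units fire
    return np.maximum(W[active] @ x, 0.0)                   # ReLU over the active units only

y = sparse_forward(rng.normal(size=d))
```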

Bandit Sampling for Multiplex Networks

no code implementations • 8 Feb 2022 • Cenk Baykal, Vamsi K. Potluru, Sameena Shah, Manuela M. Veloso

Most existing work focuses on the monoplex setting, where we have access to a network with only a single type of connection between entities.

Link Prediction • Node Classification

Graph Belief Propagation Networks

1 code implementation • 6 Jun 2021 • Junteng Jia, Cenk Baykal, Vamsi K. Potluru, Austin R. Benson

With the widespread availability of complex relational data, semi-supervised node classification in graphs has become a central machine learning problem.

Classification • Node Classification

Low-Regret Active Learning

no code implementations • 6 Apr 2021 • Cenk Baykal, Lucas Liebenwein, Dan Feldman, Daniela Rus

We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning).

Active Learning • Informativeness

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

1 code implementation • 4 Mar 2021 • Lucas Liebenwein, Cenk Baykal, Brandon Carter, David Gifford, Daniela Rus

Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks.

Network Pruning

On Coresets for Support Vector Machines

no code implementations • 15 Feb 2020 • Murad Tukan, Cenk Baykal, Dan Feldman, Daniela Rus

A coreset is a small, representative subset of the original data points such that models trained on the coreset are provably competitive with those trained on the original data set.

Small Data Image Classification
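
To make the coreset definition concrete, the sketch below draws a small weighted subset by importance sampling and trains a linear SVM on it; the uniform sampling probabilities are a placeholder for the data-dependent (sensitivity-based) probabilities the paper derives with provable guarantees.

```python
# Sketch only: importance-sampling a weighted coreset and training an SVM on it.
# The per-point probabilities below are a uniform placeholder, not the paper's
# sensitivity-based sampling distribution.
import numpy as np
from sklearn.svm import LinearSVC

def sample_coreset(X, y, m, probs=None, seed=0):
    n = len(X)
    rng = np.random.default_rng(seed)
    if probs is None:
        probs = np.full(n, 1.0 / n)            # placeholder sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=probs)
    weights = 1.0 / (m * probs[idx])           # reweight so weighted sums stay unbiased
    return X[idx], y[idx], weights

# Usage: train on the small weighted subset instead of the full data set.
X = np.random.randn(10_000, 20)
y = (X[:, 0] + 0.1 * np.random.randn(10_000) > 0).astype(int)
Xc, yc, w = sample_coreset(X, y, m=500)
clf = LinearSVC().fit(Xc, yc, sample_weight=w)
```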

Provable Filter Pruning for Efficient Neural Networks

2 code implementations • ICLR 2020 • Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, Daniela Rus

We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network.
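
For intuition about structured filter pruning (not the paper's provable sampling scheme), the sketch below scores each convolutional filter by its L1 norm and keeps only the top fraction; the paper instead selects filters via data-informed importance sampling with error guarantees. PyTorch and the norm-based score are assumptions made here.

```python
# Sketch only: structured filter pruning by L1-norm scores, as a stand-in for the
# paper's sampling-based filter selection. Assumes PyTorch.
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))    # one score per output filter
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    keep = torch.topk(scores, n_keep).indices.sort().values   # filters to retain
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned   # downstream layers must be adjusted to the reduced channel count
```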

SiPPing Neural Networks: Sensitivity-informed Provable Pruning of Neural Networks

2 code implementations • 11 Oct 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus

We introduce a pruning algorithm that provably sparsifies the parameters of a trained model in a way that approximately preserves the model's predictive accuracy.

Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds

no code implementations • ICLR 2019 • Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, Daniela Rus

We present an efficient coreset-based neural network compression algorithm that sparsifies the parameters of a trained fully-connected neural network in a manner that provably approximates the network's output.

Generalization Bounds • Neural Network Compression

Small Coresets to Represent Large Training Data for Support Vector Machines

no code implementations • ICLR 2018 • Cenk Baykal, Murad Tukan, Dan Feldman, Daniela Rus

Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis.

Training Support Vector Machines using Coresets

no code implementations • 13 Aug 2017 • Cenk Baykal, Lucas Liebenwein, Wilko Schwarting

We present a novel coreset construction algorithm for solving classification tasks using Support Vector Machines (SVMs) in a computationally efficient manner.
