Search Results for author: Kamil Adamczewski

Found 17 papers, 7 papers with code

AdaGlimpse: Active Visual Exploration with Arbitrary Glimpse Position and Scale

1 code implementation · 4 Apr 2024 · Adam Pardyl, Michał Wronka, Maciej Wołczyk, Kamil Adamczewski, Tomasz Trzciński, Bartosz Zieliński

Active Visual Exploration (AVE) is a task that involves dynamically selecting observations (glimpses), which is critical to facilitate comprehension and navigation within an environment.


Scaling Laws for Fine-Grained Mixture of Experts

1 code implementation · 12 Feb 2024 · Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur

Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget.

SADMoE: Exploiting Activation Sparsity with Dynamic-k Gating

no code implementations · 6 Oct 2023 · Filip Szatkowski, Bartosz Wójcik, Mikołaj Piórczyński, Kamil Adamczewski

Transformer models, despite their impressive performance, often face practical limitations due to their high computational requirements.

Pre-Pruning and Gradient-Dropping Improve Differentially Private Image Classification

no code implementations · 19 Jun 2023 · Kamil Adamczewski, Yingchen He, Mijung Park

To tackle this challenge, we take advantage of the fact that neural networks are overparameterized, which allows us to improve neural network training with differential privacy.

Image Classification

Differential Privacy Meets Neural Network Pruning

no code implementations · 8 Mar 2023 · Kamil Adamczewski, Mijung Park

We study the interplay between neural network pruning and differential privacy, through the two modes of parameter updates.

Dimensionality Reduction · Network Pruning

Differentially Private Neural Tangent Kernels for Privacy-Preserving Data Generation

no code implementations · 3 Mar 2023 · Yilin Yang, Kamil Adamczewski, Danica J. Sutherland, Xiaoxiao Li, Mijung Park

Maximum mean discrepancy (MMD) is a particularly useful distance metric for differentially private data generation: when used with finite-dimensional features it allows us to summarize and privatize the data distribution once, which we can repeatedly use during generator training without further privacy loss.

Privacy Preserving
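The "privatize once, reuse forever" idea in the abstract above can be sketched as follows. This is a minimal illustration using random Fourier features and the Gaussian mechanism; the feature map, clipping, and noise scale are assumptions for this sketch, not the paper's NTK-based construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x, n_features=100):
    # Illustrative finite-dimensional feature map (random Fourier features
    # for a Gaussian kernel). A fixed seed keeps the map identical across
    # calls, so real and synthetic data share the same features.
    W = np.random.default_rng(1).normal(size=(x.shape[1], n_features))
    z = np.concatenate([np.cos(x @ W), np.sin(x @ W)], axis=1)
    # Each row then has L2 norm exactly 1.
    return z / np.sqrt(n_features)

def private_mean_embedding(data, epsilon, delta=1e-5):
    """Summarize the dataset once with Gaussian-mechanism noise."""
    phi = feature_map(data)
    mean = phi.mean(axis=0)
    # Rows have norm <= 1, so the L2 sensitivity of the mean is 2/n.
    sensitivity = 2.0 / data.shape[0]
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(scale=sigma, size=mean.shape)

# The noisy embedding is released once; any number of generator updates can
# then minimize the MMD against it without further privacy loss.
data = rng.normal(size=(1000, 5))
target = private_mean_embedding(data, epsilon=1.0)
fake = rng.normal(size=(200, 5))
mmd2 = np.sum((target - feature_map(fake).mean(axis=0)) ** 2)
```

Because the generator only ever touches the noisy summary, repeated training steps consume no additional privacy budget, which is the property the abstract highlights.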

Revisiting Random Channel Pruning for Neural Network Compression

1 code implementation · CVPR 2022 · Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, Luc van Gool

The proposed approach provides a new way to compare different methods, namely how well they behave compared with random pruning.

Neural Network Compression
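A random-pruning baseline of the kind the abstract above proposes as a yardstick can be sketched in a few lines; the tensor shapes and keep ratio here are illustrative, and the paper's benchmark additionally retrains the pruned networks before comparison.

```python
import numpy as np

def random_channel_prune(weight, keep_ratio=0.5, seed=0):
    """Randomly select output channels of a conv weight to keep.

    `weight` has shape (out_channels, in_channels, kh, kw). This is a
    minimal baseline sketch, not the paper's full benchmarking pipeline.
    """
    rng = np.random.default_rng(seed)
    out_ch = weight.shape[0]
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    # Sample channel indices without replacement, keep them in order.
    keep = np.sort(rng.choice(out_ch, size=n_keep, replace=False))
    return weight[keep], keep

# Example: prune a 64-channel 3x3 conv layer down to a quarter.
w = np.random.default_rng(1).normal(size=(64, 32, 3, 3))
pruned, kept = random_channel_prune(w, keep_ratio=0.25)
```

Any learned importance criterion can then be judged by how much it improves on this chance-level selection.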

Hermite Polynomial Features for Private Data Generation

1 code implementation · 9 Jun 2021 · Margarita Vinaroz, Mohammad-Amin Charusaie, Frederik Harder, Kamil Adamczewski, Mijung Park

Hence, a relatively low order of Hermite polynomial features can more accurately approximate the mean embedding of the data distribution compared to a significantly higher number of random features.
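A low-order Hermite feature map of the kind described above can be sketched with NumPy's `hermite_e` module; the orthonormal scaling by 1/sqrt(k!) is an illustrative choice for this sketch, not the paper's exact weighting.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_feature_map(x, order=7):
    """Features from probabilists' Hermite polynomials He_0..He_order."""
    feats = []
    for k in range(order + 1):
        # Coefficient vector selecting the single polynomial He_k.
        c = np.zeros(order + 1)
        c[k] = 1.0
        # Scale to an orthonormal basis under N(0,1): He_k / sqrt(k!).
        feats.append(hermeval(x, c) / math.sqrt(math.factorial(k)))
    return np.stack(feats, axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
phi = hermite_feature_map(x)       # shape (1000, 8)
mean_embedding = phi.mean(axis=0)  # dataset summary to be privatized
```

Eight deterministic features here play the role that a much larger bank of random features would in DP-MERF, which is the contrast the abstract draws.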

Dirichlet Pruning for Neural Network Compression

1 code implementation · 10 Nov 2020 · Kamil Adamczewski, Mijung Park

We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one.

Neural Network Compression · Variational Inference

Bayesian Importance of Features (BIF)

no code implementations · 26 Oct 2020 · Kamil Adamczewski, Frederik Harder, Mijung Park

We introduce a simple and intuitive framework that provides quantitative explanations of statistical models through the probabilistic assessment of input feature importance.

Bayesian Inference · BIG-bench Machine Learning · +3

DP-MERF: Differentially Private Mean Embeddings with Random Features for Practical Privacy-Preserving Data Generation

1 code implementation · 26 Feb 2020 · Frederik Harder, Kamil Adamczewski, Mijung Park

We propose a differentially private data generation paradigm using random feature representations of kernel mean embeddings when comparing the distribution of true data with that of synthetic data.

Privacy Preserving · Synthetic Data Generation

Neuron ranking -- an informed way to condense convolutional neural networks architecture

no code implementations · 3 Jul 2019 · Kamil Adamczewski, Mijung Park

Convolutional neural networks (CNNs) in recent years have made a dramatic impact in science, technology and industry, yet the theoretical mechanism of CNN architecture design remains surprisingly vague.

Variational Inference

Radial and Directional Posteriors for Bayesian Neural Networks

2 code implementations · 7 Feb 2019 · Changyong Oh, Kamil Adamczewski, Mijung Park

We propose a new variational family for Bayesian neural networks.

Subgraph Matching Using Compactness Prior for Robust Feature Correspondence

no code implementations · CVPR 2015 · Yumin Suh, Kamil Adamczewski, Kyoung Mu Lee

By constructing a Markov chain on the restricted search space instead of the original solution space, our method approximates the solution effectively.

Graph Matching

How good is the Shapley value-based approach to the influence maximization problem?

no code implementations · 27 Sep 2014 · Kamil Adamczewski, Szymon Matejczyk, Tomasz P. Michalak

Intuitively, since the Shapley value evaluates the average marginal contribution of a player to the coalitional game, it can be used in the network context to evaluate the marginal contribution of a node in the process of information diffusion given various groups of already 'infected' nodes.
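The average-marginal-contribution idea in the abstract above maps directly onto a Monte Carlo Shapley estimate over random node orderings. The toy graph and reachability-based coalition value below are illustrative assumptions, not the paper's diffusion model.

```python
import random

# Toy directed graph: node -> nodes it can directly "infect".
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}

def influence(coalition):
    """Coalitional value: number of nodes reachable from the seed set."""
    seen, stack = set(coalition), list(coalition)
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen)

def shapley_estimate(node, samples=2000, seed=0):
    """Average marginal contribution of `node` over random permutations."""
    rng = random.Random(seed)
    nodes = list(graph)
    total = 0.0
    for _ in range(samples):
        perm = nodes[:]
        rng.shuffle(perm)
        # Coalition of nodes "infected" before `node` in this ordering.
        before = perm[:perm.index(node)]
        total += influence(before + [node]) - influence(before)
    return total / samples
```

Sampling permutations avoids the exponential sum over all coalitions while keeping the interpretation of the Shapley value as an expected marginal gain in spread.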
