no code implementations • 2 May 2024 • Sindhu Tipirneni, Ravinarayana Adkathimar, Nurendra Choudhary, Gaurush Hiranandani, Rana Ali Amjad, Vassilis N. Ioannidis, Changhe Yuan, Chandan K. Reddy
Thus, we propose CACTUS (Context-Aware ClusTering with aUgmented triplet losS), a systematic approach that leverages open-source LLMs for efficient and effective supervised clustering of entity subsets, particularly focusing on text-based entities.
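The augmented triplet loss at the core of CACTUS is not spelled out in this snippet; as a point of reference, a minimal sketch of a standard triplet margin loss on entity embeddings (function name and margin value are illustrative, and CACTUS's augmentation scheme is not reproduced) might look like:

```python
import torch
import torch.nn.functional as F

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Pull same-cluster entities together, push different-cluster ones apart.

    anchor, positive, negative: (batch, dim) embedding tensors.
    """
    d_pos = F.pairwise_distance(anchor, positive)  # anchor <-> same-cluster entity
    d_neg = F.pairwise_distance(anchor, negative)  # anchor <-> other-cluster entity
    return F.relu(d_pos - d_neg + margin).mean()
```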
no code implementations • 26 Sep 2021 • Kumar Pratik, Rana Ali Amjad, Arash Behboodi, Joseph B. Soriaga, Max Welling
Through extensive experiments on the CDL-B channel model, we show that the HKF can track the channel over a wide range of Doppler values, matching the performance of a Kalman filter given genie Doppler information.
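The HKF itself is not detailed here; for context, one predict/update step of the textbook linear Kalman filter it is benchmarked against (NumPy, variable names ours) is:

```python
import numpy as np

def kalman_step(x, P, z, F_, Q, H, R):
    """One predict/update step of a standard linear Kalman filter.

    x, P : state estimate (vector) and its covariance
    z    : new observation
    F_, Q: state transition matrix and process-noise covariance
    H, R : observation model and observation-noise covariance
    """
    # Predict
    x = F_ @ x
    P = F_ @ P @ F_.T + Q
    # Update
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```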
no code implementations • 15 Jun 2021 • Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, Tijmen Blankevoort
Neural network quantization is one of the most effective ways of achieving these savings, but the additional noise it induces can lead to accuracy degradation.
1 code implementation • NeurIPS 2020 • Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling
We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.
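A heavily simplified sketch of the core idea, gated residual quantization at doubling bit-widths (step sizes and gate parameterization here are illustrative, not the paper's exact choices):

```python
import torch

def quantize(x, bits, step):
    """Symmetric uniform quantization to a given bit-width."""
    n = 2 ** (bits - 1) - 1
    return torch.clamp(torch.round(x / step), -n, n) * step

def gated_residual_quantize(x, step, gates):
    """Start from a coarse 2-bit quantization and add gated residual
    corrections at doubling bit-widths; a gate of 0 truncates all
    higher-precision terms, which is what lets gradient-based learning
    of the gates trade off precision (and, ultimately, pruning) per tensor.
    """
    out = quantize(x, 2, step)
    active = torch.ones(())
    for gate, bits in zip(gates, (4, 8, 16)):
        active = active * gate
        fine_step = step / (2 ** bits)  # illustrative finer grid for the residual
        out = out + active * quantize(x - out, bits, fine_step)
    return out
```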
no code implementations • ICML 2020 • Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, Tijmen Blankevoort
In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss.
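The learned rounding decision can be sketched as follows: each weight is rounded down or up according to a soft variable h(V) that the optimization pushes toward {0, 1} (this omits the clipping to the quantization range and the per-layer reconstruction objective):

```python
import torch

GAMMA, ZETA = -0.1, 1.1  # stretch constants of the rectified sigmoid

def rectified_sigmoid(v):
    """Soft rounding variable h(V) in [0, 1]."""
    return torch.clamp(torch.sigmoid(v) * (ZETA - GAMMA) + GAMMA, 0, 1)

def adaround_quantize(w, v, scale):
    """Quantize weights with a learned up/down rounding decision.

    w: float weights, v: learnable rounding logits, scale: quantization step.
    At convergence h(v) is (regularized to be) binary, so each weight is
    rounded either down (h=0) or up (h=1).
    """
    return scale * (torch.floor(w / scale) + rectified_sigmoid(v))
```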
no code implementations • 6 Jun 2019 • Rana Ali Amjad, Bernhard C. Geiger
We furthermore suggest a neural network architecture in which the decoder is a parameterized naive Bayes decoder.
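The defining property of such a decoder is the naive Bayes factorization of the class posterior over the latent dimensions $t_i$; how the factors $p_\theta$ are parameterized is the paper's contribution and is not reproduced here:

```latex
\hat{p}(y \mid t) \;\propto\; p(y) \prod_{i} p_{\theta}(t_i \mid y)
```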
no code implementations • 18 Apr 2018 • Rana Ali Amjad, Kairen Liu, Bernhard C. Geiger
In this work, we investigate the use of three information-theoretic quantities -- entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler divergence -- to analyze the behavior of already-trained fully-connected feed-forward neural networks.
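As an illustration, a simple histogram-based estimate of the mutual information between a neuron's activations and the class label (the estimator choice here is ours, not necessarily the one used in the paper):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(activations, labels, bins=30):
    """Histogram estimate of I(T; Y) between a neuron's activations T
    and the class variable Y (labels: non-negative integer class indices).
    """
    edges = np.histogram_bin_edges(activations, bins)
    t = np.digitize(activations, edges)
    joint = np.zeros((bins + 2, labels.max() + 1))
    for ti, yi in zip(t, labels):
        joint[ti, yi] += 1
    joint /= joint.sum()
    # I(T;Y) = H(T) + H(Y) - H(T,Y)
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.flatten())
```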
no code implementations • 12 Mar 2018 • Rayyan Ahmad Khan, Rana Ali Amjad, Martin Kleinsteuber
We propose a new clustering algorithm, Extended Affinity Propagation, based on pairwise similarities.
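Standard affinity propagation, which Extended Affinity Propagation builds on, is available in scikit-learn and operates directly on a precomputed pairwise similarity matrix; a minimal usage sketch (EAP itself is not implemented here, and the toy data is ours):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Pairwise similarities: higher means more similar (negative distances).
S = -np.linalg.norm(X[:, None] - X[None, :], axis=-1)

labels = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(S)
```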
no code implementations • 27 Feb 2018 • Rana Ali Amjad, Bernhard C. Geiger
In this theory paper, we investigate training deep neural networks (DNNs) for classification via minimizing the information bottleneck (IB) functional.
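In its standard Lagrangian form, the IB functional for a representation $T$ of input $X$ with class variable $Y$ reads (with $\beta$ trading compression against prediction):

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```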
no code implementations • 2 Jan 2018 • Clemens Bloechl, Rana Ali Amjad, Bernhard C. Geiger
We present an information-theoretic cost function for co-clustering, i.e., for the simultaneous clustering of two sets based on similarities between their elements.
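One classical information-theoretic formulation of co-clustering (in the spirit of Dhillon et al.; the exact cost used in this paper may differ) jointly chooses cluster maps $f$ and $g$ for the two sets so as to preserve their mutual information:

```latex
\max_{f,\, g} \; I\bigl(f(X);\, g(Y)\bigr)
```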
no code implementations • 17 Aug 2016 • Bernhard C. Geiger, Rana Ali Amjad
In this paper, we investigate mutual information as a cost function for clustering and show in which cases hard, i.e., deterministic, clusters are optimal.
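A generic form of this objective clusters $X$ via a possibly stochastic map $p(c \mid x)$ so as to retain information about a relevant variable $Y$; the question of hard clusters is then whether the optimal map is deterministic:

```latex
\max_{p(c \mid x)} \; I(C;\, Y)
```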