no code implementations • 13 Feb 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data.

1 code implementation • 18 Jan 2024 • Arindam Chowdhury, Santiago Paternain, Gunjan Verma, Ananthram Swami, Santiago Segarra

The problem of optimal power allocation -- for maximizing a given network utility metric -- under instantaneous constraints has recently gained significant popularity.

no code implementations • 6 Jan 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model.

no code implementations • 17 Nov 2023 • Nicolas Zilberstein, Ananthram Swami, Santiago Segarra

We propose a joint channel estimation and data detection algorithm for massive multiple-input multiple-output systems based on diffusion models.

no code implementations • 11 Jun 2023 • Boning Li, Timofey Efimov, Abhishek Kumar, Jose Cortes, Gunjan Verma, Ananthram Swami, Santiago Segarra

Network digital twins (NDTs) facilitate the estimation of key performance indicators (KPIs) before physically implementing a network, thereby enabling efficient optimization of the network configuration.

1 code implementation • 11 Jun 2023 • Boning Li, Gojko Čutura, Ananthram Swami, Santiago Segarra

We propose the deep demixing (DDmix) model, a graph autoencoder that can reconstruct epidemics evolving over networks from partial or aggregated temporal information.

2 code implementations • 18 Apr 2023 • Boning Li, Jake Perazzone, Ananthram Swami, Santiago Segarra

We propose a novel data-driven approach to allocate transmit power for federated learning (FL) over interference-limited wireless networks.

1 code implementation • 2 Apr 2023 • Arindam Chowdhury, Gunjan Verma, Ananthram Swami, Santiago Segarra

We develop an efficient and near-optimal solution for beamforming in multi-user multiple-input-multiple-output single-hop wireless ad-hoc interference networks.

no code implementations • 19 Nov 2022 • Zhongyuan Zhao, Bojan Radojicic, Gunjan Verma, Ananthram Swami, Santiago Segarra

In this work, we improve upon the widely used hop-distance metric (and its variants) as the shortest-path bias by introducing a bias based on the link duty cycle, which we predict using a graph convolutional neural network.

1 code implementation • 27 Mar 2022 • Zhongyuan Zhao, Ananthram Swami, Santiago Segarra

Distributed scheduling algorithms for throughput or utility maximization in dense wireless multi-hop networks can have overwhelmingly high overhead, causing increased congestion, energy consumption, radio footprint, and security vulnerability.

1 code implementation • 15 Nov 2021 • Boning Li, Ananthram Swami, Santiago Segarra

We propose a data-driven approach for power allocation in the context of federated learning (FL) over interference-limited wireless networks.

1 code implementation • 13 Nov 2021 • Zhongyuan Zhao, Gunjan Verma, Ananthram Swami, Santiago Segarra

In wireless multi-hop networks, delay is an important metric for many applications.

1 code implementation • 12 Sep 2021 • Zhongyuan Zhao, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra

Test results on medium-sized wireless networks show that our centralized heuristic can reach a near-optimal solution quickly, and our distributed heuristic based on a shallow GCN can reduce by nearly half the suboptimality gap of the distributed greedy solver with minimal increase in complexity.

1 code implementation • 19 May 2021 • Yu Zhu, Ananthram Swami, Santiago Segarra

We propose a matrix factorization method based on a loss function that generalizes the skip-gram-with-negative-sampling objective to arbitrary similarity matrices.
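The generalized loss can be illustrated with a minimal gradient-descent factorization sketch. The function `sgns_factorize`, its hyperparameters, and the toy similarity matrix below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sgns_factorize(S, dim=8, neg=5, lr=0.05, epochs=200, seed=0):
    """Factorize a similarity matrix S with an SGNS-style loss.

    Minimizes -sum_ij [ S_ij * log sigmoid(u_i . v_j)
                        + (neg / n) * log sigmoid(-u_i . v_j) ],
    which generalizes skip-gram with negative sampling from word
    co-occurrence counts to arbitrary nonnegative similarities.
    """
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    U = 0.1 * rng.standard_normal((n, dim))
    V = 0.1 * rng.standard_normal((n, dim))
    for _ in range(epochs):
        sig = 1.0 / (1.0 + np.exp(-(U @ V.T)))
        # gradient of the loss w.r.t. the logits U V^T
        grad = -(S * (1.0 - sig)) + (neg / n) * sig
        U -= lr * grad @ V
        V -= lr * grad.T @ U
    return U, V

# toy similarity matrix: nodes 0-2 form a cluster, node 3 is isolated
S = np.array([[0, 3, 3, 0],
              [3, 0, 3, 0],
              [3, 3, 0, 0],
              [0, 0, 0, 0]], dtype=float)
U, V = sgns_factorize(S)
```

After training, embeddings of similar nodes have larger inner products than those of dissimilar ones.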

no code implementations • ICLR 2021 • Arlei Lopes da Silva, Furkan Kocayusufoglu, Saber Jafarpour, Francesco Bullo, Ananthram Swami, Ambuj Singh

The flow estimation problem consists of predicting missing edge flows in a network (e.g., traffic, power, and water) based on partial observations.

no code implementations • 18 Dec 2020 • Liang Ma, Ting He, Kin K. Leung, Ananthram Swami, Don Towsley

This is a technical report, containing all the theorem proofs in the following two papers: (1) Liang Ma, Ting He, Kin K. Leung, Ananthram Swami, and Don Towsley, "Identifiability of Link Metrics Based on End-to-end Path Measurements," in ACM IMC, 2013.

Networking and Internet Architecture

no code implementations • 17 Dec 2020 • Liang Ma, Ting He, Ananthram Swami, Don Towsley, Kin K. Leung

This is a technical report, containing all the theorem proofs in the paper "On Optimal Monitor Placement for Localizing Node Failures via Network Tomography" by Liang Ma, Ting He, Ananthram Swami, Don Towsley, and Kin K. Leung, published in IFIP WG 7.3 Performance, 2015.

Networking and Internet Architecture

no code implementations • NeurIPS 2020 • Leonardo Cotta, Carlos H. C. Teixeira, Ananthram Swami, Bruno Ribeiro

Existing Graph Neural Network (GNN) methods that learn inductive unsupervised graph representations focus on learning node and edge representations by predicting observed edges in the graph.

1 code implementation • 18 Nov 2020 • Abhishek Kumar, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra

We study the problem of adaptive contention window (CW) design for random-access wireless networks.

1 code implementation • 18 Nov 2020 • Zhongyuan Zhao, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra

In small- to middle-sized wireless networks with tens of links, even a shallow GCN-based MWIS scheduler can leverage the topological information of the graph to reduce in half the suboptimality gap of the distributed greedy solver with good generalizability across graphs and minimal increase in complexity.

1 code implementation • 18 Nov 2020 • Arindam Chowdhury, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra

We study the problem of optimal power allocation in a single-hop ad hoc wireless network.

1 code implementation • 18 Nov 2020 • Gojko Cutura, Boning Li, Ananthram Swami, Santiago Segarra

We study the temporal reconstruction of epidemics evolving over networks.

no code implementations • 8 Oct 2020 • Leonardo Cotta, Carlos H. C. Teixeira, Ananthram Swami, Bruno Ribeiro

Existing Graph Neural Network (GNN) methods that learn inductive unsupervised graph representations focus on learning node and edge representations by predicting observed edges in the graph.

1 code implementation • 22 Sep 2020 • Arindam Chowdhury, Gunjan Verma, Chirag Rao, Ananthram Swami, Santiago Segarra

We study the problem of optimal power allocation in a single-hop ad hoc wireless network.

no code implementations • 17 Sep 2020 • Sumit Kumar Jha, Susmit Jha, Rickard Ewetz, Sunny Raj, Alvaro Velasquez, Laura L. Pullum, Ananthram Swami

We present a new extension of Fano's inequality and employ it to theoretically establish that the probability of success for a membership inference attack on a deep neural network can be bounded using the mutual information between its inputs and its activations.
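For context, the classical form of Fano's inequality that such bounds build on (not the paper's new extension itself) relates the error probability $P_e$ of inferring membership $M$ from activations $A$ to their mutual information:

```latex
H(M \mid A) \;\le\; H_b(P_e) + P_e \log\bigl(|\mathcal{M}| - 1\bigr)
```

For binary membership ($|\mathcal{M}| = 2$) the last term vanishes, giving $H_b(P_e) \ge H(M) - I(M; A)$: small mutual information between the network's activations and membership forces the attack error to be large, which bounds the attack's probability of success.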

no code implementations • 26 Aug 2020 • Shasha Li, Karim Khalil, Rameswar Panda, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, Ananthram Swami

The emergence of the Internet of Things (IoT) brings about new security challenges at the intersection of cyber and physical spaces.

no code implementations • ECCV 2020 • Shasha Li, Shitong Zhu, Sudipta Paul, Amit Roy-Chowdhury, Chengyu Song, Srikanth Krishnamurthy, Ananthram Swami, Kevin S. Chan

There has been a recent surge in research on adversarial perturbations that defeat Deep Neural Networks (DNNs) in machine vision; most of these perturbation-based attacks target object classifiers.

no code implementations • 15 Jul 2020 • Hakim Hafidi, Mounir Ghogho, Philippe Ciblat, Ananthram Swami

We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner.
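The general idea of maximizing agreement between two views of each node can be sketched with an NT-Xent-style contrastive loss. This is a common choice in contrastive learning and serves only as an illustration, not necessarily the exact objective used by GraphCL:

```python
import numpy as np

def nt_xent(Z1, Z2, tau=0.5):
    """NT-Xent-style contrastive loss between two views of node embeddings.

    Positive pairs are the same node in both views; every other node in
    the second view serves as a negative for that node.
    """
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / tau                     # cosine similarities / temperature
    sim -= sim.max(axis=1, keepdims=True)     # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))            # cross-entropy on the positives

rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 8))
aligned = nt_xent(Z, Z)                    # views agree per node
shuffled = nt_xent(Z, Z[rng.permutation(16)])  # views misaligned
```

Aligned views yield a lower loss than misaligned ones, which is what the training objective exploits.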

no code implementations • 16 May 2020 • Kevin D. Smith, Saber Jafarpour, Ananthram Swami, Francesco Bullo

Many tasks regarding the monitoring, management, and design of communication networks rely on knowledge of the routing topology.

1 code implementation • 6 May 2020 • Huynh Thi Thanh Binh, Pham Dinh Thanh, Tran Ba Trung, Le Cong Thanh, Le Minh Hai Phong, Ananthram Swami, Bui Thu Lam

Linkage Tree Genetic Algorithm (LTGA) is an effective Evolutionary Algorithm (EA) to solve complex problems using the linkage information between problem variables.

1 code implementation • NeurIPS 2019 • Susmit Jha, Sunny Raj, Steven Fernandes, Sumit K. Jha, Somesh Jha, Brian Jalaian, Gunjan Verma, Ananthram Swami

These experiments demonstrate the effectiveness of the ABC metric to make DNNs more trustworthy and resilient.

2 code implementations • NeurIPS 2019 • Gunjan Verma, Ananthram Swami

Modern machine learning systems are susceptible to adversarial examples: inputs that clearly preserve the characteristic semantics of a given class, but whose classification is (usually confidently) incorrect.

no code implementations • ICLR 2019 • Swati Rallapalli, Liang Ma, Mudhakar Srivatsa, Ananthram Swami, Heesung Kwon, Graham Bent, Christopher Simpkin

Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications.

no code implementations • 19 Sep 2019 • Ziyao Zhang, Liang Ma, Konstantinos Poularakis, Kin K. Leung, Jeremy Tucker, Ananthram Swami

In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are implemented to balance centralised control, scalability, and reliability requirements.

no code implementations • 14 Mar 2019 • Susmit Jha, Sunny Raj, Steven Lawrence Fernandes, Sumit Kumar Jha, Somesh Jha, Gunjan Verma, Brian Jalaian, Ananthram Swami

We study the robustness of machine learning models on benign and adversarial inputs in this neighborhood.

1 code implementation • 2 Jul 2018 • Shasha Li, Ajaya Neupane, Sujoy Paul, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy Chowdhury, Ananthram Swami

We exploit recent advances in generative adversarial network (GAN) architectures to account for temporal correlations and generate adversarial samples that can cause misclassification rates of over 80% for targeted activities.

no code implementations • 12 Feb 2018 • Xiao Xu, Sattar Vakili, Qing Zhao, Ananthram Swami

We study two settings, with complete and partial side information, depending on whether the UIG is fully revealed, and propose a general two-step learning structure, consisting of an offline reduction of the action space and an online aggregation of reward observations from similar arms, to fully exploit the topological structure of the side information.

no code implementations • 26 Oct 2017 • James Atwood, Siddharth Pal, Don Towsley, Ananthram Swami

The predictive power and overall computational efficiency of Diffusion-convolutional neural networks make them an attractive choice for node classification tasks.

1 code implementation • KDD 2017 • Yuxiao Dong, Nitesh Vijay Chawla, Ananthram Swami

We study the problem of representation learning in heterogeneous networks.

Ranked #5 on Link Prediction on MovieLens 25M

no code implementations • 30 Sep 2016 • Xuan-Hong Dang, Arlei Silva, Ambuj Singh, Ananthram Swami, Prithwish Basu

Detecting a small number of outliers from a set of data observations is always challenging.

no code implementations • 24 Jun 2016 • Lin Li, Ananthram Swami, Anna Scaglione

We propose a probabilistic modeling framework for learning the dynamic patterns in the collective behaviors of social agents and developing profiles for different behavioral groups, using data collected from multiple information sources.

1 code implementation • 28 Apr 2016 • Nicolas Papernot, Patrick McDaniel, Ananthram Swami, Richard Harang

Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors.

no code implementations • 31 Mar 2016 • Z. Berkay Celik, Patrick McDaniel, Rauf Izmailov, Nicolas Papernot, Ryan Sheatsley, Raquel Alvarez, Ananthram Swami

In this paper, we consider an alternate learning approach that trains models using "privileged" information--features available at training time but not at runtime--to improve the accuracy and resilience of detection systems.

17 code implementations • 8 Feb 2016 • Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami

Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN.
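A toy sketch of this substitute-training idea follows, with a hypothetical linear `oracle` standing in for the black-box target DNN and a logistic-regression substitute; the augmentation step is a simplified stand-in for the paper's Jacobian-based dataset augmentation:

```python
import numpy as np

rng = np.random.default_rng(0)

def oracle(X):
    """Stand-in for the black-box target: returns labels only, no gradients.
    (Hypothetical decision rule, for illustration.)"""
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def train_substitute(X, y, epochs=300, lr=0.5):
    """Fit a logistic-regression substitute to the oracle's labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# start from a small synthetic seed set and grow it by perturbing
# inputs along the substitute's weight direction, then querying the
# oracle for labels on the new points
X = rng.standard_normal((20, 2))
for _ in range(3):
    w, b = train_substitute(X, oracle(X))
    X = np.vstack([X, X + 0.2 * np.sign(w)])  # simplified augmentation
w, b = train_substitute(X, oracle(X))
```

The substitute ends up closely mimicking the oracle's decisions on the queried points; adversarial examples crafted against it can then transfer to the target.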

11 code implementations • 24 Nov 2015 • Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, Ananthram Swami

In this work, we formalize the space of adversaries against deep neural networks (DNNs) and introduce a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.

2 code implementations • 14 Nov 2015 • Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami

In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs.
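The temperature-scaled softmax at the core of distillation can be illustrated as follows; this is a minimal sketch of the smoothing effect, not the full defensive-distillation training pipeline:

```python
import numpy as np

def softmax_T(logits, T=1.0):
    """Temperature-scaled softmax used in (defensive) distillation.

    A high temperature T smooths the output distribution, so the soft
    labels used to train the distilled network carry more inter-class
    information; the distilled model is then deployed at T = 1.
    """
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

p_hard = softmax_T([4.0, 2.0, 1.0], T=1.0)   # peaked distribution
p_soft = softmax_T([4.0, 2.0, 1.0], T=20.0)  # smoothed soft labels
```

Both outputs rank the classes identically, but the high-temperature version spreads probability mass more evenly across classes.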
