no code implementations • 29 Jan 2023 • Vaibhav Bihani, Sahil Manchanda, Srikanth Sastry, Sayan Ranu, N. M. Anoop Krishnan

Optimization of atomic structures, with wide applications in drug design, materials discovery, and mechanics, is a challenging problem due to their highly rough and non-convex energy landscapes.
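As a minimal sketch of what such optimization involves, the toy relaxation below runs plain steepest descent on a three-atom Lennard-Jones cluster. The potential, parameters, and step size are illustrative assumptions, not the paper's method:

```python
import numpy as np

def lj_energy(pos, eps=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a set of 2D particle positions."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return e

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Analytic pairwise forces (negative gradient of the LJ energy)."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            # dE/dr for a single LJ pair
            dEdr = 4 * eps * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)
            fij = -dEdr * d / r   # force on particle i from j
            f[i] += fij
            f[j] -= fij
    return f

# Steepest-descent relaxation of a slightly perturbed 3-atom triangle.
rng = np.random.default_rng(0)
pos = np.array([[0.0, 0.0], [1.3, 0.0], [0.65, 1.2]]) + 0.05 * rng.standard_normal((3, 2))
e_start = lj_energy(pos)
for _ in range(2000):
    pos = pos + 1e-3 * lj_forces(pos)   # step along the force (downhill)
e_end = lj_energy(pos)
```

For this near-equilateral start, descent settles near the triangular minimum (energy about -3 eps); from a worse start, the same procedure can stall in one of the landscape's many other basins, which is the difficulty the abstract refers to.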

1 code implementation • 10 Nov 2022 • Abishek Thangamuthu, Gunjan Kumar, Suresh Bishnoi, Ravinder Bhattoo, N M Anoop Krishnan, Sayan Ranu

We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare the performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes.
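Energy drift over a rollout, one of the conserved-quantity metrics mentioned above, can be measured as in the sketch below on a closed-form spring (unit-mass harmonic oscillator). This is a hedged stand-in: the paper benchmarks learned models, not hand-coded integrators:

```python
import numpy as np

def rollout(q0, p0, dt, steps, symplectic=True):
    """Roll out a unit-mass, unit-stiffness spring; record total energy."""
    q, p = q0, p0
    energies = []
    for _ in range(steps):
        if symplectic:          # semi-implicit Euler: update momentum first
            p = p - dt * q
            q = q + dt * p
        else:                   # explicit Euler: both from the old state
            q, p = q + dt * p, p - dt * q
        energies.append(0.5 * p**2 + 0.5 * q**2)
    return np.array(energies)

# True energy is 0.5 throughout; compare drift after 5000 steps.
e_sym = rollout(1.0, 0.0, 0.01, 5000, symplectic=True)
e_exp = rollout(1.0, 0.0, 0.01, 5000, symplectic=False)
```

The symplectic rollout keeps energy bounded near 0.5, while explicit Euler drifts upward multiplicatively; the same comparison, applied to a model's predicted trajectories, gives the rollout-error and conservation metrics described here.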

1 code implementation • 21 Oct 2022 • Mert Kosan, Zexi Huang, Sourav Medya, Sayan Ranu, Ambuj Singh

One way to address this is counterfactual reasoning where the objective is to change the GNN prediction by minimal changes in the input graph.
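The counterfactual objective, flipping the prediction with minimal edits to the input graph, can be illustrated by brute force on a toy classifier. Triangle detection stands in for a trained GNN here; this is not the paper's search method:

```python
import itertools

def predict(edges, n):
    """Toy 'GNN': classify the graph positive iff it contains a triangle."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return any(adj[u] & adj[v] for u, v in edges)

def counterfactual(edges, n):
    """Smallest set of edge deletions that flips the toy classifier."""
    orig = predict(edges, n)
    for k in range(1, len(edges) + 1):
        for removed in itertools.combinations(edges, k):
            kept = [e for e in edges if e not in removed]
            if predict(kept, n) != orig:
                return list(removed)
    return None

# A 4-cycle plus the chord (0, 2): both triangles share that chord,
# so deleting it alone flips the prediction.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cf = counterfactual(edges, 4)
```

The exhaustive search is exponential in the number of edges, which is why learned counterfactual explainers are needed for real GNNs and real graphs.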

1 code implementation • 23 Sep 2022 • Ravinder Bhattoo, Sayan Ranu, N. M. Anoop Krishnan

Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to outperform other models of physical systems significantly.

no code implementations • 22 Sep 2022 • Suresh Bishnoi, Ravinder Bhattoo, Sayan Ranu, N. M. Anoop Krishnan

Neural networks with physics-based inductive biases, such as Lagrangian neural networks (LNNs) and Hamiltonian neural networks (HNNs), learn the dynamics of physical systems by encoding these strong priors.
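The pipeline such models encode can be illustrated on a pendulum whose Lagrangian is known in closed form: the Euler-Lagrange equation turns L into an acceleration, and integrating it yields a trajectory with near-conserved energy. This is a plain numerical sketch with no neural network involved:

```python
import math

# Pendulum: L(theta, omega) = 0.5*m*l^2*omega^2 + m*g*l*cos(theta).
# The Euler-Lagrange equation gives  theta_ddot = -(g/l) * sin(theta).
def accel(theta, g=9.81, l=1.0):
    return -(g / l) * math.sin(theta)

def pendulum_rollout(theta0, omega0, dt=1e-3, steps=5000):
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += dt * accel(theta)   # semi-implicit Euler keeps energy bounded
        theta += dt * omega
    return theta, omega

def energy(theta, omega, m=1.0, g=9.81, l=1.0):
    return 0.5 * m * l**2 * omega**2 - m * g * l * math.cos(theta)

e0 = energy(1.0, 0.0)
theta, omega = pendulum_rollout(1.0, 0.0)
e1 = energy(theta, omega)
```

An LNN replaces the closed-form L with a learned network and obtains the acceleration by differentiating it, which is why the conservation structure comes along for free.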

no code implementations • 3 Sep 2022 • Ravinder Bhattoo, Sayan Ranu, N. M. Anoop Krishnan

Physical systems are commonly represented as a combination of particles, the individual dynamics of which govern the system dynamics.

no code implementations • 24 Aug 2022 • Sahil Manchanda, Sayan Ranu

In this work, we study the hitherto unexplored paradigm of Lifelong Learning to Branch on Mixed Integer Programs.

1 code implementation • 7 May 2022 • Ashish Nair, Rahul Yadav, Anjali Gupta, Abhijnan Chakraborty, Sayan Ranu, Amitabha Bagchi

With the increasing popularity of food delivery platforms, it has become pertinent to examine the working conditions of 'gig' workers on these platforms, especially to provide them fair wages, reasonable working hours, and transparency on work availability.

1 code implementation • 7 Mar 2022 • Shubham Gupta, Sahil Manchanda, Srikanta Bedathur, Sayan Ranu

There has been a recent surge in learning generative models for graphs.

1 code implementation • 25 Dec 2021 • Kartik Sharma, Samidha Verma, Sourav Medya, Arnab Bhattacharya, Sayan Ranu

In this work, we study this problem and show that GNNs remain vulnerable even when the downstream task and model are unknown.

1 code implementation • 24 Dec 2021 • Rishabh Ranjan, Siddharth Grover, Sourav Medya, Venkatesan Chakaravarthy, Yogish Sabharwal, Sayan Ranu

Further, owing to its pair-independent embeddings and theoretical properties, NEUROSED allows approximately 3 orders of magnitude faster retrieval of graphs and subgraphs.

1 code implementation • NeurIPS 2021 • Jayant Jain, Vrittika Bagadia, Sahil Manchanda, Sayan Ranu

First, our study reveals that a significant portion of the routes recommended by existing methods fail to reach the destination.

no code implementations • 7 Oct 2021 • Ravinder Bhattoo, Sayan Ranu, N. M. Anoop Krishnan

However, these models still suffer from issues such as inability to generalize to arbitrary system sizes, poor interpretability, and most importantly, inability to learn translational and rotational symmetries, which lead to the conservation laws of linear and angular momentum, respectively.

no code implementations • 29 Sep 2021 • Rishabh Ranjan, Siddharth Grover, Sourav Medya, Venkatesan Chakaravarthy, Yogish Sabharwal, Sayan Ranu

Subgraph edit distance (SED) is one of the most expressive measures of subgraph similarity.
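A brute-force illustration of a simplified SED, restricted here to unlabeled graphs and counting only the query edges that cannot be mapped onto target edges, shows why exact computation is expensive. This restriction is an assumption for the sketch, not the full definition used in the paper:

```python
import itertools

def sed(q_nodes, q_edges, t_nodes, t_edges):
    """Simplified subgraph edit distance: minimum number of query edges
    that must be dropped so the rest maps onto edges of the target under
    some injective node mapping (unlabeled graphs only)."""
    t_set = {frozenset(e) for e in t_edges}
    best = len(q_edges)
    for image in itertools.permutations(t_nodes, len(q_nodes)):
        f = dict(zip(q_nodes, image))
        missing = sum(frozenset((f[u], f[v])) not in t_set
                      for u, v in q_edges)
        best = min(best, missing)
    return best

# Query: a triangle; target: a 4-cycle (triangle-free), so one edge
# of the query can never be matched.
sed_tri = sed([0, 1, 2], [(0, 1), (1, 2), (0, 2)],
              [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The loop over injective mappings is factorial in the target size, which motivates learned approximations with pair-independent embeddings.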

1 code implementation • 19 Aug 2020 • Sunil Nishad, Shubhangi Agarwal, Arnab Bhattacharya, Sayan Ranu

In this paper, we develop GraphReach, a position-aware inductive GNN that captures the global positions of nodes through reachability estimations with respect to a set of anchor nodes.
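A simplified version of anchor-based positional features can be sketched with plain BFS hop counts to each anchor; GraphReach itself uses reachability estimations (e.g. via random walks) rather than raw shortest-path distances, so treat this only as an illustration of the idea:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def anchor_features(adj, anchors):
    """Per-node positional feature vector: hop distance to each anchor
    (-1 marks an unreachable anchor)."""
    feats = {v: [] for v in adj}
    for a in anchors:
        d = bfs_dist(adj, a)
        for v in adj:
            feats[v].append(d.get(v, -1))
    return feats

# Path graph 0-1-2-3-4 with anchors {0, 4}: symmetric nodes get
# distinct feature vectors, unlike purely structural features.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
feats = anchor_features(adj, [0, 4])
```

Nodes 1 and 3 are structurally identical but receive different vectors ([1, 3] vs [3, 1]), which is exactly the positional information a message-passing GNN alone cannot recover.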

1 code implementation • 22 Jan 2020 • Nikhil Goyal, Harsh Vardhan Jain, Sayan Ranu

Minimum DFS codes are canonical labels and capture the graph structure precisely along with the label information.
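A toy version of the idea, taking the lexicographic minimum over all connected edge-growth orderings of an unlabeled graph, shows how such codes act as canonical, isomorphism-invariant labels. This is a simplification of gSpan's labeled minimum DFS code, assumed for illustration only:

```python
def min_code(edges):
    """Lexicographically smallest 'growth code' of a connected unlabeled
    graph: edges are added one at a time, each touching an already-seen
    node, and recorded as (index of older endpoint, index of other
    endpoint) in discovery order. Isomorphic graphs get the same code."""
    edges = [frozenset(e) for e in edges]
    best = None

    def grow(idx, remaining, code):
        nonlocal best
        if not remaining:
            cand = tuple(code)
            if best is None or cand < best:
                best = cand
            return
        for e in remaining:
            u, v = sorted(e, key=lambda n: idx.get(n, len(idx)))
            if u not in idx:
                continue   # each new edge must touch a discovered node
            new_idx = dict(idx)
            if v not in new_idx:
                new_idx[v] = len(new_idx)
            grow(new_idx, [r for r in remaining if r != e],
                 code + [(new_idx[u], new_idx[v])])

    for root in set().union(*edges):   # try every node as the root
        grow({root: 0}, edges, [])
    return best

# Two differently labeled triangles collapse to the same canonical code.
code_tri = min_code([(0, 1), (1, 2), (2, 0)])
code_relabeled = min_code([(7, 8), (8, 9), (9, 7)])
```

The enumeration is exponential, which is why practical systems restrict growth (gSpan's rightmost-path extension) while preserving the same canonical-minimum property.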

2 code implementations • NeurIPS 2020 • Sahil Manchanda, Akash Mittal, Anuj Dhawan, Sourav Medya, Sayan Ranu, Ambuj Singh

Additionally, a case-study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
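For context on what specialized IM solvers are being compared against, the classic Monte-Carlo greedy baseline can be sketched in a few lines under the independent-cascade model; the graph, activation probability, and sample count below are illustrative assumptions:

```python
import random

def simulate_ic(adj, seeds, p, rng):
    """One independent-cascade run; returns the number of activated nodes."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(adj, k, p=0.5, runs=200, seed=0):
    """Greedy IM: repeatedly add the node with the largest Monte-Carlo
    estimate of expected spread."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in adj:
            if v in seeds:
                continue
            gain = sum(simulate_ic(adj, seeds + [v], p, rng)
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

# A star (hub 0) plus a separate 2-node component: the hub has the
# largest expected spread and should be picked first.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0], 5: [6], 6: [5]}
picked = greedy_im(adj, 1)
```

The repeated simulations inside the greedy loop are what make this baseline slow at scale, and what methods like GCOMB or IMM are engineered to avoid.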

Papers With Code is a free resource with all data licensed under CC-BY-SA.