no code implementations • 18 Jan 2024 • Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro
In this work, we introduce ChatQA, a family of conversational question answering (QA) models that achieve GPT-4-level accuracy.
no code implementations • 26 Oct 2023 • Ritam Raha, Rajarshi Roy, Nathanael Fijalkow, Daniel Neider, Guillermo A. Perez
In runtime verification, manually formalizing a specification for monitoring system executions is a tedious and error-prone process.
no code implementations • 7 Aug 2023 • Ahmed Agiza, Rajarshi Roy, Teodor Dumitru Ene, Saad Godil, Sherief Reda, Bryan Catanzaro
Given a gate-level netlist of a circuit represented as a graph, GraPhSyM utilizes graph structure, connectivity, and electrical property features to predict the impact of physical synthesis transformations such as buffer insertion and gate sizing.
no code implementations • 23 Jun 2023 • Yash Paliwal, Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Xiaoming Duan, Ufuk Topcu, Zhe Xu
We study a class of reinforcement learning (RL) tasks where the objective of the agent is to accomplish temporally extended goals.
no code implementations • 2 Dec 2022 • Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
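The paper casts this inference as a constraint-solving problem; purely as an illustration of the objective (not the paper's algorithm), a brute-force enumeration over size-bounded formulas conveys what "learning a minimal LTL formula from positive and negative traces" means. The formula fragment, trace encoding, and function names below are assumptions made for this sketch.

```python
# Illustrative brute-force sketch of LTL inference from labeled traces.
# A trace is a list of sets of atomic propositions; formulas are nested
# tuples over a small fragment (atoms, X, F, G, &, |).

def holds(f, trace, i=0):
    """Evaluate formula f on a finite trace starting at position i."""
    if isinstance(f, str):                       # atomic proposition
        return f in trace[i]
    op = f[0]
    if op == 'X':                                # next
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == 'F':                                # eventually
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'G':                                # always
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == '&':
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == '|':
        return holds(f[1], trace, i) or holds(f[2], trace, i)

def formulas(size, atoms):
    """Enumerate all formulas with exactly `size` syntax-tree nodes."""
    if size == 1:
        yield from atoms
        return
    for sub in formulas(size - 1, atoms):
        for op in ('X', 'F', 'G'):
            yield (op, sub)
    for k in range(1, size - 1):
        for left in formulas(k, atoms):
            for right in formulas(size - 1 - k, atoms):
                for op in ('&', '|'):
                    yield (op, left, right)

def infer(pos, neg, atoms, max_size=6):
    """Return a minimal formula consistent with the labeled traces."""
    for size in range(1, max_size + 1):
        for f in formulas(size, atoms):
            if all(holds(f, t) for t in pos) and not any(holds(f, t) for t in neg):
                return f
    return None

pos = [[{'a'}, {'b'}], [{'b'}]]       # traces that eventually see b
neg = [[{'a'}, {'a'}]]                # a trace that never sees b
print(infer(pos, neg, ['a', 'b']))    # a minimal consistent formula
```

Enumerating by size first is what makes the returned formula minimal; the actual work replaces this exponential search with a symbolic encoding.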
no code implementations • 21 Sep 2022 • Igor Khmelnitsky, Serge Haddad, Lina Ye, Benoît Barbot, Benedikt Bollig, Martin Leucker, Daniel Neider, Rajarshi Roy
Angluin's L* algorithm learns the minimal (complete) deterministic finite automaton (DFA) of a regular language using membership and equivalence queries.
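For context, the two query types L* relies on can be sketched as follows. The DFA here (even number of b's) is a hypothetical example, and the bounded word comparison only approximates a true equivalence query.

```python
from itertools import product

# Minimal sketch of the two query types used by Angluin's L*:
# a membership query runs a word through a DFA; the (approximate)
# equivalence query below simply compares two DFAs on all short words.

class DFA:
    def __init__(self, start, accepting, delta):
        self.start, self.accepting, self.delta = start, accepting, delta

    def accepts(self, word):                     # membership query
        state = self.start
        for symbol in word:
            state = self.delta[(state, symbol)]
        return state in self.accepting

def counterexample(dfa_a, dfa_b, alphabet, max_len=6):
    """Bounded equivalence query: return a word the two DFAs disagree
    on, or None if they agree on all words up to length max_len."""
    for n in range(max_len + 1):
        for word in product(alphabet, repeat=n):
            if dfa_a.accepts(word) != dfa_b.accepts(word):
                return ''.join(word)
    return None

# Example DFA over {a, b}: accepts words with an even number of b's.
even_b = DFA(0, {0}, {(0, 'a'): 0, (0, 'b'): 1,
                      (1, 'a'): 1, (1, 'b'): 0})
print(even_b.accepts('abba'))   # True: two b's
print(even_b.accepts('ab'))     # False: one b
```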
1 code implementation • 6 Sep 2022 • Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers.
no code implementations • 14 Jun 2022 • Simon Lutz, Daniel Neider, Rajarshi Roy
Virtually all verification and synthesis techniques assume that the formal specifications are readily available, functionally correct, and fully match the engineer's understanding of the given system.
no code implementations • 14 May 2022 • Rajarshi Roy, Jonathan Raiman, Neel Kant, Ilyas Elkin, Robert Kirby, Michael Siu, Stuart Oberman, Saad Godil, Bryan Catanzaro
Deep Convolutional RL agents trained on this environment produce prefix adder circuits that Pareto-dominate existing baselines with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings, respectively.
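Prefix adders compute all carries with a parallel prefix network over an associative generate/propagate operator; the sketch below folds that operator serially, which is functionally equivalent arithmetic but deliberately not the optimized network topology the RL agent searches over. Function names are made up for illustration.

```python
# Sketch of the generate/propagate arithmetic underlying prefix adders.

def combine(hi, lo):
    """(g, p) operator: `hi` covers more-significant bits than `lo`."""
    g_hi, p_hi = hi
    g_lo, p_lo = lo
    return (g_hi | (p_hi & g_lo), p_hi & p_lo)

def prefix_add(a, b, width):
    g = [(a >> i & 1) & (b >> i & 1) for i in range(width)]   # generate
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(width)]   # propagate
    acc = (0, 1)            # identity element of `combine`
    result = 0
    for i in range(width):
        carry_in = acc[0]   # carry into bit i = prefix over bits i-1..0
        result |= ((p[i] ^ carry_in) & 1) << i
        acc = combine((g[i], p[i]), acc)
    return result

print(prefix_add(13, 29, 8))   # 42, i.e. (13 + 29) mod 256
```

Because `combine` is associative, a circuit may evaluate these prefixes in any tree shape; the area/delay trade-off among those trees is exactly the design space being optimized.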
1 code implementation • 13 Oct 2021 • Ritam Raha, Rajarshi Roy, Nathanaël Fijalkow, Daniel Neider
Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas.
no code implementations • 6 Sep 2021 • Robert Kirby, Kolby Nottingham, Rajarshi Roy, Saad Godil, Bryan Catanzaro
In this work we augment state-of-the-art, force-based global placement solvers with a reinforcement learning agent trained to improve the final detail placed Half Perimeter Wire Length (HPWL).
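HPWL itself is a simple metric: for each net, half the perimeter of the bounding box of its pins, summed over all nets. A minimal sketch, with pin coordinates invented for illustration:

```python
# Half-Perimeter Wire Length (HPWL), the placement-quality metric the
# agent is trained to improve.

def hpwl(nets):
    """Sum of pin-bounding-box half-perimeters over all nets.
    `nets` maps a net name to a list of (x, y) pin coordinates."""
    total = 0.0
    for pins in nets.values():
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

nets = {'n1': [(0, 0), (3, 4)],          # bbox 3 x 4 -> 7
        'n2': [(1, 1), (1, 5), (2, 1)]}  # bbox 1 x 4 -> 5
print(hpwl(nets))   # 12.0
```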
no code implementations • 30 Apr 2021 • Jean-Raphaël Gaglione, Daniel Neider, Rajarshi Roy, Ufuk Topcu, Zhe Xu
Our first algorithm infers minimal LTL formulas by reducing the inference problem to a problem in maximum satisfiability and then using off-the-shelf MaxSAT solvers to find a solution.
no code implementations • 29 Oct 2020 • Amitava Banerjee, Joseph D. Hart, Rajarshi Roy, Edward Ott
To achieve this, we first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.
no code implementations • 22 Sep 2020 • Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye
This paper presents a property-directed approach to verifying recurrent neural networks (RNNs).
no code implementations • 10 Feb 2020 • Rajarshi Roy, Dana Fisman, Daniel Neider
In contrast to most of the recent work in this area, which focuses on descriptions expressed in Linear Temporal Logic (LTL), we develop a learning algorithm for formulas in the IEEE standard temporal logic PSL (Property Specification Language).
no code implementations • 5 Dec 2019 • Amitava Banerjee, Jaideep Pathak, Rajarshi Roy, Juan G. Restrepo, Edward Ott
Our technique leverages the results of a machine learning process for short time prediction to achieve our goal.
1 code implementation • 18 Nov 2019 • Yuanzhao Zhang, Zachary G. Nicolaou, Joseph D. Hart, Rajarshi Roy, Adilson E. Motter
We report on a new type of chimera state that attracts almost all initial conditions and exhibits power-law switching behavior in networks of coupled oscillators.
Disordered Systems and Neural Networks • Dynamical Systems • Adaptation and Self-Organizing Systems • Chaotic Dynamics • Pattern Formation and Solitons
1 code implementation • 8 Feb 2019 • Joseph D. Hart, Yuanzhao Zhang, Rajarshi Roy, Adilson E. Motter
Symmetries are ubiquitous in network systems and have profound impacts on the observable dynamics.
Adaptation and Self-Organizing Systems • Disordered Systems and Neural Networks • Chaotic Dynamics • Pattern Formation and Solitons