Search Results for author: Rajarshi Roy

Found 18 papers, 4 papers with code

ChatQA: Building GPT-4 Level Conversational QA Models

no code implementations18 Jan 2024 Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro

In this work, we introduce ChatQA, a family of conversational question answering (QA) models that obtain GPT-4 level accuracies.

Conversational Question Answering · Retrieval

Synthesizing Efficiently Monitorable Formulas in Metric Temporal Logic

no code implementations26 Oct 2023 Ritam Raha, Rajarshi Roy, Nathanaël Fijalkow, Daniel Neider, Guillermo A. Pérez

In runtime verification, manually formalizing a specification for monitoring system executions is a tedious and error-prone process.
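The kind of bounded-response property such monitorable formulas capture can be illustrated with a toy online monitor, a hypothetical sketch (not the paper's algorithm) for the MTL-style property G(request → F[0,k] grant):

```python
def monitor(trace, k):
    """Check G(request -> F[0,k] grant) on a finite trace.
    trace: list of sets of event labels; k: deadline in steps."""
    pending = []                                  # timestamps of open requests
    for t, labels in enumerate(trace):
        if "request" in labels:
            pending.append(t)
        if "grant" in labels:
            pending = []                          # a grant discharges all open requests
        if any(t - t0 > k for t0 in pending):
            return False                          # some request missed its deadline
    # open requests whose deadline lies past the end of the trace are
    # treated as satisfied (an optimistic convention for finite traces)
    return all(t0 + k > len(trace) - 1 for t0 in pending)
```

A monitor like this runs in constant memory per open obligation, which is the efficiency concern the synthesis work targets.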

GraPhSyM: Graph Physical Synthesis Model

no code implementations7 Aug 2023 Ahmed Agiza, Rajarshi Roy, Teodor Dumitru Ene, Saad Godil, Sherief Reda, Bryan Catanzaro

Given a gate-level netlist of a circuit represented as a graph, GraPhSyM utilizes graph structure, connectivity, and electrical property features to predict the impact of physical synthesis transformations such as buffer insertion and gate sizing.

Graph Attention

Learning Temporal Logic Properties: an Overview of Two Recent Methods

no code implementations2 Dec 2022 Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu

Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.


Analyzing Robustness of Angluin's L* Algorithm in Presence of Noise

no code implementations21 Sep 2022 Igor Khmelnitsky, Serge Haddad, Lina Ye, Benoît Barbot, Benedikt Bollig, Martin Leucker, Daniel Neider, Rajarshi Roy

Angluin's L* algorithm learns the minimal (complete) deterministic finite automaton (DFA) of a regular language using membership and equivalence queries.

Classification · PAC learning
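For reference, here is a compact, hypothetical implementation of the noise-free L* loop (observation table, closedness, consistency), with a brute-force teacher standing in for the equivalence oracle; the paper's robustness analysis under noise is not reproduced:

```python
from itertools import product

class TeacherDFA:
    """Target DFA acting as the teacher: answers membership queries exactly and
    approximates equivalence queries by checking all strings up to a length bound."""
    def __init__(self, start, accept, delta, alphabet):
        self.start, self.accept, self.delta, self.alphabet = start, accept, delta, alphabet
    def member(self, w):
        q = self.start
        for a in w:
            q = self.delta[(q, a)]
        return q in self.accept
    def equivalent(self, hyp, max_len):
        for n in range(max_len + 1):
            for w in product(self.alphabet, repeat=n):
                w = "".join(w)
                if self.member(w) != hyp(w):
                    return w                      # counterexample
        return None

def lstar(teacher, alphabet, max_len=6):
    S, E, T = {""}, {""}, {}
    def fill():
        for s in S | {s + a for s in S for a in alphabet}:
            for e in E:
                if (s, e) not in T:
                    T[(s, e)] = teacher.member(s + e)
    def row(s):
        return tuple(T[(s, e)] for e in sorted(E))
    fill()
    while True:
        # closedness: every one-letter extension's row must equal some row of S
        ext = next((s + a for s in S for a in alphabet
                    if row(s + a) not in {row(t) for t in S}), None)
        if ext is not None:
            S.add(ext); fill(); continue
        # consistency: equal rows must stay equal after appending any letter
        bad = next(((s1, s2, a) for s1 in S for s2 in S for a in alphabet
                    if row(s1) == row(s2) and row(s1 + a) != row(s2 + a)), None)
        if bad is not None:
            s1, s2, a = bad
            e = next(e for e in sorted(E) if T[(s1 + a, e)] != T[(s2 + a, e)])
            E.add(a + e); fill(); continue
        # hypothesis DFA: states are the distinct rows of the table
        reps = {}
        for s in sorted(S, key=len):
            reps.setdefault(row(s), s)
        def hyp(w):
            q = reps[row("")]
            for c in w:
                q = reps[row(q + c)]
            return T[(q, "")]
        cex = teacher.equivalent(hyp, max_len)
        if cex is None:
            return hyp
        for i in range(len(cex) + 1):             # add counterexample prefixes to S
            S.add(cex[:i])
        fill()
```

On a two-state target such as "even number of a's", the loop converges after a single closedness repair.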

Learning Interpretable Temporal Properties from Positive Examples Only

1 code implementation6 Sep 2022 Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu

To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers.
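As a toy illustration of conciseness plus language minimality acting as regularizers, a hypothetical sketch over a tiny G/F/X fragment (not the paper's algorithm) can enumerate formulas by size and break ties by how few bounded-length traces a formula admits:

```python
from itertools import combinations, product

def sat(phi, tr, t=0):
    """Evaluate a formula from a tiny LTL fragment on finite trace tr at step t."""
    op = phi[0]
    if op == "ap": return phi[1] in tr[t]
    if op == "X":  return t + 1 < len(tr) and sat(phi[1], tr, t + 1)
    if op == "F":  return any(sat(phi[1], tr, i) for i in range(t, len(tr)))
    if op == "G":  return all(sat(phi[1], tr, i) for i in range(t, len(tr)))

def learn(positives, props, max_size=3, horizon=3):
    # all traces up to `horizon` over the propositions, to estimate language size
    letters = [frozenset(c) for r in range(len(props) + 1)
               for c in combinations(props, r)]
    universe = [list(w) for n in range(1, horizon + 1)
                for w in product(letters, repeat=n)]
    layer = [("ap", p) for p in props]            # formulas of size 1
    for _ in range(max_size):
        consistent = [phi for phi in layer
                      if all(sat(phi, tr) for tr in positives)]
        if consistent:                            # conciseness: smallest size wins
            # language minimality: prefer the formula admitting the fewest traces
            return min(consistent,
                       key=lambda phi: sum(sat(phi, tr) for tr in universe))
        layer = [(op, phi) for op in ("X", "F", "G") for phi in layer]
    return None
```

Given the single positive trace ∅·{p}, both X p and F p are consistent at size 2, and the language-minimality tie-break prefers the stricter X p.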

Specification sketching for Linear Temporal Logic

no code implementations14 Jun 2022 Simon Lutz, Daniel Neider, Rajarshi Roy

Virtually all verification and synthesis techniques assume that the formal specifications are readily available, functionally correct, and fully match the engineer's understanding of the given system.

PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning

no code implementations14 May 2022 Rajarshi Roy, Jonathan Raiman, Neel Kant, Ilyas Elkin, Robert Kirby, Michael Siu, Stuart Oberman, Saad Godil, Bryan Catanzaro

Deep Convolutional RL agents trained on this environment produce prefix adder circuits that Pareto-dominate existing baselines, with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings respectively.

Reinforcement Learning (RL)
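The parallel prefix circuits being optimized compute all prefixes of an associative operator in logarithmic depth; here is a minimal Kogge-Stone-style sketch (one regular design point in the area/delay space the agent explores; names are hypothetical):

```python
def kogge_stone_prefix(values, op):
    """Compute all prefixes of associative operator `op` over `values`
    using the Kogge-Stone recurrence: ceil(log2 n) levels of parallel combines."""
    x = list(values)
    d = 1
    while d < len(x):
        # one level: combine each element with the one `d` positions to its left
        x = [x[i] if i < d else op(x[i - d], x[i]) for i in range(len(x))]
        d *= 2
    return x
```

With integer addition this yields prefix sums; with a carry-merge operator over (generate, propagate) pairs, the same network yields the carry chain of an adder.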

Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic

1 code implementation13 Oct 2021 Ritam Raha, Rajarshi Roy, Nathanaël Fijalkow, Daniel Neider

Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas.

Motion Planning
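LTL semantics over finite traces can be written down directly; this hypothetical evaluator (formulas as nested tuples) is the kind of check such learners run against every trace:

```python
def holds(phi, trace, t=0):
    """Evaluate an LTL formula on a finite, nonempty trace
    (list of sets of atomic propositions) at position t."""
    op = phi[0]
    if op == "ap":  return phi[1] in trace[t]
    if op == "not": return not holds(phi[1], trace, t)
    if op == "and": return holds(phi[1], trace, t) and holds(phi[2], trace, t)
    if op == "X":   return t + 1 < len(trace) and holds(phi[1], trace, t + 1)  # strong next
    if op == "F":   return any(holds(phi[1], trace, i) for i in range(t, len(trace)))
    if op == "G":   return all(holds(phi[1], trace, i) for i in range(t, len(trace)))
    if op == "U":   return any(holds(phi[2], trace, i) and
                               all(holds(phi[1], trace, j) for j in range(t, i))
                               for i in range(t, len(trace)))
    raise ValueError(f"unknown operator {op}")
```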

Guiding Global Placement With Reinforcement Learning

no code implementations6 Sep 2021 Robert Kirby, Kolby Nottingham, Rajarshi Roy, Saad Godil, Bryan Catanzaro

In this work we augment state-of-the-art, force-based global placement solvers with a reinforcement learning agent trained to improve the final detail placed Half Perimeter Wire Length (HPWL).

Reinforcement Learning (RL)
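HPWL itself is simple to state: per net, the half-perimeter of the bounding box of its pins. A hypothetical sketch, approximating pin locations by cell centers:

```python
def hpwl(nets, positions):
    """Half-Perimeter Wire Length of a placement.
    nets: mapping net name -> list of cell names; positions: cell -> (x, y)."""
    total = 0.0
    for cells in nets.values():
        xs = [positions[c][0] for c in cells]
        ys = [positions[c][1] for c in cells]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))  # bbox width + height
    return total
```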

Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach

no code implementations30 Apr 2021 Jean-Raphaël Gaglione, Daniel Neider, Rajarshi Roy, Ufuk Topcu, Zhe Xu

Our first algorithm infers minimal LTL formulas by reducing the inference problem to a problem in maximum satisfiability and then using off-the-shelf MaxSAT solvers to find a solution.
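The shape of that reduction can be sketched as follows, with brute force standing in for an off-the-shelf MaxSAT solver: hard clauses encode the formula's structure, soft weighted clauses encode the (possibly noisy) labeled examples, and the solver maximizes satisfied soft weight. A hypothetical sketch, not the paper's encoding:

```python
from itertools import product

def max_sat(n_vars, hard, soft):
    """Brute-force weighted partial MaxSAT. Clauses are lists of signed ints
    (DIMACS style: 3 means x3, -3 means not-x3); soft is a list of
    (clause, weight). Returns (best_assignment, satisfied_soft_weight)."""
    def sat_clause(clause, assign):
        return any(assign[abs(l)] == (l > 0) for l in clause)
    best, best_w = None, -1
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if not all(sat_clause(c, assign) for c in hard):
            continue                              # hard clauses are mandatory
        w = sum(wt for c, wt in soft if sat_clause(c, assign))
        if w > best_w:
            best, best_w = assign, w
    return best, best_w
```

Real encodings hand the same hard/soft clause sets to an industrial solver instead of enumerating the 2^n assignments.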

Machine Learning Link Inference of Noisy Delay-coupled Networks with Opto-Electronic Experimental Tests

no code implementations29 Oct 2020 Amitava Banerjee, Joseph D. Hart, Rajarshi Roy, Edward Ott

To achieve this, we first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.

BIG-bench Machine Learning · Time Series +1
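One common form of reservoir computing is the echo state network, in which a fixed random recurrent reservoir is driven by the input and only a linear readout is trained. A minimal hypothetical sketch (hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def train_esn(inputs, targets, n_res=100, rho=0.9, seed=0):
    """Fit only the linear readout of a fixed random reservoir."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius to rho
    W_in = rng.normal(size=(n_res, inputs.shape[1]))
    states = np.zeros((len(inputs), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(inputs):                    # drive the reservoir forward
        x = np.tanh(W @ x + W_in @ u)
        states[t] = x
    # least-squares readout: the only trained component
    W_out, *_ = np.linalg.lstsq(states, targets, rcond=None)
    return states @ W_out, W_out       # training-set predictions, learned readout
```

Training the readout alone keeps the optimization linear, which is what makes reservoir computing attractive for mimicking unknown network dynamics.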

Learning Interpretable Models in the Property Specification Language

no code implementations10 Feb 2020 Rajarshi Roy, Dana Fisman, Daniel Neider

In contrast to most of the recent work in this area, which focuses on descriptions expressed in Linear Temporal Logic (LTL), we develop a learning algorithm for formulas in the IEEE standard temporal logic PSL (Property Specification Language).

Critical Switching in Globally Attractive Chimeras

1 code implementation18 Nov 2019 Yuanzhao Zhang, Zachary G. Nicolaou, Joseph D. Hart, Rajarshi Roy, Adilson E. Motter

We report on a new type of chimera state that attracts almost all initial conditions and exhibits power-law switching behavior in networks of coupled oscillators.

Disordered Systems and Neural Networks · Dynamical Systems · Adaptation and Self-Organizing Systems · Chaotic Dynamics · Pattern Formation and Solitons

Topological Control of Synchronization Patterns: Trading Symmetry for Stability

1 code implementation8 Feb 2019 Joseph D. Hart, Yuanzhao Zhang, Rajarshi Roy, Adilson E. Motter

Symmetries are ubiquitous in network systems and have profound impacts on the observable dynamics.

Adaptation and Self-Organizing Systems · Disordered Systems and Neural Networks · Chaotic Dynamics · Pattern Formation and Solitons
