1 code implementation • 17 May 2025 • Rajarshi Roy, Yash Pote, David Parker, Marta Kwiatkowska
We demonstrate the effectiveness of our algorithm in two use cases: learning from policies induced by RL algorithms and learning from variants of a probabilistic model.
no code implementations • 27 Jan 2025 • Daniel Neider, Rajarshi Roy
Virtually all verification techniques using formal methods rely on the availability of a formal specification, which describes the design requirements precisely.
no code implementations • 5 Jan 2025 • Amitava Das, Suranjana Trivedy, Danush Khanna, Rajarshi Roy, Gurpreet Singh, Basab Ghosh, Yaswanth Narsupalli, Vinija Jain, Vasu Sharma, Aishwarya Naresh Reganti, Aman Chadha
The rapid rise of large language models (LLMs) has unlocked many applications but also underscores the challenge of aligning them with diverse values and preferences.
1 code implementation • 17 Jun 2024 • Nvidia: Bo Adler, Niket Agarwal, Ashwath Aithal, Dong H. Anh, Pallab Bhattacharya, Annika Brundyn, Jared Casper, Bryan Catanzaro, Sharon Clay, Jonathan Cohen, Sirshak Das, Ayush Dattagupta, Olivier Delalleau, Leon Derczynski, Yi Dong, Daniel Egert, Ellie Evans, Aleksander Ficek, Denys Fridman, Shaona Ghosh, Boris Ginsburg, Igor Gitman, Tomasz Grzegorzek, Robert Hero, Jining Huang, Vibhu Jawa, Joseph Jennings, Aastha Jhunjhunwala, John Kamalu, Sadaf Khan, Oleksii Kuchaiev, Patrick Legresley, Hui Li, Jiwei Liu, Zihan Liu, Eileen Long, Ameya Sunil Mahabaleshwarkar, Somshubra Majumdar, James Maki, Miguel Martinez, Maer Rodrigues de Melo, Ivan Moshkov, Deepak Narayanan, Sean Narenthiran, Jesus Navarro, Phong Nguyen, Osvald Nitski, Vahid Noroozi, Guruprasad Nutheti, Christopher Parisien, Jupinder Parmar, Mostofa Patwary, Krzysztof Pawelec, Wei Ping, Shrimai Prabhumoye, Rajarshi Roy, Trisha Saar, Vasanth Rao Naik Sabavat, Sanjeev Satheesh, Jane Polak Scowcroft, Jason Sewall, Pavel Shamis, Gerald Shen, Mohammad Shoeybi, Dave Sizer, Misha Smelyanskiy, Felipe Soares, Makesh Narsimhan Sreedhar, Dan Su, Sandeep Subramanian, Shengyang Sun, Shubham Toshniwal, Hao Wang, Zhilin Wang, Jiaxuan You, Jiaqi Zeng, Jimmy Zhang, Jing Zhang, Vivienne Zhang, Yian Zhang, Chen Zhu
We release the Nemotron-4 340B model family, including Nemotron-4-340B-Base, Nemotron-4-340B-Instruct, and Nemotron-4-340B-Reward.
no code implementations • 13 Jun 2024 • Jialin Song, Aidan Swope, Robert Kirby, Rajarshi Roy, Saad Godil, Jonathan Raiman, Bryan Catanzaro
Automatically designing fast and space-efficient digital circuits is challenging because circuits are discrete, must exactly implement the desired logic, and are costly to simulate.
no code implementations • 27 May 2024 • Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan Raiman, Mohammad Shoeybi, Bryan Catanzaro, Wei Ping
In this work, we introduce NV-Embed, incorporating architectural designs, training procedures, and curated datasets to significantly enhance the performance of LLMs as versatile embedding models, while maintaining simplicity and reproducibility.
no code implementations • 18 Jan 2024 • Zihan Liu, Wei Ping, Rajarshi Roy, Peng Xu, Chankyu Lee, Mohammad Shoeybi, Bryan Catanzaro
In this work, we introduce ChatQA, a suite of models that outperform GPT-4 on retrieval-augmented generation (RAG) and conversational question answering (QA).
Ranked #6 on Question Answering on TriviaQA (using extra training data)
no code implementations • 26 Oct 2023 • Ritam Raha, Rajarshi Roy, Nathanaël Fijalkow, Daniel Neider, Guillermo A. Pérez
In runtime verification, manually formalizing a specification for monitoring system executions is a tedious and error-prone process.
no code implementations • 7 Aug 2023 • Ahmed Agiza, Rajarshi Roy, Teodor Dumitru Ene, Saad Godil, Sherief Reda, Bryan Catanzaro
Given a gate-level netlist of a circuit represented as a graph, GraPhSyM utilizes graph structure, connectivity, and electrical property features to predict the impact of physical synthesis transformations such as buffer insertion and gate sizing.
no code implementations • 23 Jun 2023 • Yash Paliwal, Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Xiaoming Duan, Ufuk Topcu, Zhe Xu
We study a class of reinforcement learning (RL) tasks where the objective of the agent is to accomplish temporally extended goals.
no code implementations • 2 Dec 2022 • Jean-Raphaël Gaglione, Rajarshi Roy, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
Learning linear temporal logic (LTL) formulas from examples labeled as positive or negative has found applications in inferring descriptions of system behavior.
no code implementations • 21 Sep 2022 • Igor Khmelnitsky, Serge Haddad, Lina Ye, Benoît Barbot, Benedikt Bollig, Martin Leucker, Daniel Neider, Rajarshi Roy
Angluin's L* algorithm learns the minimal (complete) deterministic finite automaton (DFA) of a regular language using membership and equivalence queries.
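The query-based learning loop described above can be sketched compactly. The snippet below is a minimal illustration, not the authors' implementation: the hidden language (even number of 'a's), the alphabet, and the brute-force equivalence check over short words are all assumptions chosen for demonstration, and counterexamples are handled in the Maler–Pnueli style by adding their suffixes to the table columns.

```python
from itertools import product

ALPHABET = "ab"

def member(w):
    """Membership oracle for the hidden regular language (assumed: even #'a's)."""
    return w.count("a") % 2 == 0

def row(s, E):
    """Row of the observation table for prefix s, indexed by suffix set E."""
    return tuple(member(s + e) for e in E)

def close(S, E):
    """Extend S with one-letter extensions until the table is closed."""
    changed = True
    while changed:
        changed = False
        rows = {row(s, E) for s in S}
        for s in list(S):
            for a in ALPHABET:
                if row(s + a, E) not in rows:
                    S.append(s + a)
                    rows.add(row(s + a, E))
                    changed = True

def hypothesis(S, E):
    """Build a DFA (initial state, accepting rows, transitions) from a closed table."""
    init = row("", E)
    accept = {row(s, E) for s in S if member(s)}
    delta = {(row(s, E), a): row(s + a, E) for s in S for a in ALPHABET}
    return init, accept, delta

def runs(dfa, w):
    init, accept, delta = dfa
    q = init
    for a in w:
        q = delta[(q, a)]
    return q in accept

def all_words(max_len):
    for n in range(max_len + 1):
        for t in product(ALPHABET, repeat=n):
            yield "".join(t)

def find_counterexample(dfa, max_len=7):
    """Stand-in equivalence query: brute-force comparison on short words."""
    for w in all_words(max_len):
        if runs(dfa, w) != member(w):
            return w
    return None

def lstar():
    S, E = [""], [""]
    while True:
        close(S, E)
        dfa = hypothesis(S, E)
        cex = find_counterexample(dfa)
        if cex is None:
            return dfa
        for i in range(len(cex) + 1):   # add all suffixes of the counterexample
            if cex[i:] not in E:
                E.append(cex[i:])
```

In a real setting the equivalence query would be answered by the teacher rather than by enumeration; the observation-table mechanics are otherwise the same.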
1 code implementation • 6 Sep 2022 • Rajarshi Roy, Jean-Raphaël Gaglione, Nasim Baharisangari, Daniel Neider, Zhe Xu, Ufuk Topcu
To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers.
no code implementations • 14 Jun 2022 • Simon Lutz, Daniel Neider, Rajarshi Roy
Virtually all verification and synthesis techniques assume that the formal specifications are readily available, functionally correct, and fully match the engineer's understanding of the given system.
no code implementations • 14 May 2022 • Rajarshi Roy, Jonathan Raiman, Neel Kant, Ilyas Elkin, Robert Kirby, Michael Siu, Stuart Oberman, Saad Godil, Bryan Catanzaro
Deep Convolutional RL agents trained on this environment produce prefix adder circuits that Pareto-dominate existing baselines with up to 16.0% and 30.2% lower area for the same delay in the 32b and 64b settings, respectively.
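Pareto-domination over (area, delay) points is the comparison underlying that claim: one design dominates another if it is no worse on both metrics and strictly better on at least one. The hypothetical `pareto_front` helper below, shown purely for illustration, keeps exactly the non-dominated points:

```python
def pareto_front(points):
    """Return the (area, delay) points not strictly dominated by any other point."""
    def dominates(q, p):
        # q dominates p: no worse on both metrics, strictly better on one
        return q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (4, 2) is dominated by (3, 2): same delay, larger area.
pareto_front([(5, 1), (3, 2), (4, 2), (2, 4)])  # -> [(5, 1), (3, 2), (2, 4)]
```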
1 code implementation • 13 Oct 2021 • Ritam Raha, Rajarshi Roy, Nathanaël Fijalkow, Daniel Neider
Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas.
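LTL over finite traces has a simple recursive semantics, which the snippet below sketches. The tuple encoding of formulas and the `holds` function are illustrative assumptions, not the paper's tooling: a trace is a list of sets of atomic propositions, and `X` is false at the last position (finite-trace semantics).

```python
def holds(f, trace, i=0):
    """Does formula f hold on the finite trace at position i?"""
    op = f[0]
    if op == "ap":   return f[1] in trace[i]                      # atomic proposition
    if op == "not":  return not holds(f[1], trace, i)
    if op == "and":  return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "X":    return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "F":    return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "G":    return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "U":    return any(holds(f[2], trace, j) and
                                all(holds(f[1], trace, k) for k in range(i, j))
                                for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

trace = [{"p"}, {"p"}, {"q"}]
holds(("U", ("ap", "p"), ("ap", "q")), trace)  # -> True: p holds until q does
```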
no code implementations • 6 Sep 2021 • Robert Kirby, Kolby Nottingham, Rajarshi Roy, Saad Godil, Bryan Catanzaro
In this work we augment state-of-the-art, force-based global placement solvers with a reinforcement learning agent trained to improve the final detail placed Half Perimeter Wire Length (HPWL).
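Half-Perimeter Wire Length itself is a simple metric: for each net, take the bounding box of its pin positions and sum the box's width and height over all nets. A minimal sketch (the `hpwl` helper and its input format are assumptions for illustration):

```python
def hpwl(nets, positions):
    """HPWL: sum over nets of the half-perimeter of each net's pin bounding box.

    nets: iterable of pin-name tuples; positions: pin name -> (x, y).
    """
    total = 0.0
    for net in nets:
        xs = [positions[p][0] for p in net]
        ys = [positions[p][1] for p in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# One 3-pin net with bounding box 3 wide by 4 tall:
hpwl([("a", "b", "c")], {"a": (0, 0), "b": (3, 4), "c": (1, 2)})  # -> 7.0
```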
no code implementations • 30 Apr 2021 • Jean-Raphaël Gaglione, Daniel Neider, Rajarshi Roy, Ufuk Topcu, Zhe Xu
Our first algorithm infers minimal LTL formulas by reducing the inference problem to a problem in maximum satisfiability and then using off-the-shelf MaxSAT solvers to find a solution.
no code implementations • 29 Oct 2020 • Amitava Banerjee, Joseph D. Hart, Rajarshi Roy, Edward Ott
To achieve this, we first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.
no code implementations • 22 Sep 2020 • Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye
This paper presents a property-directed approach to verifying recurrent neural networks (RNNs).
no code implementations • 10 Feb 2020 • Rajarshi Roy, Dana Fisman, Daniel Neider
In contrast to most of the recent work in this area, which focuses on descriptions expressed in Linear Temporal Logic (LTL), we develop a learning algorithm for formulas in the IEEE standard temporal logic PSL (Property Specification Language).
no code implementations • 5 Dec 2019 • Amitava Banerjee, Jaideep Pathak, Rajarshi Roy, Juan G. Restrepo, Edward Ott
Our technique leverages the results of a machine learning process for short time prediction to achieve our goal.
1 code implementation • 18 Nov 2019 • Yuanzhao Zhang, Zachary G. Nicolaou, Joseph D. Hart, Rajarshi Roy, Adilson E. Motter
We report on a new type of chimera state that attracts almost all initial conditions and exhibits power-law switching behavior in networks of coupled oscillators.
Disordered Systems and Neural Networks · Dynamical Systems · Adaptation and Self-Organizing Systems · Chaotic Dynamics · Pattern Formation and Solitons
1 code implementation • 8 Feb 2019 • Joseph D. Hart, Yuanzhao Zhang, Rajarshi Roy, Adilson E. Motter
Symmetries are ubiquitous in network systems and have profound impacts on the observable dynamics.
Adaptation and Self-Organizing Systems · Disordered Systems and Neural Networks · Chaotic Dynamics · Pattern Formation and Solitons