Search Results for author: Sunil Srinivasa

Found 7 papers, 5 papers with code

AI For Global Climate Cooperation 2023 Competition Proceedings

no code implementations · 10 Jul 2023 · Yoshua Bengio, Prateek Gupta, Lu Li, Soham Phade, Sunil Srinivasa, Andrew Williams, Tianyu Zhang, Yang Zhang, Stephan Zheng

On the other hand, an interdisciplinary panel of human experts in law, policy, sociology, economics, and environmental science evaluated the solutions qualitatively.

Decision Making · Ethics · +1

AI for Global Climate Cooperation: Modeling Global Climate Negotiations, Agreements, and Long-Term Cooperation in RICE-N

2 code implementations · 15 Aug 2022 · Tianyu Zhang, Andrew Williams, Soham Phade, Sunil Srinivasa, Yang Zhang, Prateek Gupta, Yoshua Bengio, Stephan Zheng

To facilitate this research, here we introduce RICE-N, a multi-region integrated assessment model that simulates the global climate and economy, and which can be used to design and evaluate the strategic outcomes for different negotiation and agreement frameworks.

Ethics · Multi-agent Reinforcement Learning
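
The RICE-N entry above describes a negotiate-then-simulate structure: regions first bargain over agreements, then the climate and economy are stepped forward. As a rough illustration of that structure only, here is a toy Python sketch of a single negotiation round followed by a climate-economy step. Every name and dynamic below is hypothetical and does not correspond to the actual RICE-N API.

```python
# Illustrative sketch only: a toy negotiate-then-act loop in the spirit of a
# multi-region climate/economy simulation. Not the RICE-N implementation.
import numpy as np

N_REGIONS = 3
rng = np.random.default_rng(0)

def propose(region_state):
    """Each region proposes a mitigation-rate floor it could itself afford."""
    return 0.8 * region_state["max_affordable_mitigation"]

def accept(region_state, proposal):
    """A region accepts a proposed floor only if it is not too costly for it."""
    return proposal <= region_state["max_affordable_mitigation"]

def climate_economy_step(states, agreed_floor):
    """Toy dynamics: higher mitigation lowers emissions and (slightly) output."""
    for s in states:
        mitigation = max(agreed_floor, s["planned_mitigation"])
        s["emissions"] *= (1.0 - 0.5 * mitigation)
        s["output"] *= (1.0 - 0.1 * mitigation)
    return states

states = [{"planned_mitigation": 0.2,
           "max_affordable_mitigation": rng.uniform(0.3, 0.9),
           "emissions": 10.0, "output": 100.0} for _ in range(N_REGIONS)]

# One negotiation round followed by one simulation step.
proposals = [propose(s) for s in states]
accepted = [p for p in proposals if all(accept(s, p) for s in states)]
agreed_floor = max(accepted, default=0.0)   # strongest universally accepted floor
states = climate_economy_step(states, agreed_floor)
print(round(agreed_floor, 2), [round(s["emissions"], 2) for s in states])
```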

WarpDrive: Extremely Fast End-to-End Deep Multi-Agent Reinforcement Learning on a GPU

3 code implementations · 31 Aug 2021 · Tian Lan, Sunil Srinivasa, Huan Wang, Stephan Zheng

We present WarpDrive, a flexible, lightweight, and easy-to-use open-source RL framework that implements end-to-end deep multi-agent RL on a single GPU (Graphics Processing Unit), built on PyCUDA and PyTorch.

Decision Making · Multi-agent Reinforcement Learning · +2
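
The WarpDrive entry above centers on keeping the entire multi-agent rollout on a single GPU. Below is a minimal plain-PyTorch sketch of that general idea: per-environment, per-agent tensors that never leave the device during rollout. It does not use WarpDrive's actual API, and `env_step` is a hypothetical stand-in for the CUDA environment kernels that WarpDrive builds with PyCUDA.

```python
# Sketch of end-to-end GPU multi-agent rollouts in plain PyTorch (not WarpDrive).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_envs, n_agents, obs_dim, n_actions = 1024, 8, 16, 4

# One shared policy network; inputs are batched over (env, agent).
policy = torch.nn.Linear(obs_dim, n_actions).to(device)

# All rollout state lives on the GPU for the whole loop.
obs = torch.randn(n_envs, n_agents, obs_dim, device=device)

def env_step(obs, actions):
    """Hypothetical stand-in for a CUDA environment kernel."""
    next_obs = obs + 0.01 * torch.randn_like(obs)
    rewards = torch.rand(obs.shape[0], obs.shape[1], device=obs.device)
    return next_obs, rewards

for t in range(10):
    logits = policy(obs)                                  # (n_envs, n_agents, n_actions)
    actions = torch.distributions.Categorical(logits=logits).sample()
    obs, rewards = env_step(obs, actions)                 # no CPU<->GPU copies
```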

Building a Foundation for Data-Driven, Interpretable, and Robust Policy Design using the AI Economist

1 code implementation · 6 Aug 2021 · Alexander Trott, Sunil Srinivasa, Douwe van der Wal, Sebastien Haneuse, Stephan Zheng

Here we show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning (RL) and data-driven simulations.

Reinforcement Learning (RL)
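
To make the "two-level reinforcement learning" phrasing above concrete, here is an illustrative sketch of the nested structure: an inner loop where economic agents adapt to a fixed planner policy, and an outer loop where the planner adjusts that policy against a social-welfare objective. The functions and the flat-tax toy model below are hypothetical stand-ins, not the AI Economist codebase.

```python
# Illustrative two-level optimization loop (not the AI Economist implementation).
import numpy as np

rng = np.random.default_rng(0)

def train_agents(tax_rate, n_agents=10):
    """Hypothetical stand-in for the inner RL loop: returns post-tax incomes."""
    skills = rng.lognormal(mean=0.0, sigma=1.0, size=n_agents)
    labor = np.clip(1.0 - tax_rate, 0.1, 1.0)     # agents work less when taxed more
    pre_tax = skills * labor
    redistributed = tax_rate * pre_tax.mean()      # taxes are redistributed evenly
    return (1.0 - tax_rate) * pre_tax + redistributed

def social_welfare(incomes):
    """Hypothetical planner objective; frameworks like this support several choices."""
    return np.log(incomes + 1e-8).mean()

# Outer loop: the planner searches over its single parameter (a flat tax rate),
# re-running the inner loop for each candidate policy.
best = max(np.linspace(0.0, 0.9, 10), key=lambda tr: social_welfare(train_agents(tr)))
print("best flat tax rate:", round(best, 2))
```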

The AI Economist: Optimal Economic Policy Design via Two-level Deep Reinforcement Learning

1 code implementation · 5 Aug 2021 · Stephan Zheng, Alexander Trott, Sunil Srinivasa, David C. Parkes, Richard Socher

Here we show that machine-learning-based economic simulation is a powerful policy and mechanism design framework to overcome these limitations.

Counterfactual · Reinforcement Learning · +1

The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies

2 code implementations · 28 Apr 2020 · Stephan Zheng, Alexander Trott, Sunil Srinivasa, Nikhil Naik, Melvin Gruesbeck, David C. Parkes, Richard Socher

In experiments conducted on MTurk, an AI tax policy provides an equality-productivity trade-off similar to that of the Saez framework, along with higher inverse-income-weighted social welfare.
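
The abstract above names two metrics: an equality-productivity trade-off and inverse-income-weighted social welfare. Below is a minimal sketch of how such metrics are commonly computed (equality as one minus a normalized Gini index, welfare weighted by inverse income); the paper's exact definitions may differ.

```python
# Sketch of common equality/productivity and weighted-welfare metrics;
# not necessarily the paper's exact formulas.
import numpy as np

def equality(incomes):
    """1 minus a normalized Gini index: 1.0 means perfectly equal incomes."""
    n = len(incomes)
    gini = np.abs(incomes[:, None] - incomes[None, :]).sum() / (2 * n * n * incomes.mean())
    return 1.0 - gini * n / (n - 1)

def eq_times_prod(incomes):
    """Equality-productivity trade-off collapsed into a single scalar."""
    return equality(incomes) * incomes.sum()

def inverse_income_weighted_welfare(incomes, utilities):
    """Social welfare that up-weights the utility of lower-income agents."""
    weights = 1.0 / np.maximum(incomes, 1e-8)
    weights /= weights.sum()
    return (weights * utilities).sum()

incomes = np.array([10.0, 20.0, 40.0, 80.0])
utilities = np.log(incomes)
print(eq_times_prod(incomes), inverse_income_weighted_welfare(incomes, utilities))
```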

A K-fold Method for Baseline Estimation in Policy Gradient Algorithms

no code implementations · 3 Jan 2017 · Nithyanand Kota, Abhishek Mishra, Sunil Srinivasa, Xi Chen, Pieter Abbeel

The high variance issue in unbiased policy-gradient methods such as VPG and REINFORCE is typically mitigated by adding a baseline.

Policy Gradient Methods
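
As a rough illustration of the K-fold idea referenced above: the baseline applied to each fold of trajectories is fit using only the other K-1 folds, so the baseline stays independent of the returns it is subtracted from. The NumPy sketch below uses a constant per-fold baseline for simplicity (the same scheme applies to fitted, state-dependent baselines) and is not the paper's implementation.

```python
# Sketch of a K-fold baseline for a REINFORCE-style gradient estimate.
import numpy as np

rng = np.random.default_rng(0)
K = 5
returns = rng.normal(loc=1.0, scale=2.0, size=100)      # per-trajectory returns
grad_logp = rng.normal(size=(100, 8))                   # per-trajectory grad log pi

folds = np.array_split(np.arange(len(returns)), K)
advantages = np.empty_like(returns)
for k in range(K):
    held_out = folds[k]
    others = np.concatenate([folds[j] for j in range(K) if j != k])
    baseline = returns[others].mean()                    # fit baseline on other folds
    advantages[held_out] = returns[held_out] - baseline  # apply to the held-out fold

# Policy-gradient estimate using the variance-reduced advantages.
policy_gradient = (advantages[:, None] * grad_logp).mean(axis=0)
print(policy_gradient.shape)
```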
