Search Results for author: Sakyasingha Dasgupta

Found 18 papers, 2 papers with code

Dancing along Battery: Enabling Transformer with Run-time Reconfigurability on Mobile Devices

no code implementations12 Feb 2021 Yuhong Song, Weiwen Jiang, Bingbing Li, Panjie Qi, Qingfeng Zhuge, Edwin Hsing-Mean Sha, Sakyasingha Dasgupta, Yiyu Shi, Caiwen Ding

Specifically, RT3 integrates two levels of optimization: first, it utilizes an efficient BP as the first-step compression for resource-constrained mobile devices; then, RT3 heuristically generates a shrunken search space based on the first-level optimization and, via reinforcement learning, searches multiple pattern sets with diverse sparsity for PP. This supports lightweight software reconfiguration, which corresponds to the available frequency levels of DVFS (i.e., hardware reconfiguration).

AutoML

FGNAS: FPGA-Aware Graph Neural Architecture Search

no code implementations1 Jan 2021 Qing Lu, Weiwen Jiang, Meng Jiang, Jingtong Hu, Sakyasingha Dasgupta, Yiyu Shi

The success of graph neural networks (GNNs) in recent years has aroused growing interest and effort in designing the best models to handle graph-structured data.

Neural Architecture Search

Standing on the Shoulders of Giants: Hardware and Neural Architecture Co-Search with Hot Start

1 code implementation17 Jul 2020 Weiwen Jiang, Lei Yang, Sakyasingha Dasgupta, Jingtong Hu, Yiyu Shi

To tackle this issue, HotNAS builds a chain of tools to design hardware to support compression, based on which a global optimizer is developed to automatically co-search all the involved search spaces.

Neural Architecture Search

Continual Learning via Online Leverage Score Sampling

no code implementations1 Aug 2019 Dan Teng, Sakyasingha Dasgupta

To mimic the human ability to continually acquire and transfer knowledge across various tasks, a learning system needs the capability for continual learning, effectively utilizing previously acquired skills.

Computational Efficiency, Continual Learning

Hardware/Software Co-Exploration of Neural Architectures

1 code implementation6 Jul 2019 Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Yiyu Shi, Jingtong Hu

We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS).

Neural Architecture Search

Model-based Deep Reinforcement Learning for Dynamic Portfolio Optimization

no code implementations25 Jan 2019 Pengqian Yu, Joon Sern Lee, Ilya Kulyatin, Zekun Shi, Sakyasingha Dasgupta

Dynamic portfolio optimization is the process of sequentially allocating wealth to a collection of assets in some consecutive trading periods, based on investors' return-risk profile.

Data Augmentation, Portfolio Optimization +2
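As an illustrative sketch of the sequential allocation setting described in the abstract above (function names, asset counts, and return figures are hypothetical, not from the paper):

```python
# Hypothetical sketch: at each trading period the agent chooses portfolio
# weights over the assets, and wealth compounds with the realized returns.

def step_wealth(wealth, weights, period_returns):
    """Compound wealth for one period given weights and per-asset returns."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    growth = sum(w * (1.0 + r) for w, r in zip(weights, period_returns))
    return wealth * growth

def run_episode(initial_wealth, weight_schedule, returns_schedule):
    """Sequentially allocate wealth over consecutive trading periods."""
    wealth = initial_wealth
    for weights, rets in zip(weight_schedule, returns_schedule):
        wealth = step_wealth(wealth, weights, rets)
    return wealth

final = run_episode(
    1000.0,
    [[0.5, 0.5], [0.8, 0.2]],       # weights per period (2 assets)
    [[0.10, -0.02], [0.05, 0.01]],  # realized per-asset returns per period
)
```

In the paper's model-based RL setting, the weight schedule would be produced by a learned policy conditioned on the investor's return-risk profile rather than fixed in advance.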

Transfer Learning From Synthetic To Real Images Using Variational Autoencoders For Precise Position Detection

no code implementations4 Jul 2018 Tadanobu Inoue, Subhajit Chaudhury, Giovanni De Magistris, Sakyasingha Dasgupta

Capturing and labeling camera images in the real world is an expensive task, whereas synthesizing labeled images in a simulation environment makes it easy to collect large-scale image data.

Position, Transfer Learning

Internal Model from Observations for Reward Shaping

no code implementations2 Jun 2018 Daiki Kimura, Subhajit Chaudhury, Ryuki Tachibana, Sakyasingha Dasgupta

During reinforcement learning, the agent predicts the reward as a function of the difference between the actual state and the state predicted by the internal model.

Reinforcement Learning (RL)
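The reward described in the abstract above can be sketched minimally as follows; the exponential form and all names here are illustrative assumptions, not the authors' implementation:

```python
import math

# Hypothetical sketch: the internal model predicts the next state, and the
# shaped reward decreases with the distance between the actual state and
# the state predicted by the internal model.

def shaped_reward(actual_state, predicted_state, scale=1.0):
    """Reward as a decreasing function of prediction error."""
    dist = math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual_state, predicted_state)))
    return math.exp(-scale * dist)  # 1.0 when the prediction is exact

r_exact = shaped_reward([0.0, 1.0], [0.0, 1.0])  # perfect prediction
r_off = shaped_reward([0.0, 1.0], [3.0, 5.0])    # large prediction error
```

During training, the agent would receive this shaped reward at each step, encouraging states that the internal model (learned from observations) considers expected.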

Object Detection using Domain Randomization and Generative Adversarial Refinement of Synthetic Images

no code implementations30 May 2018 Fernando Camaro Nogues, Andrew Huie, Sakyasingha Dasgupta

In this work, we present an application of domain randomization and generative adversarial networks (GAN) to train a near real-time object detector for industrial electric parts, entirely in a simulated environment.

Object Detection +1

Reward Estimation via State Prediction

no code implementations ICLR 2018 Daiki Kimura, Subhajit Chaudhury, Ryuki Tachibana, Sakyasingha Dasgupta

We present a novel reward estimation method that is based on a finite sample of optimal state trajectories from expert demonstrations and can be used for guiding an agent to mimic the expert behavior.

Reinforcement Learning (RL)

Dynamic Boltzmann Machines for Second Order Moments and Generalized Gaussian Distributions

no code implementations17 Dec 2017 Rudy Raymond, Takayuki Osogami, Sakyasingha Dasgupta

A Gaussian DyBM is a DyBM that assumes the predicted data is generated by a Gaussian distribution whose first-order moment (mean) changes dynamically over time, while its second-order moment (variance) remains fixed.

Time Series, Time Series Analysis
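The generative assumption in the abstract above can be sketched as follows; the autoregressive mean update and all parameters are illustrative assumptions, not the paper's learning rule:

```python
import random

# Hypothetical sketch: each sample is drawn from a Gaussian whose mean mu_t
# evolves over time while the variance sigma**2 stays fixed.

def generate_series(mu0, decay, sigma, steps, seed=0):
    """Draw a series with a time-varying mean and fixed variance."""
    rng = random.Random(seed)
    mu, series = mu0, []
    for _ in range(steps):
        x = rng.gauss(mu, sigma)            # fixed second-order moment
        mu = decay * mu + (1 - decay) * x   # time-varying first-order moment
        series.append(x)
    return series

series = generate_series(mu0=0.0, decay=0.9, sigma=1.0, steps=5)
```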

Conditional generation of multi-modal data using constrained embedding space mapping

no code implementations4 Jul 2017 Subhajit Chaudhury, Sakyasingha Dasgupta, Asim Munawar, Md. A. Salam Khan, Ryuki Tachibana

We present a conditional generative model that maps low-dimensional embeddings of multiple data modalities to a common latent space, thereby extracting semantic relationships between them.
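A minimal sketch of the mapping idea described above, assuming simple linear maps into the shared space; the matrices, names, and pairing penalty are illustrative, not the paper's model:

```python
# Hypothetical sketch: two modality-specific linear maps project each
# modality's embedding into a common latent space, and a constraint
# penalizes the distance between latent points of paired data.

def project(embedding, weights):
    """Linear map: latent[i] = sum_j weights[i][j] * embedding[j]."""
    return [sum(w * e for w, e in zip(row, embedding)) for row in weights]

def pairing_loss(emb_a, emb_b, weights_a, weights_b):
    """Squared distance between the two modalities' latent points."""
    za = project(emb_a, weights_a)
    zb = project(emb_b, weights_b)
    return sum((x - y) ** 2 for x, y in zip(za, zb))

W_img = [[1.0, 0.0], [0.0, 1.0]]  # image-side map (identity, illustrative)
W_txt = [[0.0, 1.0], [1.0, 0.0]]  # text-side map (coordinate swap, illustrative)
loss = pairing_loss([1.0, 2.0], [2.0, 1.0], W_img, W_txt)
```

Training would minimize this pairing loss over matched examples so that paired modalities land near each other in the common latent space.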

Distributed Recurrent Neural Forward Models with Synaptic Adaptation for Complex Behaviors of Walking Robots

no code implementations11 Jun 2015 Sakyasingha Dasgupta, Dennis Goldschmidt, Florentin Wörgötter, Poramate Manoonpong

Locomotive behaviors can consist of a variety of walking patterns, along with adaptations that allow the animals to deal with changes in environmental conditions such as uneven terrain, gaps, and obstacles.

Cognitive Aging as Interplay between Hebbian Learning and Criticality

no code implementations4 Feb 2014 Sakyasingha Dasgupta

In this regard, learning in neural networks can serve as a model for the acquisition of skills and knowledge during early developmental stages, i.e., the ageing process, while criticality in the network serves as the optimum state of cognitive abilities.
