Search Results for author: Seyoung Kim

Found 10 papers, 0 papers with code

Neural Network Training with Asymmetric Crosspoint Elements

no code implementations 31 Jan 2022 Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim

Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.

Tasks: Total Energy
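
For intuition only, here is a minimal NumPy sketch (not the paper's method) of the rank-1 outer-product update that crossbar-based training relies on, with hypothetical up/down scale factors standing in for the asymmetric conductance response of the devices:

```python
# Toy illustration (not the paper's method): a rank-1 outer-product
# update on a crossbar-like weight matrix, where hypothetical `up` and
# `down` factors mimic a device's asymmetric conductance response.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(4, 3))   # crossbar conductances acting as weights

def asymmetric_update(W, x, err, lr=0.1, up=1.0, down=0.5):
    """Apply dW = lr * outer(err, x), scaling positive and negative
    increments differently to mimic device asymmetry."""
    dW = lr * np.outer(err, x)
    return W + np.where(dW > 0, up * dW, down * dW)

x = rng.normal(size=3)      # activations applied on the columns
err = rng.normal(size=4)    # backpropagated errors applied on the rows
W = asymmetric_update(W, x, err)
print(W)                    # the update is biased whenever up != down
```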

EiGLasso for Scalable Sparse Kronecker-Sum Inverse Covariance Estimation

no code implementations 20 May 2021 Jun Ho Yoon, Seyoung Kim

However, existing methods for sparse Kronecker-sum inverse covariance estimation are limited in two ways: they do not scale beyond a few hundred features and samples, and the unidentifiable parameters pose challenges in estimation.
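
To make both limitations concrete, here is an illustrative snippet (not EiGLasso itself) that forms the Kronecker-sum precision matrix explicitly, whose (pq x pq) size is the scaling bottleneck, and checks the unidentifiability: shifting c*I between the diagonals of the two factors leaves the Kronecker sum unchanged.

```python
# Illustration only (not EiGLasso): the Kronecker-sum precision matrix
# Omega = Theta (+) Psi = kron(Theta, I) + kron(I, Psi), and the
# diagonal unidentifiability mentioned above.
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 4
A = rng.normal(size=(p, p)); Theta = A @ A.T + p * np.eye(p)   # feature graph
B = rng.normal(size=(q, q)); Psi = B @ B.T + q * np.eye(q)     # sample graph

def kron_sum(Theta, Psi):
    p, q = Theta.shape[0], Psi.shape[0]
    return np.kron(Theta, np.eye(q)) + np.kron(np.eye(p), Psi)

Omega = kron_sum(Theta, Psi)     # (p*q) x (p*q): explicit form is the bottleneck
c = 0.7                          # shift c*I from one factor's diagonal to the other
Omega2 = kron_sum(Theta + c * np.eye(p), Psi - c * np.eye(q))
print(np.allclose(Omega, Omega2))   # True: the diagonals are not identifiable
```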

SEMULATOR: Emulating the Dynamics of Crossbar Array-based Analog Neural System with Regression Neural Networks

no code implementations 19 Jan 2021 Chaeun Lee, Seyoung Kim

As deep neural networks require a tremendous amount of computation and memory, analog computing with emerging memory devices is a promising alternative to digital computing for edge devices.

Tasks: regression
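
As a toy stand-in for the emulation idea (not SEMULATOR itself; the device rule and every parameter below are invented), this snippet fits a small regression network to a hypothetical conductance-update function so the learned model can be queried in place of device simulation:

```python
# Toy stand-in (not SEMULATOR): learn an emulator for a hypothetical
# device-update rule G' = f(G, pulse), then query the emulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
G = rng.uniform(0.0, 1.0, size=5000)         # current conductance (normalized)
pulse = rng.choice([-1.0, 1.0], size=5000)   # programming pulse polarity
# invented nonlinear device: the step shrinks as conductance nears a bound
G_next = np.clip(G + 0.05 * pulse * np.where(pulse > 0, 1.0 - G, G), 0.0, 1.0)

X = np.column_stack([G, pulse])
emulator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
emulator.fit(X, G_next)
print(emulator.predict([[0.5, 1.0]]))        # predicted conductance after one up-pulse
```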

EiGLasso: Scalable Estimation of Cartesian Product of Sparse Inverse Covariance Matrices

no code implementations Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI) 2020 Jun Ho Yoon, Seyoung Kim

In this paper, we address the problem of jointly estimating dependencies across samples and dependencies across multiple features, where each set of dependencies is modeled as an inverse covariance matrix.

Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays

no code implementations 24 Jul 2019 Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim

By using this zero-shifting method, we show that network performance dramatically improves for imbalanced synapse devices.
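A minimal sketch of the intuition, with an invented device model rather than the paper's exact procedure: under random up/down pulses, a soft-bounded asymmetric device drifts to its symmetry point g*, so reading the weight out as g - g* re-centers that drift at zero:

```python
# Sketch of the zero-shifting intuition with an invented device model.
import numpy as np

rng = np.random.default_rng(3)

def pulse(g, sign, step=0.02, g_up=0.8, g_dn=-1.2):
    # hypothetical asymmetric soft-bounded device: steps shrink toward each bound
    return g + step * (g_up - g) if sign > 0 else g + step * (g_dn - g)

g, trace = 0.5, []
for s in rng.choice([-1, 1], size=20000):
    g = pulse(g, s)
    trace.append(g)

g_star = (0.8 + (-1.2)) / 2   # symmetry point: up and down step sizes match here
g_avg = float(np.mean(trace[5000:]))
print(round(g_avg, 2), "drifts to g* =", g_star)          # ~ -0.2
print("zero-shifted weight:", round(g_avg - g_star, 2))   # ~ 0.0
```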

Analog CMOS-based Resistive Processing Unit for Deep Neural Network Training

no code implementations 20 Jun 2017 Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch

Recently, we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPU/GPU.
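
As a simplified illustration of the pulse-coincidence update used in RPU-style training (toy magnitudes only; all parameters invented), the snippet below encodes the input and error vectors as stochastic pulse trains and increments a weight wherever pulses coincide, so the expected update is proportional to the outer product of error and input without an explicit multiplication:

```python
# Simplified illustration (invented parameters; magnitudes in [0, 1]):
# stochastic-pulse coincidence implements dW ~ outer(err, x) in
# expectation without an explicit multiply at each crosspoint.
import numpy as np

rng = np.random.default_rng(4)
x = np.array([0.9, 0.1, 0.5])        # input pulse probabilities
err = np.array([0.8, 0.2])           # error pulse probabilities
n_pulses, dw = 1000, 0.001           # pulses per update, conductance step

W = np.zeros((2, 3))
for _ in range(n_pulses):
    px = rng.random(3) < x           # which columns fire this cycle
    pe = rng.random(2) < err         # which rows fire this cycle
    W += dw * np.outer(pe, px)       # increment only at coincidences

print(W / (n_pulses * dw))           # approximately np.outer(err, x)
```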

Large-Scale Optimization Algorithms for Sparse Conditional Gaussian Graphical Models

no code implementations 15 Sep 2015 Calvin McCarter, Seyoung Kim

While highly scalable optimization methods exist for sparse Gaussian graphical model estimation, state-of-the-art methods for conditional Gaussian graphical models are not efficient enough and, more importantly, fail due to memory constraints for very large problems.
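
For reference, one standard parameterization of the sparse conditional Gaussian graphical model being estimated (notation ours, not necessarily the paper's):

```latex
% One standard CGGM parameterization (x: inputs, y: outputs):
p(\mathbf{y} \mid \mathbf{x}) \;\propto\;
  \exp\!\Big( -\tfrac{1}{2}\,\mathbf{y}^{\top}\Lambda\,\mathbf{y}
              \;-\; \mathbf{x}^{\top}\Theta\,\mathbf{y} \Big),
\qquad
\mathbf{y} \mid \mathbf{x} \;\sim\;
  \mathcal{N}\!\big( -\Lambda^{-1}\Theta^{\top}\mathbf{x},\; \Lambda^{-1} \big)
```

Estimation minimizes the negative log-likelihood of this model plus l1 penalties on both Lambda and Theta, with Lambda constrained to be positive definite.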

On Sparse Gaussian Chain Graph Models

no code implementations NeurIPS 2014 Calvin McCarter, Seyoung Kim

In this paper, we address the problem of learning the structure of Gaussian chain graph models in a high-dimensional space.

Tasks: Computational Efficiency, regression

A* Lasso for Learning a Sparse Bayesian Network Structure for Continuous Variables

no code implementations NeurIPS 2013 Jing Xiang, Seyoung Kim

Most previous methods were based on a two-stage approach that prunes the search space in the first stage and then searches for a network structure that satisfies the DAG constraint in the second stage.

Tasks: Computational Efficiency
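
As a heavily simplified sketch of the search idea (not the paper's algorithm; the heuristic here is zero, so A* degenerates to Dijkstra), this snippet searches over variable orderings where the cost of appending a variable after an ancestor set is its lasso regression error on that set:

```python
# Simplified sketch: best-first search over variable orderings with
# lasso scores; data, alpha, and the zero heuristic are all invented.
import heapq
from itertools import count
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, d = 200, 4
X = rng.normal(size=(n, d))
X[:, 2] = 0.8 * X[:, 0] + 0.1 * rng.normal(size=n)   # toy dependency 0 -> 2

def stage_cost(v, S):
    """Cost of appending v after ancestor set S: lasso residual error."""
    if not S:
        return float(np.var(X[:, v]))
    model = Lasso(alpha=0.1).fit(X[:, sorted(S)], X[:, v])
    resid = X[:, v] - model.predict(X[:, sorted(S)])
    return float(np.mean(resid ** 2))

tie = count()
pq = [(0.0, next(tie), frozenset(), ())]   # (g, tiebreak, placed set, ordering)
best = {frozenset(): 0.0}
while pq:
    g, _, S, order = heapq.heappop(pq)
    if len(S) == d:                        # all variables placed
        print("ordering:", order, "total score:", round(g, 3))
        break
    for v in set(range(d)) - S:
        S2, g2 = S | {v}, g + stage_cost(v, S)
        if g2 < best.get(S2, float("inf")):
            best[S2] = g2
            heapq.heappush(pq, (g2, next(tie), S2, order + (v,)))
```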

Heterogeneous multitask learning with joint sparsity constraints

no code implementations NeurIPS 2009 Xiaolin Yang, Seyoung Kim, Eric P. Xing

In this paper, we consider the problem of learning multiple related tasks, where the tasks consist of both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional space.

Tasks: regression
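
A minimal sketch of joint-sparsity multitask learning under the stated setup, simplified from the paper: one continuous task (squared loss) and one binary task (logistic loss) share the inputs, and an L1/L2 penalty over each input's row of coefficients selects features jointly across both tasks via proximal gradient steps; all data and hyperparameters below are invented:

```python
# Sketch: mixed-output multitask learning with a row-wise L1/L2
# (group lasso) penalty, solved by proximal gradient descent.
import numpy as np

rng = np.random.default_rng(6)
n, d = 300, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [1.5, -2.0, 1.0]                 # only the first 3 inputs matter
y_cont = X @ w_true + 0.1 * rng.normal(size=n)                 # continuous task
y_bin = (X @ w_true + rng.logistic(size=n) > 0).astype(float)  # binary task

W = np.zeros((d, 2))              # column 0: regression, column 1: classification
lr, lam = 0.01, 0.05
for _ in range(500):
    g_cont = X.T @ (X @ W[:, 0] - y_cont) / n           # squared-loss gradient
    p = 1.0 / (1.0 + np.exp(-(X @ W[:, 1])))
    g_bin = X.T @ (p - y_bin) / n                       # logistic-loss gradient
    W -= lr * np.column_stack([g_cont, g_bin])
    norms = np.linalg.norm(W, axis=1, keepdims=True)    # L2 norm per input row
    W *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))

print(np.round(W, 2))             # rows 3..9 shrink toward zero together
```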
