no code implementations • 31 Jan 2022 • Murat Onen, Tayfun Gokmen, Teodor K. Todorov, Tomasz Nowicki, Jesus A. del Alamo, John Rozen, Wilfried Haensch, Seyoung Kim
Analog crossbar arrays comprising programmable nonvolatile resistors are under intense investigation for acceleration of deep neural network training.
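For context, the acceleration comes from computing matrix-vector products directly in the analog domain: input voltages drive the crossbar rows, each nonvolatile resistor contributes a current by Ohm's law, and currents sum along the columns by Kirchhoff's law. A minimal numerical sketch of that idea (all values illustrative):

```python
import numpy as np

# A crossbar computes y = G @ v in a single analog step: voltages v on
# the rows, conductances G at the cross-points (I = G*V, Ohm's law),
# and currents summing along each column (Kirchhoff's current law).
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # programmed conductances
v = np.array([0.2, -0.1, 0.5])           # input voltages
y = G @ v                                # column output currents
```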
no code implementations • 20 May 2021 • Jun Ho Yoon, Seyoung Kim
However, existing methods for sparse Kronecker-sum inverse covariance estimation do not scale beyond a few hundred features and samples, and their unidentifiable parameters pose challenges in estimation.
no code implementations • 19 Jan 2021 • Chaeun Lee, Seyoung Kim
As deep neural networks require tremendous amounts of computation and memory, analog computing with emerging memory devices is a promising alternative to digital computing for edge devices.
no code implementations • Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI) 2020 • Jun Ho Yoon, Seyoung Kim
In this paper, we address the problem of jointly estimating dependencies across samples and dependencies across multiple features, where each set of dependencies is modeled as an inverse covariance matrix.
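A minimal sketch of the Kronecker-sum construction commonly used for this kind of model, where a sample-wise precision and a feature-wise precision combine into one joint precision matrix (function name and toy matrices are illustrative, not the paper's code):

```python
import numpy as np

def kronecker_sum_precision(Psi, Theta):
    """Joint precision over all n*p measurements: Omega = Psi (+) Theta
    = Psi (x) I_p + I_n (x) Theta, where Psi (n x n) models dependencies
    across samples and Theta (p x p) dependencies across features."""
    n, p = Psi.shape[0], Theta.shape[0]
    return np.kron(Psi, np.eye(p)) + np.kron(np.eye(n), Theta)

# Toy example: 3 samples, 2 features.
Psi = np.eye(3) + 0.1                       # weak sample dependencies
Theta = np.array([[1.0, 0.3], [0.3, 1.0]])  # feature dependencies
Omega = kronecker_sum_precision(Psi, Theta)
print(Omega.shape)                          # (6, 6)
```

Note how the identifiability issue arises in this construction: adding a constant c to the diagonal of Psi while subtracting c from the diagonal of Theta leaves Omega unchanged, so the two traces are only identifiable up to a shift.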
no code implementations • 24 Jul 2019 • Hyungjun Kim, Malte Rasch, Tayfun Gokmen, Takashi Ando, Hiroyuki Miyazoe, Jae-Joon Kim, John Rozen, Seyoung Kim
By using this zero-shifting method, we show that network performance dramatically improves for imbalanced synapse devices.
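A rough sketch of the idea, assuming a standard soft-bounds device model (the specific device physics and pulsing scheme in the paper may differ): an imbalanced device driven by random up/down pulses drifts to a symmetry point where the expected steps cancel, and zero-shifting pairs each device with a reference programmed to that point so the symmetry point reads as weight zero.

```python
import numpy as np

def pulse_update(g, sign, a_up=0.02, a_down=0.03, g_min=0.0, g_max=1.0):
    """Soft-bounds device model (illustrative, not the paper's device):
    the conductance step shrinks near the bounds, and the up and down
    slopes differ, i.e. the device is imbalanced."""
    if sign > 0:
        return min(g + a_up * (g_max - g), g_max)
    return max(g - a_down * (g - g_min), g_min)

# Under random up/down pulses the device settles at its symmetry point,
# where the expected up and down steps cancel:
#   a_up*(g_max - g*) = a_down*(g* - g_min)  =>  g* = 0.4 here.
rng = np.random.default_rng(0)
g = 0.9
for s in rng.choice([1, -1], size=2000):
    g = pulse_update(g, s)

# Zero-shifting: program a reference device to the symmetry point and
# read the effective weight as the difference, so balanced updates
# correspond to weight zero rather than to a biased value.
g_ref = g
weight = lambda g_active: g_active - g_ref
print(f"symmetry point ~ {g_ref:.2f}, weight at symmetry = {weight(g_ref):.2f}")
```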
no code implementations • 20 Jun 2017 • Seyoung Kim, Tayfun Gokmen, Hyung-Min Lee, Wilfried E. Haensch
Recently, we have shown that an architecture based on resistive processing unit (RPU) devices has the potential to achieve significant acceleration in deep neural network (DNN) training compared to today's software-based DNN implementations running on CPUs and GPUs.
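The fully parallel weight update at the heart of the RPU concept can be sketched as stochastic pulse coincidences at each cross-point; the constants and the exact pulse-translation scheme below are illustrative, not the paper's:

```python
import numpy as np

def rpu_stochastic_update(W, delta, x, bl=31, dw=1e-3, rng=None):
    """Sketch of an RPU-style parallel rank-1 update: each row and
    column driver emits a stochastic pulse train, and a cross-point
    weight is nudged by dw whenever two pulses coincide.  On average
    this realizes W += lr * outer(delta, x) without ever forming the
    outer product digitally."""
    rng = rng or np.random.default_rng()
    pd = np.clip(np.abs(delta), 0.0, 1.0)   # pulse probabilities
    px = np.clip(np.abs(x), 0.0, 1.0)
    signs = np.outer(np.sign(delta), np.sign(x))
    for _ in range(bl):                     # bit length of the trains
        coincide = (rng.random(delta.shape) < pd)[:, None] \
                 & (rng.random(x.shape) < px)[None, :]
        W += dw * signs * coincide
    return W
```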
no code implementations • 15 Sep 2015 • Calvin McCarter, Seyoung Kim
While highly scalable optimization methods exist for sparse Gaussian graphical model estimation, state-of-the-art methods for conditional Gaussian graphical models are not efficient enough and, more importantly, fail due to memory constraints on very large problems.
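For reference, the scalable unconditional baseline the sentence alludes to can be run in a few lines with scikit-learn's GraphicalLasso; the conditional (CGGM) case additionally regresses on input variables, which multiplies the parameter count and is where memory becomes the bottleneck:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Graphical-lasso estimation of a sparse inverse covariance (precision)
# matrix: zeros in the estimate encode conditional independencies.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_
```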
no code implementations • NeurIPS 2014 • Calvin McCarter, Seyoung Kim
In this paper, we address the problem of learning the structure of Gaussian chain graph models in a high-dimensional space.
no code implementations • NeurIPS 2013 • Jing Xiang, Seyoung Kim
Most previous methods were based on a two-stage approach that prunes the search space in the first stage and then searches for a network structure that satisfies the DAG constraint in the second stage.
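A generic skeleton of that two-stage baseline (not this paper's method, and with illustrative screening and scoring choices):

```python
import numpy as np
from itertools import permutations

def prune_candidates(X, threshold=0.3):
    """Stage 1: prune the search space by keeping, for each variable,
    only candidate parents whose absolute correlation exceeds a
    threshold (a stand-in for lasso-based screening)."""
    C = np.corrcoef(X, rowvar=False)
    p = C.shape[0]
    return {j: [i for i in range(p) if i != j and abs(C[i, j]) > threshold]
            for j in range(p)}

def best_dag_by_ordering(X, candidates):
    """Stage 2: search over topological orderings; within an ordering,
    each node's parents are its candidates that appear earlier, which
    satisfies the DAG constraint by construction.  Scored by residual
    sum of squares (illustrative, not the paper's score)."""
    n, p = X.shape
    best, best_score = None, np.inf
    for order in permutations(range(p)):
        score, parents = 0.0, {}
        for idx, j in enumerate(order):
            pa = [i for i in candidates[j] if i in order[:idx]]
            parents[j] = pa
            if pa:
                beta, *_ = np.linalg.lstsq(X[:, pa], X[:, j], rcond=None)
                score += np.sum((X[:, j] - X[:, pa] @ beta) ** 2)
            else:
                score += np.sum(X[:, j] ** 2)
        if score < best_score:
            best, best_score = parents, score
    return best
```

The brute-force enumeration of orderings only works for a handful of variables; practical methods replace it with greedy or dynamic-programming search, but the DAG-by-construction trick is the same.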
no code implementations • NeurIPS 2009 • Xiaolin Yang, Seyoung Kim, Eric P. Xing
In this paper, we consider the problem of learning multiple related tasks, where the tasks consist of both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional space.
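One common way to formalize this setting is a shared weight matrix with a row-wise group penalty that selects input features jointly across tasks, mixing squared loss for the continuous outputs with logistic loss for the discrete ones. A minimal proximal-gradient sketch under those assumptions (not necessarily the paper's exact estimator):

```python
import numpy as np

def multitask_step(X, Y_cont, Y_bin, W, lam=0.1, lr=0.01):
    """One proximal-gradient step for multi-task learning with mixed
    outputs: columns of W up to k_c are regression tasks (squared
    loss), the rest are binary tasks (logistic loss), and a row-wise
    L2 penalty drops an input feature from *all* tasks at once."""
    n = X.shape[0]
    k_c = Y_cont.shape[1]
    # Gradient of the smooth losses.
    G = np.zeros_like(W)
    G[:, :k_c] = X.T @ (X @ W[:, :k_c] - Y_cont) / n
    P = 1.0 / (1.0 + np.exp(-X @ W[:, k_c:]))    # logistic probabilities
    G[:, k_c:] = X.T @ (P - Y_bin) / n
    W = W - lr * G
    # Proximal step for the group penalty: soft-threshold whole rows.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
```

Iterating this step shrinks entire rows of W to zero, which is what couples the tasks: a feature is kept only if it helps the continuous and discrete outputs jointly.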