no code implementations • 29 Jun 2024 • Hongjun Choi, Jayaraman J. Thiagarajan, Ruben Glatt, Shusen Liu
In this work, we investigate the fundamental trade-off between accuracy and parameter efficiency when neural network weights are parameterized using predictor networks.
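To make the idea concrete, here is a minimal hypernetwork-style sketch, not the paper's exact method: a small "predictor" network emits the weight matrix of a target linear layer from learned row embeddings, reducing the parameter count at a potential cost in accuracy. The embedding size and predictor architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PredictedLinear(nn.Module):
    """A linear layer whose weights are generated by a small predictor network."""
    def __init__(self, in_features, out_features, embed_dim=16):
        super().__init__()
        # One learned embedding per output row; the predictor maps it to a weight row.
        self.row_embed = nn.Parameter(torch.randn(out_features, embed_dim) * 0.1)
        self.predictor = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, in_features)
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.predictor(self.row_embed)  # shape: (out_features, in_features)
        return nn.functional.linear(x, weight, self.bias)

layer = PredictedLinear(in_features=256, out_features=512)
# Roughly 26k trainable parameters versus ~132k for a direct nn.Linear(256, 512).
print(sum(p.numel() for p in layer.parameters()))
```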
no code implementations • 24 Jun 2024 • Zhimin Li, Haichao Miao, Valerio Pascucci, Shusen Liu
The recent introduction of multimodal large language models (MLLMs) combines the inherent power of large language models (LLMs) with new capabilities to reason about the multimodal context.
no code implementations • 7 Dec 2023 • Shusen Liu, Haichao Miao, Zhimin Li, Matthew Olson, Valerio Pascucci, Peer-Timo Bremer
With recent advances in multimodal foundation models, previously text-only large language models (LLMs) have evolved to incorporate visual input, opening up unprecedented opportunities for applications in visualization.
no code implementations • 6 Dec 2023 • Matthew L. Olson, Shusen Liu, Jayaraman J. Thiagarajan, Bogdan Kustowski, Weng-Keen Wong, Rushil Anirudh
Recent advances in machine learning, specifically the transformer architecture, have led to significant progress in commercial domains.
no code implementations • 25 Oct 2023 • Zhimin Li, Shusen Liu, Bhavya Kailkhura, Peer-Timo Bremer, Valerio Pascucci
For a neural network model, non-linear behavior is largely induced by the model's non-linear activation units.
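As a toy illustration of this point (not the paper's analysis system): a ReLU network is piecewise linear, and the linear "piece" an input falls into is fully determined by which activation units fire. The sketch below records that activation pattern; two inputs that share a pattern see the exact same affine map.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(),
                    nn.Linear(8, 8), nn.ReLU(),
                    nn.Linear(8, 1))

def activation_pattern(x):
    """Return, per ReLU layer, a 0/1 list of which units fire for input x."""
    pattern = []
    h = x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append((h > 0).int().flatten().tolist())
    return pattern

x = torch.randn(1, 2)
print(activation_pattern(x))
```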
no code implementations • 30 Jun 2023 • Ruben Glatt, Shusen Liu
Emerging foundation models in machine learning are trained on vast amounts of data and have been shown to generalize well to new tasks.
1 code implementation • CVPR 2023 • Matthew L. Olson, Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Weng-Keen Wong
To this end, we introduce Cross-GAN Auditing (xGA) that, given an established "reference" GAN and a newly proposed "client" GAN, jointly identifies intelligible attributes that are either common across both GANs, novel to the client GAN, or missing from the client GAN.
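Purely as an illustration of the three output buckets (and not the xGA algorithm itself, which discovers the attributes from the GANs): given assumed per-attribute "expressibility" scores for each GAN, the audit outcome can be organized as follows.

```python
def audit(ref_scores, client_scores, thresh=0.5):
    """Bucket attributes into common / novel-to-client / missing-from-client.

    ref_scores, client_scores: dicts mapping attribute name -> score in [0, 1]
    (hypothetical scores for illustration only).
    """
    common, novel, missing = [], [], []
    for attr in set(ref_scores) | set(client_scores):
        r = ref_scores.get(attr, 0.0)
        c = client_scores.get(attr, 0.0)
        if r >= thresh and c >= thresh:
            common.append(attr)
        elif c >= thresh:
            novel.append(attr)
        elif r >= thresh:
            missing.append(attr)
    return common, novel, missing

print(audit({"smile": 0.9, "hat": 0.8}, {"smile": 0.95, "glasses": 0.7}))
# (['smile'], ['glasses'], ['hat'])
```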
no code implementations • 30 Oct 2022 • Yuzhe Lu, Shusen Liu, Jayaraman J. Thiagarajan, Wesam Sakla, Rushil Anirudh
We present a fully automated framework for building object detectors on satellite imagery without requiring any human annotation or intervention.
no code implementations • 16 Jun 2022 • Zhimin Li, Shusen Liu, Xin Yu, Bhavya Kailkhura, Jie Cao, James Daniel Diffenderfer, Peer-Timo Bremer, Valerio Pascucci
We decomposed and evaluated a set of critical geometric concepts from the commonly adopted classification loss, and used them to design a visualization system that compares and highlights the impact of pruning on model performance and feature representation.
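One example of a geometric quantity that can be read off a classification head (an assumed illustration; the paper's exact concept set may differ): the logit for class i decomposes as ||w_i|| * ||f|| * cos(theta_i) + b_i, so feature norms and feature-to-weight angles can be tracked before and after pruning.

```python
import torch

def geometry(feature, weight, bias):
    """Decompose class logits into feature norm, weight norm, and angle terms.

    feature: (d,), weight: (num_classes, d), bias: (num_classes,)
    """
    f_norm = feature.norm()
    w_norm = weight.norm(dim=1)
    cos = (weight @ feature) / (w_norm * f_norm)  # cos(theta_i) per class
    logits = w_norm * f_norm * cos + bias         # algebraically equals W f + b
    return f_norm, cos, logits

f = torch.randn(64)
W, b = torch.randn(10, 64), torch.zeros(10)
print(geometry(f, W, b)[2].allclose(W @ f + b))  # True: same logits, geometric view
```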
no code implementations • 25 Jun 2021 • Donald Loveland, Shusen Liu, Bhavya Kailkhura, Anna Hiszpanski, Yong Han
Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection.
no code implementations • 16 Jul 2020 • Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han
The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.
2 code implementations • 5 Oct 2019 • Sam Ade Jacobs, Brian Van Essen, David Hysom, Jae-Seung Yeom, Tim Moon, Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Jim Gaffney, Tom Benson, Peter Robinson, Luc Peterson, Brian Spears
Training deep neural networks on large scientific data is a challenging task that requires enormous compute power, especially if no pre-trained models exist to initialize the process.
2 code implementations • 3 Oct 2019 • Rushil Anirudh, Jayaraman J. Thiagarajan, Shusen Liu, Peer-Timo Bremer, Brian K. Spears
There is significant interest in using modern neural networks for scientific applications due to their effectiveness in modeling highly complex, non-linear problems in a data-driven fashion.
1 code implementation • 25 Sep 2019 • Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer
We present function preserving projections (FPP), a scalable linear projection technique for discovering interpretable relationships in high-dimensional data.
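A minimal sketch in the spirit of FPP, under an assumed objective (not the released code): jointly learn a 2D linear projection P and a small regressor g so that g(Px) approximates the function value y, which makes the resulting 2D scatter plot "function preserving".

```python
import torch
import torch.nn as nn

d = 20
X = torch.randn(512, d)
y = (X[:, 0] * X[:, 1]).unsqueeze(1)  # toy function of two hidden coordinates

P = nn.Linear(d, 2, bias=False)                              # the linear projection
g = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # small regressor
opt = torch.optim.Adam(list(P.parameters()) + list(g.parameters()), lr=1e-2)

for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(g(P(X)), y)  # can y be predicted from the 2D view?
    loss.backward()
    opt.step()

Z = P(X).detach()  # 2D embedding whose coordinates retain predictive power for y
print(float(loss))
```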
2 code implementations • 19 Jul 2019 • Shusen Liu, Di Wang, Dan Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, Sam Ade Jacobs, Brian C. Van Essen, David Hysom, Jae-Seung Yeom, Jim Gaffney, Luc Peterson, Peter B. Robinson, Harsh Bhatia, Valerio Pascucci, Brian K. Spears, Peer-Timo Bremer
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization.
no code implementations • 6 Jul 2019 • Shusen Liu, Bhavya Kailkhura, Donald Loveland, Yong Han
In this work, we propose an introspection technique for deep neural networks that relies on a generative model to instigate salient editing of the input image for model interpretation.
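A hedged sketch of the general recipe, assuming access to some pretrained generator G and classifier C (both hypothetical placeholders here): nudge a latent code so the generated image changes the classifier's output; the pixel difference then marks what had to change, i.e., the salient content.

```python
import torch

def salient_edit(G, C, z0, target_class, steps=100, lr=0.05):
    """Optimize a latent code toward a target class; return the pixel-wise edit."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = G(z)
        loss = torch.nn.functional.cross_entropy(
            C(img), torch.tensor([target_class])
        ) + 0.1 * (z - z0).pow(2).sum()  # stay close to the original latent
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (G(z) - G(z0)).abs()  # saliency map: what the edit changed
```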
no code implementations • EMNLP 2018 • Shusen Liu, Tao Li, Zhimin Li, Vivek Srikumar, Valerio Pascucci, Peer-Timo Bremer
Neural network models have gained unprecedented popularity in natural language processing due to their state-of-the-art performance and the flexible end-to-end training scheme.
1 code implementation • 2 Jul 2018 • Shusen Liu, Yi-Nan Li, Runyao Duan
We program these two schemes on the ibmqx4, a 5-qubit superconducting quantum processor available via the IBM cloud, with the help of the QSI modules [S. Liu et al., arXiv:1710.09500, 2017].
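The listing does not detail the two schemes, so the following is only a generic sketch of how a small circuit is built and executed with Qiskit today (the paper itself used the QSI modules and the 5-qubit ibmqx4 device; the circuit below is an assumed stand-in, run on a simulator).

```python
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # prepare a superposition on qubit 0
qc.cx(0, 1)                  # entangle the two qubits
qc.measure([0, 1], [0, 1])   # read out both qubits

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
print(counts)                # on real hardware, replace the simulator with an IBM backend
```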
no code implementations • 19 Dec 2017 • Jayaraman J. Thiagarajan, Shusen Liu, Karthikeyan Natesan Ramamurthy, Peer-Timo Bremer
Furthermore, we introduce a new approach to discover a diverse set of high-quality linear projections and show that, in practice, the information of $k$ linear projections is often jointly encoded in $\sim k$ axis-aligned plots.
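As a loose illustration of the intuition (not the paper's decomposition algorithm): each linear projection is often dominated by a handful of original axes, which is why a few axis-aligned scatter plots can carry most of its information.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(size=(3, 10))  # 3 hypothetical linear projections over 10 axes

for i, w in enumerate(proj):
    top = np.argsort(np.abs(w))[::-1][:2]  # the two most heavily loaded axes
    print(f"projection {i}: dominated by axes {sorted(top.tolist())}")
```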