no code implementations • NAACL (TrustNLP) 2021 • Samhita Vadrevu, Rakesh Nagi, JinJun Xiong, Wen-mei Hwu
In this paper, we use the Clique Partitioning Problem (CPP), an Integer Program (IP), to formulate ER as a graph partitioning problem and then highlight the explainable nature of this method.
no code implementations • 25 Nov 2022 • Zhixuan Zhou, Jiao Sun, Jiaxin Pei, Nanyun Peng, JinJun Xiong
Our analysis further reveals stereotypical portrayals of both male and female characters in terms of moral foundations and events.
no code implementations • 8 Nov 2022 • Omer Anjum, Alok Kamatar, Toby Liang, JinJun Xiong, Wen-mei Hwu
We propose an approach that learns from each abstract published by a potential reviewer the topics studied and the explicit context in which the reviewer studied the topics.
1 code implementation • 17 Oct 2022 • Yuhong Li, Jiajie Li, Cong Han, Pan Li, JinJun Xiong, Deming Chen
(2) Efficient proxies are not extensible to multi-modality downstream tasks.
1 code implementation • 11 Oct 2022 • Jie Huang, Kevin Chen-Chuan Chang, JinJun Xiong, Wen-mei Hwu
In this paper, we propose to measure how specific the language of pre-trained language models (PLMs) is.
1 code implementation • 9 Oct 2022 • Bhavya Bhavya, JinJun Xiong, ChengXiang Zhai
We propose a novel application of prompting Pre-trained Language Models (PLMs) to generate analogies and study how to design effective prompts for two task settings: generating a source concept analogous to a given target concept (aka Analogous Concept Generation or ACG), and generating an explanation of the similarity between a given pair of target concept and source concept (aka Analogous Explanation Generation or AEG).
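The two prompt settings above (ACG and AEG) can be sketched as simple template functions; the template wording below is illustrative only, not the paper's actual prompts:

```python
def acg_prompt(target):
    """Analogous Concept Generation: ask a PLM for a source concept
    analogous to the given target concept (hypothetical template)."""
    return f"What is a good analogy for {target}? {target} is like"

def aeg_prompt(target, source):
    """Analogous Explanation Generation: ask a PLM to explain the
    similarity between a target/source concept pair (hypothetical template)."""
    return f"Explain how {target} is similar to {source}:"
```

In practice such templates would be fed to a generative PLM, whose completion supplies the analogous concept or the explanation.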
no code implementations • 22 Jul 2022 • Yao Chen, Junhao Pan, Xinheng Liu, JinJun Xiong, Deming Chen
In this study, we propose HiKonv, a unified solution that maximizes the throughput of convolution on a given underlying processing unit with low-bitwidth quantized data inputs through novel bit-wise management and parallel computation.
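The bit-wise management idea behind packing low-bitwidth operands can be illustrated with a toy example (this is the general packed-multiplication trick, not HiKonv's actual kernel): two narrow values are packed into one wide word with enough guard bits that a single full-width multiply produces both partial products.

```python
def packed_mul(a0, a1, b, field=16):
    """Multiply two low-bitwidth values a0, a1 by b using ONE wide multiply.
    `field` guard-bit spacing must be wide enough that each product fits
    (here: unsigned operands with products < 2**field)."""
    packed = a0 | (a1 << field)          # pack a0 and a1 into one word
    prod = packed * b                    # one multiply yields both products
    mask = (1 << field) - 1
    return prod & mask, (prod >> field) & mask

p0, p1 = packed_mul(3, 5, 7)             # computes 3*7 and 5*7 together
```

Hardware multipliers with fixed full-bitwidth datapaths can thus be kept busy by quantized 4-bit or 8-bit convolution operands, which is the throughput gain the paper targets.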
no code implementations • 7 Jul 2022 • Hongkang Li, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Graph convolutional networks (GCNs) have recently achieved great empirical success in learning graph-structured data.
1 code implementation • 21 May 2022 • Jie Huang, Kerui Zhu, Kevin Chen-Chuan Chang, JinJun Xiong, Wen-mei Hwu
Experiments demonstrate that our system can extract and generate high-quality relation descriptions for explaining entity relationships.
1 code implementation • NAACL 2022 • Yong Xie, Dakuo Wang, Pin-Yu Chen, JinJun Xiong, Sijia Liu, Sanmi Koyejo
More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements.
no code implementations • 1 Apr 2022 • Zirui Xu, Fuxun Yu, JinJun Xiong, Xiang Chen
The significant success of Deep Neural Networks (DNNs) is highly promoted by the multiple sophisticated DNN libraries.
no code implementations • 21 Jan 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
no code implementations • 28 Dec 2021 • Xinheng Liu, Yao Chen, Prakhar Ganesh, Junhao Pan, JinJun Xiong, Deming Chen
Quantization for Convolutional Neural Network (CNN) has shown significant progress with the intention of reducing the cost of computation and storage with low-bitwidth data inputs.
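As a minimal sketch of the low-bitwidth quantization the entry refers to, symmetric uniform quantization maps floating-point values onto signed integers of a chosen bitwidth (a textbook scheme, not necessarily the paper's exact method):

```python
def quantize(x, bits=4):
    """Symmetric uniform quantization of a list of floats to signed
    `bits`-bit integers. Returns the integer codes and the scale factor
    needed to dequantize (v ≈ code * scale)."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 7 for 4-bit signed
    scale = max(abs(v) for v in x) / qmax
    return [round(v / scale) for v in x], scale

codes, scale = quantize([0.4, -1.0, 0.2], bits=4)
```

Lower bitwidths shrink both storage and the cost of each multiply-accumulate, which is the motivation stated in the abstract.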
1 code implementation • 14 Nov 2021 • Jie Huang, Hanyin Shao, Kevin Chen-Chuan Chang, JinJun Xiong, Wen-mei Hwu
From the composition of this phrase, machines may guess that a twin prime is a certain kind of prime, but it is still difficult to deduce exactly what "twin" stands for without additional knowledge.
no code implementations • 10 Nov 2021 • Seung Won Min, Kun Wu, Mert Hidayetoğlu, JinJun Xiong, Xiang Song, Wen-mei Hwu
With our data tiering method, we additionally provide a new data placement and access strategy to further minimize the CPU-GPU communication overhead.
no code implementations • 9 Nov 2021 • Yen-Hsiang Chang, Jianhao Pu, Wen-mei Hwu, JinJun Xiong
As society increasingly adopts machine learning (ML) and deep learning (DL) for various intelligent solutions, it becomes imperative to standardize a common set of measures for ML/DL models, built on large-scale open datasets under common development practices and resources, so that model quality and performance can be benchmarked and compared on common ground.
no code implementations • 12 Oct 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, when the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer.
no code implementations • ICLR 2022 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited.
no code implementations • 8 Sep 2021 • Zhiding Liang, Zhepeng Wang, Junhuan Yang, Lei Yang, JinJun Xiong, Yiyu Shi, Weiwen Jiang
Specifically, this paper targets quantum neural network (QNN), and proposes to learn the errors in the training phase, so that the identified QNN model can be resilient to noise.
1 code implementation • Findings (ACL) 2022 • Jie Huang, Kevin Chen-Chuan Chang, JinJun Xiong, Wen-mei Hwu
Relations between entities can be represented by different instances, e.g., a sentence containing both entities or a fact in a Knowledge Graph (KG).
2 code implementations • NeurIPS 2021 • Yuhong Li, Cong Hao, Pan Li, JinJun Xiong, Deming Chen
Such a self-supervised regression task can effectively evaluate the intrinsic power of an architecture to capture and transform the input signal patterns, and allow more sufficient usage of training samples.
Ranked #1 on Neural Architecture Search on NAS-Bench-101
1 code implementation • 16 Jun 2021 • Kaizhi Qian, Yang Zhang, Shiyu Chang, JinJun Xiong, Chuang Gan, David Cox, Mark Hasegawa-Johnson
In this paper, we propose AutoPST, which can disentangle global prosody style from speech without relying on any text transcriptions.
1 code implementation • ACL 2021 • Jie Huang, Kevin Chen-Chuan Chang, JinJun Xiong, Wen-mei Hwu
To support a fine-grained domain without relying on a matching corpus for supervision, we develop hierarchical core-fringe learning, which learns core and fringe terms jointly in a semi-supervised manner contextualized in the hierarchy of the domain.
1 code implementation • 19 May 2021 • Lecheng Zheng, JinJun Xiong, Yada Zhu, Jingrui He
We first provide a theoretical analysis showing that the vanilla contrastive learning loss easily leads to the sub-optimal solution in the presence of false negative pairs, whereas the proposed weighted loss could automatically adjust the weight based on the similarity of the learned representations to mitigate this issue.
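A sketch of a weighted contrastive loss in the spirit described above; the specific weighting function (down-weighting negatives that are highly similar to the anchor, since they are likely false negatives) is our own illustrative choice, not the paper's exact formulation:

```python
import math

def weighted_contrastive_loss(sim_pos, sim_negs, temperature=1.0):
    """InfoNCE-style loss where each negative's contribution is scaled
    by a weight derived from its similarity to the anchor. A negative
    that looks almost identical to the anchor (possible false negative)
    gets a small weight and barely contributes to the denominator."""
    weights = [1.0 - s for s in sim_negs]        # high similarity -> low weight
    pos = math.exp(sim_pos / temperature)
    neg = sum(w * math.exp(s / temperature) for w, s in zip(weights, sim_negs))
    return -math.log(pos / (pos + neg))
```

Compared with the vanilla loss (all weights equal to 1), a suspected false negative such as `sim = 0.95` is largely discounted instead of pushing the representations apart.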
1 code implementation • 29 Apr 2021 • Jiachen Li, Bowen Cheng, Rogerio Feris, JinJun Xiong, Thomas S. Huang, Wen-mei Hwu, Humphrey Shi
Current anchor-free object detectors are quite simple and effective yet lack accurate label assignment methods, which limits their potential in competing with classic anchor-based models that are supported by well-designed assignment methods based on the Intersection-over-Union (IoU) metric.
no code implementations • 25 Mar 2021 • Cong Hao, Jordan Dotzel, JinJun Xiong, Luca Benini, Zhiru Zhang, Deming Chen
Artificial intelligence (AI) technologies have dramatically advanced in recent years, resulting in revolutionary changes in people's lives.
1 code implementation • 4 Mar 2021 • Seung Won Min, Kun Wu, Sitao Huang, Mert Hidayetoğlu, JinJun Xiong, Eiman Ebrahimi, Deming Chen, Wen-mei Hwu
In this work, we propose a novel GPU-oriented data communication approach for GCN training, where GPU threads directly access sparse features in host memory through zero-copy accesses without much CPU help.
1 code implementation • 20 Jan 2021 • Seung Won Min, Kun Wu, Sitao Huang, Mert Hidayetoğlu, JinJun Xiong, Eiman Ebrahimi, Deming Chen, Wen-mei Hwu
While this process accounts for a significant portion of the training time, we find existing GNN implementations using popular deep neural network (DNN) libraries such as PyTorch are limited to a CPU-centric approach for the entire data preparation step.
1 code implementation • 1 Jan 2021 • Yuhong Li, Cong Hao, Xiaofan Zhang, JinJun Xiong, Wen-mei Hwu, Deming Chen
This raises the question of whether we can find an effective proxy search space (PS) that is only a small subset of GS to dramatically improve RandomNAS’s search efficiency while at the same time keeping a good correlation for the top-performing architectures.
no code implementations • NeurIPS 2021 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
Moreover, as the algorithm for training a sparse neural network is specified as (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned model weights in the hidden layer.
1 code implementation • 28 Dec 2020 • Carl Pearson, Kun Wu, I-Hsin Chung, JinJun Xiong, Wen-mei Hwu
MPI derived datatypes are an abstraction that simplifies handling of non-contiguous data in MPI applications.
Distributed, Parallel, and Cluster Computing
3 code implementations • 18 Dec 2020 • Weiwen Jiang, JinJun Xiong, Yiyu Shi
It is imperative to know how to design quantum circuits for accelerating neural networks.
1 code implementation • ICCV 2021 • Zhonghao Wang, Kai Wang, Mo Yu, JinJun Xiong, Wen-mei Hwu, Mark Hasegawa-Johnson, Humphrey Shi
Finally, we achieve a higher level of interpretability by imposing OCCAM on the objects represented in the induced symbolic concept space.
Ranked #3 on Visual Question Answering (VQA) on CLEVR
no code implementations • 14 Oct 2020 • Cong Hao, Yao Chen, Xiaofan Zhang, Yuhong Li, JinJun Xiong, Wen-mei Hwu, Deming Chen
High quality AI solutions require joint optimization of AI algorithms, such as deep neural networks (DNNs), and their hardware accelerators.
1 code implementation • EMNLP 2020 • Jie Huang, Zilong Wang, Kevin Chen-Chuan Chang, Wen-mei Hwu, JinJun Xiong
We introduce and study semantic capacity of terms.
1 code implementation • ECCV 2020 • Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, JinJun Xiong, Meng Wang
When the training data are maliciously tampered, the predictions of the acquired deep neural network (DNN) can be manipulated by an adversary known as the Trojan attack (or poisoning backdoor attack).
1 code implementation • 28 Jul 2020 • Mert Hidayetoglu, Carl Pearson, Vikram Sharma Mailthody, Eiman Ebrahimi, JinJun Xiong, Rakesh Nagi, Wen-mei Hwu
This paper presents GPU performance optimization and scaling results for inference models of the Sparse Deep Neural Network Challenge 2020.
no code implementations • 18 Jul 2020 • Tianchen Wang, Xiaowei Xu, JinJun Xiong, Qianjun Jia, Haiyun Yuan, Meiping Huang, Jian Zhuang, Yiyu Shi
Real-time cine magnetic resonance imaging (MRI) plays an increasingly important role in various cardiac interventions.
3 code implementations • 26 Jun 2020 • Weiwen Jiang, JinJun Xiong, Yiyu Shi
We discover that, in order to make full use of the strength of quantum representation, it is best to represent data in a neural network as either random variables or numbers in unitary matrices, such that they can be directly operated on by the basic quantum logic gates.
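The key property exploited here is that quantum gates are unitary, so applying a gate preserves the norm of the state vector holding the data. A minimal pure-Python illustration with a (real) Hadamard-like gate, purely for intuition and unrelated to the paper's actual circuit design:

```python
import math

# Hadamard gate: the canonical 2x2 unitary, H @ H^T = I
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Matrix-vector product: apply a gate to a state vector."""
    n = len(state)
    return [sum(gate[i][j] * state[j] for j in range(n)) for i in range(n)]

def is_unitary(m, tol=1e-9):
    """For a real matrix, unitary means M @ M^T == identity."""
    n = len(m)
    prod = [[sum(m[i][k] * m[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]
    return all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

state = apply(H, [1.0, 0.0])   # amplitudes after the gate; norm stays 1
```

Because unitarity preserves the squared-amplitude sum, data encoded as amplitudes can pass through a sequence of gates without renormalization.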
no code implementations • ICML 2020 • Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, JinJun Xiong
In this paper, we provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
no code implementations • ACL 2020 • Rajarshi Haldar, Lingfei Wu, JinJun Xiong, Julia Hockenmaier
The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories.
no code implementations • 6 May 2020 • Yuhong Li, Cong Hao, Xiaofan Zhang, Xinheng Liu, Yao Chen, JinJun Xiong, Wen-mei Hwu, Deming Chen
We formulate the co-search problem by fusing DNN search variables and hardware implementation variables into one solution space, and maximize both algorithm accuracy and hardware implementation quality.
no code implementations • 2 Apr 2020 • Zhonghao Wang, Yunchao Wei, Rogerio Feris, JinJun Xiong, Wen-mei Hwu, Thomas S. Huang, Humphrey Shi
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains, i.e., reducing domain shift.
1 code implementation • CVPR 2020 • Zhonghao Wang, Mo Yu, Yunchao Wei, Rogerio Feris, JinJun Xiong, Wen-mei Hwu, Thomas S. Huang, Humphrey Shi
We consider the problem of unsupervised domain adaptation for semantic segmentation by easing the domain shift between the source domain (synthetic data) and the target domain (real data) in this work.
Ranked #8 on Semantic Segmentation on DensePASS
no code implementations • 27 Feb 2020 • Jinglan Liu, Yukun Ding, JinJun Xiong, Qianjun Jia, Meiping Huang, Jian Zhuang, Bike Xie, Chun-Chen Liu, Yiyu Shi
For example, if the noise is large, leading to a significant difference between domain $X$ and domain $Y$, can we bridge $X$ and $Y$ with an intermediate domain $Z$ such that both the denoising process between $X$ and $Z$ and that between $Z$ and $Y$ are easier to learn?
no code implementations • 26 Feb 2020 • Abdul Dakkak, Cheng Li, JinJun Xiong, Wen-mei Hwu
Deep Learning (DL) innovations are being introduced at a rapid pace.
no code implementations • 19 Feb 2020 • Abdul Dakkak, Cheng Li, JinJun Xiong, Wen-mei Hwu
Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them.
no code implementations • MIDL 2019 • Yukun Ding, Jinglan Liu, Xiaowei Xu, Meiping Huang, Jian Zhuang, JinJun Xiong, Yiyu Shi
Existing selective segmentation methods, however, ignore this unique property of selective segmentation and train their DNN models by optimizing accuracy on the entire dataset.
1 code implementation • 8 Jan 2020 • Fenglei Fan, JinJun Xiong, Mengzhou Li, Ge Wang
Deep learning as represented by the artificial deep neural networks (DNNs) has achieved great success in many important areas that deal with text, images, videos, graphs, and so on.
no code implementations • 5 Dec 2019 • Ren Wang, Meng Wang, JinJun Xiong
Existing works on tensor recovery have focused on data losses and random noises.
no code implementations • 3 Dec 2019 • Zirui Xu, Zhao Yang, JinJun Xiong, Jianlei Yang, Xiang Chen
In this paper, we propose Helios, a heterogeneity-aware FL framework to tackle the straggler issue.
Distributed, Parallel, and Cluster Computing
no code implementations • 26 Nov 2019 • E. A. Huerta, Gabrielle Allen, Igor Andreoni, Javier M. Antelis, Etienne Bachelet, Bruce Berriman, Federica Bianco, Rahul Biswas, Matias Carrasco, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Maya Fishbach, Francisco Förster, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Robert Gruendl, Anushri Gupta, Roland Haas, Sarah Habib, Elise Jennings, Margaret W. G. Johnson, Erik Katsavounidis, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Zsuzsa Marka, Kenton McHenry, Jonah Miller, Claudia Moreno, Mark Neubauer, Steve Oberlin, Alexander R. Olivas, Donald Petravick, Adam Rebei, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard F. Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Leo Singer, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
Multi-messenger astrophysics is a fast-growing, interdisciplinary field that combines data, which vary in volume and speed of data processing, from many different instruments that probe the Universe using different cosmic messengers: electromagnetic waves, cosmic rays, gravitational waves and neutrinos.
no code implementations • 19 Nov 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wen-mei Hwu
MLModelScope defines abstractions for frameworks and supports a broad range of DL models and evaluation scenarios.
no code implementations • 18 Nov 2019 • Cong Hao, Yao Chen, Xinheng Liu, Atif Sarwari, Daryl Sew, Ashutosh Dhar, Bryan Wu, Dongdong Fu, JinJun Xiong, Wen-mei Hwu, Junli Gu, Deming Chen
The rapidly growing demands for powerful AI algorithms in many application domains have motivated massive investment in both high-quality deep neural network (DNN) models and high-efficiency implementations.
no code implementations • 18 Nov 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wen-mei Hwu
We show that DLBricks provides an accurate performance estimate for the DL models and reduces the benchmarking time across systems (e.g., within $95\%$ accuracy and up to $4.4\times$ benchmarking-time speedup on Amazon EC2 c5.xlarge).
no code implementations • 16 Nov 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wen-mei Hwu
An important venue for such improvement is to profile the execution of these models and characterize their performance to identify possible optimization opportunities.
no code implementations • WS 2019 • Qingkai Zeng, Mengxia Yu, Wenhao Yu, JinJun Xiong, Yiyu Shi, Meng Jiang
On a scientific concept hierarchy, a parent concept may have a few attributes, each of which has multiple values being a group of child concepts.
no code implementations • IJCNLP 2019 • Omer Anjum, Hongyu Gong, Suma Bhat, Wen-mei Hwu, JinJun Xiong
Finding the right reviewers to assess the quality of conference submissions is a time consuming process for conference organizers.
no code implementations • 25 Sep 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wen-mei Hwu
Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that researchers are hard-pressed to analyze and study them.
no code implementations • 25 Sep 2019 • Heechul Lim, Min-Soo Kim, JinJun Xiong
There is growing interest in automating the design of good neural network architectures.
2 code implementations • 20 Sep 2019 • Xiaofan Zhang, Haoming Lu, Cong Hao, Jiachen Li, Bowen Cheng, Yuhong Li, Kyle Rupnow, JinJun Xiong, Thomas Huang, Honghui Shi, Wen-mei Hwu, Deming Chen
Object detection and tracking are challenging tasks for resource-constrained embedded systems.
no code implementations • 15 Sep 2019 • Tianchen Wang, JinJun Xiong, Xiaowei Xu, Meng Jiang, Yiyu Shi, Haiyun Yuan, Meiping Huang, Jian Zhuang
Cardiac magnetic resonance imaging (MRI) is an essential tool for MRI-guided surgery and real-time intervention.
no code implementations • ICCV 2019 • Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, JinJun Xiong, Thomas Huang, Wen-mei Hwu, Honghui Shi
The multi-scale context module refers to the operations to aggregate feature responses from a large spatial extent, while the single-stage encoder-decoder structure encodes the high-level semantic information in the encoder path and recovers the boundary information in the decoder path.
no code implementations • 19 Aug 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wei Wei, Lingjie Xu, Wen-mei Hwu
Such an endeavor is challenging as the characteristics of an ML model depend on the interplay between the model, framework, system libraries, and the hardware (or the HW/SW stack).
no code implementations • WS 2019 • Tarek Sakakini, Hongyu Gong, Jong Yoon Lee, Robert Schloss, JinJun Xiong, Suma Bhat
One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science).
1 code implementation • 25 Jun 2019 • Xiaofan Zhang, Cong Hao, Haoming Lu, Jiachen Li, Yuhong Li, Yuchen Fan, Kyle Rupnow, JinJun Xiong, Thomas Huang, Honghui Shi, Wen-mei Hwu, Deming Chen
Developing artificial intelligence (AI) at the edge is always challenging, since edge devices have limited computation capability and memory resources but need to meet demanding requirements, such as real-time processing, high throughput performance, and high inference accuracy.
no code implementations • 22 Jun 2019 • Omer Anjum, Wen-mei Hwu, JinJun Xiong
Recently we decided to conduct a more thorough study based on all past papers of the International Symposium on Computer Architecture (ISCA) from 1973 to 2018, which resulted in this article.
2 code implementations • 20 May 2019 • Xiaofan Zhang, Cong Hao, Yuhong Li, Yao Chen, JinJun Xiong, Wen-mei Hwu, Deming Chen
Developing deep learning models for resource-constrained Internet-of-Things (IoT) devices is challenging, as it is difficult to achieve both good quality of results (QoR), such as DNN model inference accuracy, and quality of service (QoS), such as inference latency, throughput, and power consumption.
no code implementations • 29 Apr 2019 • Cheng Li, Abdul Dakkak, JinJun Xiong, Wen-mei Hwu
An increasingly complex and diverse collection of Machine Learning (ML) models as well as hardware/software stacks, collectively referred to as "ML artifacts", is being proposed, leading to a diverse ML landscape.
2 code implementations • 9 Apr 2019 • Cong Hao, Xiaofan Zhang, Yuhong Li, Sitao Huang, JinJun Xiong, Kyle Rupnow, Wen-mei Hwu, Deming Chen
While embedded FPGAs are attractive platforms for DNN acceleration on edge-devices due to their low latency and high energy efficiency, the scarcity of resources of edge-scale FPGA devices also makes it challenging for DNN deployment.
no code implementations • NAACL 2019 • Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, Wen-mei Hwu
Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style.
1 code implementation • ACL 2018 • Hongyu Gong, Tarek Sakakini, Suma Bhat, JinJun Xiong
This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information.
no code implementations • 15 Mar 2019 • Tianchen Wang, JinJun Xiong, Xiaowei Xu, Yiyu Shi
By introducing a parameterized canonical model to model correlated data and defining corresponding operations as required for CNN training and inference, we show that SCNN can process multiple frames of correlated images effectively, hence achieving significant speedup over existing CNN models.
1 code implementation • 5 Mar 2019 • Yukun Ding, Jinglan Liu, JinJun Xiong, Yiyu Shi
Accurately estimating uncertainties in neural network predictions is of great importance in building trusted DNNs-based models, and there is an increasing interest in providing accurate uncertainty estimation on many tasks, such as security cameras and autonomous driving vehicles.
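One common way to obtain the per-prediction uncertainties the entry discusses is to run several stochastic forward passes (e.g., Monte Carlo dropout or an ensemble) and use the spread of the outputs; the sketch below shows that generic recipe, not the evaluation method proposed in the paper:

```python
import statistics

def predictive_uncertainty(predictions):
    """Given outputs of multiple stochastic forward passes for one input,
    return the mean prediction and a simple uncertainty estimate
    (population standard deviation of the passes)."""
    mean = statistics.fmean(predictions)
    return mean, statistics.pstdev(predictions, mu=mean)

mean, unc = predictive_uncertainty([0.8, 0.82, 0.78])
```

A downstream system (a security camera, an autonomous vehicle) could then defer or escalate whenever `unc` exceeds a chosen threshold.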
no code implementations • 1 Feb 2019 • Gabrielle Allen, Igor Andreoni, Etienne Bachelet, G. Bruce Berriman, Federica B. Bianco, Rahul Biswas, Matias Carrasco Kind, Kyle Chard, Minsik Cho, Philip S. Cowperthwaite, Zachariah B. Etienne, Daniel George, Tom Gibbs, Matthew Graham, William Gropp, Anushri Gupta, Roland Haas, E. A. Huerta, Elise Jennings, Daniel S. Katz, Asad Khan, Volodymyr Kindratenko, William T. C. Kramer, Xin Liu, Ashish Mahabal, Kenton McHenry, J. M. Miller, M. S. Neubauer, Steve Oberlin, Alexander R. Olivas Jr, Shawn Rosofsky, Milton Ruiz, Aaron Saxton, Bernard Schutz, Alex Schwing, Ed Seidel, Stuart L. Shapiro, Hongyu Shen, Yue Shen, Brigitta M. Sipőcz, Lunan Sun, John Towns, Antonios Tsokaros, Wei Wei, Jack Wells, Timothy J. Williams, JinJun Xiong, Zhizhen Zhao
We discuss key aspects to realize this endeavor, namely (i) the design and exploitation of scalable and computationally efficient AI algorithms for Multi-Messenger Astrophysics; (ii) cyberinfrastructure requirements to numerically simulate astrophysical sources, and to process and interpret Multi-Messenger Astrophysics data; (iii) management of gravitational wave detections and triggers to enable electromagnetic and astro-particle follow-ups; (iv) a vision to harness future developments of machine and deep learning and cyberinfrastructure resources to cope with the scale of discovery in the Big Data Era; (v) and the need to build a community that brings domain experts together with data scientists on equal footing to maximize and accelerate discovery in the nascent field of Multi-Messenger Astrophysics.
no code implementations • 24 Nov 2018 • Abdul Dakkak, Cheng Li, JinJun Xiong, Wen-mei Hwu
Machine Learning (ML) and Deep Learning (DL) innovations are being introduced at such a rapid pace that model owners and evaluators are hard-pressed to analyze and study them.
no code implementations • 24 Nov 2018 • Abdul Dakkak, Cheng Li, Simon Garcia de Gonzalo, JinJun Xiong, Wen-mei Hwu
Deep neural networks (DNNs) have become core computation components within low latency Function as a Service (FaaS) prediction pipelines: including image recognition, object detection, natural language processing, speech synthesis, and personalized recommendation pipelines.
Distributed, Parallel, and Cluster Computing
no code implementations • 23 Nov 2018 • Bowen Cheng, Yunchao Wei, Jiahui Yu, Shiyu Chang, JinJun Xiong, Wen-mei Hwu, Thomas S. Huang, Humphrey Shi
While training on samples drawn from independent and identical distribution has been a de facto paradigm for optimizing image classification networks, humans learn new concepts in an easy-to-hard manner and on the selected examples progressively.
2 code implementations • ICCV 2019 • Khoi-Nguyen C. Mac, Dhiraj Joshi, Raymond A. Yeh, JinJun Xiong, Rogerio S. Feris, Minh N. Do
Fine-grained action detection is an important task with numerous applications in robotics and human-computer interaction.
3 code implementations • 5 Oct 2018 • Bowen Cheng, Yunchao Wei, Rogerio Feris, JinJun Xiong, Wen-mei Hwu, Thomas Huang, Humphrey Shi
In particular, DCR places a separate classification network in parallel with the localization network (base detector).
2 code implementations • 18 Sep 2018 • Carl Pearson, Abdul Dakkak, Cheng Li, Sarah Hashash, JinJun Xiong, Wen-mei Hwu
This report presents the design of the Scope infrastructure for extensible and portable benchmarking.
Performance
no code implementations • 31 Jul 2018 • Fenglei Fan, JinJun Xiong, Ge Wang
(4) To approximate the same class of functions with the same error bound, can a quantized quadratic network use fewer weights than a quantized conventional network?
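For context, a quadratic neuron replaces the single inner product of a conventional neuron with a second-order function of the input; one commonly used form (the exact parameterization in the paper may differ) is a product of two inner products plus a term in the squared inputs:

```python
def quadratic_neuron(x, wr, wg, wb, br=0.0, bg=0.0, c=0.0):
    """One illustrative quadratic-neuron form (pre-activation):
    (wr·x + br) * (wg·x + bg) + wb·(x ⊙ x) + c
    A conventional neuron computes only a single inner product wr·x + br."""
    dot = lambda w, v: sum(wi * vi for wi, vi in zip(w, v))
    return (dot(wr, x) + br) * (dot(wg, x) + bg) \
        + dot(wb, [xi * xi for xi in x]) + c

# Example: wr picks x[0], wg picks x[1], so the neuron computes x[0]*x[1]
out = quadratic_neuron([1.0, 2.0], wr=[1.0, 0.0], wg=[0.0, 1.0], wb=[0.0, 0.0])
```

Because a single quadratic neuron can already express products of inputs, networks built from them can sometimes match a target function with fewer units, which motivates the weight-count question above.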
no code implementations • ECCV 2018 • Yunchao Wei, Zhiqiang Shen, Bowen Cheng, Honghui Shi, JinJun Xiong, Jiashi Feng, Thomas Huang
This work provides a simple approach to discover tight object bounding boxes with only image-level supervision, called Tight box mining with Surrounding Segmentation Context (TS2C).
no code implementations • NeurIPS 2017 • Raymond A. Yeh, JinJun Xiong, Wen-mei W. Hwu, Minh N. Do, Alexander G. Schwing
Textual grounding is an important but challenging task for human-computer interaction, robotics and knowledge mining.
no code implementations • 23 Mar 2018 • Chuanhao Zhuge, Xinheng Liu, Xiaofan Zhang, Sudeep Gummadi, JinJun Xiong, Deming Chen
Deep Convolutional Neural Networks have become a Swiss Army knife for solving critical artificial intelligence tasks.
3 code implementations • ECCV 2018 • Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, JinJun Xiong, Thomas Huang
Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks.
no code implementations • ICLR 2019 • Yukun Ding, Jinglan Liu, JinJun Xiong, Yiyu Shi
To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks.