no code implementations • LREC 2022 • Liang Zhao, Eleanor Chodroff
In the present paper, we introduce the ManDi Corpus, a spoken corpus of regional Mandarin dialects and Standard Mandarin.
no code implementations • 20 Sep 2023 • Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, HongYu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi
This paper presents DreamLLM, a learning framework that first achieves versatile Multimodal Large Language Models (MLLMs) empowered with frequently overlooked synergy between multimodal comprehension and creation.
no code implementations • 7 Sep 2023 • Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao
Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text, typically in the form of (subject, relation, object) triples.
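To make the (subject, relation, object) output format concrete, here is a minimal pattern-based sketch — a toy illustration only, not the paper's extraction method; the relation list is an invented assumption.

```python
# Toy illustration of the (subject, relation, object) triple format used in
# Open Information Extraction. NOT the paper's method -- just a minimal
# pattern-based sketch for simple sentences, with an assumed relation list.
import re

def extract_triple(sentence, relations=("founded", "acquired", "located in")):
    """Return the first (subject, relation, object) triple found, else None."""
    for rel in relations:
        m = re.search(rf"^(.*?)\s+{rel}\s+(.*?)[.]?$", sentence)
        if m:
            return (m.group(1).strip(), rel, m.group(2).strip())
    return None

print(extract_triple("Larry Page founded Google"))
# -> ('Larry Page', 'founded', 'Google')
```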
no code implementations • 6 Sep 2023 • Junruo Gao, Chen Ling, Carl Yang, Liang Zhao
Online health communities (OHCs) are forums where patients with similar conditions communicate their experiences and provide moral support.
no code implementations • 30 Aug 2023 • Yangkun Chen, Joseph Suarez, Junjie Zhang, Chenghui Yu, Bo Wu, HanMo Chen, Hengman Zhu, Rui Du, Shanliang Qian, Shuai Liu, Weijun Hong, Jinke He, Yibing Zhang, Liang Zhao, Clare Zhu, Julian Togelius, Sharada Mohanty, Jiaxin Chen, Xiu Li, Xiaolong Zhu, Phillip Isola
We present the results of the second Neural MMO challenge, hosted at IJCAI 2022, which received 1600+ submissions.
no code implementations • 30 Aug 2023 • Muhammad Hamza, Ammar Hawbani, Sami Ul Rehman, Xingfu Wang, Liang Zhao
In particular, we propose a Residual Feature Attention Block (RFAB), containing the channel attention, pixel attention, and residual learning mechanism with long and short skip connections.
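A hedged numpy sketch of the attention ingredients named above: channel attention rescales each feature channel, pixel attention rescales each spatial location, and a long skip adds the block input back. The pooling statistics and fixed (unlearned) weights here are illustrative assumptions, not the RFAB design itself.

```python
# Hedged sketch of channel attention + pixel attention + a residual long skip,
# the three mechanisms named for RFAB. Weights are fixed toy statistics rather
# than learned parameters; shapes are (channels, height, width).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rfab_like(x):
    """x: feature map of shape (channels, height, width)."""
    ca = sigmoid(x.mean(axis=(1, 2)))[:, None, None]   # channel attention
    y = x * ca                                         # rescale channels
    pa = sigmoid(y.mean(axis=0))[None, :, :]           # pixel attention
    return x + y * pa                                  # residual (long skip)

x = np.random.default_rng(0).normal(size=(4, 6, 6))
out = rfab_like(x)
print(out.shape)   # (4, 6, 6)
```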
no code implementations • 25 Aug 2023 • Guangji Bai, Ziyang Yu, Zheng Chai, Yue Cheng, Liang Zhao
It utilizes an offline memory to cache historical information (e.g., node embedding) as an affordable approximation of the exact value and achieves high concurrency.
no code implementations • 18 Jul 2023 • Liang Zhao, En Yu, Zheng Ge, Jinrong Yang, Haoran Wei, HongYu Zhou, Jianjian Sun, Yuang Peng, Runpei Dong, Chunrui Han, Xiangyu Zhang
Based on precise referring instruction, we propose ChatSpot, a unified end-to-end multimodal large language model that supports diverse forms of interactivity including mouse clicks, drag-and-drop, and drawing boxes, which provides a more flexible and seamless interactive experience.
1 code implementation • 8 Jul 2023 • Tong Steven Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, Sungsoo Ray Hong
To mitigate the gap, we designed DeepFuse, the first interactive design that realizes the direct feedback loop between a user and CNNs in diagnosing and revising CNN's vulnerability using local explanations.
no code implementations • 28 Jun 2023 • Yiwen Shi, Ping Ren, Jing Wang, Biao Han, Taha ValizadehAslani, Felix Agbavor, Yi Zhang, Meng Hu, Liang Zhao, Hualou Liang
Specifically, we propose a three-turn iterative prompting approach to food effect summarization in which the keyword-focused and length-controlled prompts are respectively provided in consecutive turns to refine the quality of the generated summary.
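A schematic sketch of the three-turn iterative prompting loop: each turn feeds the previous summary back with a refinement instruction. `call_model` is a stand-in stub (a real system would query an LLM), and the keyword and length instructions are illustrative guesses, not the paper's prompts.

```python
# Schematic three-turn iterative prompting loop. `call_model` is a stub, not
# a real LLM API; the refinement instructions are illustrative assumptions.
def call_model(prompt):
    # Stub: a real system would send `prompt` to a language model here.
    return f"[summary conditioned on: {prompt[:40]}...]"

def iterative_summarize(document):
    summary = call_model(f"Summarize the food effect section:\n{document}")
    # Turn 2: keyword-focused refinement (keywords are hypothetical).
    summary = call_model(f"Revise to cover 'fasted'/'fed' findings:\n{summary}")
    # Turn 3: length-controlled refinement.
    summary = call_model(f"Shorten to under 75 words:\n{summary}")
    return summary

print(iterative_summarize("Study 12 assessed dosing with a high-fat meal..."))
```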
no code implementations • 7 Jun 2023 • Hejie Cui, Jiaying Lu, Shiyu Wang, Ran Xu, Wenjing Ma, Shaojun Yu, Yue Yu, Xuan Kan, Chen Ling, Liang Zhao, Joyce Ho, Fei Wang, Carl Yang
Healthcare knowledge graphs (HKGs) have emerged as a promising tool for organizing medical knowledge in a structured and interpretable way, which provides a comprehensive view of medical concepts and their relationships.
no code implementations • 6 Jun 2023 • Jiang Liu, Hao Fei, Fei Li, Jingye Li, Bobo Li, Liang Zhao, Chong Teng, Donghong Ji
Few-shot named entity recognition (NER) exploits limited annotated instances to identify named mentions.
1 code implementation • 3 Jun 2023 • Han Yi Chiu, Liang Zhao, Anqi Wu
However, traditional approaches for integrating FC and SC overlook the dynamical variations, which risks over-generalizing the brain neural network.
no code implementations • 30 May 2023 • Yun Li, Dazhou Yu, Zhenke Liu, Minxing Zhang, Xiaoyun Gong, Liang Zhao
Graph neural networks (GNNs) have emerged as a powerful tool for modeling and understanding data with interdependencies, such as spatial and temporal dependencies.
no code implementations • 30 May 2023 • Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Liang Zhao
In this article, we present a comprehensive survey on domain specification techniques for large language models, an emerging direction critical for large language model applications.
no code implementations • 19 May 2023 • Shiyu Wang, Guangji Bai, Qingyang Zhu, Zhaohui Qin, Liang Zhao
As a result, domain generalization graph transformation that predicts graphs not available in the training data is under-explored, with multiple key challenges to be addressed, including (1) the extreme space complexity when training on all input-output mode combinations, (2) differences in graph topology between the input and output modes, and (3) how to generalize the model to (unseen) target domains that are not in the training data.
1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
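The classic baseline for this formulation is greedy seed selection with Monte-Carlo estimation of spread under the independent cascade model. The sketch below illustrates that baseline — not the paper's probabilistic method — on an invented toy graph with assumed activation probability and budget.

```python
# Greedy influence maximization with Monte-Carlo independent-cascade spread
# estimation (the classic baseline, not the paper's method). Graph,
# activation probability p, and budget k are illustrative assumptions.
import random

def simulate_cascade(graph, seeds, p=0.3, rng=None):
    """One independent-cascade run; returns the set of influenced nodes."""
    rng = rng or random
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in graph.get(node, []):
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return active

def greedy_im(graph, k, p=0.3, trials=200, seed=0):
    """Greedily pick k seed nodes maximizing estimated expected spread."""
    rng = random.Random(seed)
    chosen = set()
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in graph:
            if cand in chosen:
                continue
            spread = sum(len(simulate_cascade(graph, chosen | {cand}, p, rng))
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best, best_spread = cand, spread
        chosen.add(best)
    return chosen

graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_im(graph, k=2))
```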
no code implementations • 27 Apr 2023 • Jianshen Zhu, Naveed Ahmed Azam, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
A novel framework for designing the molecular structure of chemical compounds with a desired chemical property has recently been proposed.
1 code implementation • 25 Mar 2023 • Xiaoxiao He, Chaowei Tan, Bo Liu, Liping Si, Weiwu Yao, Liang Zhao, Di Liu, Qilong Zhangli, Qi Chang, Kang Li, Dimitris N. Metaxas
The supervised learning of the proposed method extracts features from limited labeled data in each client, while the unsupervised data is used to distill both feature and response-based knowledge from a national data repository to further improve the accuracy of the collaborative model and reduce the communication cost.
no code implementations • 4 Feb 2023 • Tanmoy Chowdhury, Chen Ling, Xuchao Zhang, Xujiang Zhao, Guangji Bai, Jian Pei, Haifeng Chen, Liang Zhao
Knowledge-enhanced neural machine reasoning has garnered significant attention as a cutting-edge yet challenging research area with numerous practical applications.
1 code implementation • 26 Dec 2022 • Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao
Specifically, we innovatively propose to store the part of the image most important to the tasks in episodic memory by saliency map extraction and memory encoding.
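The storage idea above can be sketched in a few lines: keep only the top-k pixels of a saliency map and discard the rest before writing to episodic memory. This is a toy numpy illustration with synthetic (not model-derived) saliency values, not the paper's extraction or encoding pipeline.

```python
# Toy sketch of storing only the most task-salient part of an image: keep the
# top `keep_ratio` fraction of pixels by saliency, zero the rest. Saliency
# values here are synthetic, not produced by a trained model.
import numpy as np

def compress_by_saliency(image, saliency, keep_ratio=0.25):
    """Zero all but the top `keep_ratio` fraction of salient pixels."""
    k = int(image.size * keep_ratio)
    thresh = np.sort(saliency.ravel())[-k]
    mask = saliency >= thresh
    return image * mask, mask

rng = np.random.default_rng(0)
image = rng.uniform(size=(8, 8))
saliency = rng.uniform(size=(8, 8))
stored, mask = compress_by_saliency(image, saliency, keep_ratio=0.25)
print(mask.sum())   # 16 of 64 pixels retained
```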
no code implementations • 7 Dec 2022 • Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, transparency (FaccT), and unbiasedness.
1 code implementation • 19 Nov 2022 • Chen Ling, Tanmoy Chowdhury, Junji Jiang, Junxiang Wang, Xuchao Zhang, Haifeng Chen, Liang Zhao
As the most well-known computational method of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both target and base subjects into relational graphs and forms the cognitive process of analogical reasoning by finding a corresponding subgraph (i.e., correspondence) in the target graph that is aligned with the base graph.
no code implementations • 9 Nov 2022 • Liang Zhao, Xinyuan Zhao, Hailong Ma, Xinyu Zhang, Long Zeng
We then fill the hole in the target image with the contents of the aligned image.
1 code implementation • 1 Nov 2022 • Jiang Liu, Donghong Ji, Jingye Li, Dongdong Xie, Chong Teng, Liang Zhao, Fei Li
Concretely, we construct tag representations and embed them into TREM, so that TREM can treat tag and word representations as queries/keys/values and utilize self-attention to model their relationships.
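The self-attention computation referenced above — stacked tag and word representations attending to one another as queries/keys/values — can be sketched in plain numpy. Shapes and values are toy assumptions; the real TREM uses learned projections.

```python
# Minimal scaled dot-product self-attention over stacked tag + word rows,
# using the rows directly as queries/keys/values (a real module would apply
# learned projections). Dimensions and inputs are toy assumptions.
import numpy as np

def self_attention(H):
    """Scaled dot-product self-attention over row vectors in H (n, d)."""
    d = H.shape[-1]
    scores = H @ H.T / np.sqrt(d)              # pairwise query-key scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ H                         # values mixed by attention

rng = np.random.default_rng(1)
words = rng.normal(size=(4, 8))   # 4 word representations
tags = rng.normal(size=(2, 8))    # 2 tag representations
out = self_attention(np.vstack([tags, words]))
print(out.shape)   # (6, 8): every tag/word row attends to all others
```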
1 code implementation • 3 Oct 2022 • Dazhou Yu, Guangji Bai, Yun Li, Liang Zhao
Spatial domain generalization is a spatial extension of domain generalization, which can generalize to unseen spatial domains in continuous 2D space.
no code implementations • 1 Oct 2022 • Shiyu Wang, Xiaojie Guo, Xuanyang Lin, Bo Pan, Yuanqi Du, Yinkai Wang, Yanfang Ye, Ashley Ann Petersen, Austin Leitgeb, Saleh AlKhalifa, Kevin Minbiole, William Wuest, Amarda Shehu, Liang Zhao
Developing deep generative models has been an emerging field due to the ability to model and generate complex data for various purposes, such as image synthesis and molecular design.
1 code implementation • 13 Sep 2022 • Jianshen Zhu, Naveed Ahmed Azam, Shengjuan Cao, Ryota Ido, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
A set of graph theoretical descriptors in the feature function plays a key role to derive a compact formulation of such an MILP.
1 code implementation • COLING 2022 • Hu Cao, Jingye Li, Fangfang Su, Fei Li, Hao Fei, Shengqiong Wu, Bobo Li, Liang Zhao, Donghong Ji
Event extraction (EE) is an essential task of information extraction, which aims to extract structured event information from unstructured text.
no code implementations • 22 Jul 2022 • Taha ValizadehAslani, Yiwen Shi, Jing Wang, Ping Ren, Yi Zhang, Meng Hu, Liang Zhao, Hualou Liang
Owing to this paucity of samples, learning on the tail classes is especially challenging for fine-tuning when transferring a pretrained model to a downstream task.
no code implementations • 19 Jul 2022 • Shiyu Wang, Yuanqi Du, Xiaojie Guo, Bo Pan, Zhaohui Qin, Liang Zhao
Finally, the promising future directions of controllable deep data generation are highlighted and five potential challenges are identified.
1 code implementation • 3 Jul 2022 • Guangji Bai, Liang Zhao
Specifically, we propose to model the task relation as the similarity between task input gradients, with a theoretical analysis of their equivalency.
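The core quantity — similarity between the gradients two tasks induce on shared parameters — is easy to illustrate in numpy. The linear model, losses, and data below are invented for demonstration; closely related tasks should yield near-parallel gradients, unrelated tasks should not.

```python
# Measuring task relatedness as cosine similarity between the loss gradients
# two tasks induce on shared parameters. Model, data, and losses are toy
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                    # shared parameters
X = rng.normal(size=(20, 3))

y_a = X @ np.array([1.0, -2.0, 0.5])      # task A targets
y_b = y_a + 0.1 * rng.normal(size=20)     # task B: closely related to A
y_c = rng.normal(size=20)                 # task C: unrelated

def grad_mse(w, X, y):
    return 2 * X.T @ (X @ w - y) / len(y)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_related = cosine(grad_mse(w, X, y_a), grad_mse(w, X, y_b))
sim_unrelated = cosine(grad_mse(w, X, y_a), grad_mse(w, X, y_c))
print(sim_related, sim_unrelated)   # related tasks yield higher similarity
```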
1 code implementation • 27 Jun 2022 • Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong, Liang Zhao
Despite the fast progress of explanation techniques in modern Deep Neural Networks (DNNs) where the main focus is handling "how to generate the explanations", advanced research questions that examine the quality of the explanation itself (e.g., "whether the explanations are accurate") and improve the explanation quality (e.g., "how to adjust the model to generate more accurate explanations when explanations are inaccurate") are still relatively under-explored.
1 code implementation • 24 Jun 2022 • Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao
Different from most traditional source localization methods, this paper focuses on a probabilistic manner to account for the uncertainty of different candidate sources.
1 code implementation • 18 Jun 2022 • Junxiang Wang, Junji Jiang, Liang Zhao
This paper aims to establish a generic framework of invertible graph diffusion models for source localization on graphs, namely Invertible Validity-aware Graph Diffusion (IVGD), to handle major challenges including 1) the difficulty of leveraging knowledge in graph diffusion models to model their inverse processes in an end-to-end fashion, 2) the difficulty of ensuring the validity of the inferred sources, and 3) efficiency and scalability in source inference.
no code implementations • 31 May 2022 • Zheng Chai, Guangji Bai, Liang Zhao, Yue Cheng
Traditional sampling-based methods accelerate GNN training by dropping edges and nodes, which impairs the graph integrity and model performance.
1 code implementation • 21 May 2022 • Guangji Bai, Chen Ling, Liang Zhao
Temporal domain generalization is a promising yet extremely challenging area where the goal is to learn models under temporally changing data distributions and generalize to unseen data distributions following the trends of the change.
no code implementations • 14 Apr 2022 • Xinyu Wang, Liang Zhao, Ning Zhang, Liu Feng, Haibo Lin
To the best of our knowledge, this is the first paper to apply Ricci curvature to forecasting the systemic stability of the domestic stock market; our results show that Ricci curvature has good explanatory power for market stability and can serve as a useful indicator of the domestic market's future risk and volatility.
no code implementations • 31 Mar 2022 • Liang Zhao, Yao Teng, LiMin Wang
Real-world data exhibiting skewed distributions pose a serious challenge to existing object detectors.
1 code implementation • CVPR 2022 • Liang Zhao, LiMin Wang
To address this issue, in this paper, we propose Task-specific Inconsistency Alignment (TIA), by developing a new alignment mechanism in separate task spaces, improving the performance of the detector on both subtasks.
no code implementations • 28 Feb 2022 • Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, Liang Zhao
Spatiotemporal graph represents a crucial data structure where the nodes and edges are embedded in a geometric space and can evolve dynamically over time.
no code implementations • 28 Feb 2022 • Yuanqi Du, Xiaojie Guo, Amarda Shehu, Liang Zhao
Recent advances in deep graph generative models treat molecule design as graph generation problems which provide new opportunities toward the breakthrough of this long-lasting problem.
no code implementations • 18 Feb 2022 • Mingxuan Ju, Yujie Fan, Yanfang Ye, Liang Zhao
Graph Neural Networks (GNNs) have drawn significant attention over the years and have been broadly applied to vital fields that require high security standards, such as product recommendation and traffic forecasting.
1 code implementation • 6 Feb 2022 • Yuyang Gao, Tong Sun, Liang Zhao, Sungsoo Hong
We propose a novel framework of Interactive Attention Alignment (IAA) that aims at realizing human-steerable Deep Neural Networks (DNNs).
1 code implementation • 28 Jan 2022 • Shiyu Wang, Xiaojie Guo, Liang Zhao
To address them, this paper proposes Periodical-Graph Disentangled Variational Auto-encoder (PGD-VAE), a new deep generative model for periodic graphs that can automatically learn, disentangle, and generate local and global graph patterns.
1 code implementation • 6 Jan 2022 • Yuanpeng Li, Joel Hestness, Mohamed Elhoseiny, Liang Zhao, Kenneth Church
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions.
no code implementations • 23 Dec 2021 • Junxiang Wang, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao
During the past several years, a surge of multi-lingual Pre-trained Language Models (PLMs) has been proposed to achieve state-of-the-art performance in many cross-lingual downstream tasks.
1 code implementation • 22 Dec 2021 • Junxiang Wang, Hongyi Li, Liang Zhao
As a well-known optimization framework, the Alternating Direction Method of Multipliers (ADMM) has achieved tremendous success in many classification and regression applications.
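The alternating-direction updates ADMM is built on can be shown on a small lasso regression: a quadratic subproblem, an l1 proximal (soft-thresholding) step, and a dual update. A textbook sketch with illustrative problem sizes, not the paper's specific application.

```python
# Textbook ADMM for lasso: min 0.5*||Ax - b||^2 + lam*||z||_1.
# Problem sizes, penalty, and rho are illustrative choices.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))
    for _ in range(iters):
        x = inv @ (Atb + rho * (z - u))        # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)   # l1 proximal step
        u = u + x - z                          # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[[0, 3]] = [2.0, -1.5]
b = A @ true_w + 0.01 * rng.normal(size=50)
w = admm_lasso(A, b)
print(np.round(w, 2))   # sparse estimate close to the true coefficients
```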
1 code implementation • 8 Dec 2021 • Mingxuan Ju, Shifu Hou, Yujie Fan, Jianan Zhao, Liang Zhao, Yanfang Ye
To solve this problem, in this paper, we propose a novel framework, namely Adaptive Kernel Graph Neural Network (AKGNN), which, for the first time, learns to adapt to the optimal graph kernel in a unified manner.
1 code implementation • 1 Dec 2021 • Liyan Xu, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao, Jinho D. Choi
We target the task of cross-lingual Machine Reading Comprehension (MRC) in the direct zero-shot setting, by incorporating syntactic features from Universal Dependencies (UD), and the key features we use are the syntactic relations within each sentence.
1 code implementation • NeurIPS 2021 • Zheng Zhang, Liang Zhao
Specifically, a provably information-lossless and roto-translation invariant representation of spatial information on networks is presented.
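A simple example of a roto-translation invariant representation of spatial positions (not the paper's construction): the matrix of pairwise node distances is unchanged when the whole point set is rotated and translated.

```python
# Pairwise node distances as a roto-translation invariant: rotating and
# translating the positions leaves the distance matrix unchanged.
# This is a generic example, not the representation proposed in the paper.
import numpy as np

def pairwise_dists(P):
    diff = P[:, None, :] - P[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 2))                  # 5 spatial node positions

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P_moved = P @ R.T + np.array([3.0, -1.0])    # rotate, then translate

print(np.allclose(pairwise_dists(P), pairwise_dists(P_moved)))  # True
```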
1 code implementation • 26 Nov 2021 • Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, Hongxia Yang
Many existing neural architecture search (NAS) solutions rely on downstream training for architecture evaluation, which takes enormous computations.
1 code implementation • 26 Oct 2021 • Yujie Fan, Mingxuan Ju, Chuxu Zhang, Liang Zhao, Yanfang Ye
To retain the heterogeneity, intra-relation aggregation is first performed over each slice of HTG to attentively aggregate information of neighbors with the same type of relation, and then inter-relation aggregation is exploited to gather information over different types of relations; to handle temporal dependencies, across-time aggregation is conducted to exchange information across different graph slices over the HTG.
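The two-level aggregation described above can be sketched plainly: first average neighbor features within each relation type (intra-relation), then combine the per-relation summaries (inter-relation). The uniform mean here stands in for the attentive aggregation the paper uses; features and relations are toy values.

```python
# Two-level neighbor aggregation sketch: intra-relation mean per relation
# type, then a uniform inter-relation combination (standing in for the
# attentive aggregation in the paper). Features/relations are toy values.
import numpy as np

neighbors = {                       # relation type -> neighbor feature rows
    "friend":   np.array([[1.0, 0.0], [3.0, 2.0]]),
    "coworker": np.array([[0.0, 4.0]]),
}

intra = {rel: feats.mean(axis=0) for rel, feats in neighbors.items()}
inter = np.mean(list(intra.values()), axis=0)   # combine relation summaries
print(intra["friend"], inter)
```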
1 code implementation • NeurIPS Workshop AI4Science 2021 • Yuanqi Du, Shiyu Wang, Xiaojie Guo, Hengning Cao, Shujie Hu, Junji Jiang, Aishwarya Varala, Abhinav Angirekula, Liang Zhao
Graph generation, which learns from known graphs and discovers novel graphs, has great potential in numerous research topics like drug design and mobility synthesis and is one of the fastest-growing domains recently due to its promise for discovering new knowledge.
no code implementations • 24 Aug 2021 • Ryota Ido, Shengjuan Cao, Jianshen Zhu, Naveed Ahmed Azam, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
For this, we introduce a new way of representing a polymer as a form of monomer and define new descriptors that feature the structure of polymers.
no code implementations • 23 Aug 2021 • Naveed Ahmed Azam, Jianshen Zhu, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
In the framework, a chemical graph with a target chemical value is inferred as a feasible solution of a mixed integer linear program that represents a prediction function and other requirements on the structure of graphs.
1 code implementation • 23 Aug 2021 • Liang Zhao, Wei Li, Ruihan Bao, Keiko Harimoto, Yunfang Wu, Xu Sun
Trading volume movement prediction is the key in a variety of financial applications.
no code implementations • 23 Jul 2021 • Osama Shahid, Seyedamin Pouriyeh, Reza M. Parizi, Quan Z. Sheng, Gautam Srivastava, Liang Zhao
Over the years, FL has become an emerging technology, especially as various data protection and privacy policies have been imposed; FL allows machine learning tasks to be performed whilst adhering to these constraints.
1 code implementation • 21 Jul 2021 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu
Deep learning's performance has been extensively recognized recently.
1 code implementation • 6 Jul 2021 • Jianshen Zhu, Naveed Ahmed Azam, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
In the framework, we first define a feature vector $f(C)$ of a chemical graph $C$ and construct an ANN that maps $x=f(C)$ to a predicted value $\eta(x)$ of a chemical property $\pi$ to $C$.
no code implementations • 11 Jun 2021 • Xinyi Wang, Haiqin Yang, Liang Zhao, Yang Mo, Jianping Shen
In contrast, in this paper we propose RefBERT to leverage the knowledge learned from the teacher, i.e., utilizing the pre-computed BERT representation of the reference sample and compressing BERT into a smaller student model.
no code implementations • 26 May 2021 • Jiajia Li, Peihua Feng, Liang Zhao, Junying Chen, Mengmeng Du, Yangyang Yu, Jian Song, Ying Wu
Our simulation results show that the increase of the IP3 noise intensity induces the depolarization-block epileptic seizures together with an increase in neuronal firing frequency.
1 code implementation • 21 May 2021 • Zhehua Mao, Liang Zhao, Shoudong Huang, Yiting Fan, Alex Pui-Wai Lee
This paper presents a novel algorithm named Direct Simultaneous Registration (DSR) that registers a collection of 3D images in a simultaneous fashion without specifying any reference image, feature extraction and matching, or information loss or reuse.
1 code implementation • 20 May 2021 • Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao
Theoretical convergence to a (quantized) stationary point of the pdADMM-G algorithm and the pdADMM-G-Q algorithm is provided with a sublinear convergence rate $o(1/k)$, where $k$ is the number of iterations.
no code implementations • 5 May 2021 • Yuyang Gao, Giorgio A. Ascoli, Liang Zhao
However, since forgetting is inevitable given bounded memory and unbounded task loads, 'how to reasonably forget' is a problem continual learning must address in order to reduce the performance gap between AIs and humans, in terms of 1) memory efficiency, 2) generalizability, and 3) robustness when dealing with noisy data.
no code implementations • 17 Mar 2021 • Boxiang Dong, Hui Wang, Aparna S. Varde, Dawei Li, Bharath K. Samanthula, Weifeng Sun, Liang Zhao
To achieve high detection accuracy on imbalanced data, we design a novel attack-sharing loss function that can effectively move the decision boundary towards the attack classes and eliminates the bias towards the majority/benign class.
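The mechanism — up-weighting minority attack classes in the loss so the decision boundary moves toward them — can be illustrated with a weighted cross-entropy. This is a generic class-weighting sketch, not the exact attack-sharing loss from the paper; the probabilities and weights are invented.

```python
# Class-weighted cross-entropy sketch: up-weighting the minority attack class
# makes its errors dominate the loss, pushing the boundary toward it.
# Illustrative stand-in for the paper's attack-sharing loss; toy values.
import numpy as np

def weighted_ce(probs, labels, class_weights):
    """Mean cross-entropy with a per-class weight on the true class."""
    w = class_weights[labels]
    return float(np.mean(-w * np.log(probs[np.arange(len(labels)), labels])))

probs = np.array([[0.9, 0.1],    # benign examples, confidently classified
                  [0.9, 0.1],
                  [0.6, 0.4]])   # attack example, misclassified as benign
labels = np.array([0, 0, 1])     # class 0 = benign, class 1 = attack

unweighted = weighted_ce(probs, labels, np.array([1.0, 1.0]))
attack_weighted = weighted_ce(probs, labels, np.array([1.0, 5.0]))
print(unweighted, attack_weighted)  # the attack error now dominates the loss
```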
no code implementations • 17 Mar 2021 • Boxiang Dong, Aparna S. Varde, Danilo Stevanovic, Jiayin Wang, Liang Zhao
In this paper, we propose an interpretable distance metric learning approach for handwritten Chinese character recognition.
no code implementations • 22 Feb 2021 • Johnny Torres, Guangji Bai, Junxiang Wang, Liang Zhao, Carmen Vaca, Cristina Abad
Multi-task learning is a framework that enforces different learning tasks to share their knowledge to improve their generalization performance.
no code implementations • 25 Jan 2021 • Liang Zhao, Hexin Cao, Yunsong Zhao
A new method for Text-to-SQL parsing, Grammar Pre-training (GP), is proposed to decode deep relations between question and database.
no code implementations • 11 Jan 2021 • Wenhao Fan, Liang Zhao, Jiayang Wang, Ye Chen, Fan Wu, Yuan'an Liu
At present, the main problem in existing research on Android malware family classification is that the extracted features are inadequate to represent the common behavioral characteristics of the malware in malicious families, and leveraging a single classifier or a static ensemble classifier limits further improvements in classification accuracy.
no code implementations • 1 Jan 2021 • Jingjing Xu, Liang Zhao, Junyang Lin, Xu Sun, Hongxia Yang
Inspired by our new finding, we explore a simple yet effective network architecture search (NAS) approach that leverages gradient correlation and gradient values to find well-performing architectures.
no code implementations • 1 Jan 2021 • Yuanpeng Li, Liang Zhao, Joel Hestness, Kenneth Church, Mohamed Elhoseiny
In this paper, we argue that gradient descent is one of the reasons that make compositionality learning hard during neural network optimization.
no code implementations • 1 Jan 2021 • Yuanpeng Li, Liang Zhao, Joel Hestness, Ka Yee Lun, Kenneth Church, Mohamed Elhoseiny
To the best of our knowledge, this is the first work to focus on the transferability of compositionality, and it is orthogonal to existing efforts of learning compositional representations in the training distribution.
no code implementations • ICLR 2021 • Xiaojie Guo, Yuanqi Du, Liang Zhao
Deep generative models have made important progress towards modeling complex, high dimensional data via learning latent representations.
no code implementations • SEMEVAL 2020 • Yili Ma, Liang Zhao, Jie Hao
In this paper, we present an approach for sentiment analysis in code-mixed language on Twitter, as defined in SemEval-2020 Task 9.
1 code implementation • 1 Nov 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: parameters in each layer of neural networks can be updated independently in parallel.
no code implementations • 15 Oct 2020 • Wenbin Zhang, Liang Zhao
In this paper, we propose a novel framework of online decision trees with fairness for data streams with possible distribution drift.
1 code implementation • 14 Oct 2020 • Wenbin Zhang, Liming Zhang, Dieter Pfoser, Liang Zhao
Extending existing deep generative models from static to dynamic graphs is a challenging task, which requires to handle the factorization of static and dynamic characteristics as well as mutual interactions among node and edge patterns.
no code implementations • 12 Oct 2020 • Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala
By bridging the synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect with improved convergence speed and test accuracy.
no code implementations • 28 Sep 2020 • Esteban Wilfredo Vilca Zuñiga, Liang Zhao
Our current results show that this approach improves the accuracy of the high-level classification algorithm based on betweenness centrality.
no code implementations • 28 Sep 2020 • Liang Zhao, Jingjing Xu, Junyang Lin, Yichang Zhang, Hongxia Yang, Xu Sun
The reasoning module is responsible for searching skeleton paths from a knowledge graph to imitate the imagination process in the human writing for semantic transfer.
1 code implementation • 21 Sep 2020 • Naveed Ahmed Azam, Jianshen Zhu, Yanming Sun, Yu Shi, Aleksandar Shurbevski, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu
In the second phase, given a target value $y^*$ of property $\pi$, a feature vector $x^*$ is inferred by solving an MILP formulated from the trained ANN so that $\psi(x^*)$ is close to $y^*$ and then a set of chemical structures $G^*$ such that $f(G^*)= x^*$ is enumerated by a graph search algorithm.
no code implementations • 20 Sep 2020 • Liming Zhang, Liang Zhao, Dieter Pfoser
Inspired by the success of deep generative neural networks for images and texts, a fast-developing research topic is deep generative models for trajectory data, which can learn expressive, explanatory models for sophisticated latent patterns.
no code implementations • 16 Sep 2020 • Esteban Vilca, Liang Zhao
Data classification is a major machine learning paradigm, which has been widely applied to solve a large number of real-world problems.
1 code implementation • 9 Sep 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao
In this paper, we analyze the reason and propose to achieve a compelling trade-off between parallelism and accuracy by a reformulation called Tunable Subnetwork Splitting Method (TSSM), which can tune the decomposition granularity of deep neural networks.
no code implementations • 19 Jul 2020 • Liang Zhao
This paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction in the big data era.
no code implementations • 13 Jul 2020 • Xiaojie Guo, Liang Zhao
Graphs are important data representations for describing objects and their relationships, which appear in a wide diversity of real-world scenarios.
1 code implementation • 9 Jun 2020 • Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, Yanfang Ye
Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning.
1 code implementation • 17 May 2020 • Liming Zhang, Liang Zhao, Shan Qin, Dieter Pfoser
The recent deep generative models for static graphs that are now being actively developed have achieved significant success in areas such as molecule design.
1 code implementation • ICLR 2020 • Yuanpeng Li, Liang Zhao, Kenneth Church, Mohamed Elhoseiny
It also shows significant improvement in the machine translation task.
1 code implementation • 23 Apr 2020 • Leonardo N. Ferreira, Didier A. Vega-Oliveros, Moshe Cotacallapa, Manoel F. Cardoso, Marcos G. Quiles, Liang Zhao, Elbert E. N. Macau
In this paper, we propose a network-based model for spatiotemporal data analysis called chronnet.
1 code implementation • 8 Apr 2020 • Xiaojie Guo, Yuanqi Du, Sivani Tadepalli, Liang Zhao, Amarda Shehu
Much scientific enquiry across disciplines is founded upon a mechanistic treatment of dynamic systems that ties form to function.
no code implementations • 22 Mar 2020 • Jingwei Song, Jun Wang, Liang Zhao, Shoudong Huang, Gamini Dissanayake
Our SLAM system can: (1) incrementally build a live model by progressively fusing new observations with vivid, accurate texture.
1 code implementation • 22 Mar 2020 • Xiaojie Guo, Liang Zhao, Cameron Nowzari, Setareh Rafatirad, Houman Homayoun, Sai Manoj Pudukotai Dinakarrao
Then, a spectral graph regularization based on our non-parametric graph Laplacian is proposed in order to learn and maintain the consistency of the predicted nodes and edges.
no code implementations • 27 Feb 2020 • Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Lingfei Wu, Charu Aggarwal, Chang-Tien Lu
Deep learning's success has been widely recognized in a variety of machine learning tasks, including image classification, audio recognition, and natural language processing.
no code implementations • 13 Feb 2020 • Fabricio A Breve, Marcos G. Quiles, Liang Zhao, Elbert E. N. Macau
Oscillators in the network representing the salient object in a given scene are phase synchronized, while no phase synchronization occurs for background objects.
no code implementations • 12 Feb 2020 • Fabricio Aparecido Breve, Liang Zhao, Marcos Gonçalves Quiles
Computer simulations show the classification accuracy of the proposed method when applied to some artificial and real-world data sets, in which we introduce increasing amounts of label noise.
no code implementations • 16 Jan 2020 • Farnaz Behnia, Ali Mirzaeian, Mohammad Sabokrou, Sai Manoj, Tinoosh Mohsenin, Khaled N. Khasawneh, Liang Zhao, Houman Homayoun, Avesta Sasan
In this paper, we propose Code-Bridged Classifier (CBC), a framework for making a Convolutional Neural Network (CNN) robust against adversarial attacks without increasing, or even by decreasing, the overall model's computational complexity.
no code implementations • 12 Dec 2019 • Liang Zhao, Brendan Odigwe, Susan Lessner, Daniel G. Clair, Firas Mussa, Homayoun Valafar
We report an object tracking algorithm that combines geometrical constraints, thresholding, and motion detection for tracking of the descending aorta and the network of major arteries that branch from the aorta including the iliac and femoral arteries.
no code implementations • 4 Dec 2019 • Liang Zhao, Yang Wang, Daxiang Dong, Hao Tian
The fixed part, capturing user invariant features, is shared by all users and is learned during offline meta learning stage.
no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Zhen Zhang, Kun Xu, Liang Zhao, Xi Peng, Yinglong Xia, Charu Aggarwal
In particular, RGE is shown to achieve (quasi-)linear scalability with respect to the number and the size of the graphs.
no code implementations • 25 Nov 2019 • Lingfei Wu, Ian En-Hsu Yen, Siyu Huo, Liang Zhao, Kun Xu, Liang Ma, Shouling Ji, Charu Aggarwal
In this paper, we present a new class of global string kernels that aims to (i) discover global properties hidden in the strings through global alignments, (ii) maintain positive-definiteness of the kernel, without introducing a diagonal dominant kernel matrix, and (iii) have a training cost linear with respect to not only the length of the string but also the number of training string samples.
no code implementations • 20 Nov 2019 • Kaiqun Fu, Taoran Ji, Liang Zhao, Chang-Tien Lu
In this paper, we propose a traffic incident duration prediction model that simultaneously predicts the impact of the traffic incidents and identifies the critical groups of temporal features via a multi-task learning framework.
no code implementations • IJCNLP 2019 • Jingjing Xu, Liang Zhao, Hanqi Yan, Qi Zeng, Yun Liang, Xu Sun
The generator learns to generate examples to attack the classifier while the classifier learns to defend these attacks.
1 code implementation • IJCNLP 2019 • Yuanpeng Li, Liang Zhao, Jian-Yu Wang, Joel Hestness
Compositional generalization is a basic mechanism in human language learning, but current neural networks lack such ability.
no code implementations • 27 Sep 2019 • Yuyang Gao, Giorgio A. Ascoli, Liang Zhao
Deep neural networks (DNNs) are known for extracting useful information from large amounts of data.
no code implementations • 25 Sep 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have attracted fast-increasing attention recently.
no code implementations • 25 Sep 2019 • Liang Zhao, Qingzhe Li, Negar Etemadyrad, Xiaojie Guo
On the other hand, graph topological evolution has historically been investigated in the graph signal processing domain, but it involves intensive labor to manually determine suitable prescribed spectral models, and it is prohibitively difficult to fit their potential combinations and compositions.
1 code implementation • 26 Aug 2019 • Xiaojie Guo, Amir Alipour-Fanid, Lingfei Wu, Hemant Purohit, Xiang Chen, Kai Zeng, Liang Zhao
At present, object recognition studies are mostly conducted in a closed lab setting, where the classes seen in the test phase have typically also appeared in the training phase.
no code implementations • 22 Aug 2019 • Yuyang Gao, Lingfei Wu, Houman Homayoun, Liang Zhao
In this paper, we first formulate the transition of user activities as a dynamic graph with multi-attributed nodes, then formalize the health stage inference task as a dynamic graph-to-sequence learning problem, and hence propose a novel dynamic graph-to-sequence neural networks architecture (DynGraph2Seq) to address all the challenges.
no code implementations • 20 Aug 2019 • Liang Zhao, Zhiyuan Ma, Yangming Zhou, Kai Wang, Shengping Liu, Ju Gao
The electronic health record is an important source for clinical research and applications, and errors inevitably occur in the data, which can lead to severe damage to both patients and hospital services.
no code implementations • 29 Jul 2019 • Hosein Mohammadi Makrani, Farnoud Farahmand, Hossein Sayadi, Sara Bondi, Sai Manoj Pudukotai Dinakarrao, Liang Zhao, Avesta Sasan, Houman Homayoun, Setareh Rafatirad
HLS tools offer a plethora of techniques to optimize designs for both area and performance, but resource usage and timing reports of HLS tools mostly deviate from the post-implementation results.
no code implementations • 20 Jun 2019 • Jingwei Song, Fang Bai, Liang Zhao, Shoudong Huang, Rong Xiong
In this paper, we propose an approach to decouple the nodes of the deformation graph in large-scale dense deformable SLAM and keep the estimation time constant.
no code implementations • 10 Jun 2019 • Victor Y. Pan, Qi Luan, John Svadlenka, Liang Zhao
Low rank approximation of a matrix (hereafter LRA) is a highly important area of Numerical Linear and Multilinear Algebra, as well as of Data Mining and Analysis.
Numerical Analysis
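The classical baseline for LRA is the truncated SVD, which by the Eckart-Young theorem gives the best rank-r approximation in the Frobenius norm; a minimal sketch (this is the textbook baseline, not the paper's sublinear-cost method):

```python
import numpy as np

def low_rank_approx(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young),
    computed via the truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
# An exactly rank-2 matrix: its rank-2 truncation recovers it to machine precision.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
A2 = low_rank_approx(A, 2)
```

The SVD costs O(mn·min(m,n)); the motivation for sublinear LRA algorithms is precisely to avoid touching all mn entries.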
1 code implementation • 31 May 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao
However, in this emerging domain several challenges remain, including 1) the lack of global convergence guarantees, 2) slow convergence towards solutions, and 3) cubic time complexity with regard to feature dimensions.
no code implementations • 10 May 2019 • Fuxun Yu, Zhuwei Qin, Chenchen Liu, Liang Zhao, Yanzhi Wang, Xiang Chen
Recently, adversarial deception has become one of the most considerable threats to deep neural networks.
no code implementations • ICLR 2019 • Xiaodong Jia, Liang Zhao, Lian Zhang, Juncai He, Jinchao Xu
We propose a new approach, known as the iterative regularized dual averaging (iRDA), to improve the efficiency of convolutional neural networks (CNN) by significantly reducing the redundancy of the model without reducing its accuracy.
no code implementations • 29 Mar 2019 • Liang Zhao, Wei Xu
In this paper we present our finding that good representations can be learned via continuous attention during the interaction between Unsupervised Learning (UL) and Reinforcement Learning (RL) modules driven by intrinsic motivation.
1 code implementation • 11 Mar 2019 • Leonardo N. Ferreira, Didier A. Vega-Oliveros, Liang Zhao, Manoel F. Cardoso, Elbert E. N. Macau
In this paper, we evaluate the possibility of using historical data from 2003 to 2017 of active fire detections (NASA's MODIS MCD14ML C6) and time series forecasting methods to estimate global fire season severity (FSS), here defined as the accumulated fire detections in a season.
Applications
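A toy version of this pipeline, accumulating detections into per-season severity totals and then forecasting with simple exponential smoothing, might look like the sketch below (the monthly counts and helper names are invented; real inputs would be the MODIS MCD14ML records):

```python
# Hypothetical monthly fire-detection counts over three fire seasons
# (one season = 12 months); the numbers are made up for illustration.
monthly = [10] * 12 + [20] * 12 + [30] * 12

def season_severity(counts, months_per_season=12):
    """Accumulate detections into per-season severity totals (the FSS)."""
    return [sum(counts[i:i + months_per_season])
            for i in range(0, len(counts), months_per_season)]

def ses_forecast(series, alpha=0.5):
    """Forecast the next value with simple exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

severities = season_severity(monthly)   # one FSS value per season
forecast = ses_forecast(severities)     # smoothed level used as next-season forecast
```

The paper compares several forecasting methods on such per-season series; exponential smoothing is just one simple representative.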
no code implementations • 28 Feb 2019 • Siyu Liao, Zhe Li, Liang Zhao, Qinru Qiu, Yanzhi Wang, Bo Yuan
Deep neural networks (DNNs), especially deep convolutional neural networks (CNNs), have emerged as a powerful technique in various machine learning applications.
no code implementations • 14 Feb 2019 • Zhiqian Chen, Gaurav Kolhe, Setareh Rafatirad, Sai Manoj P. D., Houman Homayoun, Liang Zhao, Chang-Tien Lu
Deobfuscation runtime can span a large range, from a few milliseconds to thousands of years or more, depending on the number and layouts of the ICs and camouflaged gates.
2 code implementations • 8 Feb 2019 • Leonardo N. Ferreira, Nicole C. R. Ferreira, Maria Livia L. M. Gava, Liang Zhao, Elbert E. N. Macau
In this context, functional climate networks can be constructed using a spatiotemporal climate dataset and a suitable time series distance function.
Data Analysis, Statistics and Probability; Atmospheric and Oceanic Physics
no code implementations • 5 Feb 2019 • Xuchao Zhang, Shuo Lei, Liang Zhao, Arnold P. Boedihardjo, Chang-Tien Lu
The presence of data corruption in user-generated streaming data, such as social media, motivates a new fundamental problem: learning reliable regression coefficients when features are not entirely accessible at one time.
1 code implementation • 18 Jan 2019 • Tianbing Xu, Andrew Zhang, Liang Zhao
There are two halves to RL systems: experience collection time and policy learning time.
no code implementations • NIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock
Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. 
Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. 
Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. 
Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
no code implementations • ICLR 2019 • Fuxun Yu, ChenChen Liu, Yanzhi Wang, Liang Zhao, Xiang Chen
One popular hypothesis of neural network generalization is that flat local minima of the loss surface in parameter space lead to good generalization.
no code implementations • 25 Aug 2018 • Yueyue Wang, Liang Zhao, Zhijian Song, Manning Wang
Accurate segmentation of organs at risk (OARs) plays a critical role in treatment planning for image-guided radiation treatment of head and neck cancer.
no code implementations • 19 Aug 2018 • He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, Cheng-qing Zong
This paper focuses on the language transferring task given a tiny in-domain parallel SLU corpus.
no code implementations • COLING 2018 • He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, Cheng-qing Zong
An SLU corpus is a monolingual corpus with domain/intent/slot labels.
no code implementations • 11 Jul 2018 • Juncai He, Xiaodong Jia, Jinchao Xu, Lian Zhang, Liang Zhao
Compressed Sensing using $\ell_1$ regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)?
no code implementations • 6 Jul 2018 • Xuchao Zhang, Liang Zhao, Zhiqian Chen, Chang-Tien Lu
One key issue in SPL is that the training of each instance weight depends on the other samples, so the process cannot easily be run in a distributed manner on a large-scale dataset.
no code implementations • ICML 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
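A common baseline choice of exploration policy in such off-policy settings is epsilon-greedy; a minimal sketch (the helper name and toy Q-values are ours, not the paper's learned exploration scheme):

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Behaviour policy for off-policy learning: explore uniformly with
    probability epsilon, otherwise act greedily w.r.t. the Q-values."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

random.seed(0)
q = [0.1, 0.9, 0.3]
actions = [epsilon_greedy(q, epsilon=0.1) for _ in range(1000)]  # mostly action 1
```

Because the behaviour policy differs from the greedy target policy, the collected transitions are off-policy data, which is exactly the regime where the choice of exploration policy matters.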
2 code implementations • 25 May 2018 • Xiaojie Guo, Lingfei Wu, Liang Zhao
To achieve this, we propose a novel Graph-Translation-Generative Adversarial Networks (GT-GAN) which will generate a graph translator from input to target graphs.
no code implementations • 14 Mar 2018 • Yanzhi Wang, Zheng Zhan, Jiayu Li, Jian Tang, Bo Yuan, Liang Zhao, Wujie Wen, Siyue Wang, Xue Lin
Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity.
no code implementations • 13 Mar 2018 • Tianbing Xu, Qiang Liu, Liang Zhao, Jian Peng
The performance of off-policy learning, including deep Q-learning and deep deterministic policy gradient (DDPG), critically depends on the choice of the exploration policy.
no code implementations • 6 Mar 2018 • Jingwei Song, Jun Wang, Liang Zhao, Shoudong Huang, Gamini Dissanayake
The idle CPU is used to run ORB-SLAM to provide a robust global pose.
no code implementations • CVPR 2018 • Yang Wang, Yi Yang, Zhenheng Yang, Liang Zhao, Peng Wang, Wei Xu
Especially on KITTI dataset where abundant unlabeled samples exist, our unsupervised method outperforms its counterpart trained with supervised learning.
1 code implementation • 12 Nov 2017 • Hamed Jelodar, Yongli Wang, Chi Yuan, Xia Feng, Xiahui Jiang, Yanchao Li, Liang Zhao
Topic modeling is one of the most powerful techniques in text mining for data mining, latent data discovery, and finding relationships among data and text documents.
no code implementations • 10 Nov 2017 • Zhenheng Yang, Peng Wang, Wei Xu, Liang Zhao, Ramakant Nevatia
Learning to reconstruct depths in a single image by watching unlabeled videos via deep convolutional network (DCN) is attracting significant attention in recent years.
no code implementations • 25 Oct 2017 • Filipe Alves Neto Verri, Renato Tinós, Liang Zhao
We show that the enhanced network contains more information and can be exploited to improve the performance of machine learning methods.
no code implementations • 2 Oct 2017 • Xuchao Zhang, Liang Zhao, Arnold P. Boedihardjo, Chang-Tien Lu
In today's era of big data, robust least-squares regression becomes a more challenging problem when considering the adversarial corruption along with explosive growth of datasets.
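One classical strategy for robust least-squares under adversarial corruption is to alternate an ordinary least-squares fit with hard thresholding of the largest residuals; the sketch below illustrates that idea (function name and toy data are ours, and this is not necessarily the paper's estimator):

```python
import numpy as np

def robust_lstsq(X, y, n_corrupt, iters=10):
    """Alternate between an ordinary least-squares fit on the points currently
    deemed clean and re-flagging the n_corrupt largest-residual points."""
    clean = np.arange(len(y))
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[clean], y[clean], rcond=None)
        resid = np.abs(X @ w - y)
        clean = np.argsort(resid)[: len(y) - n_corrupt]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
w_true = np.array([1.0, -2.0, 3.0])
y = X @ w_true
y[:10] += 50.0                          # adversarially corrupt 10 responses
w_hat = robust_lstsq(X, y, n_corrupt=10)
```

Once the corrupted points are correctly flagged, the fit on the remaining noiseless points recovers the true coefficients exactly.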
no code implementations • 15 Sep 2017 • Feng Chen, Baojian Zhou, Adil Alim, Liang Zhao
As a case study, we specialize SG-Pursuit to optimize a number of well-known score functions for two typical tasks, including detection of coherent dense and anomalous connected subspace clusters in real-world networks.
no code implementations • 24 May 2017 • Liang Zhao, Yang Wang, Yi Yang, Wei Xu
This paper presents two unsupervised learning layers (UL layers) for label-free video analysis: one for fully connected layers, and the other for convolutional ones.
no code implementations • 9 May 2017 • Junxiang Wang, Liang Zhao
The classic Alternating Direction Method of Multipliers (ADMM) is a popular framework to solve linear-equality constrained problems.
Optimization and Control; Social and Information Networks
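The classic ADMM framework can be illustrated on the lasso, split as min 0.5||Ax - b||^2 + lam||z||_1 subject to the linear equality x - z = 0; a minimal sketch (our own toy instance, not the paper's problem):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1  s.t.  x - z = 0."""
    n = A.shape[1]
    x, z, u = (np.zeros(n) for _ in range(3))
    AtA, Atb = A.T @ A, A.T @ b
    solve = np.linalg.inv(AtA + rho * np.eye(n))       # cache the x-update system
    for _ in range(iters):
        x = solve @ (Atb + rho * (z - u))              # x-update: ridge-like solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u += x - z                                     # dual update on x - z = 0
    return z

# With A = I the lasso solution is soft-thresholding of b, which ADMM recovers.
A = np.eye(5)
b = np.array([3.0, 0.2, -1.0, 0.05, 2.0])
z = admm_lasso(A, b, lam=0.5)
```

The three-step x/z/u iteration is the template that ADMM variants specialize for different objectives and constraints.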
no code implementations • ICML 2017 • Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, Victor Pan, Bo Yuan
Recently low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks.
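A representative LDR structure is the circulant matrix, whose matrix-vector product costs O(n log n) via the FFT instead of O(n^2); that gap is the source of the compression and speedup such structured layers exploit. A minimal sketch:

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C @ x for the circulant matrix C with first column c, computed as a
    circular convolution in O(n log n) via the FFT rather than O(n^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(1)
c = rng.standard_normal(8)
x = rng.standard_normal(8)

# Dense reference: column j of C is c cyclically shifted down by j positions.
C = np.stack([np.roll(c, j) for j in range(8)], axis=1)
y_fft, y_dense = circulant_matvec(c, x), C @ x
```

A circulant layer also stores only n parameters instead of n^2, which is the compression side of the trade-off.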
no code implementations • 3 Mar 2016 • Filipe Alves Neto Verri, Paulo Roberto Urio, Liang Zhao
Labeled vertices generate new particles that compete against rival particles for edge domination.
1 code implementation • 19 Aug 2015 • Leonardo N. Ferreira, Liang Zhao
In this paper, we propose a technique for time series clustering using community detection in complex networks.
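A stripped-down version of the idea, linking time series whose correlation distance falls below a threshold and then reading clusters off the resulting network, can be sketched as follows (connected components stand in for the paper's community detection; all names are ours):

```python
import math

def corr_distance(a, b):
    """1 - Pearson correlation, a common distance between time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1.0 - cov / (sa * sb)

def cluster(series, eps=0.5):
    """Link series whose distance is below eps, then return the connected
    component label of each node (union-find)."""
    parent = list(range(len(series)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            if corr_distance(series[i], series[j]) < eps:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(series))]

# Two rising series and two falling ones form two network communities.
labels = cluster([[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [8, 6, 4, 2]])
```

Proper community detection (e.g., modularity-based) can split a connected network further, which is where it improves on plain components.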
no code implementations • 15 Sep 2014 • Liang Zhao, Jianhua Xuan, Yue Wang
A statistical volumetric model, showing the probability map of localized prostate cancer within the host anatomical structure, has been developed from 90 optically-imaged surgical specimens.
no code implementations • 3 May 2014 • Lucas Antiqueira, Liang Zhao
Models of neural networks have proven their utility in the development of learning algorithms in computer science and in the theoretical study of brain dynamics in computational neuroscience.
no code implementations • 7 May 2013 • Thiago Christiano Silva, Liang Zhao
Out of the various high-level perspectives that can be utilized to capture semantic meaning, we utilize the dynamical features generated by a tourist walker in a networked environment.