no code implementations • COLING 2022 • Kun Zhang, Yunqi Qiu, Yuanzhuo Wang, Long Bai, Wei Li, Xuhui Jiang, Huawei Shen, Xueqi Cheng
Complex question generation over knowledge bases (KB) aims to generate natural language questions involving multiple KB relations or functional constraints.
no code implementations • ICML 2020 • Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, Dacheng Tao
Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.
no code implementations • ICML 2020 • Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, Dacheng Tao
Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.
no code implementations • 26 Mar 2024 • Wentao Ouyang, Xiuwu Zhang, Chaofeng Guo, Shukui Ren, Yupei Sui, Kun Zhang, Jinmei Luo, Yunfeng Chen, Dongbo Xu, Xiangzheng Liu, Yanlong Du
A desired model for this problem should satisfy the following requirements: 1) Accuracy: the model should achieve fine-grained accuracy with respect to any conversion type in any display scenario.
no code implementations • 23 Mar 2024 • Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton Van Den Hengel, Kun Zhang, Javen Qinfeng Shi
This work establishes a sufficient and necessary condition characterizing the types of distribution shifts for identifiability in the context of latent additive noise models.
no code implementations • 21 Mar 2024 • Haoyue Dai, Ignavier Ng, Gongxu Luo, Peter Spirtes, Petar Stojanov, Kun Zhang
This particular test-wise deletion procedure, in which we perform CI tests on the samples without zeros for the conditioned variables, can be seamlessly integrated with existing structure learning approaches including constraint-based and greedy score-based methods, thus giving rise to a principled framework for GRNI in the presence of dropouts.
no code implementations • 21 Mar 2024 • Haoyue Dai, Ignavier Ng, Yujia Zheng, Zhengqing Gao, Kun Zhang
Local causal discovery is of great practical significance, as there are often situations where the discovery of the global causal structure is unnecessary, and the interest lies solely in a single target variable.
no code implementations • 13 Mar 2024 • Jingling Li, Zeyu Tang, Xiaoyu Liu, Peter Spirtes, Kun Zhang, Liu Leqi, Yang Liu
Large language models (LLMs) can easily generate biased and discriminative responses.
1 code implementation • NeurIPS 2023 • Hanqi Yan, Lingjing Kong, Lin Gui, Yuejie Chi, Eric Xing, Yulan He, Kun Zhang
In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task.
no code implementations • 20 Feb 2024 • Zijian Li, Ruichu Cai, Zhenhui Yang, Haiqin Huang, Guangyi Chen, Yifan Shen, Zhengming Chen, Xiangchen Song, Zhifeng Hao, Kun Zhang
To solve this problem, we propose to learn IDentifiable latEnt stAtes (IDEA) to detect when the distribution shifts occur.
1 code implementation • 20 Feb 2024 • Loka Li, Ignavier Ng, Gongxu Luo, Biwei Huang, Guangyi Chen, Tongliang Liu, Bin Gu, Kun Zhang
This discrepancy has motivated the development of federated causal discovery (FCD) approaches.
1 code implementation • 19 Feb 2024 • Loka Li, Guangyi Chen, Yusheng Su, Zhenhao Chen, Yixuan Zhang, Eric Xing, Kun Zhang
We have experimentally observed that LLMs possess the capability to understand the "confidence" in their own responses.
no code implementations • 18 Feb 2024 • Peijie Sun, Le Wu, Kun Zhang, Xiangzhi Chen, Meng Wang
Using the graph-based collaborative filtering model as our backbone and following the same data augmentation methods as the existing contrastive learning model SGL, we effectively enhance the performance of the recommendation model.
no code implementations • 15 Feb 2024 • Pengyang Shao, Chen Gao, Lei Chen, Yonghui Yang, Kun Zhang, Meng Wang
Typically, these CD algorithms assist students by inferring their abilities (i.e., their proficiency levels on various knowledge concepts).
no code implementations • 9 Feb 2024 • Yuhang Liu, Zhen Zhang, Dong Gong, Biwei Huang, Mingming Gong, Anton Van Den Hengel, Kun Zhang, Javen Qinfeng Shi
Multimodal contrastive representation learning methods have proven successful across a range of domains, partly due to their ability to generate meaningful shared representations of complex phenomena.
no code implementations • 7 Feb 2024 • Kun Zhang, Shaoan Xie, Ignavier Ng, Yujia Zheng
We show that under the sparsity constraint on the recovered graph over the latent variables and suitable sufficient change conditions on the causal influences, interestingly, one can recover the moralized graph of the underlying directed acyclic graph, and the recovered latent variables and their relations are related to the underlying causal model in a specific, nontrivial way.
no code implementations • 6 Feb 2024 • Chenxi Liu, Yongqiang Chen, Tongliang Liu, Mingming Gong, James Cheng, Bo Han, Kun Zhang
The rise of large language models (LLMs) that are trained to learn rich knowledge from the massive observations of the world, provides a new opportunity to assist with discovering high-level hidden variables from the raw observational data.
no code implementations • 2 Feb 2024 • Guang-Yuan Hao, Jiji Zhang, Biwei Huang, Hao Wang, Kun Zhang
Counterfactual reasoning is pivotal in human cognition and especially important for providing explanations and making decisions.
no code implementations • 30 Jan 2024 • Yewen Fan, Nian Si, Xiangchen Song, Kun Zhang
The metric variance comes from the randomness inherent in the training process of deep learning pipelines.
no code implementations • 25 Jan 2024 • Guangyi Chen, Yifan Shen, Zhenhao Chen, Xiangchen Song, Yuewen Sun, Weiran Yao, Xiao Liu, Kun Zhang
Identifying the underlying time-delayed latent causal processes in sequential data is vital for grasping temporal dynamics and making downstream reasoning.
no code implementations • 24 Jan 2024 • Qi Sun, Kun Huang, Xiaocui Yang, Rong Tong, Kun Zhang, Soujanya Poria
In this paper, we propose a Zero-shot Document-level Relation Triplet Extraction (ZeroDocRTE) framework, which generates labeled data by retrieval and denoising knowledge from LLMs, called GenRDK.
no code implementations • 18 Jan 2024 • Guanglin Zhou, Zhongyi Han, Shiming Chen, Biwei Huang, Liming Zhu, Tongliang Liu, Lina Yao, Kun Zhang
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
no code implementations • 17 Jan 2024 • Tian-Le Yang, Kuang-Yao Lee, Kun Zhang, Joe Suzuki
To expand this concept, we extend the notion of variables to encompass vectors and even functions, leading to the Functional Linear Non-Gaussian Acyclic Model (Func-LiNGAM).
no code implementations • 28 Dec 2023 • Xinshuai Dong, Haoyue Dai, Yewen Fan, Songyao Jin, Sathyamoorthy Rajendran, Kun Zhang
Financial data is generally time series in essence and thus suffers from three fundamental issues: the mismatch in time resolution, the time-varying property of the distribution - nonstationarity, and causal factors that are important but unknown/unobserved.
no code implementations • 22 Dec 2023 • Yuke Li, Lixiong Chen, Guangyi Chen, Ching-Yao Chan, Kun Zhang, Stefano Anzellotti, Donglai Wei
To predict a pedestrian's trajectory in a crowd accurately, one must consistently account for his or her underlying socio-temporal interactions with other pedestrians.
no code implementations • 19 Dec 2023 • Wei Chen, Zhiyi Huang, Ruichu Cai, Zhifeng Hao, Kun Zhang
Despite the emergence of numerous methods aimed at addressing this challenge, they cannot fully identify the structure in which two observed variables are influenced by one latent variable while also possibly being connected by a directed edge.
no code implementations • 18 Dec 2023 • Xinshuai Dong, Biwei Huang, Ignavier Ng, Xiangchen Song, Yujia Zheng, Songyao Jin, Roberto Legaspi, Peter Spirtes, Kun Zhang
Most existing causal discovery methods rely on the assumption of no latent confounders, limiting their applicability in solving real-life problems.
1 code implementation • 12 Dec 2023 • Zhongyi Han, Guanglin Zhou, Rundong He, Jindong Wang, Tailin Wu, Yilong Yin, Salman Khan, Lina Yao, Tongliang Liu, Kun Zhang
We further investigate its adaptability to controlled data perturbations and examine the efficacy of in-context learning as a tool to enhance its adaptation.
no code implementations • 5 Dec 2023 • Shaoan Xie, Yang Zhao, Zhisheng Xiao, Kelvin C. K. Chan, Yandong Li, Yanwu Xu, Kun Zhang, Tingbo Hou
Our extensive experiments demonstrate the superior performance of our method in terms of visual quality, identity preservation, and text control, showcasing its effectiveness in the context of text-guided subject-driven image inpainting.
1 code implementation • 8 Nov 2023 • Zijian Li, Zunhong Xu, Ruichu Cai, Zhenhui Yang, Yuguang Yan, Zhifeng Hao, Guangyi Chen, Kun Zhang
Specifically, we first formulate the data generation process from the atom level to the molecular level, where the latent space is split into SI substructures, SR substructures, and SR atom variables.
1 code implementation • 7 Nov 2023 • Enhong Liu, Joseph Suarez, Chenhui You, Bo Wu, BingCheng Chen, Jun Hu, Jiaxin Chen, Xiaolong Zhu, Clare Zhu, Julian Togelius, Sharada Mohanty, Weijun Hong, Rui Du, Yibing Zhang, Qinwen Wang, Xinhang Li, Zheng Yuan, Xiang Li, Yuejia Huang, Kun Zhang, Hanhui Yang, Shiqi Tang, Phillip Isola
In this paper, we present the results of the NeurIPS-2022 Neural MMO Challenge, which attracted 500 participants and received over 1,600 submissions.
1 code implementation • 5 Nov 2023 • Zeyu Tang, Jialu Wang, Yang Liu, Peter Spirtes, Kun Zhang
We reveal and address the frequently overlooked yet important issue of disguised procedural unfairness, namely, the potentially inadvertent alterations to the behavior of neutral (i.e., not problematic) aspects of the data generating process, and/or the lack of procedural assurance of the greatest benefit of the least advantaged individuals.
1 code implementation • NeurIPS 2023 • Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
In unsupervised causal representation learning for sequential data with time-delayed latent causal influences, strong identifiability results for the disentanglement of causally-related latent variables have been established in stationary settings by leveraging temporal structure.
no code implementations • 24 Oct 2023 • Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton Van Den Hengel, Kun Zhang, Javen Qinfeng Shi
However, this progress rests on the assumption that the causal relationships among latent causal variables adhere strictly to linear Gaussian models.
1 code implementation • NeurIPS 2023 • Zijian Li, Ruichu Cai, Guangyi Chen, Boyang Sun, Zhifeng Hao, Kun Zhang
To mitigate the need for these strict assumptions, we propose a subspace identification theory that guarantees the disentanglement of domain-invariant and domain-specific variables under less restrictive constraints regarding domain numbers and transformation properties, thereby facilitating domain adaptation by minimizing the impact of domain shifts on invariant variables.
1 code implementation • 24 Aug 2023 • Sheng Zhang, Muzammal Naseer, Guangyi Chen, Zhiqiang Shen, Salman Khan, Kun Zhang, Fahad Khan
To address this challenge, we propose the Self Structural Semantic Alignment (S^3A) framework, which extracts the structural semantic information from unlabeled data while simultaneously self-learning.
1 code implementation • ICCV 2023 • Guangyi Chen, Xiao Liu, Guangrun Wang, Kun Zhang, Philip H. S. Torr, Xiao-Ping Zhang, Yansong Tang
To bridge these gaps, in this paper, we propose Tem-Adapter, which enables the learning of temporal dynamics and complex semantics by a visual Temporal Aligner and a textual Semantic Aligner.
Ranked #1 on Video Question Answering on SUTD-TrafficQA
no code implementations • 13 Aug 2023 • Feng Xie, Biwei Huang, Zhengming Chen, Ruichu Cai, Clark Glymour, Zhi Geng, Kun Zhang
To address this, we propose a Generalized Independent Noise (GIN) condition for linear non-Gaussian acyclic causal models that incorporate latent variables, which establishes the independence between a linear combination of certain measured variables and some other measured variables.
1 code implementation • 31 Jul 2023 • Yujia Zheng, Biwei Huang, Wei Chen, Joseph Ramsey, Mingming Gong, Ruichu Cai, Shohei Shimizu, Peter Spirtes, Kun Zhang
Causal discovery aims at revealing causal relations from observational data, which is a fundamental task in science and engineering.
1 code implementation • 11 Jul 2023 • Yonghui Yang, Zhengwei Wu, Le Wu, Kun Zhang, Richang Hong, Zhiqiang Zhang, Jun Zhou, Meng Wang
Second, feature augmentation imposes the same scale noise augmentation on each node, which neglects the unique characteristics of nodes on the graph.
no code implementations • 20 Jun 2023 • Xuemei Mao, Gang Wang, Bei Peng, Jiacheng He, Kun Zhang, Song Gao
A DKF, called model fusion DKF (MFDKF), is proposed to handle non-Gaussian noise.
1 code implementation • 15 Jun 2023 • Kun Zhang, Le Wu, Guangyi Lv, Enhong Chen, Shulan Ruan, Jing Liu, Zhiqiang Zhang, Jun Zhou, Meng Wang
Then, we propose a novel Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as optimization targets.
no code implementations • 12 Jun 2023 • Shiming Chen, Wenjin Hou, Ziming Hong, Xiaohan Ding, Yibing Song, Xinge You, Tongliang Liu, Kun Zhang
After alignment, synthesized sample features from unseen classes are closer to the real sample features and enable DSP to improve existing generative ZSL methods by 8.5%, 8.0%, and 9.7% on the standard CUB, SUN, and AWA2 datasets, respectively; this significant performance improvement indicates that the evolving semantic prototype explores a virgin field in ZSL.
no code implementations • 10 Jun 2023 • Lingjing Kong, Shaoan Xie, Weiran Yao, Yujia Zheng, Guangyi Chen, Petar Stojanov, Victor Akinwande, Kun Zhang
In general, without further assumptions, the joint distribution of the features and the label is not identifiable in the target domain.
no code implementations • 9 Jun 2023 • Shaoan Xie, Biwei Huang, Bin Gu, Tongliang Liu, Kun Zhang
Traditional counterfactual inference, under Pearl's counterfactual framework, typically depends on having access to or estimating a structural causal model.
1 code implementation • CVPR 2023 • Lingjing Kong, Martin Q. Ma, Guangyi Chen, Eric P. Xing, Yuejie Chi, Louis-Philippe Morency, Kun Zhang
In this work, we formally characterize and justify existing empirical insights and provide theoretical guarantees of MAE.
no code implementations • 31 May 2023 • Ruichu Cai, Zhiyi Huang, Wei Chen, Zhifeng Hao, Kun Zhang
In light of the power of the closed-form solution to OICA corresponding to the One-Latent-Component structure, we formulate a way to estimate the mixing matrix using the higher-order cumulants, and further propose the testable One-Latent-Component condition to identify the latent variables and determine causal orders.
no code implementations • 28 May 2023 • Mugariya Farooq, Shahad Hardan, Aigerim Zhumbhayeva, Yujia Zheng, Preslav Nakov, Kun Zhang
The need for more usable and explainable machine learning models in healthcare increases the importance of developing and utilizing causal discovery algorithms, which aim to discover causal relations by analyzing observational data.
1 code implementation • 24 May 2023 • Yiwen Ding, Jiarui Liu, Zhiheng Lyu, Kun Zhang, Bernhard Schoelkopf, Zhijing Jin, Rada Mihalcea
While several previous studies have analyzed gender bias in research, we are still missing a comprehensive analysis of gender differences in the AI community, covering diverse topics and different development trends.
no code implementations • 19 May 2023 • Yujia Zheng, Ignavier Ng, Yewen Fan, Kun Zhang
A Markov network characterizes the conditional independence structure, or Markov property, among a set of random variables.
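For jointly Gaussian variables, this Markov property has a concrete algebraic face: a missing edge in the Markov network corresponds to a zero entry in the precision (inverse covariance) matrix. A small numerical illustration of this textbook fact (not the paper's estimator):

```python
import numpy as np

# Ground-truth precision matrix over (X1, X2, X3): the zeros at
# positions (0, 2) and (2, 0) encode X1 independent of X3 given X2,
# i.e. no X1-X3 edge in the Markov network.
theta = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])
cov = np.linalg.inv(theta)

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(3), cov, size=200_000)

# The empirical precision matrix recovers the (near-)zero entry,
# revealing the missing edge from data alone.
theta_hat = np.linalg.inv(np.cov(samples, rowvar=False))
print(np.round(theta_hat, 2))
```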
1 code implementation • 18 May 2023 • Qi Sun, Kun Huang, Xiaocui Yang, Pengfei Hong, Kun Zhang, Soujanya Poria
Therefore, how to select effective pseudo labels to denoise DS data is still a challenge in document-level distant relation extraction.
1 code implementation • 10 May 2023 • Jiaqi Sun, Lin Zhang, Guangyi Chen, Kun Zhang, Peng Xu, Yujiu Yang
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification.
no code implementations • 9 May 2023 • Hanqi Yan, Lin Gui, Menghan Wang, Kun Zhang, Yulan He
Explainable recommender systems can explain their recommendation decisions, enhancing user trust in the systems.
1 code implementation • CVPR 2023 • Guangyi Chen, Zhenhao Chen, Shunxing Fan, Kun Zhang
Specifically, we model the trajectory sampling as a Gaussian process and construct an acquisition function to measure the potential sampling value.
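A generic version of such a variance-based acquisition can be sketched with a zero-mean RBF Gaussian process; the kernel, length-scale, and 1-D setting here are illustrative assumptions, not the paper's model:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior_var(x_obs, x_query, noise=1e-4):
    """Posterior variance of a zero-mean RBF GP given observations at
    x_obs; high variance marks under-sampled regions."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(x_query, x_obs)
    var = np.diag(rbf(x_query, x_query) - k_star @ np.linalg.solve(K, k_star.T))
    return np.maximum(var, 0.0)

# Acquisition: the next sample is the query point with the highest
# posterior variance, i.e. the most informative location.
x_obs = np.array([0.0, 0.1, 0.2, 2.0])
x_query = np.linspace(0.0, 2.0, 201)
nxt = x_query[np.argmax(gp_posterior_var(x_obs, x_query))]
```

As expected, the selected point falls in the large gap between the clustered observations near 0.2 and the lone observation at 2.0.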
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
Causal discovery methods are intrinsically constrained by the set of assumptions needed to ensure structure identifiability.
no code implementations • 6 Apr 2023 • Francesco Montagna, Nicoletta Noceti, Lorenzo Rosasco, Kun Zhang, Francesco Locatello
This paper demonstrates how to discover the whole causal graph from the second derivative of the log-likelihood in observational non-linear additive Gaussian noise models.
no code implementations • 4 Apr 2023 • Ignavier Ng, Biwei Huang, Kun Zhang
This paper investigates in which cases continuous optimization for directed acyclic graph (DAG) structure learning can and cannot perform well and why this happens, and suggests possible directions to make the search procedure more reliable.
no code implementations • 28 Mar 2023 • Mark Whitmeyer, Kun Zhang
We revisit Popper's falsifiability criterion.
1 code implementation • 10 Mar 2023 • Jie Zhou, Xianshuai Cao, Wenhao Li, Lin Bo, Kun Zhang, Chuan Luo, Qian Yu
Multi-scenario & multi-task learning has been widely applied to many recommendation systems in industrial applications, wherein an effective and practical approach is to carry out multi-scenario transfer learning on the basis of the Mixture-of-Expert (MoE) architecture.
no code implementations • 9 Mar 2023 • Zhengmao Zhu, YuRen Liu, Honglong Tian, Yang Yu, Kun Zhang
Playing an important role in Model-Based Reinforcement Learning (MBRL), environment models aim to predict future states based on the past.
1 code implementation • 13 Feb 2023 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Defu Lian, Zhiqiang Zhang, Jun Zhou, Meng Wang
We augment imbalanced training data towards balanced data distribution to improve fairness.
no code implementations • 8 Feb 2023 • Huixin Zhan, Kun Zhang, Keyi Lu, Victor S. Sheng
In this paper, we measure the privacy leakage via studying whether graph representations can be inverted to recover the graph used to generate them via graph reconstruction attack (GRA).
no code implementations • 29 Jan 2023 • Guanglin Zhou, Shaoan Xie, GuangYuan Hao, Shiming Chen, Biwei Huang, Xiwei Xu, Chen Wang, Liming Zhu, Lina Yao, Kun Zhang
In the field of artificial intelligence (AI), the quest to understand and model data-generating processes (DGPs) is of paramount importance.
no code implementations • 25 Jan 2023 • Yijun Bian, Kun Zhang, Anqi Qiu, Nanguang Chen
Furthermore, we investigate the properties of the proposed measure and propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
1 code implementation • 25 Jan 2023 • Devansh Arpit, Matthew Fernandez, Itai Feigenbaum, Weiran Yao, Chenghao Liu, Wenzhuo Yang, Paul Josel, Shelby Heinecke, Eric Hu, Huan Wang, Stephen Hoi, Caiming Xiong, Kun Zhang, Juan Carlos Niebles
Finally, we provide a user interface (UI) that allows users to perform causal analysis on data without coding.
1 code implementation • 21 Jan 2023 • Zeyu Tang, Yatong Chen, Yang Liu, Kun Zhang
The pursuit of long-term fairness involves the interplay between decision-making and the underlying data generating process.
no code implementations • CVPR 2023 • Shaoan Xie, Yanwu Xu, Mingming Gong, Kun Zhang
In this paper, we start from a different perspective and consider the paths connecting the two domains.
1 code implementation • 24 Dec 2022 • Wenxuan Ma, Xing Yan, Kun Zhang
A tree is built upon giving the training data, whose leaf nodes represent different regions where region-specific neural networks are trained to predict both the mean and the variance for quantifying uncertainty.
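A stripped-down sketch of the idea, with a constant Gaussian fit standing in for each region-specific neural network and a depth-1 split standing in for the tree (both simplifications are mine, not the paper's design):

```python
import numpy as np

def fit_leaf(y):
    """Per-region predictor: a constant mean/variance estimate stands
    in for a region-specific neural network."""
    return y.mean(), y.var()

def build_stump(x, y):
    """Depth-1 tree: split at the median of x, fit each region."""
    split = np.median(x)
    left, right = fit_leaf(y[x <= split]), fit_leaf(y[x > split])
    def predict(q):
        mean, var = left if q <= split else right
        return mean, var  # point prediction plus uncertainty
    return predict

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 2000)
# Heteroscedastic data: much noisier on the right half.
y = np.where(x <= 1, x + rng.normal(0, 0.1, 2000),
                     x + rng.normal(0, 0.5, 2000))
predict = build_stump(x, y)
m_left, v_left = predict(0.5)
m_right, v_right = predict(1.5)  # higher predicted variance here
```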
no code implementations • CVPR 2023 • Shaoan Xie, Zhifei Zhang, Zhe Lin, Tobias Hinz, Kun Zhang
By contrast, multi-modal inpainting provides more flexible and useful controls on the inpainted content, \eg, a text prompt can be used to describe an object with richer attributes, and a mask can be used to constrain the shape of the inpainted object rather than being only considered as a missing area.
1 code implementation • 8 Nov 2022 • Yuqin Yang, AmirEmad Ghassami, Mohamed Nafea, Negar Kiyavash, Kun Zhang, Ilya Shpitser
We demonstrate a somewhat surprising connection between this problem and causal discovery in the presence of unobserved parentless causes, in the sense that there is a mapping, given by the mixing matrix, between the underlying models to be inferred in these problems.
1 code implementation • 1 Nov 2022 • Yue Yu, Xuan Kan, Hejie Cui, Ran Xu, Yujia Zheng, Xiangchen Song, Yanqiao Zhu, Kun Zhang, Razieh Nabi, Ying Guo, Chao Zhang, Carl Yang
To better adapt GNNs for fMRI analysis, we propose TBDS, an end-to-end framework based on Task-aware Brain connectivity DAG (short for Directed Acyclic Graph) Structure generation for fMRI analysis.
no code implementations • 24 Oct 2022 • Weiran Yao, Guangyi Chen, Kun Zhang
In this work, we establish the identifiability theories of nonparametric latent causal processes from their nonlinear mixtures under fixed temporal causal influences and analyze how distribution changes can further benefit the disentanglement.
no code implementations • 20 Oct 2022 • Haoyue Dai, Peter Spirtes, Kun Zhang
Causal discovery under measurement error aims to recover the causal graph among unobserved target variables from observations made with measurement error.
no code implementations • 12 Oct 2022 • Yuanyuan Wang, Wei Huang, Mingming Gong, Xi Geng, Tongliang Liu, Kun Zhang, Dacheng Tao
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
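A one-line sketch of why equally spaced samples are the natural object here (a standard observation, not the paper's proof):

```latex
\dot{x}(t) = A\,x(t) \;\Rightarrow\; x(t_{k+1}) = e^{A \Delta t}\, x(t_k),
```

so error-free observations at spacing $\Delta t$ along one trajectory determine the transition matrix $e^{A \Delta t}$ (given sufficiently many linearly independent states), and identifying $A$ itself reduces to the uniqueness of the matrix logarithm of $e^{A \Delta t}$.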
1 code implementation • 3 Oct 2022 • Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, Kun Zhang
To solve this problem, we propose to apply optimal transport to match the vision and text modalities.
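Matching two modalities with optimal transport can be sketched with a plain Sinkhorn solver over cosine-similarity costs; the toy embeddings and hyperparameters below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropic-regularized optimal transport (Sinkhorn iterations)
    between two uniform discrete distributions."""
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m
    K = np.exp(-cost / reg)
    u, v = np.ones(n) / n, np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy "vision" and "text" embeddings: text rows are a shuffled, slightly
# perturbed copy of the vision rows, so a clean matching exists.
rng = np.random.default_rng(0)
vision = rng.normal(size=(4, 8))
text = vision[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 8))
vn = vision / np.linalg.norm(vision, axis=1, keepdims=True)
tn = text / np.linalg.norm(text, axis=1, keepdims=True)
plan = sinkhorn(1.0 - vn @ tn.T)     # cost = 1 - cosine similarity
matches = plan.argmax(axis=1)        # vision i matched to text matches[i]
```

The plan concentrates its mass on the true permutation, recovering which text row corresponds to which vision row.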
no code implementations • 1 Oct 2022 • Biwei Huang, Charles Jia Han Low, Feng Xie, Clark Glymour, Kun Zhang
Most causal discovery procedures assume that there are no latent confounders in the system, which is often violated in real-world problems.
1 code implementation • 30 Aug 2022 • Zhen Zhang, Ignavier Ng, Dong Gong, Yuhang Liu, Ehsan M Abbasnejad, Mingming Gong, Kun Zhang, Javen Qinfeng Shi
Recovering underlying Directed Acyclic Graph (DAG) structures from observational data is highly challenging due to the combinatorial nature of the DAG-constrained optimization problem.
no code implementations • 30 Aug 2022 • Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Kun Zhang, Javen Qinfeng Shi
This motivates us to propose a novel method for MSDA, which learns the invariant label distribution conditional on the latent content variable, instead of learning invariant representations.
no code implementations • 30 Aug 2022 • Yuhang Liu, Zhen Zhang, Dong Gong, Mingming Gong, Biwei Huang, Anton Van Den Hengel, Kun Zhang, Javen Qinfeng Shi
The task of causal representation learning aims to uncover latent higher-level causal representations that affect lower-level observations.
no code implementations • 9 Aug 2022 • Mark Whitmeyer, Kun Zhang
When acquisition is covert, the receiver does not.
no code implementations • 30 Jun 2022 • Wenzhuo Yang, Kun Zhang, Steven C. H. Hoi
In light of the modularity property of causal systems (the causal processes that generate different variables are irrelevant modules), the original problem is divided into a series of separate, simpler, and low-dimensional anomaly detection problems, so that where an anomaly happens (the root causes) can be directly identified.
no code implementations • 20 Jun 2022 • Kun Zhang
These results help in characterizing the sender's preferred equilibria and her equilibrium payoff set in a class of verifiable disclosure games.
no code implementations • 15 Jun 2022 • Yujia Zheng, Ignavier Ng, Kun Zhang
We show that under specific instantiations of such constraints, the independent latent sources can be identified from their nonlinear mixtures up to a permutation and a component-wise transformation, thus achieving nontrivial identifiability of nonlinear ICA without auxiliary variables.
1 code implementation • 10 Jun 2022 • Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang
While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.
no code implementations • 8 Jun 2022 • Zeyu Tang, Jiji Zhang, Kun Zhang
In this paper, we review and reflect on various fairness notions previously proposed in machine learning literature, and make an attempt to draw connections to arguments in moral and political philosophy, especially theories of justice.
no code implementations • 3 Jun 2022 • Zheng-Mao Zhu, Xiong-Hui Chen, Hong-Long Tian, Kun Zhang, Yang Yu
Model-based methods have recently shown promise for offline reinforcement learning (RL), aiming to learn good policies from historical data without interacting with the environment.
no code implementations • 27 May 2022 • Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong
Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.
1 code implementation • 27 May 2022 • Erdun Gao, Ignavier Ng, Mingming Gong, Li Shen, Wei Huang, Tongliang Liu, Kun Zhang, Howard Bondell
In this paper, we develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
1 code implementation • 19 May 2022 • Yewen Fan, Nian Si, Kun Zhang
Calibration is defined as the ratio of the average predicted click rate to the true click rate.
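That definition translates directly into code; a minimal sketch with made-up numbers:

```python
import numpy as np

def calibration(predicted_ctr, clicks):
    """Calibration = average predicted click rate / true click rate.
    Values > 1 indicate over-prediction, < 1 under-prediction."""
    return predicted_ctr.mean() / clicks.mean()

preds = np.array([0.10, 0.20, 0.30, 0.40])  # predicted click probabilities
clicks = np.array([0, 0, 1, 1])             # observed click outcomes
print(calibration(preds, clicks))           # 0.25 / 0.5 = 0.5
```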
no code implementations • 18 May 2022 • Kai Zhang, Qi Liu, Zhenya Huang, Mingyue Cheng, Kun Zhang, Mengdi Zhang, Wei Wu, Enhong Chen
Existing studies in this task attach more attention to the sequence modeling of sentences while largely ignoring the rich domain-invariant semantics embedded in graph structures (i.e., the part-of-speech tags and dependency relations).
1 code implementation • 26 Apr 2022 • Jie Shuai, Kun Zhang, Le Wu, Peijie Sun, Richang Hong, Meng Wang, Yong Li
Second, while most current models suffer from limited user behaviors, can we exploit the unique self-supervised signals in the review-aware graph to guide two recommendation components better?
no code implementations • 30 Mar 2022 • Fan Feng, Biwei Huang, Kun Zhang, Sara Magliacane
Dealing with non-stationarity in environments (e. g., in the transition dynamics) and objectives (e. g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL).
no code implementations • Findings (ACL) 2022 • Kai Zhang, Kun Zhang, Mengdi Zhang, Hongke Zhao, Qi Liu, Wei Wu, Enhong Chen
Aspect-based sentiment analysis (ABSA) predicts sentiment polarity towards a specific aspect in the given sentence.
no code implementations • 29 Mar 2022 • Kun Zhang, Ben Mingbin Feng, Guangwu Liu, Shiyu Wang
The resulting sample of conditional expectations is then used to estimate different risk measures of interest.
1 code implementation • CVPR 2022 • Yanwu Xu, Shaoan Xie, Wenhao Wu, Kun Zhang, Mingming Gong, Kayhan Batmanghelich
The first one lets T compete with G to achieve maximum perturbation.
no code implementations • 24 Feb 2022 • Zeyu Tang, Kun Zhang
In particular, for prediction performed by a deterministic function of input features, we give conditions under which Equalized Odds can hold true; if the stochastic prediction is acceptable, we show that under mild assumptions, fair predictors can always be derived.
1 code implementation • ICLR 2022 • Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov
Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables.
no code implementations • 10 Feb 2022 • Mark Whitmeyer, Kun Zhang
A principal hires an agent to acquire soft information about an unknown state.
no code implementations • 10 Feb 2022 • Weiran Yao, Guangyi Chen, Kun Zhang
Specifically, the framework factorizes unknown distribution shifts into transition distribution changes caused by fixed dynamics and time-varying latent causal relations, and by global changes in observation.
no code implementations • 4 Feb 2022 • Yang Liu, Hao Cheng, Kun Zhang
When label noise transition depends on each instance, the problem of identifying the instance-dependent noise transition matrix becomes substantially more challenging.
no code implementations • ICLR 2022 • Ruibo Tu, Kun Zhang, Hedvig Kjellström, Cheng Zhang
With this criterion, we propose a novel optimal transport-based algorithm for ANMs which is robust to the choice of models and extend it to post-nonlinear models.
1 code implementation • NeurIPS 2021 • Ignavier Ng, Yujia Zheng, Jiji Zhang, Kun Zhang
Many of the causal discovery methods rely on the faithfulness assumption to guarantee asymptotic correctness.
1 code implementation • CVPR 2022 • Jiaxian Guo, Jiachen Li, Huan Fu, Mingming Gong, Kun Zhang, Dacheng Tao
Unsupervised image-to-image (I2I) translation aims to learn a domain mapping function that can preserve the semantics of the input images without paired data.
1 code implementation • CVPR 2022 • Kun Zhang, Zhendong Mao, Quan Wang, Yongdong Zhang
Image-text matching, as a fundamental task, bridges the gap between vision and language.
no code implementations • 26 Dec 2021 • Chengjun Tang, Kun Zhang, Chunfang Xing, Yong Ding, Zengmin Xu
Combined with the defensive idea of adversarial training, we use Perlin noise to train the neural network to obtain a model that can defend against procedural noise adversarial examples.
no code implementations • 13 Dec 2021 • Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, Claire Cui
Scaling language models with more data, compute and parameters has driven significant progress in natural language processing.
Ranked #10 on Language Modelling on LAMBADA
1 code implementation • NeurIPS 2021 • Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, Kun Zhang
We provide reasoning why, when the supports of the source and target data do not fully overlap, any map of $X$ that is fixed across domains may not be suitable for domain adaptation via invariant features.
no code implementations • NeurIPS 2021 • Jeffrey Adams, Niels Hansen, Kun Zhang
Existing results on the identification of the causal structure among the latent variables often require very strong graphical assumptions.
1 code implementation • 5 Nov 2021 • Zijian Li, Ruichu Cai, Tom Z. J. Fu, Zhifeng Hao, Kun Zhang
In order to address these challenges, we analyze variational conditional dependencies in time-series data and find that the causal structures are usually stable among domains, and further raise the causal conditional shift assumption.
no code implementations • 29 Oct 2021 • Kun Zhang, Ji-Feng Zhang, Rong Su, Huaguang Zhang
With the secure hierarchical structure, the relationship between the secure consensus problem and global Nash equilibrium is discussed under potential packet loss attacks, and the necessary and sufficient condition for the existence of global Nash equilibrium is provided regarding the soft-constrained graphical game.
2 code implementations • 18 Oct 2021 • Ignavier Ng, Kun Zhang
Traditionally, Bayesian network structure learning is often carried out at a central site, in which all data is gathered.
no code implementations • 12 Oct 2021 • Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang
Perceived signals in real-world scenarios are usually high-dimensional and noisy; finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks.
2 code implementations • 11 Oct 2021 • Weiran Yao, Yuewen Sun, Alex Ho, Changyin Sun, Kun Zhang
In this work, we consider both a nonparametric, nonstationary setting and a parametric setting for the latent processes and propose two provable conditions under which temporally causal latent processes can be identified from their nonlinear mixtures.
2 code implementations • ICLR 2022 • Weiran Yao, Yuewen Sun, Alex Ho, Changyin Sun, Kun Zhang
Our goal is to find time-delayed latent causal variables and identify their relations from temporal measured variables.
1 code implementation • ICCV 2021 • Shaoan Xie, Mingming Gong, Yanwu Xu, Kun Zhang
An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned, e.g., for the selfie2anime task, the anime (selfie) domain must contain only anime (selfie) face images that can be translated to some images in the other domain.
no code implementations • 10 Sep 2021 • Kai Zhang, Chao Tian, Kun Zhang, Todd Johnson, Xiaoqian Jiang
The PC algorithm is the state-of-the-art algorithm for causal structure discovery on observational data.
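For readers unfamiliar with PC, its first (skeleton) phase can be sketched: start from a complete undirected graph and delete an edge whenever a conditional-independence test succeeds. The sketch below uses a Fisher-z partial-correlation test and, for brevity, conditions on arbitrary other variables rather than only current neighbors; the simulated chain and the threshold are illustrative assumptions, not this paper's setup.

```python
import numpy as np
from itertools import combinations
from math import atanh, sqrt

def fisher_z(data, i, j, cond, n):
    """|z|-statistic for the partial correlation of columns i, j given cond."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])
    r = max(min(r, 0.9999), -0.9999)
    return abs(atanh(r)) * sqrt(n - len(cond) - 3)

def pc_skeleton(data, z_crit=3.0):
    """Skeleton phase: start complete, delete an edge once a CI test passes.
    (Full PC conditions only on current neighbors; this sketch uses all others.)"""
    n, d = data.shape
    adj = set(combinations(range(d), 2))
    for level in range(d - 1):                    # conditioning-set size
        for (i, j) in sorted(adj):
            others = [k for k in range(d) if k not in (i, j)]
            for cond in combinations(others, level):
                if fisher_z(data, i, j, cond, n) < z_crit:
                    adj.discard((i, j))           # conditional independence found
                    break
    return adj

# Linear-Gaussian chain X -> Y -> Z: the skeleton should keep X-Y and Y-Z
# and (with high probability) drop the spurious X-Z edge once conditioned on Y.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = 2 * x + rng.standard_normal(5000)
z = 2 * y + rng.standard_normal(5000)
data = np.column_stack([x, y, z])
skeleton = pc_skeleton(data)
```

A second, orientation phase (v-structures plus propagation rules) then turns the skeleton into a partially directed graph.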
1 code implementation • 8 Sep 2021 • Jiacheng He, Gang Wang, Bei Peng, Zhenyu Feng, Kun Zhang
In our study, we propose a novel concept, called generalized error entropy, which utilizes the generalized Gaussian density (GGD) function as the kernel function.
2 code implementations • NeurIPS 2021 • Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang
In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.
1 code implementation • ICCV 2021 • Shulan Ruan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, Enhong Chen
Text-to-image synthesis refers to generating an image from a given text description, the key goal of which lies in photo realism and semantic consistency.
no code implementations • 24 Aug 2021 • Yige Zhang, Weixiong Rao, Kun Zhang, Lei Chen
The HMM approaches typically assume stable mobility patterns of the underlying mobile devices.
no code implementations • 6 Aug 2021 • Kun Zhang, Guangyi Lv, Le Wu, Enhong Chen, Qi Liu, Meng Wang
In order to overcome this problem and boost the performance of attention mechanism, we propose a novel dynamic re-read attention, which can pay close attention to one small region of sentences at each step and re-read the important parts for better sentence representations.
no code implementations • 15 Jul 2021 • Andrew Moyes, Richard Gault, Kun Zhang, Ji Ming, Danny Crookes, Jing Wang
Experimental results show that the MCAE model produces feature representations that are less sensitive to inter-domain variations than the comparative StaNoSA method when tested on the novel synthetic data.
1 code implementation • 13 Jul 2021 • Yatong Chen, Zeyu Tang, Kun Zhang, Yang Liu
We provide both upper bounds for the performance gap due to the induced domain shift, as well as lower bounds for the trade-offs that a classifier has to suffer on either the source training distribution or the induced target distribution.
1 code implementation • ICLR 2022 • Biwei Huang, Fan Feng, Chaochao Lu, Sara Magliacane, Kun Zhang
We show that by explicitly leveraging this compact representation to encode changes, we can efficiently adapt the policy to the target domain, in which only a few samples are needed and further policy optimization is avoided.
no code implementations • 23 Jun 2021 • Yuehai Chen, Jing Yang, Dong Zhang, Kun Zhang, Badong Chen, Shaoyi Du
More specifically, we scan the whole input image and its priority map in the form of column vectors to obtain a relevance matrix estimating their similarity.
1 code implementation • Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation 2021 • Jie Zhao, Bojie Li, Wang Nie, Zhen Geng, Renwei Zhang, Xiong Gao, Bin Cheng, Chen Wu, Yun Cheng, Zheng Li, Peng Di, Kun Zhang, Xuefeng Jin
Existing tensor compilers have proven their effectiveness in deploying deep neural networks on general-purpose hardware like CPU and GPU, but optimizing for neural processing units (NPUs) is still challenging due to the heterogeneous compute units and complicated memory hierarchy.
no code implementations • 14 Jun 2021 • Ruichu Cai, Fengzhu Wu, Zijian Li, Pengfei Wei, Lingling Yi, Kun Zhang
Based on this assumption, we propose a disentanglement-based unsupervised domain adaptation method for the graph-structured data, which applies variational graph auto-encoders to recover these latent variables and disentangles them via three supervised learning modules.
1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
no code implementations • 9 Jun 2021 • Kun Zhang, Guangyi Lv, Meng Wang, Enhong Chen
Then, we develop a Dynamic Gaussian Attention (DGA) to dynamically capture the important parts and corresponding local contexts from a detailed perspective.
no code implementations • 7 Jun 2021 • Haiqin Yang, Xiaoyuan Yao, Yiqun Duan, Jianping Shen, Jie Zhong, Kun Zhang
More specifically, PHED deploys Conditional Variational AutoEncoder (CVAE) on Transformer to include one aspect of attributes at one stage.
no code implementations • 5 Jun 2021 • Martin Q. Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, Louis-Philippe Morency
In this paper, we propose a Conditional Contrastive Learning (CCL) approach to improve the fairness of contrastive SSL methods.
no code implementations • 1 Jun 2021 • Mengfan Liu, Pengyang Shao, Kun Zhang
Predicting student performance is a fundamental task in Intelligent Tutoring Systems (ITSs), by which we can learn about students' knowledge level and provide personalized teaching strategies for them.
no code implementations • 31 May 2021 • Shuai Wang, Kun Zhang, Le Wu, Haiping Ma, Richang Hong, Meng Wang
The teacher model is composed of a heterogeneous graph structure for warm users and items with privileged CF links.
1 code implementation • 19 May 2021 • Wentao Ouyang, Xiuwu Zhang, Shukui Ren, Li Li, Kun Zhang, Jinmei Luo, Zhaojie Liu, Yanlong Du
For existing old ads, GMEs first build a graph to connect them with new ads, and then adaptively distill useful information.
1 code implementation • 16 May 2021 • Lei Chen, Le Wu, Kun Zhang, Richang Hong, Meng Wang
Despite the performance gain of these implicit feedback based models, the recommendation results are still far from satisfactory due to the sparsity of the observed item set for each user.
1 code implementation • 27 Apr 2021 • Le Wu, Xiangnan He, Xiang Wang, Kun Zhang, Meng Wang
Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks.
no code implementations • 26 Mar 2021 • Wei Chen, Kun Zhang, Ruichu Cai, Biwei Huang, Joseph Ramsey, Zhifeng Hao, Clark Glymour
The first step of our method uses the FCI procedure, which allows confounders and is able to produce asymptotically correct results.
no code implementations • 18 Feb 2021 • Ran Li, Kun Zhang, Jin Wang
By treating black hole as the macroscopic stable state on the free energy landscape, we propose that the stochastic dynamics of the black hole phase transition can be effectively described by the Langevin equation or equivalently by the Fokker-Planck equation in phase space.
General Relativity and Quantum Cosmology
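The Langevin picture above can be illustrated generically: overdamped dynamics on a double-well free-energy landscape, integrated with the Euler-Maruyama scheme. The double-well potential and every parameter below are hypothetical stand-ins for two locally stable phases, not values from the paper.

```python
import numpy as np

def free_energy_grad(r):
    # Hypothetical double-well landscape G(r) = (r**2 - 1)**2, standing in
    # for a free energy with two locally stable macroscopic states.
    return 4 * r * (r ** 2 - 1)

def langevin(r0, steps=20000, dt=1e-3, temperature=0.05, seed=0):
    """Euler-Maruyama integration of dr = -G'(r) dt + sqrt(2 T dt) dW."""
    rng = np.random.default_rng(seed)
    r = r0
    for _ in range(steps):
        r += -free_energy_grad(r) * dt \
             + np.sqrt(2 * temperature * dt) * rng.standard_normal()
    return r

# Starting near one basin, the trajectory relaxes to (and fluctuates around)
# a minimum at r = +1 or r = -1; barrier crossings are exponentially rare at low T.
r_final = langevin(r0=0.9)
```

The corresponding Fokker-Planck equation describes how the probability density of `r` evolves under the same drift and diffusion terms.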
no code implementations • 31 Jan 2021 • Pan Xiong, Lei Tong, Kun Zhang, Xuhui Shen, Roberto Battiston, Dimitar Ouzounov, Roberto Iuppa, Danny Crookes, Cheng Long, Huiyu Zhou
Amongst the available technologies for earthquake research, remote sensing has been commonly used due to its unique features such as fast imaging and wide image-acquisition range.
no code implementations • 21 Jan 2021 • Zhenyi Zheng, Yue Zhang, Victor Lopez-Dominguez, Luis Sánchez-Tejerina, Jiacheng Shi, Xueqiang Feng, Lei Chen, Zilu Wang, Zhizhong Zhang, Kun Zhang, Bin Hong, Yong Xu, Youguang Zhang, Mario Carpentieri, Albert Fert, Giovanni Finocchio, Weisheng Zhao, Pedram Khalili Amiri
Existing methods to do so involve the application of an in-plane bias magnetic field, or incorporation of in-plane structural asymmetry in the device, both of which can be difficult to implement in practical applications.
Mesoscale and Nanoscale Physics
no code implementations • 20 Jan 2021 • Gang Qu, Li Xiao, Wenxing Hu, Kun Zhang, Vince D. Calhoun, Yu-Ping Wang
Methods: To take advantage of complementary information from multi-modal fMRI, we propose an interpretable multi-modal graph convolutional network (MGCN) model, incorporating the fMRI time series and the functional connectivity (FC) between each pair of brain regions.
no code implementations • 1 Jan 2021 • Jiaxian Guo, Jiachen Li, Mingming Gong, Huan Fu, Kun Zhang, DaCheng Tao
Unsupervised image-to-image (I2I) translation, which aims to learn a domain mapping function without paired data, is very challenging because the function is highly under-constrained.
no code implementations • 1 Jan 2021 • Zeyu Tang, Kun Zhang
In this paper, focusing on the Equalized Odds notion of fairness, we consider the attainability of this criterion, and furthermore, if attainable, the optimality of the prediction performance under various settings.
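Equalized Odds requires the predictor's true-positive and false-positive rates to match across sensitive groups. A minimal numpy check of the two gaps (illustrative toy data only; this is not the paper's attainability or optimality analysis):

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap) between binary groups 0 and 1."""
    gaps = []
    for positive in (1, 0):                      # y=1 gives TPR, y=0 gives FPR
        rates = []
        for g in (0, 1):
            mask = (group == g) & (y_true == positive)
            rates.append(y_pred[mask].mean())    # P(Yhat=1 | Y=positive, A=g)
        gaps.append(abs(rates[0] - rates[1]))
    return tuple(gaps)

# Toy labels, predictions, and group membership (hypothetical values).
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

tpr_gap, fpr_gap = equalized_odds_gaps(y_true, y_pred, group)
```

Equalized Odds holds exactly when both gaps are zero; here the toy predictor violates it in both rates.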
no code implementations • 1 Jan 2021 • Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao
Most algorithms in causal discovery consider a single domain with a fixed distribution.
no code implementations • 1 Jan 2021 • Chenghao Liu, Tao Lu, Doyen Sahoo, Yuan Fang, Kun Zhang, Steven Hoi
Meta-learning methods learn the meta-knowledge among various training tasks and aim to promote the learning of new tasks under the task similarity assumption.
1 code implementation • 22 Dec 2020 • Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, Zhifeng Hao
Different from previous efforts on the entangled feature space, we aim to extract the domain invariant semantic information in the latent disentangled semantic representation (DSR) of the data.
no code implementations • 16 Dec 2020 • Kun Zhang, Le Wu, Guangyi Lv, Meng Wang, Enhong Chen, Shulan Ruan
Sentence semantic matching is one of the fundamental tasks in natural language processing, which requires an agent to determine the semantic relation among input sentences.
no code implementations • 16 Dec 2020 • Chaochao Lu, Biwei Huang, Ke Wang, José Miguel Hernández-Lobato, Kun Zhang, Bernhard Schölkopf
We propose counterfactual RL algorithms to learn both population-level and individual-level policies.
no code implementations • 13 Dec 2020 • Kun Zhang, Rui Wu, Ping Yao, Kai Deng, Ding Li, Renbiao Liu, Chuanguang Yang, Ge Chen, Min Du, Tianyao Zheng
We note that the 2D pose estimation task is highly dependent on the contextual relationship between image patches; thus we introduce a self-supervised method for pretraining 2D pose estimation networks.
1 code implementation • 23 Nov 2020 • Ignavier Ng, Sébastien Lachapelle, Nan Rosemary Ke, Simon Lacoste-Julien, Kun Zhang
Recently, structure learning of directed acyclic graphs (DAGs) has been formulated as a continuous optimization problem by leveraging an algebraic characterization of acyclicity.
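The algebraic characterization referenced here, in the NOTEARS line of work this entry builds on, is h(W) = tr(e^{W∘W}) − d, which vanishes exactly when the weighted adjacency matrix W corresponds to a DAG. A minimal sketch (assuming scipy is available):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style constraint h(W) = tr(exp(W * W)) - d, which is zero
    exactly when the weighted adjacency matrix W corresponds to a DAG."""
    return np.trace(expm(W * W)) - W.shape[0]

dag = np.array([[0.0, 1.5, 0.0],
                [0.0, 0.0, -2.0],
                [0.0, 0.0, 0.0]])      # strictly upper-triangular -> acyclic
two_cycle = np.array([[0.0, 1.0],
                      [1.0, 0.0]])     # a 2-cycle

h_dag = acyclicity(dag)        # ~0.0 for any DAG
h_cyc = acyclicity(two_cycle)  # 2*cosh(1) - 2 > 0 in the presence of a cycle
```

Because h is smooth, structure learning can be posed as a continuous program minimizing a score subject to h(W) = 0, typically via augmented Lagrangian methods whose convergence this paper analyzes.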
1 code implementation • NeurIPS 2020 • Xueru Zhang, Ruibo Tu, Yang Liu, Mingyan Liu, Hedvig Kjellström, Kun Zhang, Cheng Zhang
Our results show that static fairness constraints can either promote equality or exacerbate disparity depending on the driving factor of qualification transitions and the effect of sensitive attributes on feature distributions.
no code implementations • NeurIPS 2020 • Feng Xie, Ruichu Cai, Biwei Huang, Clark Glymour, Zhifeng Hao, Kun Zhang
Despite its success in certain domains, most existing methods focus on causal relations between observed variables, while in many scenarios the observed ones may not be the underlying causal variables (e.g., image pixels), but are generated by latent causal variables or confounders that are causally related.
1 code implementation • 16 Aug 2020 • Sizhe Chen, Fan He, Xiaolin Huang, Kun Zhang
This paper focuses on high-transferable adversarial attacks on detectors, which are hard to attack in a black-box manner, because of their multiple-output characteristics and the diversity across architectures.
no code implementations • ECCV 2020 • Chenghao Liu, Zhihao Wang, Doyen Sahoo, Yuan Fang, Kun Zhang, Steven C. H. Hoi
Meta-learning methods have been extensively studied and applied in computer vision, especially for few-shot classification tasks.
1 code implementation • NeurIPS 2020 • Ignavier Ng, AmirEmad Ghassami, Kun Zhang
Extensive experiments validate the effectiveness of our proposed method and show that the DAG-penalized likelihood objective is indeed favorable over the least squares one with the hard DAG constraint.
no code implementations • 25 May 2020 • Le Wu, Yonghui Yang, Kun Zhang, Richang Hong, Yanjie Fu, Meng Wang
Therefore, item recommendation and attribute inference have become two main tasks in these platforms.
no code implementations • 3 May 2020 • Cheng Zhang, Kun Zhang, Yingzhen Li
We present a causal view on the robustness of neural networks against input manipulations, which applies not only to traditional classification tasks but also to general measurement data.
no code implementations • 2 Mar 2020 • Naji Shajarisales, Peter Spirtes, Kun Zhang
Yet the annotation process is an important part of the data collection, and in many cases it naturally depends on certain features of the data (e.g., the intensity of an image and the size of the object to be detected in the image).
1 code implementation • NeurIPS 2020 • Kun Zhang, Mingming Gong, Petar Stojanov, Biwei Huang, Qingsong Liu, Clark Glymour
Such a graphical model distinguishes between constant and varied modules of the distribution and specifies the properties of the changes across domains, which serves as prior knowledge of the changing modules for the purpose of deriving the posterior of the target variable $Y$ in the target domain.
2 code implementations • 28 Jan 2020 • Lei Chen, Le Wu, Richang Hong, Kun Zhang, Meng Wang
Second, we propose a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over smoothing problem in graph convolution aggregation operation with sparse user-item interaction data.
no code implementations • 2 Jan 2020 • Han Wu, Kun Zhang, Guangyi Lv, Qi Liu, Runlong Yu, Weihao Zhao, Enhong Chen, Jianhui Ma
Technological change and innovation are vitally important, especially for high-tech companies.
no code implementations • 12 Dec 2019 • Menghan Wang, Kun Zhang, Gulin Li, Keping Yang, Luo Si
We generalize the propagation strategies of current GCNs as a \emph{"Sink$\to$Source"} mode, which seems to be an underlying cause of the two challenges.
no code implementations • 10 Dec 2019 • Yige Zhang, Aaron Yi Ding, Jorg Ott, Mingxuan Yuan, Jia Zeng, Kun Zhang, Weixiong Rao
In this paper, by leveraging the recently developed transfer learning techniques, we design a novel Telco position recovery framework, called TLoc, to transfer good models in the carefully selected source domains (those fine-grained small subareas) to a target one which originally suffers from poor localization accuracy.
no code implementations • 4 Dec 2019 • Yuan Xue, Denny Zhou, Nan Du, Andrew Dai, Zhen Xu, Kun Zhang, Claire Cui
Clinical forecasting based on electronic medical records (EMR) can uncover the temporal correlations between patients' conditions and outcomes from sequences of longitudinal clinical measurements.
1 code implementation • NeurIPS 2019 • Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich
One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN) that generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.
Ranked #2 on Image Generation on CIFAR-100
no code implementations • NeurIPS 2019 • Ruichu Cai, Feng Xie, Clark Glymour, Zhifeng Hao, Kun Zhang
In this paper, by properly leveraging the non-Gaussianity of the data, we propose to estimate the structure over latent variables with the so-called Triad constraints: we design a form of "pseudo-residual" from three variables, and show that when causal relations are linear and noise terms are non-Gaussian, the causal direction between the latent variables for the three observed variables is identifiable by checking a certain kind of independence relationship.
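The pseudo-residual idea can be sketched on a hypothetical one-latent toy model (three observed variables sharing a single non-Gaussian latent cause; the data-generating numbers below are invented for illustration, and a proper Triad test would use a full nonlinear independence test rather than the simple moment check here):

```python
import numpy as np

# Hypothetical model: one non-Gaussian latent cause, three noisy observations.
rng = np.random.default_rng(1)
n = 20000
latent = rng.uniform(-1, 1, n) ** 3              # non-Gaussian latent
x1 = latent + 0.3 * rng.uniform(-1, 1, n)
x2 = latent + 0.3 * rng.uniform(-1, 1, n)
x3 = latent + 0.3 * rng.uniform(-1, 1, n)

def pseudo_residual(a, b, c):
    """Triad-style pseudo-residual E_{a,b|c} = a - (cov(a,c)/cov(b,c)) * b."""
    return a - (np.cov(a, c)[0, 1] / np.cov(b, c)[0, 1]) * b

e = pseudo_residual(x1, x2, x3)
# With a single shared latent, e is (approximately) purged of the latent and
# so independent of x3.  Its *linear* correlation with x3 is zero by
# construction (like a regression residual), so independence has to be probed
# with a nonlinear statistic; a crude second-moment proxy:
corr_sq = abs(np.corrcoef(e ** 2, x3 ** 2)[0, 1])
```

When the independence fails for some triple, that asymmetry is what reveals causal directions among the latent variables.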
1 code implementation • NeurIPS 2019 • Biwei Huang, Kun Zhang, Pengtao Xie, Mingming Gong, Eric P. Xing, Clark Glymour
The learned SSCM gives the specific causal knowledge for each individual as well as the general trend over the population.
no code implementations • 30 Nov 2019 • Jie Qiao, Zijian Li, Boyan Xu, Ruichu Cai, Kun Zhang
The challenge of learning disentangled representation has recently attracted much attention and boils down to a competition using a new real-world disentanglement dataset (Gondal et al., 2019).
no code implementations • 14 Nov 2019 • Kun Zhang, Yuan Xue, Gerardo Flores, Alvin Rajkomar, Claire Cui, Andrew M. Dai
Time series data are prevalent in electronic health records, mostly in the form of physiological parameters such as vital signs and lab tests.
1 code implementation • ICML 2020 • AmirEmad Ghassami, Alan Yang, Negar Kiyavash, Kun Zhang
The main approach to defining equivalence among acyclic directed causal graphical models is based on the conditional independence relationships in the distributions that the causal models can generate, in terms of the Markov equivalence.
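The classical criterion behind Markov equivalence (Verma and Pearl) is concrete enough to sketch: two DAGs are Markov equivalent iff they share the same skeleton and the same v-structures (unshielded colliders). A minimal check, with DAGs encoded as parent dictionaries:

```python
from itertools import combinations

def skeleton_and_vstructs(parents, d):
    """parents[j] = set of parents of node j over nodes 0..d-1.
    Returns (skeleton, v-structures); two DAGs are Markov equivalent
    iff both coincide (Verma & Pearl's criterion)."""
    skel = {frozenset((p, j)) for j in range(d) for p in parents.get(j, set())}
    vstructs = set()
    for k in range(d):
        for i, j in combinations(sorted(parents.get(k, set())), 2):
            if frozenset((i, j)) not in skel:        # unshielded: i -> k <- j
                vstructs.add((frozenset((i, j)), k))
    return skel, vstructs

def markov_equivalent(p1, p2, d):
    return skeleton_and_vstructs(p1, d) == skeleton_and_vstructs(p2, d)

chain    = {1: {0}, 2: {1}}   # X -> Y -> Z
fork     = {0: {1}, 2: {1}}   # X <- Y -> Z
collider = {1: {0, 2}}        # X -> Y <- Z

eq1 = markov_equivalent(chain, fork, 3)      # same skeleton, no v-structures
eq2 = markov_equivalent(chain, collider, 3)  # collider adds a v-structure at Y
```

The paper above studies how this notion of equivalence interacts with interventional data, which can distinguish graphs within the same Markov equivalence class.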
no code implementations • 11 Sep 2019 • Kun Zhang, Peng He, Ping Yao, Ge Chen, Rui Wu, Min Du, Huimin Li, Li Fu, Tianyao Zheng
Specifically, RAM learns a group of weights to represent the different importance of feature maps across resolutions, and the GPR gradually merges every two feature maps from low to high resolutions to regress final human keypoint heatmaps.
no code implementations • 10 Sep 2019 • M. Reza Heydari, Saber Salehkaleybar, Kun Zhang
We propose two nonlinear regression methods, named Adversarial Orthogonal Regression (AdOR) for additive noise models and Adversarial Orthogonal Structural Equation Model (AdOSE) for the general case of structural equation models.
1 code implementation • NeurIPS 2019 • Chenwei Ding, Mingming Gong, Kun Zhang, DaCheng Tao
Causal discovery witnessed significant progress over the past decades.
1 code implementation • 26 Aug 2019 • Chuanguang Yang, Zhulin An, Hui Zhu, Xiaolong Hu, Kun Zhang, Kaiqiang Xu, Chao Li, Yongjun Xu
We propose a simple yet effective method to reduce the redundancy of DenseNet by substantially decreasing the number of stacked modules by replacing the original bottleneck by our SMG module, which is augmented by local residual.
Ranked #57 on Image Classification on CIFAR-10
no code implementations • 11 Aug 2019 • Saber Salehkaleybar, AmirEmad Ghassami, Negar Kiyavash, Kun Zhang
It can be shown that causal effects among observed variables cannot be identified uniquely even under the assumptions of faithfulness and non-Gaussianity of exogenous noises.
no code implementations • 8 Aug 2019 • Ruben Sanchez-Romero, Joseph D. Ramsey, Kun Zhang, Clark Glymour
These algorithms allow for identification of subregions of voxels driving the connectivity between regions of interest, recovering valuable anatomical and functional information that is lost when ROIs are aggregated.
no code implementations • 25 Jul 2019 • Shoubo Hu, Kun Zhang, Zhitang Chen, Laiwan Chan
Domain generalization (DG) aims to incorporate knowledge from multiple source domains into a single model that could generalize well on unseen target domains.
no code implementations • 16 Jul 2019 • Yipeng Mou, Mingming Gong, Huan Fu, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao
Due to the stylish difference between synthetic and real images, we propose a temporally-consistent domain adaptation (TCDA) approach that simultaneously explores labels in the synthetic domain and temporal constraints in the videos to improve style transfer and depth prediction.
4 code implementations • 5 Jul 2019 • Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich
One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN), which generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.
Ranked #2 on Conditional Image Generation on CIFAR-100
1 code implementation • ACL 2019 • Mingxiao An, Fangzhao Wu, Chuhan Wu, Kun Zhang, Zheng Liu, Xing Xie
In this paper, we propose a neural news recommendation approach which can learn both long- and short-term user representations.
Ranked #7 on News Recommendation on MIND
1 code implementation • NeurIPS 2019 • Ruibo Tu, Kun Zhang, Bo Christer Bertilson, Hedvig Kjellström, Cheng Zhang
We show that the data generated from our simulator have similar statistics as real-world data.
no code implementations • 26 May 2019 • Biwei Huang, Kun Zhang, Mingming Gong, Clark Glymour
In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and concerned with both finding causal relations and forecasting the values of variables of interest, both of which are particularly challenging in such nonstationary environments.
2 code implementations • 23 May 2019 • Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang, Zhifeng Hao
In this work, we propose a cascade nonlinear additive noise model to represent such causal influences--each direct causal relation follows the nonlinear additive noise model but we observe only the initial cause and final effect.
no code implementations • 19 Apr 2019 • Ricardo Pio Monti, Kun Zhang, Aapo Hyvarinen
We consider the problem of inferring causal relationships between two or more passively observed variables.
no code implementations • 2 Apr 2019 • Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich
The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases.
no code implementations • 18 Mar 2019 • Michael P Snyder, Shin Lin, Amanda Posgai, Mark Atkinson, Aviv Regev, Jennifer Rood, Orit Rosen, Leslie Gaffney, Anna Hupalowska, Rahul Satija, Nils Gehlenborg, Jay Shendure, Julia Laskin, Pehr Harbury, Nicholas A Nystrom, Ziv Bar-Joseph, Kun Zhang, Katy Börner, Yiing Lin, Richard Conroy, Dena Procaccini, Ananda L Roy, Ajay Pillai, Marishka Brown, Zorina S Galis
Transformative technologies are enabling the construction of three dimensional (3D) maps of tissues with unprecedented spatial and molecular resolution.
no code implementations • 5 Mar 2019 • Biwei Huang, Kun Zhang, Jiji Zhang, Joseph Ramsey, Ruben Sanchez-Romero, Clark Glymour, Bernhard Schölkopf
In this paper, we develop a framework for causal discovery from such data, called Constraint-based causal Discovery from heterogeneous/NOnstationary Data (CD-NOD), to find causal skeleton and directions and estimate the properties of mechanism changes.
no code implementations • 27 Jan 2019 • Biwei Huang, Kun Zhang, Ruben Sanchez-Romero, Joseph Ramsey, Madelyn Glymour, Clark Glymour
A substantial body of research uses Pearson's correlation coefficients, mutual information, or partial correlation to investigate the differences in brain connectivity between ASD and typical controls from functional Magnetic Resonance Imaging (fMRI).
2 code implementations • 27 Jan 2019 • Han Zhao, Remi Tachet des Combes, Kun Zhang, Geoffrey J. Gordon
Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target.
no code implementations • NeurIPS 2018 • Menghan Wang, Mingming Gong, Xiaolin Zheng, Kun Zhang
Recent studies modeled \emph{exposure}, a latent missingness variable which indicates whether an item is missing to a user, to give each missing entry a confidence of being negative feedback.
no code implementations • NeurIPS 2018 • Amiremad Ghassami, Negar Kiyavash, Biwei Huang, Kun Zhang
We study the problem of causal structure learning in linear systems from observational data given in multiple domains, across which the causal coefficients and/or the distribution of the exogenous noises may vary.
no code implementations • NeurIPS 2018 • Ruichu Cai, Jie Qiao, Kun Zhang, Zhenjie Zhang, Zhifeng Hao
In this paper we make an attempt to find a way to solve this problem by assuming a two-stage causal process: the first stage maps the cause to a hidden variable of a lower cardinality, and the second stage generates the effect from the hidden representation.
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. 
Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. 
Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. 
Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
no code implementations • 26 Sep 2018 • Di Wu, Kun Zhang, Fei Cheng, Yang Zhao, Qi Liu, Chang-An Yuan, De-Shuang Huang
As a basic task of multi-camera surveillance systems, person re-identification aims to re-identify a query pedestrian observed from multiple non-overlapping cameras, or at different times with a single camera.
1 code implementation • CVPR 2019 • Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao
Unsupervised domain mapping aims to learn a function $G_{XY}$ that translates domain X to domain Y in the absence of paired examples.
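For intuition, the cycle-consistency idea commonly used in this setting can be sketched with linear maps standing in for the networks $G_{XY}$ and $G_{YX}$ (all names and data below are illustrative assumptions, not the paper's method):

```python
import numpy as np

# Toy sketch: cycle-consistency objective for unsupervised domain
# mapping. Linear maps stand in for the learnable networks.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # unpaired samples from domain X
A = np.array([[2.0, 0.0], [0.0, 0.5]])   # unknown ground-truth mapping
Y = X @ A.T                              # unpaired samples from domain Y

def cycle_loss(G_XY, G_YX, X, Y):
    # ||G_YX(G_XY(x)) - x||^2 + ||G_XY(G_YX(y)) - y||^2
    x_cycle = X @ G_XY.T @ G_YX.T
    y_cycle = Y @ G_YX.T @ G_XY.T
    return np.mean((x_cycle - X) ** 2) + np.mean((y_cycle - Y) ** 2)
```

A pair of maps that are true inverses of each other drives the loss to zero, which is what makes the objective a useful training signal without paired data.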
no code implementations • ECCV 2018 • Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao
Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.
Ranked #66 on Domain Generalization on PACS
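A minimal sketch of the invariance objective described above, under assumed choices (a linear representation $T$ and an RBF-kernel MMD as the discrepancy measure; not the paper's exact method):

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel values between sample sets a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(s, t, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy.
    return (rbf_kernel(s, s, gamma).mean()
            + rbf_kernel(t, t, gamma).mean()
            - 2.0 * rbf_kernel(s, t, gamma).mean())

rng = np.random.default_rng(1)
# Two domains that differ only in the second feature's distribution:
X_src = np.c_[rng.normal(0, 1, 200), rng.normal(0, 1, 200)]
X_tgt = np.c_[rng.normal(0, 1, 200), rng.normal(3, 1, 200)]

T = np.array([[1.0, 0.0]])                 # representation keeping feature 0
raw_gap = mmd2(X_src, X_tgt)               # large: marginals differ
rep_gap = mmd2(X_src @ T.T, X_tgt @ T.T)   # small: P(T(X)) is invariant
```

Minimizing such a discrepancy over $T$ is one concrete way to realize "learning the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$".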
1 code implementation • 11 Jul 2018 • Ruibo Tu, Kun Zhang, Paul Ackermann, Bo Christer Bertilson, Clark Glymour, Hedvig Kjellström, Cheng Zhang
When these data entries are not missing completely at random, the (conditional) independence relations in the observed data may be different from those in the complete data generated by the underlying causal process.
no code implementations • 12 Apr 2018 • Mingming Gong, Kun Zhang, Biwei Huang, Clark Glymour, DaCheng Tao, Kayhan Batmanghelich
For this purpose, we first propose a flexible Generative Domain Adaptation Network (G-DAN) with specific latent variables to capture changes in the generating process of features across domains.
no code implementations • 5 Feb 2018 • AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Kun Zhang
In this paper, we propose a new technique for counting the number of DAGs in a Markov equivalence class.
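As context for the counting problem, a brute-force baseline (deliberately naive; the proposed technique is far more efficient) enumerates orientations of a CPDAG's undirected edges and keeps those that are acyclic and introduce no new v-structures. All names below are illustrative:

```python
import itertools

def is_acyclic(nodes, edges):
    # Kahn's algorithm over a directed edge list.
    indeg = {v: 0 for v in nodes}
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == len(nodes)

def v_structures(edges, adj):
    # Colliders a -> c <- b where a and b are non-adjacent.
    return {(a, c, b) for a, c in edges for b, c2 in edges
            if c == c2 and a < b and frozenset((a, b)) not in adj}

def count_markov_equivalence_class(nodes, directed, undirected):
    # Count the consistent DAG extensions of a CPDAG given as a list
    # of directed edges plus a list of undirected edges.
    skel = {frozenset(e) for e in directed + undirected}
    target = v_structures(directed, skel)
    count = 0
    for bits in itertools.product((0, 1), repeat=len(undirected)):
        edges = directed + [(u, v) if bit else (v, u)
                            for bit, (u, v) in zip(bits, undirected)]
        if is_acyclic(nodes, edges) and v_structures(edges, skel) == target:
            count += 1
    return count
```

For the undirected chain x - y - z this counts 3 DAGs (x->y<-z is excluded because it adds a v-structure); the cost is exponential in the number of undirected edges, which is exactly what more clever counting techniques avoid.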
no code implementations • 24 Jan 2018 • Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Peter J. Liu, Xiaobing Liu, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Gavin E. Duggan, Gerardo Flores, Michaela Hardt, Jamie Irvine, Quoc Le, Kurt Litsch, Jake Marcus, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael Howell, Claire Cui, Greg Corrado, Jeff Dean
Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality.