Search Results for author: Ying Nian Wu

Found 83 papers, 21 papers with code

Deep Generative model with Hierarchical Latent Factors for Time Series Anomaly Detection

1 code implementation · 15 Feb 2022 Cristian Challu, Peihong Jiang, Ying Nian Wu, Laurent Callot

Multivariate time series anomaly detection has become an active area of research in recent years, with Deep Learning models outperforming previous approaches on benchmark datasets.

Anomaly Detection Time Series

A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering

no code implementations · 14 Jan 2022 Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, Prem Natarajan

Outside-knowledge visual question answering (OK-VQA) requires the agent to comprehend the image, make use of relevant knowledge from the entire web, and digest all the information to answer the question.

Generative Question Answering Passage Retrieval +2

Learning Algebraic Representation for Systematic Generalization in Abstract Reasoning

no code implementations · 25 Nov 2021 Chi Zhang, Sirui Xie, Baoxiong Jia, Ying Nian Wu, Song-Chun Zhu, Yixin Zhu

Extensive experiments show that by incorporating an algebraic treatment, the ALANS learner outperforms various pure connectionist models in domains requiring systematic generalization.

Systematic Generalization

Unsupervised Foreground Extraction via Deep Region Competition

1 code implementation NeurIPS 2021 Peiyu Yu, Sirui Xie, Xiaojian Ma, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu

Foreground extraction can be viewed as a special case of generic image segmentation that focuses on identifying and disentangling objects from the background.

Semantic Segmentation

Iterative Teacher-Aware Learning

no code implementations NeurIPS 2021 Luyao Yuan, Dongruo Zhou, Junhong Shen, Jingdong Gao, Jeffrey L. Chen, Quanquan Gu, Ying Nian Wu, Song-Chun Zhu

Recently, the benefits of integrating this cooperative pedagogy into machine concept learning in discrete spaces have been demonstrated by multiple works.

MCMC Should Mix: Learning Energy-Based Model with Flow-Based Backbone

no code implementations ICLR 2022 Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

However, MCMC sampling of EBMs in the high-dimensional data space generally does not mix, because the energy function, which is usually parametrized by a deep network, is highly multi-modal in the data space.

Latent Space Energy-Based Model of Symbol-Vector Coupling for Text Generation and Classification

1 code implementation · 26 Aug 2021 Bo Pang, Ying Nian Wu

The energy term of the prior model couples a continuous latent vector and a symbolic one-hot vector, so that the discrete category can be inferred from the observed example based on the continuous latent vector.

Text Generation
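The symbol-vector coupling described above can be sketched as a toy softmax inference over one-hot symbols (a hypothetical illustration with made-up coupling weights `W`, not the paper's learned model):

```python
import numpy as np

def infer_category(z, W):
    """Score each one-hot symbol by the coupling energy <z, W[:, k]>;
    a softmax over the scores gives p(category | z)."""
    logits = z @ W
    p = np.exp(logits - logits.max())  # numerically stabilized softmax
    return p / p.sum()

# Toy setup: 3 categories, 3-dimensional continuous latent vector.
W = np.eye(3)                    # hypothetical coupling weights
z = np.array([0.2, 2.5, -0.1])   # continuous latent vector inferred from an example
p = infer_category(z, W)
category = int(p.argmax())       # inferred discrete category
```

The point of the sketch is only the direction of inference: the continuous latent vector determines the posterior over the symbolic one-hot vector.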

Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI

no code implementations · 16 Jul 2021 Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang

This workshop takes a special interest in theoretical foundations, limitations, and new application trends in the scope of XAI.

SAS: Self-Augmentation Strategy for Language Model Pre-training

no code implementations · 14 Jun 2021 Yifei Xu, Jingqiao Zhang, Ru He, Liangzhu Ge, Chao Yang, Cheng Yang, Ying Nian Wu

In this paper, we propose a self-augmentation strategy (SAS) where a single network is utilized for both regular pre-training and contextualized data augmentation for the training in later epochs.

Data Augmentation Language Modelling +2

SocAoG: Incremental Graph Parsing for Social Relation Inference in Dialogues

no code implementations ACL 2021 Liang Qiu, Yuan Liang, Yizhou Zhao, Pan Lu, Baolin Peng, Zhou Yu, Ying Nian Wu, Song-Chun Zhu

Inferring social relations from dialogues is vital for building emotionally intelligent robots to interpret human language better and act accordingly.

Dialog Relation Extraction

Generative Text Modeling through Short Run Inference

1 code implementation EACL 2021 Bo Pang, Erik Nijkamp, Tian Han, Ying Nian Wu

It is initialized from the prior distribution of the latent variable and then runs a small number (e.g., 20) of Langevin dynamics steps guided by its posterior distribution.

Language Modelling
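The short-run inference described in the snippet above can be illustrated with a minimal Langevin loop (a toy numpy sketch; the quadratic posterior and all constants are stand-in assumptions, not the paper's model):

```python
import numpy as np

def short_run_inference(grad_log_post, z0, n_steps=20, step=0.3, seed=0):
    """Run a small, fixed number of Langevin steps starting from a prior sample."""
    rng = np.random.default_rng(seed)
    z = z0.copy()
    for _ in range(n_steps):
        z = z + 0.5 * step**2 * grad_log_post(z) + step * rng.standard_normal(z.shape)
    return z

# Toy posterior N(mu, I), so grad log p(z | x) = mu - z (a stand-in assumption).
mu = np.array([2.0, -1.0])
grad = lambda z: mu - z

z0 = np.random.default_rng(1).standard_normal(2)  # initialize from the N(0, I) prior
z = short_run_inference(grad, z0, n_steps=20)     # 20 steps, as in the abstract
```

The short run is deliberately not run to convergence; its endpoint serves as the approximate posterior sample.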

Trajectory Prediction with Latent Belief Energy-Based Model

1 code implementation CVPR 2021 Bo Pang, Tianyang Zhao, Xu Xie, Ying Nian Wu

Sampling from or optimizing the learned LB-EBM yields a belief vector which is used to make a path plan, which then in turn helps to predict a long-range trajectory.

Self-Driving Cars Trajectory Prediction

Learning Neural Representation of Camera Pose with Matrix Representation of Pose Shift via View Synthesis

1 code implementation CVPR 2021 Yaxuan Zhu, Ruiqi Gao, Siyuan Huang, Song-Chun Zhu, Ying Nian Wu

Specifically, the camera pose and 3D scene are represented as vectors and the local camera movement is represented as a matrix operating on the vector of the camera pose.

Novel View Synthesis
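The vector-matrix scheme above (pose as a vector, camera movement as a matrix acting on it) can be illustrated with a plain 2D rotation as the pose-shift operator (an illustrative stand-in; the paper learns these representations rather than fixing them):

```python
import numpy as np

def pose_shift(theta):
    """2D rotation used as an illustrative matrix representation of a pose shift."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

v = np.array([1.0, 0.0])           # vector representation of the current pose
v_new = pose_shift(np.pi / 2) @ v  # pose after a 90-degree shift: approx [0, 1]

# Composing two camera movements corresponds to multiplying their matrices.
composed = pose_shift(np.pi / 4) @ pose_shift(np.pi / 4)
```

The design appeal is that composition of movements becomes matrix multiplication, which is easy to embed in a differentiable view-synthesis pipeline.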

Congestion-aware Multi-agent Trajectory Prediction for Collision Avoidance

1 code implementation · 26 Mar 2021 Xu Xie, Chi Zhang, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu

Predicting agents' future trajectories plays a crucial role in modern AI systems, yet it is challenging due to intricate interactions exhibited in multi-agent systems, especially when it comes to collision avoidance.

Trajectory Prediction

Learning Cycle-Consistent Cooperative Networks via Alternating MCMC Teaching for Unsupervised Cross-Domain Translation

no code implementations · 7 Mar 2021 Jianwen Xie, Zilong Zheng, Xiaolin Fang, Song-Chun Zhu, Ying Nian Wu

This paper studies the unsupervised cross-domain translation problem by proposing a generative framework, in which the probability distribution of each domain is represented by a generative cooperative network that consists of an energy-based model and a latent variable model.

Translation Unsupervised Image-To-Image Translation

A HINT from Arithmetic: On Systematic Generalization of Perception, Syntax, and Semantics

no code implementations · 2 Mar 2021 Qing Li, Siyuan Huang, Yining Hong, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu

Inspired by humans' remarkable ability to master arithmetic and generalize to unseen problems, we present a new dataset, HINT, to study machines' capability of learning generalizable concepts at three different levels: perception, syntax, and semantics.

Program Synthesis Systematic Generalization

HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving

no code implementations · 22 Feb 2021 Sirui Xie, Xiaojian Ma, Peiyu Yu, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu

Leveraging these concepts, they could understand the internal structure of this task, without seeing all of the problem instances.

Learning Algebraic Representation for Abstract Spatial-Temporal Reasoning

no code implementations · 1 Jan 2021 Chi Zhang, Sirui Xie, Baoxiong Jia, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu

We further show that the algebraic representation learned can be decoded by isomorphism and used to generate an answer.

Systematic Generalization

Learning Energy-Based Models by Diffusion Recovery Likelihood

1 code implementation ICLR 2021 Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, Diederik P. Kingma

Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset.

Image Generation

Learning Latent Space Energy-Based Prior Model for Molecule Generation

no code implementations · 19 Oct 2020 Bo Pang, Tian Han, Ying Nian Wu

Deep generative models have recently been applied to molecule design.

From em-Projections to Variational Auto-Encoder

no code implementations NeurIPS Workshop DL-IG 2020 Tian Han, Jun Zhang, Ying Nian Wu

This paper reviews the em-projections in information geometry and the recent understanding of variational auto-encoder, and explains that they share a common formulation as joint minimization of the Kullback-Leibler divergence between two manifolds of probability distributions, and the joint minimization can be implemented by alternating projections or alternating gradient descent.

A Representational Model of Grid Cells' Path Integration Based on Matrix Lie Algebras

no code implementations · 28 Sep 2020 Ruiqi Gao, Jianwen Xie, Xue-Xin Wei, Song-Chun Zhu, Ying Nian Wu

The grid cells in the mammalian medial entorhinal cortex exhibit striking hexagon firing patterns when the agent navigates in the open field.

On Path Integration of Grid Cells: Group Representation and Isotropic Scaling

1 code implementation NeurIPS 2021 Ruiqi Gao, Jianwen Xie, Xue-Xin Wei, Song-Chun Zhu, Ying Nian Wu

In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector.

Dimensionality Reduction

Learning Latent Space Energy-Based Prior Model

1 code implementation NeurIPS 2020 Bo Pang, Tian Han, Erik Nijkamp, Song-Chun Zhu, Ying Nian Wu

Due to the low dimensionality of the latent space and the expressiveness of the top-down network, a simple EBM in latent space can capture regularities in the data effectively, and MCMC sampling in latent space is efficient and mixes well.

Anomaly Detection Text Generation

MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC

no code implementations · 12 Jun 2020 Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, Ying Nian Wu

Learning energy-based model (EBM) requires MCMC sampling of the learned model as an inner loop of the learning algorithm.

Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning

1 code implementation ICML 2020 Qing Li, Siyuan Huang, Yining Hong, Yixin Chen, Ying Nian Wu, Song-Chun Zhu

In this paper, we address these issues and close the loop of neural-symbolic learning by (1) introducing the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning, and (2) proposing a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error through the symbolic reasoning module efficiently.

Question Answering Visual Question Answering

Joint Training of Variational Auto-Encoder and Latent Energy-Based Model

no code implementations CVPR 2020 Tian Han, Erik Nijkamp, Linqi Zhou, Bo Pang, Song-Chun Zhu, Ying Nian Wu

This paper proposes a joint training method to learn both the variational auto-encoder (VAE) and the latent energy-based model (EBM).

Anomaly Detection

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

no code implementations · 20 Apr 2020 Yixin Zhu, Tao Gao, Lifeng Fan, Siyuan Huang, Mark Edmonds, Hangxin Liu, Feng Gao, Chi Zhang, Siyuan Qi, Ying Nian Wu, Joshua B. Tenenbaum, Song-Chun Zhu

We demonstrate the power of this perspective to develop cognitive AI systems with humanlike common sense by showing how to observe and apply FPICU with little training data to solve a wide range of challenging tasks, including tool use, planning, utility inference, and social learning.

Common Sense Reasoning Small Data Image Classification

Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification

1 code implementation CVPR 2021 Jianwen Xie, Yifei Xu, Zilong Zheng, Song-Chun Zhu, Ying Nian Wu

We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model, where the energy function is parameterized by an input-permutation-invariant bottom-up neural network.

General Classification Point Cloud Classification +2

Learning Multi-layer Latent Variable Model via Variational Optimization of Short Run MCMC for Approximate Inference

no code implementations ECCV 2020 Erik Nijkamp, Bo Pang, Tian Han, Linqi Zhou, Song-Chun Zhu, Ying Nian Wu

Learning such a generative model requires inferring the latent variables for each training example based on the posterior distribution of these latent variables.

Representation Learning: A Statistical Perspective

no code implementations · 26 Nov 2019 Jianwen Xie, Ruiqi Gao, Erik Nijkamp, Song-Chun Zhu, Ying Nian Wu

Learning representations of data is an important problem in statistics and machine learning.

Representation Learning

Deep Unsupervised Clustering with Clustered Generator Model

no code implementations · 19 Nov 2019 Dandan Zhu, Tian Han, Linqi Zhou, Xiaokang Yang, Ying Nian Wu

We propose the clustered generator model for clustering which contains both continuous and discrete latent variables.

Learning Energy-based Spatial-Temporal Generative ConvNets for Dynamic Patterns

no code implementations · 26 Sep 2019 Jianwen Xie, Song-Chun Zhu, Ying Nian Wu

We show that an energy-based spatial-temporal generative ConvNet can be used to model and synthesize dynamic patterns.

Finding Deep Local Optima Using Network Pruning

no code implementations · 25 Sep 2019 Yangzi Guo, Yiyuan She, Ying Nian Wu, Adrian Barbu

However, in non-vision sparse datasets, especially those with many irrelevant features where a standard neural network would overfit, this might not be the case, and there may be many non-equivalent local optima.

Network Pruning

Inducing Hierarchical Compositional Model by Sparsifying Generator Network

no code implementations CVPR 2020 Xianglei Xing, Tianfu Wu, Song-Chun Zhu, Ying Nian Wu

To realize this AND-OR hierarchy in image synthesis, we learn a generator network that consists of the following two components: (i) Each layer of the hierarchy is represented by an over-complete set of convolutional basis functions.

Image Generation Image Reconstruction

Neural Architecture Search for Joint Optimization of Predictive Power and Biological Knowledge

1 code implementation · 1 Sep 2019 Zijun Zhang, Linqi Zhou, Liangke Gou, Ying Nian Wu

We report a neural architecture search framework, BioNAS, that is tailored for biomedical researchers to easily build, evaluate, and uncover novel knowledge from interpretable deep learning models.

Neural Architecture Search

Energy-Based Continuous Inverse Optimal Control

no code implementations · 10 Apr 2019 Yifei Xu, Jianwen Xie, Tianyang Zhao, Chris Baker, Yibiao Zhao, Ying Nian Wu

The problem of continuous inverse optimal control (over finite time horizon) is to learn the unknown cost function over the sequence of continuous control variables from expert demonstrations.

Autonomous Driving Continuous Control +1

Multi-Agent Tensor Fusion for Contextual Trajectory Prediction

1 code implementation CVPR 2019 Tianyang Zhao, Yifei Xu, Mathew Monfort, Wongun Choi, Chris Baker, Yibiao Zhao, Yizhou Wang, Ying Nian Wu

Specifically, the model encodes multiple agents' past trajectories and the scene context into a Multi-Agent Tensor, then applies convolutional fusion to capture multi-agent interactions while retaining the spatial structure of agents and the scene context.

Autonomous Driving Trajectory Prediction

On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models

2 code implementations · 29 Mar 2019 Erik Nijkamp, Mitch Hill, Tian Han, Song-Chun Zhu, Ying Nian Wu

On the other hand, ConvNet potentials learned with non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data because long-run MCMC samples differ greatly from observed images.

Cooperative Training of Fast Thinking Initializer and Slow Thinking Solver for Conditional Learning

no code implementations · 7 Feb 2019 Jianwen Xie, Zilong Zheng, Xiaolin Fang, Song-Chun Zhu, Ying Nian Wu

This paper studies the problem of learning the conditional distribution of a high-dimensional output given an input, where the output and input may belong to two different domains, e.g., the output is a photo image and the input is a sketch image.

Image-to-Image Translation

Learning V1 Simple Cells with Vector Representation of Local Content and Matrix Representation of Local Motion

no code implementations · 24 Jan 2019 Ruiqi Gao, Jianwen Xie, Siyuan Huang, Yufan Ren, Song-Chun Zhu, Ying Nian Wu

This paper proposes a representational model for image pairs such as consecutive video frames that are related by local pixel displacements, in the hope that the model may shed light on motion perception in primary visual cortex (V1).

Frame Optical Flow Estimation

Unsupervised Learning of Neural Networks to Explain Neural Networks (extended abstract)

no code implementations · 21 Jan 2019 Quanshi Zhang, Yu Yang, Ying Nian Wu

This paper presents an unsupervised method to learn a neural network, namely an explainer, to interpret a pre-trained convolutional neural network (CNN), i.e., the explainer uses interpretable visual concepts to explain features in middle conv-layers of a CNN.

Knowledge Distillation

Network Transplanting (extended abstract)

no code implementations · 21 Jan 2019 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Inducing Sparse Coding and And-Or Grammar from Generator Network

no code implementations · 20 Jan 2019 Xianglei Xing, Song-Chun Zhu, Ying Nian Wu

We introduce an explainable generative model by applying sparse operation on the feature maps of the generator network.

Interpretable CNNs for Object Classification

no code implementations · 8 Jan 2019 Quanshi Zhang, Xin Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu

This paper proposes a generic method to learn interpretable convolutional filters in a deep convolutional neural network (CNN) for object classification, where each interpretable filter encodes features of a specific object part.

General Classification

Divergence Triangle for Joint Training of Generator Model, Energy-based Model, and Inference Model

1 code implementation · 28 Dec 2018 Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, Ying Nian Wu

This paper proposes the divergence triangle as a framework for joint training of generator model, energy-based model and inference model.

Learning Dynamic Generator Model by Alternating Back-Propagation Through Time

no code implementations · 27 Dec 2018 Jianwen Xie, Ruiqi Gao, Zilong Zheng, Song-Chun Zhu, Ying Nian Wu

The non-linear transformation of this transition model can be parametrized by a feedforward neural network.

Frame

Explanatory Graphs for CNNs

no code implementations · 18 Dec 2018 Quanshi Zhang, Xin Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu

This paper introduces a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside conv-layers of a pre-trained CNN.

Learning Grid Cells as Vector Representation of Self-Position Coupled with Matrix Representation of Self-Motion

no code implementations ICLR 2019 Ruiqi Gao, Jianwen Xie, Song-Chun Zhu, Ying Nian Wu

In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector.

A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models

no code implementations · 9 Oct 2018 Ying Nian Wu, Ruiqi Gao, Tian Han, Song-Chun Zhu

In this paper, we review three families of probability models, namely, the discriminative models, the descriptive models, and the generative models.

Interactive Agent Modeling by Learning to Probe

no code implementations · 1 Oct 2018 Tianmin Shu, Caiming Xiong, Ying Nian Wu, Song-Chun Zhu

In particular, the probing agent (i.e., a learner) learns to interact with the environment and with a target agent (i.e., a demonstrator) to maximize the change in the observed behaviors of that agent.

Imitation Learning

Deformable Generator Network: Unsupervised Disentanglement of Appearance and Geometry

2 code implementations · 16 Jun 2018 Xianglei Xing, Ruiqi Gao, Tian Han, Song-Chun Zhu, Ying Nian Wu

We present a deformable generator model to disentangle the appearance and geometric information for both image and video data in a purely unsupervised manner.

Disentanglement Transfer Learning

Unsupervised Learning of Neural Networks to Explain Neural Networks

no code implementations · 18 May 2018 Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu

Given feature maps of a certain conv-layer of the CNN, the explainer performs like an auto-encoder, which first disentangles the feature maps into object-part features and then inverts object-part features back to features of higher conv-layers of the CNN.

Disentanglement

Replicating Active Appearance Model by Generator Network

no code implementations · 14 May 2018 Tian Han, Jiawen Wu, Ying Nian Wu

A recent Cell paper [Chang and Tsao, 2017] reports an interesting discovery.

Face Recognition

Network Transplanting

no code implementations · 26 Apr 2018 Quanshi Zhang, Yu Yang, Qian Yu, Ying Nian Wu

This paper focuses on a new task, i.e., transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision.

Learning Descriptor Networks for 3D Shape Synthesis and Analysis

1 code implementation CVPR 2018 Jianwen Xie, Zilong Zheng, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu

This paper proposes a 3D shape descriptor network, which is a deep convolutional energy-based model, for modeling volumetric shape patterns.

3D Object Super-Resolution

Interpreting CNNs via Decision Trees

no code implementations CVPR 2019 Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu

We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.

Interpretable Convolutional Neural Networks

2 code implementations CVPR 2018 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process.

single category classification

Learning Energy-Based Models as Generative ConvNets via Multi-grid Modeling and Sampling

no code implementations CVPR 2018 Ruiqi Gao, Yang Lu, Junpei Zhou, Song-Chun Zhu, Ying Nian Wu

Within each iteration of our learning algorithm, for each observed training image, we generate synthesized images at multiple grids by initializing the finite-step MCMC sampling from a minimal 1 x 1 version of the training image.
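The multi-grid initialization above (starting finite-step MCMC from a minimal 1 x 1 version of each training image) can be sketched with simple average-pool downsampling and nearest-neighbor upsampling between grids (a toy illustration, not the paper's sampler):

```python
import numpy as np

def downsample(img, factor):
    """Average-pool the image by the given factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbor upsampling to initialize sampling at the next, finer grid."""
    return np.kron(img, np.ones((factor, factor)))

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4 x 4 "training image"
grid0 = downsample(img, 4)   # minimal 1 x 1 version: the global mean
init1 = upsample(grid0, 2)   # initialization for MCMC at the 2 x 2 grid
```

Each synthesized image at a coarse grid seeds the chain at the next finer grid, so sampling never has to start from scratch at full resolution.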

Mining Deep And-Or Object Structures via Cost-Sensitive Question-Answer-Based Active Annotations

no code implementations · 13 Aug 2017 Quanshi Zhang, Ying Nian Wu, Hao Zhang, Song-Chun Zhu

The loss is defined for nodes in all layers of the AOG, including the generative loss (measuring the likelihood of the images) and the discriminative loss (measuring the fitness to human answers).

Question Answering

Interactively Transferring CNN Patterns for Part Localization

no code implementations · 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Shengming Zhang, Mark Edmonds, Ying Nian Wu, Song-Chun Zhu

In the scenario of one/multi-shot learning, conventional end-to-end learning strategies without sufficient supervision are usually not powerful enough to learn correct patterns from noisy signals.

Interpreting CNN Knowledge via an Explanatory Graph

no code implementations · 5 Aug 2017 Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu

Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph.

Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning

no code implementations · 14 Nov 2016 Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu

This paper proposes a learning strategy that extracts object-part concepts from a pre-trained convolutional neural network (CNN), in an attempt to 1) explore explicit semantics hidden in CNN units and 2) gradually grow a semantically interpretable graphical model on the pre-trained CNN for hierarchical object understanding.

Cooperative Training of Descriptor and Generator Networks

no code implementations · 29 Sep 2016 Jianwen Xie, Yang Lu, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu

Specifically, within each iteration of the cooperative learning algorithm, the generator model generates initial synthesized examples to initialize a finite-step MCMC that samples and trains the energy-based descriptor model.
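The cooperative loop described above can be caricatured in one dimension (a toy sketch with a quadratic descriptor energy and a mean-only "generator"; every quantity here is a stand-in assumption, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy descriptor energy E(x) = 0.5 * (x - 3)^2, so grad E(x) = x - 3.
energy_grad = lambda x: x - 3.0
g_mu = 0.0  # the "generator" here is just a mean parameter

for _ in range(50):
    x = g_mu + rng.standard_normal(64)   # generator proposes initial samples
    for _ in range(10):                  # finite-step MCMC refines them
        x = x - 0.1 * energy_grad(x) + 0.05 * rng.standard_normal(64)
    g_mu += 0.5 * (x.mean() - g_mu)      # generator learns from the revised samples
```

The generator mean drifts toward the descriptor's low-energy region (here, 3.0), mirroring how the MCMC-revised samples teach the generator in the cooperative scheme.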

Decoding the Encoding of Functional Brain Networks: an fMRI Classification Comparison of Non-negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Sparse Coding Algorithms

no code implementations · 1 Jul 2016 Jianwen Xie, Pamela K. Douglas, Ying Nian Wu, Arthur L. Brody, Ariana E. Anderson

Spatial sparse coding algorithms ($L1$ Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks.

Time Series

Alternating Back-Propagation for Generator Network

no code implementations · 28 Jun 2016 Tian Han, Yang Lu, Song-Chun Zhu, Ying Nian Wu

This paper proposes an alternating back-propagation algorithm for learning the generator network model.

Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet

no code implementations CVPR 2017 Jianwen Xie, Song-Chun Zhu, Ying Nian Wu

We show that a spatial-temporal generative ConvNet can be used to model and synthesize dynamic patterns.

A Theory of Generative ConvNet

no code implementations · 10 Feb 2016 Jianwen Xie, Yang Lu, Song-Chun Zhu, Ying Nian Wu

If we further assume that the non-linearity in the ConvNet is Rectified Linear Unit (ReLU) and the reference distribution is Gaussian white noise, then we obtain a generative ConvNet model that is unique among energy-based models: The model is piecewise Gaussian, and the means of the Gaussian pieces are defined by an auto-encoder, where the filters in the bottom-up encoding become the basis functions in the top-down decoding, and the binary activation variables detected by the filters in the bottom-up convolution process become the coefficients of the basis functions in the top-down deconvolution process.

Mining And-Or Graphs for Graph Matching and Object Discovery

no code implementations ICCV 2015 Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu

This paper reformulates the theory of graph mining on the technical basis of graph matching, and extends its scope of applications to computer vision.

Graph Matching Graph Mining +1

Learning FRAME Models Using CNN Filters

no code implementations · 28 Sep 2015 Yang Lu, Song-Chun Zhu, Ying Nian Wu

We explain that each learned model corresponds to a new CNN unit at a layer above the layer of filters employed by the model.

Frame

Learning Inhomogeneous FRAME Models for Object Patterns

no code implementations CVPR 2014 Jianwen Xie, Wenze Hu, Song-Chun Zhu, Ying Nian Wu

We investigate an inhomogeneous version of the FRAME (Filters, Random field, And Maximum Entropy) model and apply it to modeling object patterns.

Frame

Unsupervised Learning of Dictionaries of Hierarchical Compositional Models

no code implementations CVPR 2014 Jifeng Dai, Yi Hong, Wenze Hu, Song-Chun Zhu, Ying Nian Wu

Given a set of unannotated training images, a dictionary of such hierarchical templates is learned so that each training image can be represented by a small number of templates that are spatially translated, rotated and scaled versions of the templates in the learned dictionary.

Domain Adaptation Template Matching

Learning Mixtures of Bernoulli Templates by Two-Round EM with Performance Guarantee

no code implementations · 2 May 2013 Adrian Barbu, Tianfu Wu, Ying Nian Wu

Each template is a binary vector, and a template generates examples by randomly switching its binary components independently with a certain probability.
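The generative process above (independently flip each binary component of a template with a fixed probability) is easy to sketch, along with the observation that a bit-wise majority vote over samples recovers the template (a toy illustration; the paper's contribution is the two-round EM estimator for mixtures, not shown here):

```python
import numpy as np

def sample_from_template(template, flip_prob, rng):
    """Generate an example by independently flipping each binary component."""
    flips = rng.random(template.shape) < flip_prob
    return np.where(flips, 1 - template, template)

rng = np.random.default_rng(0)
template = np.array([1, 0, 1, 1, 0, 0, 1, 0])
examples = np.stack([sample_from_template(template, 0.1, rng) for _ in range(1000)])

# With a small flip probability, a bit-wise majority vote over many samples
# recovers the underlying template.
est = (examples.mean(axis=0) > 0.5).astype(int)
```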
