Search Results for author: Haotian Ma

Found 7 papers, 2 papers with code

AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels

no code implementations 30 Aug 2022 Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala

While weak supervision has been used successfully in many domains, its application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features.

Benchmarking
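The snippet centers on labeling functions, the core primitive of programmatic weak supervision: small heuristics that vote on a label or abstain. The sketch below is purely illustrative and is not drawn from AutoWS-Bench-101; the task, label scheme, and keyword heuristics are assumptions.

```python
# Illustrative labeling functions for weak supervision (assumed spam/ham task,
# not part of AutoWS-Bench-101).
ABSTAIN, SPAM, HAM = -1, 1, 0

def lf_contains_link(text: str) -> int:
    # Heuristic: messages with URLs tend to be spam; otherwise abstain.
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_short_greeting(text: str) -> int:
    # Heuristic: very short greetings are usually benign.
    return HAM if len(text.split()) < 4 and text.lower().startswith("hi") else ABSTAIN

def majority_vote(text: str, lfs) -> int:
    # Combine labeling-function votes by simple majority, ignoring abstentions.
    votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN

print(majority_vote("hi there", [lf_contains_link, lf_short_greeting]))  # -> 0 (HAM)
```

Writing such heuristics becomes hard exactly when features are complex or high-dimensional (e.g., raw images), which is the limitation the paper's benchmark targets.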

Rotation-Equivariant Neural Networks for Privacy Protection

no code implementations 21 Jun 2020 Hao Zhang, Yiting Chen, Haotian Ma, Xu Cheng, Qihan Ren, Liyao Xiang, Jie Shi, Quanshi Zhang

Compared to traditional neural networks, the RENN uses d-ary vectors/tensors as features, in which each element is a d-ary number.

Attribute

Deep Quaternion Features for Privacy Protection

no code implementations 18 Mar 2020 Hao Zhang, Yi-Ting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang

We propose a method that revises a neural network into a quaternion-valued neural network (QNN), in order to prevent intermediate-layer features from leaking input information.

Privacy Preserving

Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding

1 code implementation 10 Jun 2019 Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang

We define two types of entropy-based metrics, i.e., (1) the discarding of pixel-wise information during the forward propagation and (2) the uncertainty of the input reconstruction, to measure the input information contained in a specific layer from two perspectives.

Fairness
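One way to make an entropy-based "information discarding" metric concrete is to ask how much per-pixel noise the input can absorb before a layer's feature changes noticeably: the larger the tolerated noise, the more pixel-wise information the layer has discarded. The sketch below is an illustrative approximation of that idea, not the paper's exact formulation; the toy network, the distortion penalty weight, and the Gaussian perturbation model are assumptions.

```python
# Illustrative sketch (not the paper's exact method): estimate the largest per-pixel
# Gaussian noise an input tolerates while a layer's feature stays close to its
# original value; the entropy of that noise proxies discarded pixel-wise information.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AvgPool2d(4))
x = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    f_ref = layer(x)

log_sigma = torch.full_like(x, -3.0, requires_grad=True)  # per-pixel noise scale
opt = torch.optim.Adam([log_sigma], lr=0.05)
lam = 10.0  # penalty weight on feature distortion (assumed hyperparameter)

for _ in range(200):
    noise = torch.randn_like(x) * log_sigma.exp()
    distortion = ((layer(x + noise) - f_ref) ** 2).mean()
    # Maximize noise entropy (proportional to log sigma) subject to small feature change.
    loss = lam * distortion - log_sigma.mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Differential entropy of an axis-aligned Gaussian: sum_i 0.5*log(2*pi*e*sigma_i^2)
entropy = (0.5 * math.log(2 * math.pi * math.e) + log_sigma).sum()
print(f"estimated tolerated-noise entropy: {entropy.item():.1f} nats")
```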

Interpretable Complex-Valued Neural Networks for Privacy Protection

1 code implementation ICLR 2020 Liyao Xiang, Haotian Ma, Hao Zhang, Yifan Zhang, Jie Ren, Quanshi Zhang

Previous studies have found that an adversary can often infer unintended input information from intermediate-layer features.
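The underlying privacy mechanism is to hide an intermediate feature behind a secret rotation in the complex plane, so that only the party holding the rotation angle can recover the real feature. The following numpy sketch illustrates that general idea only; the shapes, the decoy component, and how the angle is drawn are assumptions, not the paper's architecture.

```python
# Minimal sketch of hiding a feature with a random complex-plane rotation
# (illustrative; shapes and the decoy component are assumptions).
import numpy as np

rng = np.random.default_rng(0)
feature = rng.standard_normal((8, 8))   # real intermediate-layer feature
decoy = rng.standard_normal((8, 8))     # assumed decoy/fooling component
theta = rng.uniform(0, 2 * np.pi)       # secret rotation angle

# Encode: combine feature and decoy into a complex tensor, then rotate by theta.
encoded = (feature + 1j * decoy) * np.exp(1j * theta)

# Without theta, an observer sees an arbitrary mixture of feature and decoy;
# with theta, the owner rotates back and reads off the real part exactly.
decoded = (encoded * np.exp(-1j * theta)).real
print(np.allclose(decoded, feature))    # True
```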

Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

no code implementations 8 Jan 2019 Zenan Ling, Haotian Ma, Yu Yang, Robert C. Qiu, Song-Chun Zhu, Quanshi Zhang

In this paper, we propose to disentangle and interpret contextual effects that are encoded in a pre-trained deep neural network.

Interpreting CNNs via Decision Trees

no code implementations CVPR 2019 Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu

We propose to learn a decision tree, which clarifies the specific reason for each prediction made by the CNN at the semantic level.

Object
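A common way to realize this kind of tree-based explanation is to fit a shallow surrogate decision tree that mimics a trained network's predictions and exposes human-readable decision rules. The sketch below shows that generic surrogate pattern with scikit-learn; the synthetic data, the MLP stand-in for the CNN, and the tree depth are assumptions, not the paper's exact training procedure.

```python
# Generic surrogate-tree sketch (not the paper's exact method): fit a shallow
# decision tree on a model's inputs to mimic and explain its predictions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)           # synthetic labels (assumed task)

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
y_hat = black_box.predict(X)                      # predictions to be explained

surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_hat)
print("fidelity to the network:", surrogate.score(X, y_hat))
print(export_text(surrogate, feature_names=[f"feat_{i}" for i in range(10)]))
```

Each root-to-leaf path in the printed tree reads as a rule explaining which inputs drive a given prediction, which is the spirit of the semantic-level rationales the paper describes.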
