Search Results for author: Ye Yu

Found 18 papers, 7 papers with code

Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection

1 code implementation · 24 Jul 2023 · Christopher Clarke, Matthew Hall, Gaurav Mittal, Ye Yu, Sandra Sajeev, Jason Mars, Mei Chen

In this paper, we present Rule By Example (RBE): a novel exemplar-based contrastive learning approach for learning from logical rules for the task of textual content moderation.

Contrastive Learning · Hate Speech Detection

Rethinking Multimodal Content Moderation from an Asymmetric Angle with Mixed-modality

no code implementations · 17 May 2023 · Jialin Yuan, Ye Yu, Gaurav Mittal, Matthew Hall, Sandra Sajeev, Mei Chen

There is a rapidly growing need for multimodal content moderation (CM) as more and more content on social media is multimodal in nature.

PivoTAL: Prior-Driven Supervision for Weakly-Supervised Temporal Action Localization

no code implementations · CVPR 2023 · Mamshad Nayeem Rizve, Gaurav Mittal, Ye Yu, Matthew Hall, Sandra Sajeev, Mubarak Shah, Mei Chen

To address this, we present PivoTAL, Prior-driven Supervision for Weakly-supervised Temporal Action Localization, to approach WTAL from a localization-by-localization perspective by learning to localize the action snippets directly.

Weakly Supervised Action Localization · Weakly Supervised Temporal Action Localization

ProTeGe: Untrimmed Pretraining for Video Temporal Grounding by Video Temporal Grounding

no code implementations · CVPR 2023 · Lan Wang, Gaurav Mittal, Sandra Sajeev, Ye Yu, Matthew Hall, Vishnu Naresh Boddeti, Mei Chen

We present ProTeGe as the first method to perform VTG-based untrimmed pretraining to bridge the gap between trimmed pretrained backbones and downstream VTG tasks.

text similarity

Pseudo-Label Generation and Various Data Augmentation for Semi-Supervised Hyperspectral Object Detection

1 code implementation · Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022 · Jun Yu, Liwen Zhang, Shenshen Du, Hao Chang, Keda Lu, Zhong Zhang, Ye Yu, Lei Wang, Qiang Ling

To overcome these difficulties, this paper first selects a small set of data augmentation methods suited to the characteristics of hyperspectral images, improving the accuracy of the supervised model trained on the labeled training set.

Data Augmentation · object-detection +3

BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation

no code implementations · 1 Aug 2022 · Ye Yu, Jialin Yuan, Gaurav Mittal, Li Fuxin, Mei Chen

It captures object motion in the video via a novel optical flow calibration module that fuses the segmentation mask with optical flow estimation to improve within-object optical flow smoothness and reduce noise at object boundaries.

Ranked #1 on Video Object Segmentation on DAVIS 2017 (test-dev) (using extra training data)

Object · Optical Flow Estimation +6

A viable framework for semi-supervised learning on realistic dataset

2 code implementations · Machine Learning 2022 · Hao Chang, Guochen Xie, Jun Yu, Qiang Ling, Fang Gao, Ye Yu

Semi-supervised Fine-Grained Recognition is a challenging task due to the difficulty of data imbalance, high inter-class similarity and domain mismatch.

GateHUB: Gated History Unit with Background Suppression for Online Action Detection

no code implementations · CVPR 2022 · Junwen Chen, Gaurav Mittal, Ye Yu, Yu Kong, Mei Chen

We present GateHUB, Gated History Unit with Background Suppression, that comprises a novel position-guided gated cross-attention mechanism to enhance or suppress parts of the history as per how informative they are for current frame prediction.

Online Action Detection · Optical Flow Estimation
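As a rough illustration of enhancing or suppressing history for the current frame, the sketch below multiplies cross-attention scores by a per-frame gate before normalisation. The gating scheme, vector shapes, and values are simplified assumptions for illustration, not GateHUB's actual position-guided design:

```python
import math

def gated_cross_attention(query, history, gates):
    """Cross-attention from the current frame (query) over history frames.
    A per-frame gate in [0, 1] scales each history frame's attention score
    before softmax, suppressing frames deemed uninformative."""
    scores = [g * sum(q * k for q, k in zip(query, h))
              for h, g in zip(history, gates)]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # normalised attention over history
    dim = len(query)
    return [sum(w * h[i] for w, h in zip(weights, history)) for i in range(dim)]

query = [1.0, 0.0]                                # current-frame feature
history = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # past-frame features
gates = [1.0, 0.2, 0.8]                           # suppress the second frame
ctx = gated_cross_attention(query, history, gates)
```

With a single history frame the gate has no effect after normalisation, so the context is that frame itself; with several frames, lowering a gate shifts attention away from that frame.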

MUSE: Feature Self-Distillation with Mutual Information and Self-Information

no code implementations · 25 Oct 2021 · Yu Gong, Ye Yu, Gaurav Mittal, Greg Mori, Mei Chen

Importantly, we argue and empirically demonstrate that MUSE, compared to other feature discrepancy functions, is a more functional proxy to introduce dependency and effectively improve the expressivity of all features in the knowledge distillation framework.

Image Classification · Knowledge Distillation +2

Self-supervised Outdoor Scene Relighting

no code implementations · ECCV 2020 · Ye Yu, Abhimitra Meka, Mohamed Elgharib, Hans-Peter Seidel, Christian Theobalt, William A. P. Smith

Outdoor scene relighting is a challenging problem that requires good understanding of the scene geometry, illumination and albedo.

Revisiting Dynamic Convolution via Matrix Decomposition

1 code implementation · ICLR 2021 · Yunsheng Li, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Dongdong Chen, Ye Yu, Lu Yuan, Zicheng Liu, Mei Chen, Nuno Vasconcelos

It has two limitations: (a) it increases the number of convolutional weights by K-times, and (b) the joint optimization of dynamic attention and static convolution kernels is challenging.

Dimensionality Reduction
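The aggregation this paper revisits can be sketched in a few lines: dynamic convolution forms an input-dependent kernel as an attention-weighted sum of K static kernels, which is exactly why the weight count grows K-fold (limitation (a) above). The flattened-kernel representation and hand-set attention values below are illustrative assumptions, not the paper's formulation:

```python
import random

def dynamic_kernel(attention, kernels):
    """Aggregate K static kernels into one input-dependent kernel:
    W(x) = sum_k pi_k(x) * W_k, where pi is attention over the K kernels."""
    assert abs(sum(attention) - 1.0) < 1e-9  # attention is a softmax output
    size = len(kernels[0])
    return [sum(a * w[i] for a, w in zip(attention, kernels))
            for i in range(size)]

K, size = 4, 9  # four 3x3 kernels, stored flattened
random.seed(0)
kernels = [[random.gauss(0, 1) for _ in range(size)] for _ in range(K)]
attention = [0.7, 0.1, 0.1, 0.1]  # input-dependent weights from an attention branch
w = dynamic_kernel(attention, kernels)
print(len(w))  # -> 9: one aggregated kernel, but K * size weights were stored
```

The storage cost of keeping all K kernels, plus the coupling between the attention and the kernels during training, are the two limitations the matrix-decomposition reformulation targets.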

Stronger NAS with Weaker Predictors

1 code implementation · NeurIPS 2021 · Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang Wang, Zicheng Liu, Mei Chen, Lu Yuan

We propose a paradigm shift from fitting the whole architecture space using one strong predictor, to progressively fitting a search path towards the high-performance sub-space through a set of weaker predictors.

Neural Architecture Search
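The progressive weak-predictor idea can be illustrated on a toy search space. Everything below (the bit-vector architectures, the hidden objective, the crude per-feature predictor) is invented for illustration and bears no relation to the paper's actual search space or predictors:

```python
import itertools
import random

random.seed(1)
# Toy search space: every 6-bit choice vector is an "architecture".
space = list(itertools.product([0, 1], repeat=6))
true_acc = lambda a: sum(w * x for w, x in zip([5, 3, 2, 1, 1, 1], a))  # hidden objective

def weak_predictor(samples):
    """Fit a crude per-feature score from already-evaluated architectures."""
    scores = [0.0] * 6
    for arch, acc in samples:
        for i, bit in enumerate(arch):
            scores[i] += acc if bit else -acc
    return lambda a: sum(s * x for s, x in zip(scores, a))

evaluated = [(a, true_acc(a)) for a in random.sample(space, 10)]  # initial random pool
for _ in range(4):  # progressively refit a weak predictor, move toward the good sub-space
    predict = weak_predictor(evaluated)
    seen = {a for a, _ in evaluated}
    candidates = sorted((a for a in space if a not in seen), key=predict, reverse=True)
    evaluated += [(a, true_acc(a)) for a in candidates[:5]]  # evaluate top predictions

best = max(evaluated, key=lambda t: t[1])
print(best)  # best architecture found after evaluating 30 of 64 candidates
```

Each predictor only needs to rank the current neighbourhood well, not model the whole space, which is the paradigm shift the abstract describes.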

Outdoor inverse rendering from a single image using multiview self-supervision

1 code implementation · 12 Feb 2021 · Ye Yu, William A. P. Smith

In this paper we show how to perform scene-level inverse rendering to recover shape, reflectance and lighting from a single, uncontrolled image using a fully convolutional neural network.

Intrinsic Image Decomposition · Inverse Rendering

Weak NAS Predictor Is All You Need

no code implementations · 1 Jan 2021 · Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang Wang, Zicheng Liu, Mei Chen, Lu Yuan

Rather than expecting a single strong predictor to model the whole space, we seek a progressive line of weak predictors that can connect a path to the best architecture, thus greatly simplifying the learning task of each predictor.

Neural Architecture Search

SPRING: A Sparsity-Aware Reduced-Precision Monolithic 3D CNN Accelerator Architecture for Training and Inference

no code implementations · 2 Sep 2019 · Ye Yu, Niraj K. Jha

To take advantage of sparsity, some accelerator designs explore sparsity encoding and evaluation on CNN accelerators.

Hardware Architecture

InverseRenderNet: Learning single image inverse rendering

1 code implementation · CVPR 2019 · Ye Yu, William A. P. Smith

By incorporating a differentiable renderer, our network can learn from self-supervision.

Inverse Rendering
