Search Results for author: Jianing Yang

Found 6 papers, 5 papers with code

SUOD: Toward Scalable Unsupervised Outlier Detection

2 code implementations • 8 Feb 2020 • Yue Zhao, Xueying Ding, Jianing Yang, Haoping Bai

In this study, we propose a three-module acceleration framework called SUOD to expedite the training and prediction of a large number of unsupervised detection models.

Knowledge Distillation • Outlier Detection • +1
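
A minimal sketch of the general idea, assuming PyOD-style detectors, scikit-learn's random projection, and joblib for parallelism; the names and structure below are illustrative and not the official SUOD API.

```python
# Illustrative sketch (not the official SUOD API): reduce dimensionality with a
# random projection, then fit a large heterogeneous pool of unsupervised
# detectors in parallel -- two of the ideas SUOD combines to speed things up.
import numpy as np
from joblib import Parallel, delayed
from sklearn.random_projection import GaussianRandomProjection
from pyod.models.lof import LOF
from pyod.models.hbos import HBOS
from pyod.models.iforest import IForest

X = np.random.rand(10_000, 200)            # unlabeled data (rows = samples)

# Data reduction: project high-dimensional features to a smaller space.
X_low = GaussianRandomProjection(n_components=32, random_state=0).fit_transform(X)

# A pool of heterogeneous unsupervised detectors.
detectors = [LOF(n_neighbors=k) for k in (10, 20, 40)] + [HBOS(), IForest()]

def fit_one(detector, data):
    detector.fit(data)                     # unsupervised fit: no labels needed
    return detector

# Parallel scheduling: fit the pool across workers.
fitted = Parallel(n_jobs=-1)(delayed(fit_one)(d, X_low) for d in detectors)

# Aggregate outlier scores, e.g. by averaging across the ensemble.
scores = np.mean([d.decision_function(X_low) for d in fitted], axis=0)
```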

SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection

1 code implementation • 11 Mar 2020 • Yue Zhao, Xiyang Hu, Cheng Cheng, Cong Wang, Changlin Wan, Wen Wang, Jianing Yang, Haoping Bai, Zheng Li, Cao Xiao, Yunlong Wang, Zhi Qiao, Jimeng Sun, Leman Akoglu

Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples, with numerous high-stakes applications including fraud detection and intrusion detection.

Dimensionality Reduction • Fraud Detection • +2

LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

1 code implementation • 21 Sep 2023 • Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F. Fouhey, Joyce Chai

While existing approaches often rely on extensive labeled data or exhibit limitations in handling complex language queries, we propose LLM-Grounder, a novel zero-shot, open-vocabulary, Large Language Model (LLM)-based 3D visual grounding pipeline.

Language Modelling • Large Language Model • +3
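
A hedged sketch of the agent-style pipeline the abstract describes: an LLM decomposes the query into noun phrases, calls an open-vocabulary 3D grounding tool per phrase, and reasons over the returned candidates. `llm` and `ground_noun_phrase` below are placeholder stand-ins, not functions from the LLM-Grounder codebase.

```python
# Hypothetical agent loop for zero-shot 3D visual grounding with an LLM.
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str
    bbox: tuple          # 3D box as (x, y, z, dx, dy, dz)
    score: float

def ground_noun_phrase(phrase: str) -> list[Candidate]:
    """Stand-in for an open-vocabulary 3D grounding tool."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Stand-in for a chat-LLM call used for planning and reasoning."""
    raise NotImplementedError

def ground(query: str) -> Candidate:
    # 1. Ask the LLM to decompose the query into target / landmark noun phrases.
    plan = llm(f"Split this grounding query into target and landmark objects, "
               f"comma-separated: {query}")
    phrases = [p.strip() for p in plan.split(",") if p.strip()]

    # 2. Call the grounding tool once per phrase to collect candidate 3D boxes.
    candidates = {p: ground_noun_phrase(p) for p in phrases}

    # 3. Let the LLM reason over spatial relations and pick the best target box.
    choice = llm(f"Query: {query}\nCandidates: {candidates}\n"
                 f"Return the index of the best candidate for the target.")
    target_phrase = phrases[0]
    return candidates[target_phrase][int(choice)]
```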

MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences

1 code implementation • NAACL 2021 • Jianing Yang, Yongxin Wang, Ruitao Yi, Yuying Zhu, Azaan Rehman, Amir Zadeh, Soujanya Poria, Louis-Philippe Morency

Human communication is multimodal in nature; it is through multiple modalities, such as language, voice, and facial expressions, that opinions and emotions are expressed.

Emotion Recognition • Multimodal Sentiment Analysis

What Gives the Answer Away? Question Answering Bias Analysis on Video QA Datasets

No code implementations • 7 Jul 2020 • Jianing Yang, Yuying Zhu, Yongxin Wang, Ruitao Yi, Amir Zadeh, Louis-Philippe Morency

In this paper, we analyze QA biases in popular video question answering datasets and discover that pretrained language models can answer 37-48% of questions correctly without using any multimodal context information, far exceeding the 20% random-guess baseline for 5-choose-1 multiple-choice questions.

Multiple-choice • Question Answering • +1
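
A hedged sketch of a question-only probe in that spirit: score each of the five answer options with a pretrained LM while withholding the video, then compare accuracy to the 20% chance level. GPT-2 is used here only as an example scorer, and the dataset iterator is a placeholder rather than a loader for any specific benchmark.

```python
# Question-only bias probe: the LM never sees the video, so accuracy well above
# 20% on 5-way multiple choice suggests the questions/answers leak the label.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def option_log_likelihood(question: str, option: str) -> float:
    """Average token log-likelihood of `question + option` under the LM."""
    ids = tokenizer(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)       # LM loss = mean cross-entropy
    return -out.loss.item()

def question_only_accuracy(examples) -> float:
    """`examples`: iterable of (question, [5 options], correct_index) tuples."""
    correct = 0
    total = 0
    for question, options, answer_idx in examples:
        scores = [option_log_likelihood(question, o) for o in options]
        correct += int(max(range(len(options)), key=scores.__getitem__) == answer_idx)
        total += 1
    return correct / total                 # compare against the 0.20 chance level
```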
