Search Results for author: Haohan Wang

Found 88 papers, 40 papers with code

InfoFlood: Jailbreaking Large Language Models with Information Overload

no code implementations13 Jun 2025 Advait Yadav, Haibo Jin, Man Luo, Jun Zhuang, Haohan Wang

Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains.

PREMISE: Scalable and Strategic Prompt Optimization for Efficient Mathematical Reasoning in Large Models

no code implementations12 Jun 2025 Ye Yu, Yaoning Yu, Haohan Wang

Large reasoning models (LRMs) such as Claude 3.7 Sonnet and OpenAI o1 achieve strong performance on mathematical benchmarks using lengthy chain-of-thought (CoT) reasoning, but the resulting traces are often unnecessarily verbose.

GSM8K Mathematical Reasoning

Reasoning Can Hurt the Inductive Abilities of Large Language Models

no code implementations30 May 2025 Haibo Jin, Peiyan Zhang, Man Luo, Haohan Wang

Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning - inferring latent rules from sparse examples - remains limited.

Diagnostic

From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models

no code implementations30 May 2025 Haibo Jin, Peiyan Zhang, Peiran Wang, Man Luo, Haohan Wang

Large foundation models (LFMs) are susceptible to two distinct vulnerabilities: hallucinations and jailbreak attacks.

SIPDO: Closed-Loop Prompt Optimization via Synthetic Data Feedback

no code implementations26 May 2025 Yaoning Yu, Ye Yu, Kai Wei, Haojing Luo, Haohan Wang

Prompt quality plays a critical role in the performance of large language models (LLMs), motivating a growing body of work on prompt optimization.

Prompt Learning Question Answering +1

Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation

no code implementations24 May 2025 Jun Zhuang, Haibo Jin, Ye Zhang, Zhengjian Kang, Wenbin Zhang, Gaby G. Dagher, Haohan Wang

While prior work has applied intent detection to enhance LLMs' moderation guardrails, showing a significant success against content-level jailbreaks, the robustness of these intent-aware guardrails under malicious manipulations remains under-explored.

Intent Detection Natural Language Understanding +1

Beamforming-Codebook-Aware Channel Knowledge Map Construction for Multi-Antenna Systems

1 code implementation22 May 2025 Haohan Wang, Xu Shi, Hengyu Zhang, Yashuai Cao, Jintao Wang

Channel knowledge map (CKM) has emerged as a crucial technology for next-generation communication, enabling the construction of high-fidelity mappings between spatial environments and channel parameters via electromagnetic information analysis.

Prompt Stability Matters: Evaluating and Optimizing Auto-Generated Prompt in General-Purpose Systems

no code implementations19 May 2025 Ke Chen, Yufei Zhou, Xitong Zhang, Haohan Wang

In this work, we bring attention to prompt stability-the consistency of model responses across repeated executions-as a key factor for building robust and effective prompt generation systems.

IMPROVE: Iterative Model Pipeline Refinement and Optimization Leveraging LLM Agents

no code implementations25 Feb 2025 Eric Xue, Zeyi Huang, Yuyang Ji, Haohan Wang

These findings establish Iterative Refinement as an effective new strategy for LLM-driven ML automation and position IMPROVE as an accessible solution for building high-quality computer vision models without requiring ML expertise.

Attribute Large Language Model

Examining Alignment of Large Language Models through Representative Heuristics: The Case of Political Stereotypes

1 code implementation24 Jan 2025 Sullam Jeoung, Yubin Ge, Haohan Wang, Jana Diesner

Drawing on cognitive science findings related to representativeness heuristics -- where individuals readily recall the representative attribute of a target group in a way that leads to exaggerated beliefs -- we scrutinize LLM responses through this heuristics lens.

Attribute

Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization

1 code implementation4 Dec 2024 Peiyan Zhang, Haibo Jin, Leyang Hu, Xinnuo Li, Liying Kang, Man Luo, Yangqiu Song, Haohan Wang

However, relying solely on such feedback can be limited when the adjustments made in response to this feedback are either too small or fluctuate irregularly, potentially slowing down or even stalling the optimization process.

Prompt Engineering

Conflict-Aware Adversarial Training

no code implementations21 Oct 2024 Zhiyu Xue, Haohan Wang, Yao Qin, Ramtin Pedarsani

Adversarial training is the most effective method to obtain adversarial robustness for deep neural networks by directly involving adversarial samples in the training procedure.

Adversarial Robustness

DistDD: Distributed Data Distillation Aggregation through Gradient Matching

no code implementations11 Oct 2024 Peiran Wang, Haohan Wang

In this paper, we introduce DistDD, a novel approach within the federated learning framework that reduces the need for repetitive communication by distilling data directly on clients' devices.

Federated Learning Neural Architecture Search

Simple Unsupervised Knowledge Distillation With Space Similarity

no code implementations20 Sep 2024 Aditya Singh, Haohan Wang

In this paper, instead of heuristically constructing preservation worthy relationships between samples, we directly motivate the student to model the teacher's embedding manifold.

Knowledge Distillation Self-Supervised Learning

A Quantitative Approach for Evaluating Disease Focus and Interpretability of Deep Learning Models for Alzheimer's Disease Classification

no code implementations7 Sep 2024 Thomas Yu CHow Tam, Litian Liang, Ke Chen, Haohan Wang, Wei Wu

To bridge this gap, in this study we developed a quantitative disease-focusing strategy that first enhances the interpretability of DL models using saliency maps and brain segmentations; we then propose a disease-focus (DF) score that quantifies how much a DL model focuses on brain areas relevant to AD pathology, based on clinically known MRI-based pathological regions of AD.

Data Augmentation

Quantitative Evaluation of the Saliency Map for Alzheimer's Disease Classifier with Anatomical Segmentation

1 code implementation11 Jul 2024 Yihan Zhang, Xuanshuo Zhang, Wei Wu, Haohan Wang

In order to leverage the fact that the brain volume shrinkage happens in AD patients during disease progression, we define a new evaluation metric, brain volume change score (VCS), by computing the average Pearson correlation of the brain volume changes and the saliency values of a model in different brain regions for each patient.
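The VCS metric described in this abstract (per-patient Pearson correlation between regional brain-volume changes and saliency values, averaged over patients) can be sketched numerically. This is an illustrative sketch, not the authors' code; the array shapes and toy values are hypothetical:

```python
import numpy as np

def vcs_score(volume_changes, saliency_values):
    """Sketch of a brain volume change score (VCS): for each patient,
    Pearson-correlate per-region volume changes with per-region
    saliency values, then average across patients.
    Both inputs are (n_patients, n_regions) arrays (assumed shapes)."""
    corrs = [np.corrcoef(vc, sal)[0, 1]
             for vc, sal in zip(volume_changes, saliency_values)]
    return float(np.mean(corrs))

# toy example with 2 patients and 4 brain regions
vc = np.array([[0.1, 0.3, 0.2, 0.4], [0.2, 0.1, 0.4, 0.3]])
sal = np.array([[0.2, 0.6, 0.4, 0.8], [0.1, 0.0, 0.2, 0.15]])
print(round(vcs_score(vc, sal), 3))
```

A score near 1 indicates the model's saliency closely tracks where volume loss occurs.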

JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models

1 code implementation26 Jun 2024 Haibo Jin, Leyang Hu, Xinuo Li, Peiyan Zhang, Chonghan Chen, Jun Zhuang, Haohan Wang

The rapid evolution of artificial intelligence (AI) through developments in Large Language Models (LLMs) and Vision-Language Models (VLMs) has brought significant advancements across various technological domains.

LLM Jailbreak Survey

GenoTEX: An LLM Agent Benchmark for Automated Gene Expression Data Analysis

1 code implementation21 Jun 2024 Haoyang Liu, ShuYu Chen, Ye Zhang, Haohan Wang

To support the evaluation and development of such methods, we introduce GenoTEX, a benchmark dataset for the automated analysis of gene expression data.

AI Agent AutoML +2

From Tissue Plane to Organ World: A Benchmark Dataset for Multimodal Biomedical Image Registration using Deep Co-Attention Networks

1 code implementation6 Jun 2024 Yifeng Wang, Weipeng Li, Thomas Pearce, Haohan Wang

To gain the most information from this multimodal, multiscale approach, it is desirable to identify precisely where a histologic tissue section was taken from within the organ in order to correlate with the tissue features in exactly the same organ region.

Image Registration

Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters

no code implementations30 May 2024 Haibo Jin, Andy Zhou, Joe D. Menke, Haohan Wang

To address this issue, we introduce JAMBench, a harmful behavior benchmark designed to trigger and evaluate moderation guardrails.

Red Teaming

Approximate Nullspace Augmented Finetuning for Robust Vision Transformers

1 code implementation15 Mar 2024 Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang

Enhancing the robustness of deep learning models, particularly in the realm of vision transformers (ViTs), is crucial for their real-world deployment.

Towards Adversarially Robust Dataset Distillation by Curvature Regularization

1 code implementation15 Mar 2024 Eric Xue, Yijiang Li, Haoyang Liu, Peiran Wang, Yifan Shen, Haohan Wang

Extensive empirical experiments suggest that our method not only outperforms standard adversarial training on both accuracy and robustness with less computation overhead but is also capable of generating robust distilled datasets that can withstand various adversarial attacks.

Adversarial Robustness Dataset Distillation

GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models

no code implementations5 Feb 2024 Haibo Jin, Ruoxi Chen, Andy Zhou, Yang Zhang, Haohan Wang

Our system of different roles will leverage this knowledge graph to generate new jailbreaks, which have proved effective in inducing LLMs to generate unethical or guideline-violating responses.

Sentence

Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks

1 code implementation30 Jan 2024 Andy Zhou, Bo Li, Haohan Wang

Despite advances in AI alignment, large language models (LLMs) remain vulnerable to adversarial attacks or jailbreaking, in which adversaries can modify prompts to induce unwanted behavior.

ADAPT: Alzheimer Diagnosis through Adaptive Profiling Transformers

no code implementations12 Jan 2024 Yifeng Wang, Ke Chen, Haohan Wang

Automated diagnosis of Alzheimer's Disease (AD) from brain imaging, such as magnetic resonance imaging (MRI), has become increasingly important and has attracted many deep learning contributions from the community.

Generate E-commerce Product Background by Integrating Category Commonality and Personalized Style

1 code implementation20 Dec 2023 Haohan Wang, Wei Feng, Yaoyu Li, Zheng Zhang, Jingjing Lv, Junjie Shen, Zhangang Lin, Jingping Shao

Furthermore, for products with specific and fine-grained requirements in layout, elements, etc, a Personality-Wise Generator is devised to learn such personalized style directly from a reference image to resolve textual ambiguities, and is trained in a self-supervised manner for more efficient training data usage.


Dataset Distillation via the Wasserstein Metric

no code implementations30 Nov 2023 Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang

Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead.

Dataset Distillation

Beyond Pixels: Exploring Human-Readable SVG Generation for Simple Images with Vision Language Models

no code implementations27 Nov 2023 Tong Zhang, Haoyang Liu, Peiyan Zhang, Yuxuan Cheng, Haohan Wang

Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.

Vector Graphics

Choosing Wisely and Learning Deeply: Selective Cross-Modality Distillation via CLIP for Domain Generalization

1 code implementation26 Nov 2023 Jixuan Leng, Yijiang Li, Haohan Wang

SCMD leverages the capabilities of large vision-language models, specifically CLIP, to train a more efficient model, ensuring it acquires robust generalization capabilities across unseen domains.

Domain Generalization

Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models

1 code implementation NeurIPS 2023 Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang

We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation.

Data Augmentation Domain Generalization +2

Adaptive Test-Time Personalization for Federated Learning

1 code implementation NeurIPS 2023 Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He

To tackle this challenge, we propose a novel algorithm called ATP that adaptively learns the adaptation rates for each module in the model from distribution shifts among source domains.

Personalized Federated Learning Test-time Adaptation

ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning

1 code implementation8 Oct 2023 Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji

When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.

Personalized Federated Learning

Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models

2 code implementations6 Oct 2023 Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang

By leveraging the in-context learning ability of LMs, we integrate Monte Carlo Tree Search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections for proficient exploration and enhanced decision-making.

Code Generation Decision Making +5

Towards Understanding Adversarial Transferability in Federated Learning

no code implementations1 Oct 2023 Yijiang Li, Ying Gao, Haohan Wang

We investigate a specific security risk in FL: a group of malicious clients has impacted the model during training by disguising their identities and acting as benign clients but later switching to an adversarial role.

Attribute Federated Learning

A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance

1 code implementation ICCV 2023 Zeyi Huang, Andy Zhou, Zijian Lin, Mu Cai, Haohan Wang, Yong Jae Lee

Domain generalization studies the problem of training a model with samples from several domains (or distributions) and then testing the model with samples from a new, unseen domain.

Domain Generalization Knowledge Distillation +3

Foundation Model-oriented Robustness: Robust Image Model Evaluation with Pretrained Models

no code implementations21 Aug 2023 Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang

Machine learning has demonstrated remarkable performance over finite datasets, yet whether scores on fixed benchmarks can sufficiently indicate a model's performance in the real world is still under discussion.

image-classification Image Classification +1

Towards Trustworthy and Aligned Machine Learning: A Data-centric Survey with Causality Perspectives

no code implementations31 Jul 2023 Haoyang Liu, Maheep Chaudhary, Haohan Wang

Accordingly, this survey presents the background of trustworthy machine learning development using a unified set of concepts, connects this language to Pearl's causal hierarchy, and finally discusses methods explicitly inspired by causality literature.

Adversarial Robustness Fairness +2

Optimizing the Collaboration Structure in Cross-Silo Federated Learning

1 code implementation10 Jun 2023 Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He

In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized.

Federated Learning

Leveraging Large Language Models for Scalable Vector Graphics-Driven Image Understanding

no code implementations9 Jun 2023 Mu Cai, Zeyi Huang, Yuheng Li, Utkarsh Ojha, Haohan Wang, Yong Jae Lee

To study what the LLM can do with this XML-based textual description of images, we test the LLM on three broad computer vision tasks: (i) visual reasoning and question answering, (ii) image classification under distribution shift and few-shot learning, and (iii) generating new images using visual prompting.

Few-Shot Learning image-classification +7

BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning

1 code implementation28 May 2023 Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama

To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.

Calibrated Teacher for Sparsely Annotated Object Detection

1 code implementation14 Mar 2023 Haohan Wang, Liang Liu, Boshen Zhang, Jiangning Zhang, Wuhao Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang

Recent works on sparsely annotated object detection alleviate this problem by generating pseudo labels for the missing annotations.

Object object-detection +2

Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection

1 code implementation30 Nov 2022 Kun Xiang, Xing Zhang, Jinwen She, Jinpeng Liu, Haohan Wang, Shiqi Deng, Shancheng Jiang

As the COVID-19 pandemic puts pressure on healthcare systems worldwide, the computed tomography image based AI diagnostic system has become a sustainable solution for early diagnosis.

Adversarial Defense Adversarial Robustness +1

A Principled Evaluation Protocol for Comparative Investigation of the Effectiveness of DNN Classification Models on Similar-but-non-identical Datasets

no code implementations5 Sep 2022 Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve

Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.

MRCLens: an MRC Dataset Bias Detection Toolkit

no code implementations18 Jul 2022 Yifan Zhong, Haohan Wang, Eric P. Xing

Many recent neural models have shown remarkable empirical results in Machine Reading Comprehension, but evidence suggests the models sometimes exploit dataset biases when predicting and fail to generalize on out-of-sample data.

Bias Detection Machine Reading Comprehension

Robustar: Interactive Toolbox Supporting Precise Data Annotation for Robust Vision Learning

1 code implementation18 Jul 2022 Chonghan Chen, Haohan Wang, Leyang Hu, Yuhao Zhang, Shuguang Lyu, Jingcheng Wu, Xinnuo Li, Linjing Sun, Eric P. Xing

We introduce the initial release of our software Robustar, which aims to improve the robustness of vision classification machine learning models through a data-driven perspective.

BIG-bench Machine Learning image-classification +1

Efficiently Leveraging Multi-level User Intent for Session-based Recommendation via Atten-Mixer Network

1 code implementation26 Jun 2022 Peiyan Zhang, Jiayan Guo, Chaozhuo Li, Yueqi Xie, Jaeboum Kim, Yan Zhang, Xing Xie, Haohan Wang, Sunghun Kim

Based on this observation, we intuitively propose to remove the GNN propagation part, while the readout module will take on more responsibility in the model reasoning process.

Session-Based Recommendations

Bear the Query in Mind: Visual Grounding with Query-conditioned Convolution

no code implementations18 Jun 2022 Chonghan Chen, Qi Jiang, Chih-Hao Wang, Noel Chen, Haohan Wang, Xiang Li, Bhiksha Raj

With our proposed QCM, the downstream fusion module receives visual features that are more discriminative and focused on the desired object described in the expression, leading to more accurate predictions.

Visual Grounding

Toward Learning Robust and Invariant Representations with Alignment Regularization and Data Augmentation

1 code implementation4 Jun 2022 Haohan Wang, Zeyi Huang, Xindi Wu, Eric P. Xing

Finally, we test this simple technique we identify (worst-case data augmentation with squared l2 norm alignment regularization) and show that the benefits of this method outrun those of the specially designed methods.

Data Augmentation
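The combination this abstract names (worst-case data augmentation with a squared l2-norm alignment regularizer) can be sketched roughly. This is an illustrative sketch, not the authors' implementation; the linear model `W`, the augmentation list, and the weight `lam` are assumed placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_xent(logits, y):
    # standard cross-entropy over softmax outputs
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def objective(W, x, y, augmentations, lam=1.0):
    logits = x @ W
    task = softmax_xent(logits, y)
    # worst-case augmentation: the view that maximizes the task loss
    worst = max(augmentations, key=lambda a: softmax_xent(a(x) @ W, y))
    # squared l2 alignment between clean and worst-case logits
    reg = np.mean(np.sum((logits - worst(x) @ W) ** 2, axis=1))
    return task + lam * reg

W = rng.normal(size=(4, 3))
x = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)
augs = [lambda t: t + 0.1, lambda t: t * 0.9]
print(objective(W, x, y, augs) > 0)
```

Minimizing this objective pushes the representation to agree with its hardest augmented view, which is the invariance the paper studies.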

The Two Dimensions of Worst-case Training and the Integrated Effect for Out-of-domain Generalization

1 code implementation9 Apr 2022 Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing

Training with an emphasis on "hard-to-learn" components of the data has been proven as an effective method to improve the generalization of machine learning models, especially in the settings where robustness (e.g., generalization across distributions) is valued.

BIG-bench Machine Learning Domain Generalization

The Two Dimensions of Worst-Case Training and Their Integrated Effect for Out-of-Domain Generalization

no code implementations CVPR 2022 Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing

Training with an emphasis on "hard-to-learn" components of the data has been proven as an effective method to improve the generalization of machine learning models, especially in the settings where robustness (e.g., generalization across distributions) is valued.

BIG-bench Machine Learning Domain Generalization

CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing

no code implementations24 Dec 2021 Haibo Jin, Ruoxi Chen, Jinyin Chen, Haibin Zheng, Yang Zhang, Haohan Wang

Extensive experiments on MNIST, CIFAR-10, and a-ImageNet datasets and 7 models (LeNet, ResNet, and VGG) demonstrate the superiority of CatchBackdoor over the state-of-the-art methods, in terms of (1) \emph{effective} - it shows better detection performance, especially on stealthy attacks ($\sim$ $\times$ 2 on average); (2) \emph{extensible} - it is robust to trigger size and can conduct detection without benign examples.

DNN Testing

Measure and Improve Robustness in NLP Models: A Survey

no code implementations NAACL 2022 Xuezhi Wang, Haohan Wang, Diyi Yang

Despite robustness being an increasingly studied topic, it has been separately explored in applications like vision and NLP, with various definitions, evaluation and mitigation strategies in multiple lines of research.

Survey

Tradeoffs of Linear Mixed Models in Genome-wide Association Studies

no code implementations5 Nov 2021 Haohan Wang, Bryon Aragam, Eric Xing

Motivated by empirical arguments that are well-known from the genome-wide association studies (GWAS) literature, we study the statistical properties of linear mixed models (LMMs) applied to GWAS.

Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features

no code implementations5 Nov 2021 Haohan Wang, Zeyi Huang, HANLIN ZHANG, Yong Jae Lee, Eric Xing

Machine learning has demonstrated remarkable prediction accuracy over i.i.d. data, but the accuracy often drops when tested with data from another distribution.

BIG-bench Machine Learning

On the Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations

no code implementations1 Jan 2021 Haohan Wang, Zeyi Huang, Xindi Wu, Eric Xing

Data augmentation is one of the most popular techniques for improving the robustness of neural networks.

Data Augmentation

Learning Robust Models by Countering Spurious Correlations

no code implementations1 Jan 2021 Haohan Wang, Zeyi Huang, Eric Xing

In this paper, we formally study the generalization error bound for this setup with the knowledge of how the spurious features are associated with the label.

Domain Adaptation

Word Shape Matters: Robust Machine Translation with Visual Embedding

no code implementations20 Oct 2020 Haohan Wang, Peiyan Zhang, Eric P. Xing

Neural machine translation has achieved remarkable empirical performance over standard benchmark datasets, yet recent evidence suggests that the models can still fail easily when dealing with substandard inputs such as misspelled words. To overcome this issue, we introduce a new encoding heuristic for the input symbols of character-level NLP models: it encodes the shape of each character through images depicting the letters when printed.

Machine Translation Translation

Self-Challenging Improves Cross-Domain Generalization

8 code implementations ECCV 2020 Zeyi Huang, Haohan Wang, Eric P. Xing, Dong Huang

We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNN to the out-of-domain data.

Domain Generalization image-classification +1

High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks

1 code implementation CVPR 2020 Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing

We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).

Vocal Bursts Intensity Prediction
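A rough sketch of the frequency decomposition underlying this line of work: splitting an image into low- and high-frequency components via an FFT and a radial mask. This is an assumed illustration, not the paper's code; the cutoff radius `r` is a free parameter I introduce here:

```python
import numpy as np

def split_frequency(img, r):
    """Decompose a 2D image into low- and high-frequency parts.
    Frequencies within radius r of the spectrum center are 'low'."""
    f = np.fft.fftshift(np.fft.fft2(img))  # center the zero frequency
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= r
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

img = np.random.rand(32, 32)
low, high = split_frequency(img, r=8)
print(np.allclose(low + high, img))  # the two components sum back to the image
```

The paper's observation is that CNNs can pick up predictive signal from the `high` component that is invisible to humans, which helps explain both their generalization and their brittleness.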

High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks

1 code implementation28 May 2019 Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing

We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).

Adversarial Attack Vocal Bursts Intensity Prediction

Learning Robust Representations by Projecting Superficial Statistics Out

no code implementations ICLR 2019 Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing

We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.

Domain Generalization

Removing Confounding Factors Associated Weights in Deep Neural Networks Improves the Prediction Accuracy for Healthcare Applications

1 code implementation20 Mar 2018 Haohan Wang, Zhenglin Wu, Eric P. Xing

The proliferation of healthcare data has brought the opportunities of applying data-driven approaches, such as machine learning methods, to assist diagnosis.

EEG

On the Origin of Deep Learning

no code implementations24 Feb 2017 Haohan Wang, Bhiksha Raj

This paper is a review of the evolutionary history of deep learning models.

Deep Learning

SeDMiD for Confusion Detection: Uncovering Mind State from Time Series Brain Wave Data

no code implementations29 Nov 2016 Jingkang Yang, Haohan Wang, Jun Zhu, Eric P. Xing

In this paper, we propose an extension of State Space Model to work with different sources of information together with its learning and inference algorithms.

Time Series Time Series Analysis

Select-Additive Learning: Improving Generalization in Multimodal Sentiment Analysis

1 code implementation16 Sep 2016 Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing

In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis.

Multimodal Sentiment Analysis Sentiment Classification

Evaluating Protein-protein Interaction Predictors with a Novel 3-Dimensional Metric

no code implementations6 Nov 2015 Haohan Wang, Madhavi K. Ganapathiraju

In order for the predicted interactions to be directly adopted by biologists, the machine learning predictions have to be of high precision, regardless of recall.

Evaluation of Protein-protein Interaction Predictors with Noisy Partially Labeled Data Sets

no code implementations18 Sep 2015 Haohan Wang, Madhavi K. Ganapathiraju

In this paper, we focused on the problem that non-availability of accurately labeled testing data sets in the domain of protein-protein interaction (PPI) prediction may lead to biased evaluation results.

Multimodal Transfer Deep Learning with Applications in Audio-Visual Recognition

no code implementations9 Dec 2014 Seungwhan Moon, Suyoun Kim, Haohan Wang

We propose a transfer deep learning (TDL) framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality.

Deep Learning Video Recognition

Discovery of Important Crossroads in Road Network using Massive Taxi Trajectories

no code implementations9 Jul 2014 Ming Xu, Jianping Wu, Yiman Du, Haohan Wang, Geqi Qi, Kezhen Hu, Yun-Peng Xiao

However, none of the existing approaches addresses the problem of identifying network-wide important crossroads in real road networks.
