no code implementations • 13 Jun 2025 • Advait Yadav, Haibo Jin, Man Luo, Jun Zhuang, Haohan Wang
Large Language Models (LLMs) have demonstrated remarkable capabilities across various domains.
no code implementations • 12 Jun 2025 • Ye Yu, Yaoning Yu, Haohan Wang
Large reasoning models (LRMs) such as Claude 3.7 Sonnet and OpenAI o1 achieve strong performance on mathematical benchmarks using lengthy chain-of-thought (CoT) reasoning, but the resulting traces are often unnecessarily verbose.
no code implementations • 30 May 2025 • Haibo Jin, Peiyan Zhang, Man Luo, Haohan Wang
Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning - inferring latent rules from sparse examples - remains limited.
no code implementations • 30 May 2025 • Haibo Jin, Peiyan Zhang, Peiran Wang, Man Luo, Haohan Wang
Large foundation models (LFMs) are susceptible to two distinct vulnerabilities: hallucinations and jailbreak attacks.
no code implementations • 26 May 2025 • Yaoning Yu, Ye Yu, Kai Wei, Haojing Luo, Haohan Wang
Prompt quality plays a critical role in the performance of large language models (LLMs), motivating a growing body of work on prompt optimization.
no code implementations • 24 May 2025 • Jun Zhuang, Haibo Jin, Ye Zhang, Zhengjian Kang, Wenbin Zhang, Gaby G. Dagher, Haohan Wang
While prior work has applied intent detection to enhance LLMs' moderation guardrails, showing significant success against content-level jailbreaks, the robustness of these intent-aware guardrails under malicious manipulations remains underexplored.
1 code implementation • 22 May 2025 • Haohan Wang, Xu Shi, Hengyu Zhang, Yashuai Cao, Jintao Wang
Channel knowledge map (CKM) has emerged as a crucial technology for next-generation communication, enabling the construction of high-fidelity mappings between spatial environments and channel parameters via electromagnetic information analysis.
no code implementations • 19 May 2025 • Ke Chen, Yufei Zhou, Xitong Zhang, Haohan Wang
In this work, we bring attention to prompt stability-the consistency of model responses across repeated executions-as a key factor for building robust and effective prompt generation systems.
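A minimal sketch of how such stability might be measured, assuming repeated sampling and exact-match agreement as the consistency criterion; the `query_model` stub and the pairwise-agreement metric are illustrative assumptions, not the paper's definition:

```python
from itertools import combinations

def query_model(prompt: str) -> str:
    """Stub for a (stochastic) LLM call; replace with a real client."""
    raise NotImplementedError

def prompt_stability(prompt: str, n_runs: int = 10) -> float:
    # Run the same prompt repeatedly and score mean pairwise agreement.
    responses = [query_model(prompt) for _ in range(n_runs)]
    pairs = list(combinations(responses, 2))
    agreements = sum(a == b for a, b in pairs)
    return agreements / len(pairs)  # 1.0 = fully consistent responses
```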
1 code implementation • 31 Mar 2025 • Bang Liu, Xinfeng Li, Jiayi Zhang, Jinlin Wang, Tanjin He, Sirui Hong, Hongzhang Liu, Shaokun Zhang, Kaitao Song, Kunlun Zhu, Yuheng Cheng, Suyuchen Wang, Xiaoqiang Wang, Yuyu Luo, Haibo Jin, Peiyan Zhang, Ollie Liu, Jiaqi Chen, Huan Zhang, Zhaoyang Yu, Haochen Shi, Boyan Li, Dekun Wu, Fengwei Teng, Xiaojun Jia, Jiawei Xu, Jinyu Xiang, Yizhang Lin, Tianming Liu, Tongliang Liu, Yu Su, Huan Sun, Glen Berseth, Jianyun Nie, Ian Foster, Logan Ward, Qingyun Wu, Yu Gu, Mingchen Zhuge, Xiangru Tang, Haohan Wang, Jiaxuan You, Chi Wang, Jian Pei, Qiang Yang, Xiao-Liang Qi, Chenglin Wu
The advent of large language models (LLMs) has catalyzed a transformative shift in artificial intelligence, paving the way for advanced intelligent agents capable of sophisticated reasoning, robust perception, and versatile action across diverse domains.
Ranked #1 on Continual Learning on AIDS (using extra training data)
no code implementations • 25 Feb 2025 • Eric Xue, Zeyi Huang, Yuyang Ji, Haohan Wang
These findings establish Iterative Refinement as an effective new strategy for LLM-driven ML automation and position IMPROVE as an accessible solution for building high-quality computer vision models without requiring ML expertise.
1 code implementation • 5 Feb 2025 • Xingye Chen, Wei Feng, Zhenbang Du, Weizhen Wang, Yanyin Chen, Haohan Wang, Linkai Liu, Yaoyu Li, Jinyuan Zhao, Yu Li, Zheng Zhang, Jingjing Lv, Junjie Shen, Zhangang Lin, Jingping Shao, Yuanjie Shao, Xinge You, Changxin Gao, Nong Sang
In web data, advertising images are crucial for capturing user attention and improving advertising effectiveness.
1 code implementation • 24 Jan 2025 • Sullam Jeoung, Yubin Ge, Haohan Wang, Jana Diesner
Drawing on cognitive science findings on representativeness heuristics -- where individuals readily recall the representative attribute of a target group in a way that leads to exaggerated beliefs -- we scrutinize LLM responses through the lens of this heuristic.
1 code implementation • 4 Dec 2024 • Peiyan Zhang, Haibo Jin, Leyang Hu, Xinnuo Li, Liying Kang, Man Luo, Yangqiu Song, Haohan Wang
However, relying solely on such feedback can be limited when the adjustments made in response to this feedback are either too small or fluctuate irregularly, potentially slowing down or even stalling the optimization process.
no code implementations • 21 Oct 2024 • Zhiyu Xue, Haohan Wang, Yao Qin, Ramtin Pedarsani
Adversarial training is the most effective method to obtain adversarial robustness for deep neural networks by directly involving adversarial samples in the training procedure.
no code implementations • 11 Oct 2024 • Peiran Wang, Haohan Wang
In this paper, we introduce DistDD, a novel approach within the federated learning framework that reduces the need for repetitive communication by distilling data directly on clients' devices.
1 code implementation • 23 Sep 2024 • Siddhant Bikram Shah, Shuvam Shiwakoti, Maheep Chaudhary, Haohan Wang
We further compare the performance of MemeCLIP and zero-shot GPT-4 on the hate classification task.
Ranked #4 on Hateful Meme Classification on PrideMM
no code implementations • 20 Sep 2024 • Aditya Singh, Haohan Wang
In this paper, instead of heuristically constructing preservation worthy relationships between samples, we directly motivate the student to model the teacher's embedding manifold.
no code implementations • 11 Sep 2024 • Ke Chen, Yifeng Wang, Yufei Zhou, Haohan Wang
In the field of Alzheimer's disease diagnosis, segmentation and classification tasks are inherently interconnected.
no code implementations • 7 Sep 2024 • Thomas Yu CHow Tam, Litian Liang, Ke Chen, Haohan Wang, Wei Wu
To bridge this gap, we develop a quantitative disease-focusing strategy that first enhances the interpretability of DL models using saliency maps and brain segmentations; we then propose a disease-focus (DF) score that quantifies how much a DL model focuses on brain areas relevant to AD pathology, based on clinically known MRI-based pathological regions of AD.
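The paper's exact formula is not quoted here; the sketch below assumes a DF score of the form "share of saliency mass falling inside AD-relevant segmented regions", with hypothetical array inputs:

```python
import numpy as np

def disease_focus_score(saliency: np.ndarray, ad_mask: np.ndarray) -> float:
    """Illustrative DF score: fraction of total saliency mass inside
    brain regions known to be relevant to AD pathology. `saliency` is a
    non-negative 3D saliency map; `ad_mask` is a boolean 3D segmentation
    of AD-relevant regions with the same shape. The exact definition in
    the paper may differ."""
    saliency = np.abs(saliency)
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[ad_mask].sum() / total)
```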
1 code implementation • 1 Aug 2024 • Zhenbang Du, Wei Feng, Haohan Wang, Yaoyu Li, Jingsen Wang, Jian Li, Zheng Zhang, Jingjing Lv, Xin Zhu, Junsheng Jin, Junjie Shen, Zhangang Lin, Jingping Shao
In the e-commerce realm, compelling advertising images are pivotal for attracting customer attention.
1 code implementation • 11 Jul 2024 • Yihan Zhang, Xuanshuo Zhang, Wei Wu, Haohan Wang
To leverage the fact that brain volume shrinkage occurs in AD patients as the disease progresses, we define a new evaluation metric, the brain volume change score (VCS), computed as the average Pearson correlation between the brain volume changes and the model's saliency values across brain regions for each patient.
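A minimal sketch of the VCS computation as described above (average per-patient Pearson correlation between per-region volume changes and per-region saliency); the array layout and any preprocessing are assumptions:

```python
import numpy as np

def volume_change_score(volume_changes: np.ndarray,
                        saliency_by_region: np.ndarray) -> float:
    """VCS sketch. Both inputs are (n_patients, n_regions) arrays:
    per-region brain volume changes and the model's per-region saliency.
    Returns the Pearson correlation averaged over patients."""
    corrs = []
    for vc, sal in zip(volume_changes, saliency_by_region):
        # np.corrcoef returns the 2x2 correlation matrix; take the
        # off-diagonal entry, i.e. corr(vc, sal).
        corrs.append(np.corrcoef(vc, sal)[0, 1])
    return float(np.mean(corrs))
```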
1 code implementation • 26 Jun 2024 • Haibo Jin, Leyang Hu, Xinuo Li, Peiyan Zhang, Chonghan Chen, Jun Zhuang, Haohan Wang
The rapid evolution of artificial intelligence (AI) through developments in Large Language Models (LLMs) and Vision-Language Models (VLMs) has brought significant advancements across various technological domains.
1 code implementation • 21 Jun 2024 • Haoyang Liu, ShuYu Chen, Ye Zhang, Haohan Wang
To support the evaluation and development of such methods, we introduce GenoTEX, a benchmark dataset for the automated analysis of gene expression data.
no code implementations • 8 Jun 2024 • Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, Han Liu, Yichen Wang, Kuofeng Gao, Henry Peng Zou, Yiqiao Jin, Yijia Xiao, Shenghao Wu, Zongxing Xie, Weimin Lyu, Sihong He, Lu Cheng, Haohan Wang, Jun Zhuang
Large Language Models (LLMs) have achieved unparalleled success across diverse language modeling tasks in recent years.
1 code implementation • 6 Jun 2024 • Yifeng Wang, Weipeng Li, Thomas Pearce, Haohan Wang
To gain the most information from this multimodal, multiscale approach, it is desirable to identify precisely where within the organ a histologic tissue section was taken, so that it can be correlated with tissue features from exactly the same organ region.
no code implementations • 30 May 2024 • Haibo Jin, Andy Zhou, Joe D. Menke, Haohan Wang
To address this issue, we introduce JAMBench, a harmful behavior benchmark designed to trigger and evaluate moderation guardrails.
1 code implementation • 15 Mar 2024 • Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
Enhancing the robustness of deep learning models, particularly in the realm of vision transformers (ViTs), is crucial for their real-world deployment.
1 code implementation • 15 Mar 2024 • Eric Xue, Yijiang Li, Haoyang Liu, Peiran Wang, Yifan Shen, Haohan Wang
Extensive empirical experiments suggest that our method not only outperforms standard adversarial training on both accuracy and robustness with less computation overhead but is also capable of generating robust distilled datasets that can withstand various adversarial attacks.
no code implementations • 8 Mar 2024 • Yijiang Li, Sucheng Ren, Weipeng Deng, Yuzhi Xu, Ying Gao, Edith Ngai, Haohan Wang
Starting with the class of interest, we query the LLMs to extract relevant knowledge for these novel domains.
no code implementations • 15 Feb 2024 • Haoyang Liu, Yijiang Li, Jinglin Jian, Yuxuan Cheng, Jianrong Lu, Shuyi Guo, Jinglei Zhu, Mianchen Zhang, Miantong Zhang, Haohan Wang
For instance, it has facilitated the identification of disease-predictive genes from gene expression data, significantly advancing healthcare.
no code implementations • 5 Feb 2024 • Haibo Jin, Ruoxi Chen, Andy Zhou, Yang Zhang, Haohan Wang
Our system of different roles will leverage this knowledge graph to generate new jailbreaks, which have proved effective in inducing LLMs to generate unethical or guideline-violating responses.
1 code implementation • 30 Jan 2024 • Andy Zhou, Bo Li, Haohan Wang
Despite advances in AI alignment, large language models (LLMs) remain vulnerable to adversarial attacks or jailbreaking, in which adversaries can modify prompts to induce unwanted behavior.
no code implementations • 12 Jan 2024 • Yifeng Wang, Ke Chen, Haohan Wang
Automated diagnosis of Alzheimer's disease (AD) from brain imaging, such as magnetic resonance imaging (MRI), has become increasingly important, and the community has contributed many deep learning methods.
1 code implementation • 20 Dec 2023 • Haohan Wang, Wei Feng, Yaoyu Li, Zheng Zhang, Jingjing Lv, Junjie Shen, Zhangang Lin, Jingping Shao
Furthermore, for products with specific and fine-grained requirements on layout, elements, etc., a Personality-Wise Generator is devised to learn such personalized styles directly from a reference image to resolve textual ambiguities; it is trained in a self-supervised manner for more efficient use of training data.
no code implementations • 30 Nov 2023 • Haoyang Liu, Yijiang Li, Tiancheng Xing, Vibhu Dalal, Luwei Li, Jingrui He, Haohan Wang
Dataset Distillation (DD) emerges as a powerful strategy to encapsulate the expansive information of large datasets into significantly smaller, synthetic equivalents, thereby preserving model performance with reduced computational overhead.
no code implementations • 27 Nov 2023 • Tong Zhang, Haoyang Liu, Peiyan Zhang, Yuxuan Cheng, Haohan Wang
Our method focuses on producing SVGs that are both accurate and simple, aligning with human readability and understanding.
1 code implementation • 26 Nov 2023 • Jixuan Leng, Yijiang Li, Haohan Wang
SCMD leverages the capabilities of large vision-language models, specifically CLIP, to train a more efficient model, ensuring it acquires robust generalization capabilities across unseen domains.
1 code implementation • NeurIPS 2023 • Andy Zhou, Jindong Wang, Yu-Xiong Wang, Haohan Wang
We propose a conceptually simple and lightweight framework for improving the robustness of vision models through the combination of knowledge distillation and data augmentation.
Ranked #13 on Domain Generalization on ImageNet-Sketch
1 code implementation • NeurIPS 2023 • Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He
To tackle this challenge, we propose a novel algorithm, ATP, that adaptively learns the adaptation rates for each module in the model from distribution shifts among source domains.
1 code implementation • 8 Oct 2023 • Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yiqiang Chen, Qiang Yang, Xing Xie, Xiangyang Ji
When personalized federated learning (FL) meets large foundation models, new challenges arise from various limitations in resources.
2 code implementations • 6 Oct 2023 • Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang
By leveraging the in-context learning ability of LMs, we integrate Monte Carlo Tree Search into LATS to enable LMs as agents, along with LM-powered value functions and self-reflections for proficient exploration and enhanced decision-making.
Ranked #16 on Code Generation on MBPP
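A stripped-down sketch of a LATS-style search loop, with hypothetical `propose_actions` and `evaluate` stubs standing in for LM-generated actions and the LM-powered value function (the real system also incorporates self-reflection, omitted here):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    # Standard UCT score; unvisited children are explored first.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def lats_step(root, propose_actions, evaluate, n_sims=25):
    """One search step. `propose_actions(state)` and `evaluate(state)`
    are stubs for LM-proposed actions and an LM-based value estimate."""
    for _ in range(n_sims):
        node = root
        while node.children:                       # selection
            node = max(node.children, key=uct)
        for a in propose_actions(node.state):      # expansion
            node.children.append(Node(a, parent=node))
        leaf = random.choice(node.children) if node.children else node
        reward = evaluate(leaf.state)              # LM-powered value
        while leaf is not None:                    # backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    # Assumes the root was expanded at least once.
    return max(root.children, key=lambda n: n.visits)
```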
no code implementations • 1 Oct 2023 • Yijiang Li, Ying Gao, Haohan Wang
We investigate a specific security risk in FL: a group of malicious clients has impacted the model during training by disguising their identities and acting as benign clients but later switching to an adversarial role.
1 code implementation • ICCV 2023 • Zeyi Huang, Andy Zhou, Zijian Lin, Mu Cai, Haohan Wang, Yong Jae Lee
Domain generalization studies the problem of training a model with samples from several domains (or distributions) and then testing the model with samples from a new, unseen domain.
Ranked #21 on Domain Generalization on PACS
no code implementations • 21 Aug 2023 • Peiyan Zhang, Haoyang Liu, Chaozhuo Li, Xing Xie, Sunghun Kim, Haohan Wang
Machine learning has demonstrated remarkable performance over finite datasets, yet whether the scores over the fixed benchmarks can sufficiently indicate the model's performance in the real world is still in discussion.
no code implementations • 31 Jul 2023 • Haoyang Liu, Maheep Chaudhary, Haohan Wang
Accordingly, this survey presents the background of trustworthy machine learning development using a unified set of concepts, connects this language to Pearl's causal hierarchy, and finally discusses methods explicitly inspired by causality literature.
1 code implementation • 10 Jun 2023 • Wenxuan Bao, Haohan Wang, Jun Wu, Jingrui He
In federated learning (FL), multiple clients collaborate to train machine learning models together while keeping their data decentralized.
no code implementations • 9 Jun 2023 • Mu Cai, Zeyi Huang, Yuheng Li, Utkarsh Ojha, Haohan Wang, Yong Jae Lee
To study what the LLM can do with this XML-based textual description of images, we test the LLM on three broad computer vision tasks: (i) visual reasoning and question answering, (ii) image classification under distribution shift and few-shot learning, and (iii) generating new images using visual prompting.
1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama
To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
1 code implementation • 14 Mar 2023 • Haohan Wang, Liang Liu, Boshen Zhang, Jiangning Zhang, Wuhao Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
Recent works on sparsely annotated object detection alleviate this problem by generating pseudo labels for the missing annotations.
1 code implementation • 10 Mar 2023 • Haohan Wang, Liang Liu, Wuhao Zhang, Jiangning Zhang, Zhenye Gan, Yabiao Wang, Chengjie Wang, Haoqian Wang
Few-shot semantic segmentation aims to learn to segment unseen class objects with the guidance of only a few support images.
Ranked #49 on Few-Shot Semantic Segmentation on COCO-20i (1-shot)
1 code implementation • 9 Dec 2022 • Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang
Mix-up training approaches have proven to be effective in improving the generalization ability of Deep Neural Networks.
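For reference, a minimal sketch of the classic mix-up operation (Zhang et al., 2018) that this line of work builds on; it is not the paper's specific variant:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, seed=None):
    """Classic mix-up: convexly combine random pairs of inputs and their
    one-hot labels, with the mixing coefficient drawn from Beta(a, a)."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```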
1 code implementation • 30 Nov 2022 • Kun Xiang, Xing Zhang, Jinwen She, Jinpeng Liu, Haohan Wang, Shiqi Deng, Shancheng Jiang
As the COVID-19 pandemic puts pressure on healthcare systems worldwide, the computed tomography image based AI diagnostic system has become a sustainable solution for early diagnosis.
no code implementations • 5 Sep 2022 • Esla Timothy Anzaku, Haohan Wang, Arnout Van Messem, Wesley De Neve
Deep Neural Network (DNN) models are increasingly evaluated using new replication test datasets, which have been carefully created to be similar to older and popular benchmark datasets.
no code implementations • 18 Jul 2022 • Yifan Zhong, Haohan Wang, Eric P. Xing
Many recent neural models have shown remarkable empirical results in Machine Reading Comprehension, but evidence suggests that the models sometimes take advantage of dataset biases to predict and fail to generalize on out-of-sample data.
1 code implementation • 18 Jul 2022 • Chonghan Chen, Haohan Wang, Leyang Hu, Yuhao Zhang, Shuguang Lyu, Jingcheng Wu, Xinnuo Li, Linjing Sun, Eric P. Xing
We introduce the initial release of our software Robustar, which aims to improve the robustness of vision classification machine learning models through a data-driven perspective.
1 code implementation • 26 Jun 2022 • Peiyan Zhang, Jiayan Guo, Chaozhuo Li, Yueqi Xie, Jaeboum Kim, Yan Zhang, Xing Xie, Haohan Wang, Sunghun Kim
Based on this observation, we propose to remove the GNN propagation part, letting the readout module take on more responsibility in the model's reasoning process.
no code implementations • 18 Jun 2022 • Chonghan Chen, Qi Jiang, Chih-Hao Wang, Noel Chen, Haohan Wang, Xiang Li, Bhiksha Raj
With our proposed QCM, the downstream fusion module receives visual features that are more discriminative and focused on the desired object described in the expression, leading to more accurate predictions.
1 code implementation • 4 Jun 2022 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric P. Xing
Finally, we test this simple technique we identify (worst-case data augmentation with squared l2 norm alignment regularization) and show that the benefits of this method outrun those of the specially designed methods.
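A hedged sketch of that technique: per batch, pick the worst-case augmentation from a finite candidate set, then add a squared l2 alignment penalty between clean and augmented representations. The `model.features`/`model.logits` interface is a hypothetical assumption:

```python
import torch
import torch.nn.functional as F

def worst_case_augment_loss(model, x, y, augment_fns, reg_weight=1.0):
    """Worst-case data augmentation with squared l2 alignment, as named
    above. `augment_fns` is a list of candidate augmentation callables."""
    # Select the augmentation that currently hurts the model most.
    with torch.no_grad():
        losses = [F.cross_entropy(model.logits(fn(x)), y)
                  for fn in augment_fns]
    worst_fn = augment_fns[int(torch.stack(losses).argmax())]
    x_aug = worst_fn(x)
    # Task loss on the worst-case view plus representation alignment.
    task_loss = F.cross_entropy(model.logits(x_aug), y)
    align = (model.features(x) - model.features(x_aug)).pow(2).mean()
    return task_loss + reg_weight * align
```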
1 code implementation • 9 Apr 2022 • Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing
Training with an emphasis on "hard-to-learn" components of the data has been proven as an effective method to improve the generalization of machine learning models, especially in the settings where robustness (e.g., generalization across distributions) is valued.
no code implementations • CVPR 2022 • Zeyi Huang, Haohan Wang, Dong Huang, Yong Jae Lee, Eric P. Xing
Training with an emphasis on "hard-to-learn" components of the data has been proven as an effective method to improve the generalization of machine learning models, especially in the settings where robustness (e.g., generalization across distributions) is valued.
no code implementations • 24 Dec 2021 • Haibo Jin, Ruoxi Chen, Jinyin Chen, Haibin Zheng, Yang Zhang, Haohan Wang
Extensive experiments on MNIST, CIFAR-10, and a-ImageNet datasets and 7 models (LeNet, ResNet, and VGG) demonstrate the superiority of CatchBackdoor over the state-of-the-art methods, in terms of (1) effectiveness - it shows better detection performance, especially on stealthy attacks (~2x on average); (2) extensibility - it is robust to trigger size and can conduct detection without benign examples.
no code implementations • NAACL 2022 • Xuezhi Wang, Haohan Wang, Diyi Yang
Despite robustness being an increasingly studied topic, it has been separately explored in applications like vision and NLP, with various definitions, evaluation and mitigation strategies in multiple lines of research.
no code implementations • 5 Nov 2021 • Haohan Wang, Bryon Aragam, Eric Xing
Motivated by empirical arguments that are well-known from the genome-wide association studies (GWAS) literature, we study the statistical properties of linear mixed models (LMMs) applied to GWAS.
no code implementations • 5 Nov 2021 • Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, Eric Xing
Machine learning has demonstrated remarkable prediction accuracy over i.i.d. data, but the accuracy often drops when tested with data from another distribution.
1 code implementation • NeurIPS 2021 • Songwei Ge, Shlok Mishra, Haohan Wang, Chun-Liang Li, David Jacobs
We also show that model bias favors texture and shape features differently under different test settings.
no code implementations • 24 Feb 2021 • Zhuoling Li, Haohan Wang, Tymoteusz Swistek, Weixin Chen, Yuanzheng Li, Haoqian Wang
Few-shot learning is challenging due to the limited data and labels.
no code implementations • 24 Feb 2021 • Xuefeng Du, Haohan Wang, Zhenxi Zhu, Xiangrui Zeng, Yi-Wei Chang, Jing Zhang, Min Xu
Deep learning based subtomogram classification has played a critical role in such tasks.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric Xing
Data augmentation is one of the most popular techniques for improving the robustness of neural networks.
no code implementations • 1 Jan 2021 • Haohan Wang, Zeyi Huang, Eric Xing
In this paper, we formally study the generalization error bound for this setup with the knowledge of how the spurious features are associated with the label.
1 code implementation • 25 Nov 2020 • Haohan Wang, Zeyi Huang, Xindi Wu, Eric P. Xing
Data augmentation is one of the most popular techniques for improving the robustness of neural networks.
no code implementations • 20 Oct 2020 • Haohan Wang, Peiyan Zhang, Eric P. Xing
Neural machine translation has achieved remarkable empirical performance over standard benchmark datasets, yet recent evidence suggests that the models can still fail easily when dealing with substandard inputs such as misspelled words. To overcome this issue, we introduce a new encoding heuristic for the input symbols of character-level NLP models: it encodes the shape of each character through images depicting the letters as printed.
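An illustrative version of the encoding idea, rendering each character's printed glyph to a pixel grid with PIL; the font and resolution are arbitrary example choices, not the paper's setup:

```python
from PIL import Image, ImageDraw, ImageFont
import numpy as np

def char_to_pixels(ch: str, size: int = 16) -> np.ndarray:
    """Represent a character by the bitmap of its printed glyph rather
    than an arbitrary symbol ID. Returns a (size, size) float array."""
    img = Image.new("L", (size, size), color=0)     # blank grayscale canvas
    draw = ImageDraw.Draw(img)
    draw.text((2, 2), ch, fill=255, font=ImageFont.load_default())
    return np.asarray(img, dtype=np.float32) / 255.0  # values in [0, 1]
```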
8 code implementations • ECCV 2020 • Zeyi Huang, Haohan Wang, Eric P. Xing, Dong Huang
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNN to the out-of-domain data.
Ranked #35 on Domain Generalization on PACS
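A hedged sketch of the RSC idea as commonly described: mute the feature channels carrying the largest gradients of the ground-truth logit, so the network must rely on the remaining features. The percentile choice and masking granularity are simplifications of the paper's procedure:

```python
import torch

def rsc_mask(features, logits, labels, drop_pct=33.0):
    """`features` must be an intermediate activation participating in the
    computation graph (requires_grad). Zeroes out the top `drop_pct`% of
    feature entries by gradient magnitude of the ground-truth logit."""
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, features, retain_graph=True)[0]
    # Per-sample threshold at the (1 - drop_pct/100) quantile.
    thresh = torch.quantile(grads.flatten(1), 1 - drop_pct / 100, dim=1)
    mask = (grads <= thresh.view(-1, *([1] * (grads.dim() - 1)))).float()
    return features * mask  # train on the "challenged" representation
```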
1 code implementation • CVPR 2020 • Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).
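A minimal sketch of the kind of frequency decomposition such an analysis relies on, splitting a grayscale image into low- and high-frequency components with a circular Fourier mask; the cutoff radius is an arbitrary example value:

```python
import numpy as np

def split_frequency(img: np.ndarray, radius: float = 12.0):
    """Decompose an image into low- and high-frequency components via a
    circular low-pass mask in the centered Fourier spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= radius
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * (~low_mask))).real
    return low, high
```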
no code implementations • ICLR 2020 • Haohan Wang, Xindi Wu, Songwei Ge, Zachary C. Lipton, Eric P. Xing
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns.
1 code implementation • WS 2019 • He He, Sheng Zha, Haohan Wang
We first learn a biased model that only uses features that are known to relate to dataset bias.
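A hedged sketch of how such a frozen biased model can be combined with the main model so the main model learns what the biased one cannot, shown here as a product-of-experts loss; the paper's exact formulation (it fits the residual of the biased model) may differ:

```python
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, biased_logits, labels):
    """Product-of-experts-style debiasing: combine log-probabilities of
    the main model and a frozen biased model, so gradients push the main
    model toward examples the biased model gets wrong."""
    combined = F.log_softmax(main_logits, dim=1) + \
               F.log_softmax(biased_logits.detach(), dim=1)
    return F.nll_loss(F.log_softmax(combined, dim=1), labels)
```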
4 code implementations • NeurIPS 2019 • Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton
Despite their renowned predictive power on i.i.d. data.
Ranked #114 on Domain Generalization on PACS
1 code implementation • 28 May 2019 • Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
We investigate the relationship between the frequency spectrum of image data and the generalization behavior of convolutional neural networks (CNN).
no code implementations • ICLR 2019 • Haohan Wang, Zexue He, Zachary C. Lipton, Eric P. Xing
We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.
Ranked #123 on Domain Generalization on PACS
no code implementations • 7 Sep 2018 • Haohan Wang, Da Sun, Eric P. Xing
In this paper, we further investigate the statistical irregularities of the NLI data sets, which we refer to as confounding factors.
1 code implementation • 20 Mar 2018 • Haohan Wang, Zhenglin Wu, Eric P. Xing
The proliferation of healthcare data has brought the opportunities of applying data-driven approaches, such as machine learning methods, to assist diagnosis.
no code implementations • 24 Feb 2017 • Haohan Wang, Bhiksha Raj
This paper is a review of the evolutionary history of deep learning models.
no code implementations • 29 Nov 2016 • Jingkang Yang, Haohan Wang, Jun Zhu, Eric P. Xing
In this paper, we propose an extension of the State Space Model to work with different sources of information, together with its learning and inference algorithms.
1 code implementation • 16 Sep 2016 • Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, Eric P. Xing
In this paper, we propose a Select-Additive Learning (SAL) procedure that improves the generalizability of trained neural networks for multimodal sentiment analysis.
no code implementations • 6 Nov 2015 • Haohan Wang, Madhavi K. Ganapathiraju
In order for the predicted interactions to be directly adopted by biologists, the machine learning predictions have to be of high precision, regardless of recall.
no code implementations • 16 Oct 2015 • Haohan Wang, Bhiksha Raj
Further, we will also look into the development history of modelling time series data with neural networks.
no code implementations • 18 Sep 2015 • Haohan Wang, Madhavi K. Ganapathiraju
In this paper, we focused on the problem that non-availability of accurately labeled testing data sets in the domain of protein-protein interaction (PPI) prediction may lead to biased evaluation results.
no code implementations • 9 Dec 2014 • Seungwhan Moon, Suyoun Kim, Haohan Wang
We propose a transfer deep learning (TDL) framework that can transfer the knowledge obtained from a single-modal neural network to a network with a different modality.
no code implementations • 9 Jul 2014 • Ming Xu, Jianping Wu, Yiman Du, Haohan Wang, Geqi Qi, Kezhen Hu, Yun-Peng Xiao
However, none of the existing approaches addresses the problem of identifying network-wide important crossroads in real road networks.