Search Results for author: Hao He

Found 67 papers, 21 papers with code

Exploring Logically Dependent Multi-task Learning with Causal Inference

no code implementations EMNLP 2020 Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, Yaohui Jin

In the field of causal inference, GS in our model is essentially a counterfactual reasoning process that estimates the causal effect between tasks and utilizes it to improve MTL.

Causal Inference counterfactual +2

Modeling Content Importance for Summarization with Pre-trained Language Models

no code implementations EMNLP 2020 Liqiang Xiao, Lu Wang, Hao He, Yaohui Jin

Previous work is mostly based on statistical methods that estimate word-level salience, which do not consider semantics and larger context when quantifying importance.

Diagnosing the First-Order Logical Reasoning Ability Through LogicNLI

no code implementations EMNLP 2021 Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, Yaohui Jin

Recently, language models (LMs) have achieved significant performance on many NLU tasks, which has spurred widespread interest in their possible applications in scientific and social areas.

Logical Reasoning

To What Extent Do Natural Language Understanding Datasets Correlate to Logical Reasoning? A Method for Diagnosing Logical Reasoning.

no code implementations COLING 2022 Yitian Li, Jidong Tian, Wenqing Chen, Caoyun Fan, Hao He, Yaohui Jin

In this paper, we propose a systematic method to diagnose the correlations between an NLU dataset and a specific skill, and then take a fundamental reasoning skill, logical reasoning, as an example for analysis.

Logical Reasoning Machine Reading Comprehension +2

LucidFusion: Generating 3D Gaussians with Arbitrary Unposed Images

no code implementations 21 Oct 2024 Hao He, Yixun Liang, Luozhou Wang, Yuanhao Cai, Xinli Xu, Hao-Xiang Guo, Xiang Wen, Yingcong Chen

Recent large reconstruction models have made notable progress in generating high-quality 3D objects from single images.

3D Generation Image to 3D

Scaling Laws For Diffusion Transformers

no code implementations 10 Oct 2024 Zhengyang Liang, Hao He, Ceyuan Yang, Bo Dai

Diffusion transformers (DiT) have already achieved appealing synthesis and scaling properties in content recreation, e.g., image and video generation.

Text-to-Image Generation Video Generation
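
Scaling-law studies of this kind typically fit a power law relating compute or model size to loss. The snippet below fits that generic form to made-up numbers purely for illustration; the functional form, constants, and data are assumptions, not this paper's reported fit.

```python
import numpy as np

# Generic power-law form used in scaling-law studies: loss ~ a * compute**(-b) + c.
compute = np.array([1e18, 1e19, 1e20, 1e21])   # made-up compute budgets (FLOPs)
loss = np.array([3.1, 2.6, 2.2, 1.9])          # made-up validation losses

c = 1.5                                        # toy irreducible-loss term, fixed by hand
slope, intercept = np.polyfit(np.log(compute), np.log(loss - c), 1)
a, b = np.exp(intercept), -slope
print(f"loss ~ {a:.3g} * compute^(-{b:.3g}) + {c}")
```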

Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control

no code implementations 27 May 2024 Zhengfei Kuang, Shengqu Cai, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, Gordon Wetzstein

Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images.

Scene Generation Video Generation +1

What Radio Waves Tell Us about Sleep

no code implementations 20 May 2024 Hao He, Chao Li, Wolfgang Ganglberger, Kaileigh Gallagher, Rumen Hristov, Michail Ouroutzoglou, Haoqi Sun, Jimeng Sun, Brandon Westover, Dina Katabi

The ability to assess sleep at home, capture sleep stages, and detect the occurrence of apnea (without on-body sensors) simply by analyzing the radio waves bouncing off people's bodies while they sleep is quite powerful.

Hypothesis Testing Prompting Improves Deductive Reasoning in Large Language Models

no code implementations 9 May 2024 Yitian Li, Jidong Tian, Hao He, Yaohui Jin

Combining different forms of prompts with pre-trained large language models has yielded remarkable results on reasoning tasks (e.g., Chain-of-Thought prompting).

Fact Verification

Logical Negation Augmenting and Debiasing for Prompt-based Methods

no code implementations 8 May 2024 Yitian Li, Jidong Tian, Hao He, Yaohui Jin

To solve the problem, we propose a simple but effective method, Negation Augmenting and Negation Debiasing (NAND), which introduces negative propositions to prompt-based methods without updating parameters.

Logical Reasoning Negation

Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent

1 code implementation 14 Dec 2023 Haoran Liao, Qinyi Du, Shaohua Hu, Hao He, Yanyan Xu, Jidong Tian, Yaohui Jin

Large language models (LLMs) face challenges in solving complex mathematical problems that require comprehensive capacities to parse the statements, associate domain knowledge, perform compound logical reasoning, and integrate the intermediate rationales.

Language Modelling Large Language Model +3

Comparable Demonstrations are Important in In-Context Learning: A Novel Perspective on Demonstration Selection

no code implementations 12 Dec 2023 Caoyun Fan, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In-Context Learning (ICL) is an important paradigm for adapting Large Language Models (LLMs) to downstream tasks through a few demonstrations.

In-Context Learning

Can Large Language Models Serve as Rational Players in Game Theory? A Systematic Analysis

no code implementations 9 Dec 2023 Caoyun Fan, Jindou Chen, Yaohui Jin, Hao He

With the high alignment between the behavior of Large Language Models (LLMs) and humans, a promising research direction is to employ LLMs as substitutes for humans in game experiments, enabling social science research.

Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding

no code implementations 18 Oct 2023 Caoyun Fan, Jidong Tian, Yitian Li, Wenqing Chen, Hao He, Yaohui Jin

From the perspective of CoT, CoTT's two-step framework enables MLMs to implement task decomposition; CoTT's prompt tuning allows intermediate steps to be used in natural language form.

Natural Language Understanding Relation Extraction

Accurate Use of Label Dependency in Multi-Label Text Classification Through the Lens of Causality

no code implementations 11 Oct 2023 Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

In this study, we attribute the bias to the model's misuse of label dependency, i.e., the model tends to utilize the correlation shortcut in label dependency rather than fusing text information and label dependency for prediction.

Attribute Causal Inference +4

Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization

no code implementations 10 Oct 2023 Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) -- minimal editing of sentences to flip the corresponding labels -- has the potential to improve the Out-Of-Distribution (OOD) generalization capability of language models, as CAD induces language models to exploit domain-independent causal features and exclude spurious correlations.

Attribute Natural Language Inference +3

Randomized algorithms for precise measurement of differentially-private, personalized recommendations

1 code implementation 7 Aug 2023 Allegra Laro, Yanqing Chen, Hao He, Babak Aghazadeh

Personalized recommendations form an important part of today's internet ecosystem, helping artists and creators to reach interested users, and helping users to discover new and engaging content.

Privacy Preserving

Taxonomy-Structured Domain Adaptation

2 code implementations 13 Jun 2023 Tianyi Liu, Zihao Xu, Hao He, Guang-Yuan Hao, Guang-He Lee, Hao Wang

Domain adaptation aims to mitigate distribution shifts among different domains.

Domain Adaptation

Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models

no code implementations 4 Apr 2023 Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Lin Zhao, Dajiang Zhu, Xiang Li, Ning Qiang, Dinggang Shen, Tianming Liu, Bao Ge

This paper presents a comprehensive survey of ChatGPT-related (GPT-3.5 and GPT-4) research, state-of-the-art large language models (LLMs) from the GPT series, and their prospective applications across diverse domains.

Label Name is Mantra: Unifying Point Cloud Segmentation across Heterogeneous Datasets

no code implementations 19 Mar 2023 Yixun Liang, Hao He, Shishi Xiao, Hao Lu, Yingcong Chen

In this paper, we propose a principled approach that supports learning from heterogeneous datasets with different label sets.

Decoder Language Modelling +1

Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks

2 code implementations CVPR 2023 Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, S.-H. Gary Chan

To achieve faster networks, we revisit popular operators and demonstrate that such low FLOPS is mainly due to frequent memory access of the operators, especially the depthwise convolution.
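
To make the FLOPs-versus-memory-access point concrete, the toy calculation below (an illustration, not code from the paper) compares a regular and a depthwise 3x3 convolution on the same feature map: the depthwise variant cuts FLOPs by roughly a factor of the channel count while its input/output traffic stays the same, so its arithmetic intensity (FLOPs per byte moved) drops sharply.

```python
# Toy arithmetic-intensity comparison for 3x3 convolutions on an H x W x C feature map
# (same number of input and output channels; weights and padding effects ignored).
H, W, C, k = 56, 56, 128, 3

flops_regular   = 2 * H * W * C * C * k * k   # each output channel reads all input channels
flops_depthwise = 2 * H * W * C * k * k       # each channel is filtered independently
bytes_moved     = 4 * (2 * H * W * C)         # read input + write output, float32

print("regular   FLOPs/byte:", flops_regular / bytes_moved)
print("depthwise FLOPs/byte:", flops_depthwise / bytes_moved)
```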

Improving the Out-Of-Distribution Generalization Capability of Language Models: Counterfactually-Augmented Data is not Enough

no code implementations 18 Feb 2023 Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

Counterfactually-Augmented Data (CAD) has the potential to improve language models' Out-Of-Distribution (OOD) generalization capability, as CAD induces language models to exploit causal features and exclude spurious correlations.

Attribute Natural Language Inference +2

MaxGNR: A Dynamic Weight Strategy via Maximizing Gradient-to-Noise Ratio for Multi-Task Learning

no code implementations 18 Feb 2023 Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

A series of studies point out that too much gradient noise would lead to performance degradation in STL; in the MTL scenario, however, Inter-Task Gradient Noise (ITGN) is an additional source of gradient noise for each task, which can also affect the optimization process.

Multi-Task Learning
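
The gradient-to-noise ratio named in the title can be read as the squared norm of the mean gradient divided by the variance of mini-batch gradients for a task. The sketch below estimates such a ratio from a few sampled gradients; it illustrates the quantity being maximized under that reading, not the paper's exact weighting rule.

```python
import numpy as np

def gradient_to_noise_ratio(grads):
    """grads: list of flattened mini-batch gradients for one task."""
    g = np.stack(grads)                                  # (num_batches, num_params)
    mean_grad = g.mean(axis=0)
    noise = ((g - mean_grad) ** 2).sum(axis=1).mean()    # mean squared deviation per batch
    return float((mean_grad ** 2).sum() / (noise + 1e-12))
```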

Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation

4 code implementations 6 Feb 2023 Zihao Xu, Guang-Yuan Hao, Hao He, Hao Wang

To address this challenge, we first provide a formal definition of domain index from the probabilistic perspective, and then propose an adversarial variational Bayesian framework that infers domain indices from multi-domain data, thereby providing additional insight on domain relations and improving domain adaptation performance.

Domain Adaptation

Contrast with Major Classifier Vectors for Federated Medical Relation Extraction with Heterogeneous Label Distribution

no code implementations 13 Jan 2023 Chunhui Du, Hao He, Yaohui Jin

Federated medical relation extraction enables multiple clients to train a deep network collaboratively without sharing their raw medical data.

Medical Relation Extraction Relation +1

Contactless Oxygen Monitoring with Gated Transformer

no code implementations 6 Dec 2022 Hao He, Yuan Yuan, Ying-Cong Chen, Peng Cao, Dina Katabi

With the increasing popularity of telehealth, it becomes critical to ensure that basic physiological signals can be monitored accurately at home, with minimal patient overhead.

1st ICLR International Workshop on Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data (PAIR^2Struct)

no code implementations 7 Oct 2022 Hao Wang, WanYu Lin, Hao He, Di Wang, Chengzhi Mao, Muhan Zhang

Recent years have seen principles and guidance relating to the accountable and ethical use of artificial intelligence (AI) spring up around the globe.

FedDAR: Federated Domain-Aware Representation Learning

no code implementations 8 Sep 2022 Aoxiao Zhong, Hao He, Zhaolin Ren, Na Li, Quanzheng Li

To make sure the FL model is robust when facing heterogeneous data among FL clients, most efforts focus on personalizing models for clients.

Diversity Federated Learning +1

Domain Adaptation with Factorizable Joint Shift

no code implementations 6 Mar 2022 Hao He, Yuzhe Yang, Hao Wang

In this paper, we propose a new assumption, Factorizable Joint Shift (FJS), to handle the co-existence of sampling bias in covariates and labels.

Unsupervised Domain Adaptation
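
One natural way to write down a factorizable joint shift, given the stated co-existence of covariate and label bias, is to assume the joint importance weight splits into a label-dependent factor and a covariate-dependent factor. The expression below is that hedged reading, not a quote of the paper's formal definition:

\frac{p_{\mathrm{target}}(x,\,y)}{p_{\mathrm{source}}(x,\,y)} \;\approx\; u(y)\, v(x)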

Graph-Relational Domain Adaptation

1 code implementation ICLR 2022 Zihao Xu, Hao He, Guang-He Lee, Yuyang Wang, Hao Wang

In this work, we relax such uniform alignment by using a domain graph to encode domain adjacency, e.g., a graph of states in the US with each state as a domain and each edge indicating adjacency, thereby allowing domains to align flexibly based on the graph structure.

Domain Adaptation
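
A domain graph of the kind described here is just an adjacency structure over domains. The toy sketch below builds a row-normalized adjacency matrix for a handful of US states as an example of the input such a method consumes; the state list and variable names are made up for illustration.

```python
import numpy as np

# Toy domain graph: each state is a domain, edges indicate geographic adjacency.
states = ["CA", "NV", "OR", "AZ"]
edges = [("CA", "NV"), ("CA", "OR"), ("CA", "AZ"), ("NV", "OR"), ("NV", "AZ")]

idx = {s: i for i, s in enumerate(states)}
A = np.zeros((len(states), len(states)))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

# Row-normalize so each domain aligns with a weighted mix of its graph neighbors.
A_norm = A / A.sum(axis=1, keepdims=True)
```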

Controlling Directions Orthogonal to a Classifier

1 code implementation ICLR 2022 Yilun Xu, Hao He, Tianxiao Shen, Tommi Jaakkola

We propose to identify directions invariant to a given classifier so that these directions can be controlled in tasks such as style transfer.

Domain Adaptation Fairness +1
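
For the special case of a linear classifier, a direction is invariant to the classifier exactly when it is orthogonal to the weight vector, so edits confined to that orthogonal complement leave the decision score unchanged. The sketch below shows only that linear special case, not the method proposed in the paper.

```python
import numpy as np

def orthogonal_component(x, w):
    # Remove the component of x along the classifier weight w; any edit applied
    # within the orthogonal complement of w leaves the linear score w @ x unchanged.
    w_unit = w / np.linalg.norm(w)
    return x - np.dot(x, w_unit) * w_unit
```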

EFMVFL: An Efficient and Flexible Multi-party Vertical Federated Learning without a Third Party

no code implementations 17 Jan 2022 Yimin Huang, Xinyu Feng, Wanwan Wang, Hao He, Yukun Wang, Ming Yao

In most VFL frameworks, to protect the security and privacy of the participants' local data, a third party is needed to generate homomorphic encryption key pairs and perform decryption operations.

regression Vertical Federated Learning

De-Confounded Variational Encoder-Decoder for Logical Table-to-Text Generation

no code implementations ACL 2021 Wenqing Chen, Jidong Tian, Yitian Li, Hao He, Yaohui Jin

The task remains challenging, as deep learning models often generate linguistically fluent but logically inconsistent text.

Decoder Sentence +2

Improving Video Instance Segmentation via Temporal Pyramid Routing

1 code implementation 28 Jul 2021 Xiangtai Li, Hao He, Yibo Yang, Henghui Ding, Kuiyuan Yang, Guangliang Cheng, Yunhai Tong, Dacheng Tao

To incorporate both temporal and scale information, we propose a Temporal Pyramid Routing (TPR) strategy to conditionally align and conduct pixel-level aggregation from a feature pyramid pair of two adjacent frames.

Instance Segmentation Panoptic Segmentation +2

Mid-flight Forecasting for CPA Lines in Online Advertising

no code implementations 15 Jul 2021 Hao He, Tian Zhou, Lihua Ren, Niklas Karlsson, Aaron Flores

For the Verizon Media Demand Side Platform (DSP), forecasting of ad campaign performance not only feeds key information to the optimization server, allowing the system to operate in a high-performance mode, but also produces actionable insights for advertisers.

Management

An Efficient Deep Distribution Network for Bid Shading in First-Price Auctions

no code implementations 12 Jul 2021 Tian Zhou, Hao He, Shengjun Pan, Niklas Karlsson, Bharatbhushan Shetty, Brendan Kitts, Djordje Gligorijevic, San Gultekin, Tingyu Mao, Junwei Pan, Jianlong Zhang, Aaron Flores

Since 2019, most ad exchanges and sell-side platforms (SSPs) in the online advertising industry have shifted from second- to first-price auctions.

BoundarySqueeze: Image Segmentation as Boundary Squeezing

1 code implementation 25 May 2021 Hao He, Xiangtai Li, Yibo Yang, Guangliang Cheng, Yunhai Tong, Lubin Weng, Zhouchen Lin, Shiming Xiang

This module is used to squeeze the object boundary from both inner and outer directions, which contributes to precise mask representation.

Image Segmentation Instance Segmentation +2

Dependent Multi-Task Learning with Causal Intervention for Image Captioning

no code implementations 18 May 2021 Wenqing Chen, Jidong Tian, Caoyun Fan, Hao He, Yaohui Jin

The intermediate task would help the model better understand the visual features and thus alleviate the content inconsistency problem.

Image Captioning Multi-agent Reinforcement Learning +1

Action detection using a neural network elucidates the genetics of mouse grooming behavior

1 code implementation eLife 2021 Brian Q Geuther, Asaf Peer, Hao He, Gautam Sabnis, Vivek M Philip, Vivek Kumar

Automated detection of complex animal behaviors remains a challenging problem in neuroscience, particularly for behaviors that consist of disparate sequential motions.

Action Detection Diversity

PointFlow: Flowing Semantics Through Points for Aerial Image Segmentation

1 code implementation CVPR 2021 Xiangtai Li, Hao He, Xia Li, Duo Li, Guangliang Cheng, Jianping Shi, Lubin Weng, Yunhai Tong, Zhouchen Lin

Experimental results on three different aerial segmentation datasets suggest that the proposed method is more effective and efficient than state-of-the-art general semantic segmentation methods.

Image Segmentation Segmentation +1

Learning Blood Oxygen from Respiration Signals

no code implementations 1 Jan 2021 Hao He, Ying-Cong Chen, Yuan Yuan, Dina Katabi

Further, since breathing can be monitored without body contact by analyzing the radio signal in the environment, we show that oxygen too can be monitored without any wearable devices.

Addressing Feature Suppression in Unsupervised Visual Representations

no code implementations 17 Dec 2020 Tianhong Li, Lijie Fan, Yuan Yuan, Hao He, Yonglong Tian, Rogerio Feris, Piotr Indyk, Dina Katabi

However, contrastive learning is susceptible to feature suppression, i.e., it may discard important information relevant to the task of interest, and learn irrelevant features.

Attribute Contrastive Learning +1

A Semantically Consistent and Syntactically Variational Encoder-Decoder Framework for Paraphrase Generation

no code implementations COLING 2020 Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, Yaohui Jin

In this paper, we propose a semantically consistent and syntactically variational encoder-decoder framework, which uses adversarial learning to ensure the syntactic latent variable be semantic-free.

Decoder Diversity +5

Bid Shading by Win-Rate Estimation and Surplus Maximization

no code implementations 19 Sep 2020 Shengjun Pan, Brendan Kitts, Tian Zhou, Hao He, Bharatbhushan Shetty, Aaron Flores, Djordje Gligorijevic, Junwei Pan, Tingyu Mao, San Gultekin, Jianlong Zhang

We found that bid shading, in general, can deliver significant value to advertisers, reducing price per impression to about 55% of the unshaded cost.

Attribute
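
The title points to the generic bid-shading formulation: in a first-price auction, pick the bid that maximizes expected surplus under an estimated win-rate curve. The sketch below implements that generic formulation with a made-up logistic win-rate curve; it is not the paper's estimator or production system.

```python
import numpy as np

def shaded_bid(value, win_rate, candidate_bids):
    # Expected surplus of bidding b in a first-price auction: win_rate(b) * (value - b).
    surplus = np.array([win_rate(b) * (value - b) for b in candidate_bids])
    return candidate_bids[int(np.argmax(surplus))]

win_rate = lambda b: 1.0 / (1.0 + np.exp(-(b - 2.0)))   # toy logistic win-rate curve
bids = np.linspace(0.0, 5.0, 501)
print(shaded_bid(value=5.0, win_rate=win_rate, candidate_bids=bids))
```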

Continuously Indexed Domain Adaptation

1 code implementation ICML 2020 Hao Wang, Hao He, Dina Katabi

Our empirical results show that our approach outperforms the state-of-the-art domain adaptation methods on both synthetic and real-world medical datasets.

Continuously Indexed Domain Adaptation

UST: Unifying Spatio-Temporal Context for Trajectory Prediction in Autonomous Driving

no code implementations 6 May 2020 Hao He, Hengchen Dai, Naiyan Wang

In contrast to existing methods which heavily rely on recurrent neural network for temporal context and hand-crafted structure for spatial context, our method could automatically partition the spatio-temporal space to adapt the data.

Autonomous Driving Trajectory Prediction

Learning Compositional Koopman Operators for Model-Based Control

no code implementations ICLR 2020 Yunzhu Li, Hao He, Jiajun Wu, Dina Katabi, Antonio Torralba

Finding an embedding space for a linear approximation of a nonlinear dynamical system enables efficient system identification and control synthesis.
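
The linear-embedding idea referred to here can be written as lifting the state into features where the dynamics act approximately linearly; with control inputs, a standard Koopman-style sketch is the relation below (a generic form, not necessarily the paper's exact parameterization):

g(x_{t+1}) \;\approx\; K\, g(x_t) + L\, u_t, \qquad \hat{x}_{t+1} = \mathrm{decode}\big(K\, g(x_t) + L\, u_t\big)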

Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate

no code implementations 28 Sep 2019 Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit

Uncertainty estimation is an essential step in the evaluation of the robustness for deep learning models in computer vision, especially when applied in risk-sensitive areas.

regression
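
One plausible reading of "sensitivity as a surrogate" is to perturb the input slightly and use the spread of the resulting predictions as a training-free uncertainty score. The sketch below does exactly that for an arbitrary regression callable; the noise scale and sample count are arbitrary choices, and no claim is made that this matches the paper's estimator.

```python
import numpy as np

def sensitivity_uncertainty(model, x, noise_std=0.01, n_samples=20, rng=None):
    # Training-free surrogate: variance of predictions under small input perturbations.
    rng = np.random.default_rng() if rng is None else rng
    preds = [model(x + rng.normal(0.0, noise_std, size=x.shape)) for _ in range(n_samples)]
    return np.var(np.stack(preds), axis=0)
```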

Disentangled Makeup Transfer with Generative Adversarial Network

1 code implementation 2 Jul 2019 Honglun Zhang, Wenqing Chen, Hao He, Yaohui Jin

Facial makeup transfer is a widely-used technology that aims to transfer the makeup style from a reference face image to a non-makeup face.

Decoder Facial Makeup Transfer +2

Road-network-based Rapid Geolocalization

no code implementations 25 Jun 2019 Yongfei Li, Dongfang Yang, Shicheng Wang, Hao He

We test all the candidate matching tuples under a hypothesize-and-test framework to search for the best match.

Point Cloud Registration

ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees

1 code implementation ICLR 2019 Hao He, Hao Wang, Guang-He Lee, Yonglong Tian

Probabilistic modelling is a principled framework to perform model aggregation, which has been a primary mechanism to combat mode collapse in the context of Generative Adversarial Networks (GAN).

Image Generation

Truly Proximal Policy Optimization

1 code implementation 19 Mar 2019 Yuhui Wang, Hao He, Chao Wen, Xiaoyang Tan

Proximal policy optimization (PPO) is one of the most successful deep reinforcement-learning methods, achieving state-of-the-art performance across a wide range of challenging tasks.

Reinforcement Learning
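
For context, the clipped surrogate that standard PPO optimizes can be sketched as below; this is the textbook objective the paper starts from, not the modified update it proposes.

```python
import torch

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    # ratio = pi_new(a|s) / pi_old(a|s); advantage = estimated advantage A(s, a).
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # PPO maximizes the elementwise minimum; return the negative mean as a loss.
    return -torch.min(unclipped, clipped).mean()
```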

Robust Reinforcement Learning in POMDPs with Incomplete and Noisy Observations

no code implementations 15 Feb 2019 Yuhui Wang, Hao He, Xiaoyang Tan

In real-world scenarios, the observation data for reinforcement learning with continuous control is commonly noisy and part of it may be dynamically missing over time, which violates the assumptions of many current methods developed for this setting.

continuous-control Continuous Control +4

Bidirectional Inference Networks: A Class of Deep Bayesian Networks for Health Profiling

no code implementations 6 Feb 2019 Hao Wang, Chengzhi Mao, Hao He, Ming-Min Zhao, Tommi S. Jaakkola, Dina Katabi

We consider the problem of inferring the values of an arbitrary set of variables (e.g., risk of diseases) given other observed variables (e.g., symptoms and diagnosed diseases) and high-dimensional signals (e.g., MRI images or EEG).

Computational Efficiency EEG +2

Trust Region-Guided Proximal Policy Optimization

2 code implementations NeurIPS 2019 Yuhui Wang, Hao He, Xiaoyang Tan, Yaozhong Gan

We formally show that this method not only improves the exploration ability within the trust region but enjoys a better performance bound compared to the original PPO as well.

Reinforcement Learning Reinforcement Learning (RL)

Exposure: A White-Box Photo Post-Processing Framework

1 code implementation 27 Sep 2017 Yuanming Hu, Hao He, Chenxi Xu, Baoyuan Wang, Stephen Lin

Retouching can significantly elevate the visual appeal of photos, but many casual photographers lack the expertise to do this well.

Reinforcement Learning

From Bayesian Sparsity to Gated Recurrent Nets

no code implementations NeurIPS 2017 Hao He, Bo Xin, David Wipf

The iterations of many first-order algorithms, when applied to minimizing common regularized regression functions, often resemble neural network layers with pre-specified weights.

3D geometry
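
As a concrete instance of the resemblance noted above, one ISTA iteration for the lasso is a fixed linear map of the current iterate plus an input term, followed by a soft-thresholding nonlinearity, which is exactly the shape of a simple recurrent layer. The sketch below shows that classical iteration only, not the gated recurrent architecture the paper derives.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista_step(x, A, b, lam, step):
    # One ISTA iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    # linear map of the iterate, plus a fixed input term, then a pointwise nonlinearity.
    grad = A.T @ (A @ x - b)
    return soft_threshold(x - step * grad, step * lam)
```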
