Search Results for author: Haoran Tang

Found 14 papers, 4 papers with code

#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning

3 code implementations · NeurIPS 2017 · Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan, John Schulman, Filip De Turck, Pieter Abbeel

In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks.

Atari Games · Continuous Control · +2
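The "simple generalization of the classic count-based approach" in this paper assigns an exploration bonus that shrinks as a (hashed) state is visited more often. A minimal sketch of that idea, assuming a plain rounding-plus-SHA1 discretization rather than the learned hash functions the paper actually studies:

```python
import hashlib
from collections import defaultdict

def state_hash(state, granularity=1.0):
    """Discretize a continuous state, then hash it. This is an
    illustrative stand-in for the (possibly learned) hash functions
    used in the paper."""
    key = tuple(round(x / granularity) for x in state)
    return hashlib.sha1(str(key).encode()).hexdigest()

class CountBonus:
    """Count-based exploration bonus of the form beta / sqrt(n(hash(s)))."""

    def __init__(self, beta=0.01):
        self.beta = beta
        self.counts = defaultdict(int)

    def bonus(self, state):
        # Increment the visit count for this state's hash bucket,
        # then return a bonus that decays with the square root of it.
        h = state_hash(state)
        self.counts[h] += 1
        return self.beta / self.counts[h] ** 0.5
```

The bonus is added to the environment reward during training; `beta` and `granularity` are hypothetical default values, not the paper's tuned settings.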

Reinforcement Learning with Deep Energy-Based Policies

3 code implementations · ICML 2017 · Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine

We propose a method for learning expressive energy-based policies for continuous states and actions, which was previously feasible only in tabular domains.

Q-Learning · reinforcement-learning · +1
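An energy-based policy takes the form π(a|s) ∝ exp(Q(s,a)/α), where α trades off reward against entropy. A minimal sketch for a discrete action set (the tabular case the abstract refers to; the paper's contribution is extending this to continuous actions):

```python
import math
import random

def boltzmann_policy(q_values, alpha=1.0):
    """Energy-based policy pi(a|s) proportional to exp(Q(s,a)/alpha)
    over a discrete action set. Lower alpha approaches the greedy
    policy; higher alpha approaches uniform random."""
    scaled = [q / alpha for q in q_values]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def sample_action(q_values, alpha=1.0):
    """Sample an action index from the Boltzmann distribution."""
    probs = boltzmann_policy(q_values, alpha)
    return random.choices(range(len(probs)), weights=probs)[0]
```

The Q-values and α here are placeholders; the paper's soft Q-learning method approximates sampling from this distribution with a learned sampling network rather than an explicit softmax.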

Shuffle Augmentation of Features from Unlabeled Data for Unsupervised Domain Adaptation

no code implementations · 28 Jan 2022 · Changwei Xu, Jianfei Yang, Haoran Tang, Han Zou, Cheng Lu, Tianshuo Zhang

Unsupervised Domain Adaptation (UDA), a branch of transfer learning where labels for target samples are unavailable, has been widely researched and developed in recent years with the help of adversarially trained models.

Transfer Learning · Unsupervised Domain Adaptation

WEKA-Based: Key Features and Classifier for French of Five Countries

no code implementations · 10 Nov 2022 · Zeqian Li, Keyu Qiu, Chenxu Jiao, Wen Zhu, Haoran Tang

This paper describes a French dialect recognition system that distinguishes between regional French dialects.

HashEncoding: Autoencoding with Multiscale Coordinate Hashing

no code implementations · 29 Nov 2022 · Lukas Zhornyak, Zhengjie Xu, Haoran Tang, Jianbo Shi

We present HashEncoding, a novel autoencoding architecture that leverages a non-parametric multiscale coordinate hash function to facilitate a per-pixel decoder without convolutions.

Optical Flow Estimation

Contrastive Learning Relies More on Spatial Inductive Bias Than Supervised Learning: An Empirical Study

no code implementations · ICCV 2023 · Yuanyi Zhong, Haoran Tang, Jun-Kun Chen, Yu-Xiong Wang

Though self-supervised contrastive learning (CL) has shown its potential to achieve state-of-the-art accuracy without any supervision, its behavior remains under-investigated by academia.

Contrastive Learning · Inductive Bias

Weighted Joint Maximum Mean Discrepancy Enabled Multi-Source-Multi-Target Unsupervised Domain Adaptation Fault Diagnosis

no code implementations · 20 Oct 2023 · Zixuan Wang, Haoran Tang, Haibo Wang, Bo Qin, Mark D. Butala, Weiming Shen, Hongwei Wang

Despite the remarkable results that can be achieved by data-driven intelligent fault diagnosis techniques, they presuppose the same distribution of training and test data as well as sufficient labeled data.

Unsupervised Domain Adaptation

Retrieving Conditions from Reference Images for Diffusion Models

no code implementations · 5 Dec 2023 · Haoran Tang, Xin Zhou, Jieren Deng, Zhihong Pan, Hao Tian, Pratik Chaudhari

Newly developed diffusion-based techniques have showcased phenomenal abilities in producing a wide range of high-quality images, sparking considerable interest in various applications.

Face Generation · Retrieval · +1

Context Matters: Data-Efficient Augmentation of Large Language Models for Scientific Applications

1 code implementation · 12 Dec 2023 · Xiang Li, Haoran Tang, Siyu Chen, Ziwei Wang, Anurag Maravi, Marcin Abram

In this paper, we explore the challenges inherent to Large Language Models (LLMs) like GPT-4, particularly their propensity for hallucinations, logic mistakes, and incorrect conclusions when tasked with answering complex questions.

ST-LLM: Large Language Models Are Effective Temporal Learners

1 code implementation · 30 Mar 2024 · Ruyang Liu, Chen Li, Haoran Tang, Yixiao Ge, Ying Shan, Ge Li

In this paper, we investigate a straightforward yet unexplored question: Can we feed all spatial-temporal tokens into the LLM, thus delegating the task of video sequence modeling to the LLMs?

Reading Comprehension · Video Understanding
