Search Results for author: Huisheng Wang

Found 14 papers, 4 papers with code

Optimal Investment under the Influence of Decision-changing Imitation

no code implementations • 17 Sep 2024 • Huisheng Wang, H. Vicky Zhao

Decision-changing imitation is a prevalent phenomenon in financial markets, where investors imitate others' decision-changing rates when making their own investment decisions.

Improving Subject-Driven Image Synthesis with Subject-Agnostic Guidance

no code implementations • CVPR 2024 • Kelvin C. K. Chan, Yang Zhao, Xuhui Jia, Ming-Hsuan Yang, Huisheng Wang

In subject-driven text-to-image synthesis, the synthesis process tends to be heavily influenced by the reference images provided by users, often overlooking crucial attributes detailed in the text prompt.

Image Generation

Message-Enhanced DeGroot Model

no code implementations • 29 Feb 2024 • Huisheng Wang, Zhanjiang Chen, H. Vicky Zhao

The Message-Enhanced DeGroot model, combining the Bounded Brownian Message model with the traditional DeGroot model, quantitatively describes the evolution of agents' opinions under the influence of messages.

Optimal Investment with Herd Behaviour Using Rational Decision Decomposition

no code implementations • 14 Jan 2024 • Huisheng Wang, H. Vicky Zhao

Furthermore, we define the weight function in the rational decision decomposition as the following agent's investment opinion, which measures the agent's preference for his/her own rational decision over that of the leading expert.

Decision Making

PolyMaX: General Dense Prediction with Mask Transformer

1 code implementation • 9 Nov 2023 • Xuan Yang, Liangzhe Yuan, Kimberly Wilber, Astuti Sharma, Xiuye Gu, Siyuan Qiao, Stephanie Debats, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Liang-Chieh Chen

Despite this shift, methods based on the per-pixel prediction paradigm still dominate the benchmarks on the other dense prediction tasks that require continuous outputs, such as depth estimation and surface normal prediction.

Monocular Depth Estimation • Semantic Segmentation +2

VideoGLUE: Video General Understanding Evaluation of Foundation Models

1 code implementation • 6 Jul 2023 • Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao, Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia, Tobias Weyand, Luke Friedman, Mikhail Sirotenko, Huisheng Wang, Florian Schroff, Hartwig Adam, Ming-Hsuan Yang, Ting Liu, Boqing Gong

We evaluate the video understanding capabilities of existing foundation models (FMs) using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring an FM for downstream tasks.

Action Recognition • Temporal Localization +1

Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception

no code implementations • NeurIPS 2023 • Hassan Akbari, Dan Kondratyuk, Yin Cui, Rachel Hornung, Huisheng Wang, Hartwig Adam

We conduct extensive empirical studies and reveal the following key insights: 1) Performing gradient descent updates by alternating on diverse modalities, loss functions, and tasks, with varying input resolutions, efficiently improves the model.
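The first insight above, alternating gradient descent updates over diverse tasks and losses rather than optimizing a summed objective, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the round-robin schedule, and the toy quadratic tasks are illustrative assumptions.

```python
import numpy as np

def alternating_gd(w, task_grads, lr=0.1, steps=100):
    """Illustrative sketch of alternating gradient descent: instead of
    summing all task losses at every step, apply one task's gradient per
    step, cycling round-robin over tasks (here, plain scalar updates)."""
    for t in range(steps):
        grad_fn = task_grads[t % len(task_grads)]  # pick one task this step
        w = w - lr * grad_fn(w)                    # standard GD update
    return w

# Two toy quadratic "tasks" with minima at 1.0 and 3.0; alternating
# updates settle near a compromise between them (around 2.0).
final_w = alternating_gd(0.0, [lambda w: w - 1.0, lambda w: w - 3.0])
```

With genuinely multimodal losses the same schedule lets each step use the input resolution and loss appropriate to one modality, which is the efficiency argument the excerpt makes.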

Ranked #2 on Zero-Shot Action Recognition on UCF101 (using extra training data)

Classification • Image Classification +7

Identity Encoder for Personalized Diffusion

no code implementations • 14 Apr 2023 • Yu-Chuan Su, Kelvin C. K. Chan, Yandong Li, Yang Zhao, Han Zhang, Boqing Gong, Huisheng Wang, Xuhui Jia

Our approach greatly reduces the overhead for personalized image generation and is more applicable in many potential applications.

Image Enhancement • Image Generation +1

MovieCLIP: Visual Scene Recognition in Movies

1 code implementation • 20 Oct 2022 • Digbalay Bose, Rajat Hebbar, Krishna Somandepalli, Haoyang Zhang, Yin Cui, Kree Cole-McLaughlin, Huisheng Wang, Shrikanth Narayanan

Longform media such as movies have complex narrative structures, with events spanning a rich variety of ambient visual scenes.

Genre classification • Scene Recognition

Spatiotemporal Contrastive Video Representation Learning

4 code implementations • CVPR 2021 • Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, Yin Cui

Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
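The pull-together/push-apart objective described above is the standard contrastive (InfoNCE-style) loss. A minimal NumPy sketch, assuming batched embeddings of two augmented clips per video; the function name, shapes, and temperature value are illustrative, not taken from the paper's released code.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Illustrative contrastive loss: row i of z1 and row i of z2 are
    embeddings of two augmented clips from the same video (positives);
    all cross-video pairs in the batch serve as negatives."""
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Positives lie on the diagonal; take a log-softmax over each row
    # and penalize low probability mass on the matching clip.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls matched clip embeddings together and pushes unmatched ones apart, which is exactly the behavior the excerpt describes.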

Contrastive Learning • Data Augmentation +5
