1 code implementation • 31 Dec 2023 • Samarth Mishra, Carlos D. Castillo, Hongcheng Wang, Kate Saenko, Venkatesh Saligrama
In cross-domain retrieval, a model is required to identify images from the same semantic category across two visual domains.
1 code implementation • NeurIPS 2023 • Hongcheng Wang, Andy Guan Hong Chen, Xiaoqi Li, Mingdong Wu, Hao Dong
The task of Visual Object Navigation (VON) involves an agent's ability to locate a particular object within a given scene.
no code implementations • 13 May 2023 • Pirazh Khorramshahi, Zhe Wu, Tianchen Wang, Luke DeLuccia, Hongcheng Wang
Despite recent advances in video-based action recognition and robust spatio-temporal modeling, most proposed approaches rely on abundant computational resources, running large, computation-intensive convolutional or transformer-based neural networks to obtain satisfactory results.
no code implementations • 29 Mar 2023 • Haoqi Yuan, Chi Zhang, Hongcheng Wang, Feiyang Xie, Penglin Cai, Hao Dong, Zongqing Lu
Our method outperforms baselines by a large margin and is the most sample-efficient demonstration-free RL method to solve Minecraft Tech Tree tasks.
no code implementations • 7 Nov 2022 • Yonatan Vaizman, Hongcheng Wang
We formulate an optimization problem: finding a channel allocation (which channel each home should use) that minimizes the total Wi-Fi pain in the neighborhood.
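The formulation above can be sketched as a small search problem. This is a hypothetical illustration, not the authors' actual objective: the pairwise "pain" model (same-channel interference scaled by proximity) and the exhaustive search are assumptions for the sake of a runnable example.

```python
from itertools import product

CHANNELS = [1, 6, 11]  # the non-overlapping 2.4 GHz Wi-Fi channels

def pair_pain(ch_a, ch_b, proximity):
    """Assumed interference model: two homes suffer pain proportional to
    their proximity when they share a channel, and none otherwise."""
    return proximity if ch_a == ch_b else 0.0

def total_pain(allocation, proximities):
    """Sum the pain over every pair of neighboring homes."""
    return sum(
        pair_pain(allocation[a], allocation[b], w)
        for (a, b), w in proximities.items()
    )

def best_allocation(homes, proximities):
    """Exhaustive search over all channel allocations (fine for a handful
    of homes; a real neighborhood would need a smarter solver)."""
    best, best_cost = None, float("inf")
    for choice in product(CHANNELS, repeat=len(homes)):
        allocation = dict(zip(homes, choice))
        cost = total_pain(allocation, proximities)
        if cost < best_cost:
            best, best_cost = allocation, cost
    return best, best_cost

homes = ["A", "B", "C"]
proximities = {("A", "B"): 1.0, ("B", "C"): 0.8, ("A", "C"): 0.2}
alloc, cost = best_allocation(homes, proximities)
```

With three homes and three non-overlapping channels, the search finds a zero-pain allocation by giving each home its own channel; the interesting regime in the paper is when homes outnumber channels and some pain is unavoidable.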
no code implementations • 10 Nov 2021 • Navdeep Jain, Hongcheng Wang
We also propose a concatenated gender/age detection model to algorithmically derive the context in the absence of such prior information.
no code implementations • 19 Sep 2020 • Xiaoqing Zheng, Jie Chen, Hongcheng Wang, Song Zheng, Yaguang Kong
A machine vision-based surface quality inspection system is usually composed of two processes: image acquisition and automatic defect detection.
1 code implementation • 19 Aug 2020 • Kehua Chen, Hongcheng Wang, Borja Valverde-Pérez, Siyuan Zhai, Luca Vezzaro, Aijie Wang
The results show that the LCA-based optimization has lower environmental impacts than the baseline scenario: cost, energy consumption, and greenhouse-gas emissions are reduced to 0.890 CNY/m3-ww, 0.530 kWh/m3-ww, and 2.491 kg CO2-eq/m3-ww, respectively.
Multi-agent Reinforcement Learning
1 code implementation • 20 May 2020 • Edward O'Dwyer, Kehua Chen, Hongcheng Wang, Aijie Wang, Nilay Shah, Miao Guo
The expanding population and rapid urbanisation, particularly in the Global South, are creating global challenges of resource-supply stress and rising waste generation.
Optimization and Control
no code implementations • 19 Mar 2020 • Toufiq Parag, Hongcheng Wang
Classification is a pivotal function for many computer vision tasks, such as object classification, detection, and scene segmentation.
no code implementations • 29 Feb 2020 • Longlong Jing, Toufiq Parag, Zhe Wu, YingLi Tian, Hongcheng Wang
To minimize the dependence on a large annotated dataset, our proposed semi-supervised method trains from a small number of labeled examples and exploits two regulatory signals from unlabeled data.
no code implementations • 27 Feb 2020 • Usman Sajid, Hasan Sajid, Hongcheng Wang, Guanghui Wang
This module also provides a count for each label, which is then analyzed by a specifically devised decision module to determine whether the image belongs to either of the two extreme cases (very low or very high density) or is a normal case.
no code implementations • ICCV 2019 • Ruichi Yu, Hongcheng Wang, Ang Li, Jingxiao Zheng, Vlad I. Morariu, Larry S. Davis
We address the recognition of agent-in-place actions, which are associated with agents who perform them and places where they occur, in the context of outdoor home surveillance.
no code implementations • 6 Jan 2018 • Ruichi Yu, Hongcheng Wang, Larry S. Davis
To dramatically speed up relevant motion event detection and improve its performance, we propose ReMotENet, a unified, end-to-end, data-driven network that uses spatial-temporal attention-based 3D ConvNets to jointly model the appearance and motion of objects of interest in a video.
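The core idea of attending to moving objects can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the ReMotENet implementation: it uses frame differencing to suppress static background and a soft attention map over the remaining motion, standing in for the learned 3D-ConvNet attention described above.

```python
import numpy as np

def frame_difference(clip):
    """clip: (T, H, W) grayscale video; returns (T-1, H, W) absolute
    frame-to-frame differences, which zero out static background."""
    return np.abs(np.diff(clip.astype(np.float32), axis=0))

def spatial_temporal_attention(diff, eps=1e-8):
    """Soft attention map (assumed form): each spatio-temporal location is
    weighted by its share of the total motion energy in the clip."""
    return diff / (diff.sum() + eps)

def motion_score(clip):
    """Scalar motion evidence: attention-weighted sum of the differences.
    Zero for a perfectly static clip, positive when something moves."""
    diff = frame_difference(clip)
    att = spatial_temporal_attention(diff)
    return float((att * diff).sum())
```

A detector built on this signal could skip clips whose score falls below a threshold, which is the spirit of filtering out irrelevant (motion-free) surveillance video before running heavier models.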