Search Results for author: Haoji Hu

Found 20 papers, 8 papers with code

UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing

no code implementations20 Feb 2024 Jianhong Bai, Tianyu He, Yuchi Wang, Junliang Guo, Haoji Hu, Zuozhu Liu, Jiang Bian

Recent advances in text-guided video editing have showcased promising results in appearance editing (e.g., stylization).

Video Editing

Unified Medical Image Pre-training in Language-Guided Common Semantic Space

no code implementations24 Nov 2023 Xiaoxuan He, Yifan Yang, Xinyang Jiang, Xufang Luo, Haoji Hu, Siyun Zhao, Dongsheng Li, Yuqing Yang, Lili Qiu

To overcome the aforementioned challenges, we propose a Unified Medical Image Pre-training framework, namely UniMedI, which utilizes diagnostic reports as a common semantic space to create unified representations for diverse modalities of medical images (especially for 2D and 3D images).

Towards Distribution-Agnostic Generalized Category Discovery

1 code implementation NeurIPS 2023 Jianhong Bai, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, Haoji Hu

In this paper, we formally define a more realistic task as distribution-agnostic generalized category discovery (DA-GCD): generating fine-grained predictions for both close- and open-set classes in a long-tailed open-world setting.

Contrastive Learning Transfer Learning

Learning Dynamic Graphs from All Contextual Information for Accurate Point-of-Interest Visit Forecasting

no code implementations28 Jun 2023 Arash Hajisafi, Haowen Lin, Sina Shaham, Haoji Hu, Maria Despoina Siampou, Yao-Yi Chiang, Cyrus Shahabi

Forecasting the number of visits to Points-of-Interest (POI) in an urban area is critical for planning and decision-making across application domains, from urban planning and transportation management to public health and social studies.

Decision Making Management +2

On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning

2 code implementations8 Jun 2023 Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng Chu, Haoji Hu

Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect.

Long-tail Learning Representation Learning +1

Clustering Human Mobility with Multiple Spaces

no code implementations20 Jan 2023 Haoji Hu, Haowen Lin, Yao-Yi Chiang

Human mobility clustering is an important problem for understanding human mobility behaviors (e.g., work and school commutes).

Clustering Trajectory Clustering

Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-tailed Learning

1 code implementation22 Aug 2022 Hualiang Wang, Siming Fu, Xiaoxuan He, Hangxiang Fang, Zuozhu Liu, Haoji Hu

To our knowledge, this is the first work to measure the representation quality of classifiers and features from the perspective of the distribution overlap coefficient.

Image Classification Instance Segmentation +1
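The distribution overlap coefficient mentioned above has a standard statistical reading, OVL(f, g) = ∫ min(f, g). A minimal sketch for discrete distributions follows; this illustrates the general coefficient only, not the paper's calibrated hyper-sphere formulation:

```python
import numpy as np

def overlap_coefficient(p, q):
    """Overlap coefficient of two discrete distributions:
    OVL = sum_i min(p_i, q_i), which lies in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalize to probability vectors
    return float(np.minimum(p, q).sum())

print(overlap_coefficient([3, 1], [1, 3]))  # 0.5 (partial overlap)
print(overlap_coefficient([1, 0], [0, 1]))  # 0.0 (disjoint supports)
```

Identical distributions give OVL = 1, disjoint supports give OVL = 0, so the value directly measures how much two class-conditional distributions collide.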

AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications

no code implementations11 Mar 2022 Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong, Kaiwei Sun, Hangzheng Lin, Wanlu Liu, Wanghui Ding, Jianfei Yang, Haoji Hu, Yueling Zhang, Yang Feng, Zeyu Zhao, Huikai Wu, Youyi Zheng, Bing Fang, Zuozhu Liu, Zhihe Zhao

Here, we present a Deep Dental Multimodal Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model (the most accurate digital dental model), and a fusion model to generate 3D fused crown-root-bone structures with high fidelity and accurate occlusal and dentition information.

Segmentation

UWC: Unit-wise Calibration Towards Rapid Network Compression

no code implementations17 Jan 2022 Chen Lin, Zheyang Li, Bo Peng, Haoji Hu, Wenming Tan, Ye Ren, ShiLiang Pu

This paper introduces a post-training quantization (PTQ) method achieving highly efficient Convolutional Neural Network (CNN) quantization with high performance.

Quantization
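For background on the PTQ setting, a generic uniform affine quantization round-trip can be sketched as below. This is a common illustration of post-training quantization in general, not the paper's unit-wise calibration method, and all names here are illustrative:

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine quantization of a float tensor to uint8,
    returning the quantized tensor plus (scale, zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = max((w_max - w_min) / 255.0, 1e-8)  # guard constant tensors
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map uint8 values back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale, zp = quantize_uint8(w)
err = np.abs(dequantize(q, scale, zp) - w).max()
assert err <= scale  # round-trip error within one quantization step
```

Post-training methods like the one above need no retraining; calibration schemes (such as the unit-wise idea in the title) then refine the scales to shrink the quantization error further.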

Compressing Facial Makeup Transfer Networks by Collaborative Distillation and Kernel Decomposition

1 code implementation16 Sep 2020 Bianjiang Yang, Zi Hui, Haoji Hu, Xinyi Hu, Lu Yu

Although the facial makeup transfer network has achieved high-quality performance in generating perceptually pleasing makeup images, its capability is still restricted by the massive computation and storage requirements of the network architecture.

Facial Makeup Transfer

Modeling Personalized Item Frequency Information for Next-basket Recommendation

2 code implementations31 May 2020 Haoji Hu, Xiangnan He, Jinyang Gao, Zhi-Li Zhang

Next-basket recommendation (NBR) is in general more complex than the widely studied sequential (session-based) recommendation, which recommends the next item based on a sequence of items.

Next-basket recommendation Session-Based Recommendations
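The personalized item frequency (PIF) idea in the title can be illustrated with a naive frequency baseline; this is a sketch only, not the paper's full model:

```python
from collections import Counter

def next_basket_by_frequency(baskets, k):
    """Naive next-basket baseline: rank items by the user's own
    purchase frequency across past baskets and return the top k."""
    freq = Counter(item for basket in baskets for item in basket)
    return [item for item, _ in freq.most_common(k)]

history = [["milk", "bread"],
           ["milk", "eggs"],
           ["milk", "bread", "beer"]]
print(next_basket_by_frequency(history, 2))  # ['milk', 'bread']
```

Even this trivial baseline captures repeat-purchase behavior that pure sequence models can miss, which is why modeling per-user item frequency matters for NBR.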

Collaborative Distillation for Ultra-Resolution Universal Style Transfer

1 code implementation CVPR 2020 Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, Ming-Hsuan Yang

In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer to reduce the convolutional filters.

Knowledge Distillation Style Transfer

Physics-Guided Deep Neural Networks for Power Flow Analysis

no code implementations31 Jan 2020 Xinyue Hu, Haoji Hu, Saurabh Verma, Zhi-Li Zhang

Nevertheless, prior data-driven approaches suffer from poor performance and generalizability, due to overly simplified assumptions about the PF problem or neglect of the physical laws governing power systems.

Structured Pruning for Efficient ConvNets via Incremental Regularization

no code implementations NIPS Workshop CDNNRIA 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance loss.

Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method

no code implementations NIPS Workshop CDNNRIA 2018 Yuxin Zhang, Huan Wang, Yang Luo, Lu Yu, Haoji Hu, Hangguan Shan, Tony Q. S. Quek

Despite their extensive applications in video analysis, three-dimensional convolutional neural networks (3D CNNs) are restricted by their massive computation and storage consumption.

Model Compression Network Pruning

Structured Pruning for Efficient ConvNets via Incremental Regularization

1 code implementation25 Apr 2018 Huan Wang, Qiming Zhang, Yuehai Wang, Yu Lu, Haoji Hu

Parameter pruning is a promising approach for CNN compression and acceleration by eliminating redundant model parameters with tolerable performance degradation.

Network Pruning

Structured Probabilistic Pruning for Convolutional Neural Network Acceleration

2 code implementations20 Sep 2017 Huan Wang, Qiming Zhang, Yuehai Wang, Haoji Hu

Unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP (Structured Probabilistic Pruning) introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities.

Transfer Learning
