Search Results for author: Jianlong Chang

Found 28 papers, 8 papers with code

A Survey of Generative Techniques for Spatial-Temporal Data Mining

no code implementations15 May 2024 Qianru Zhang, Haixin Wang, Cheng Long, Liangcai Su, Xingwei He, Jianlong Chang, Tailin Wu, Hongzhi Yin, Siu-Ming Yiu, Qi Tian, Christian S. Jensen

By integrating generative techniques and providing a standardized framework, the paper contributes to advancing the field and encourages researchers to explore the vast potential of generative techniques in spatial-temporal data mining.

When Parameter-efficient Tuning Meets General-purpose Vision-language Models

1 code implementation16 Dec 2023 Yihang Zhai, Haixin Wang, Jianlong Chang, Xinlong Yang, Jinan Sun, Shikun Zhang, Qi Tian

Instruction tuning has shown promising potential for developing general-purpose AI capabilities by using large-scale pre-trained models, and it has boosted growing research on integrating multimodal information for creative applications.

Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models

no code implementations14 Jun 2023 Lingxi Xie, Longhui Wei, Xiaopeng Zhang, Kaifeng Bi, Xiaotao Gu, Jianlong Chang, Qi Tian

In this paper, we start with a conceptual definition of AGI and briefly review how NLP solves a wide range of tasks via a chat system.

Visual Tuning

no code implementations10 May 2023 Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lingbo Liu, Shijie Wang, Zhiyu Wang, Junfan Lin, Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen

With the surprising development of pre-trained visual foundation models, visual tuning has jumped out of the standard modus operandi of fine-tuning the whole pre-trained model or just the fully connected layer.

LION: Implicit Vision Prompt Tuning

no code implementations17 Mar 2023 Haixin Wang, Jianlong Chang, Xiao Luo, Jinan Sun, Zhouchen Lin, Qi Tian

Despite recent competitive performance across a range of vision tasks, vision Transformers still suffer from heavy computational costs.

Transfer Learning

Constraint and Union for Partially-Supervised Temporal Sentence Grounding

no code implementations20 Feb 2023 Chen Ju, Haicheng Wang, Jinxiang Liu, Chaofan Ma, Ya zhang, Peisen Zhao, Jianlong Chang, Qi Tian

Temporal sentence grounding aims to detect the event timestamps described by the natural language query from given untrimmed videos.

Sentence Temporal Sentence Grounding

Open-Set Fine-Grained Retrieval via Prompting Vision-Language Evaluator

no code implementations CVPR 2023 Shijie Wang, Jianlong Chang, Haojie Li, Zhihui Wang, Wanli Ouyang, Qi Tian

PLEor can leverage the pre-trained CLIP model to infer the discrepancies encompassing both pre-defined and unknown subcategories, called category-specific discrepancies, and transfer them to the backbone network trained in closed-set scenarios.

Knowledge Distillation Retrieval +1

Being Comes from Not-being: Open-vocabulary Text-to-Motion Generation with Wordless Training

1 code implementation CVPR 2023 Junfan Lin, Jianlong Chang, Lingbo Liu, Guanbin Li, Liang Lin, Qi Tian, Chang Wen Chen

During inference, instead of changing the motion generator, our method reformulates the input text into a masked motion as the prompt for the motion generator to "reconstruct" the motion.

Language Modelling Motion Generation +1

Towards a Unified View on Visual Parameter-Efficient Transfer Learning

1 code implementation3 Oct 2022 Bruce X. B. Yu, Jianlong Chang, Lingbo Liu, Qi Tian, Chang Wen Chen

Towards this goal, we propose a framework with a unified view of PETL called visual-PETL (V-PETL) to investigate the effects of different PETL techniques, data scales of downstream domains, positions of trainable parameters, and other aspects affecting the trade-off.

Action Recognition Image Classification +2

Prompt-Matched Semantic Segmentation

no code implementations22 Aug 2022 Lingbo Liu, Jianlong Chang, Bruce X. B. Yu, Liang Lin, Qi Tian, Chang Wen Chen

Previous methods usually fine-tune the entire network for each specific dataset, making it burdensome to store the massive parameters of these networks.

Representation Learning Segmentation +2

Fine-grained Retrieval Prompt Tuning

no code implementations29 Jul 2022 Shijie Wang, Jianlong Chang, Zhihui Wang, Haojie Li, Wanli Ouyang, Qi Tian

In this paper, we develop Fine-grained Retrieval Prompt Tuning (FRPT), which steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompting and feature adaptation.

Retrieval

Pro-tuning: Unified Prompt Tuning for Vision Tasks

no code implementations28 Jul 2022 Xing Nie, Bolin Ni, Jianlong Chang, Gaofeng Meng, Chunlei Huo, Zhaoxiang Zhang, Shiming Xiang, Qi Tian, Chunhong Pan

To this end, we propose parameter-efficient Prompt tuning (Pro-tuning) to adapt frozen vision models to various downstream vision tasks.

Adversarial Robustness Image Classification +4

HiVLP: Hierarchical Vision-Language Pre-Training for Fast Image-Text Retrieval

no code implementations24 May 2022 Feilong Chen, Xiuyi Chen, Jiaxin Shi, Duzhen Zhang, Jianlong Chang, Qi Tian

It also achieves about +4.9 AR on COCO and +3.8 AR on Flickr30K over LightningDOT, and achieves comparable performance with the state-of-the-art (SOTA) fusion-based model METER.

Cross-Modal Retrieval Image-text Retrieval +1

AME: Attention and Memory Enhancement in Hyper-Parameter Optimization

no code implementations CVPR 2022 Nuo Xu, Jianlong Chang, Xing Nie, Chunlei Huo, Shiming Xiang, Chunhong Pan

Training Deep Neural Networks (DNNs) is inherently subject to sensitive hyper-parameters and untimely feedback from performance evaluation.

Image Classification object-detection +2

Deep Encryption: Protecting Pre-Trained Neural Networks with Confusion Neurons

no code implementations29 Sep 2021 Mengbiao Zhao, Shixiong Xu, Jianlong Chang, Lingxi Xie, Jie Chen, Qi Tian

Having consumed huge amounts of training data and computational resource, large-scale pre-trained models are often considered key assets of AI service providers.

Position

Differentiable Convolution Search for Point Cloud Processing

no code implementations ICCV 2021 Xing Nie, Yongcheng Liu, Shaohong Chen, Jianlong Chang, Chunlei Huo, Gaofeng Meng, Qi Tian, Weiming Hu, Chunhong Pan

It can work in a purely data-driven manner and is thus capable of automatically creating a group of suitable convolutions for geometric shape modeling.

Spatio-Temporal Graph Structure Learning for Traffic Forecasting

no code implementations AAAI 2020 Qi Zhang, Jianlong Chang, Gaofeng Meng, Shiming Xiang, Chunhong Pan

To address these issues, we propose a novel framework named Structure Learning Convolution (SLC) that extends the traditional convolutional neural network (CNN) to graph domains and learns the graph structure for traffic forecasting.

Graph structure learning Time Series +2

Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification

2 code implementations10 Feb 2020 Guan-An Wang, Tianzhu Zhang, Yang Yang, Jian Cheng, Jianlong Chang, Xu Liang, Zeng-Guang Hou

Second, given cross-modality unpaired-images of a person, our method can generate cross-modality paired images from exchanged images.

Person Re-Identification

Differentiable Architecture Search with Ensemble Gumbel-Softmax

no code implementations6 May 2019 Jianlong Chang, Xinbang Zhang, Yiwen Guo, Gaofeng Meng, Shiming Xiang, Chunhong Pan

For network architecture search (NAS), it is crucial but challenging to simultaneously guarantee both effectiveness and efficiency.

Neural Architecture Search

Deep Discriminative Clustering Analysis

no code implementations5 May 2019 Jianlong Chang, Yiwen Guo, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, Chunhong Pan

Traditional clustering methods often perform clustering with low-level, indiscriminative representations and ignore relationships between patterns, resulting in limited achievements in the era of deep learning.

Clustering

Structure-Aware Convolutional Neural Networks

1 code implementation NeurIPS 2018 Jianlong Chang, Jie Gu, Lingfeng Wang, Gaofeng Meng, Shiming Xiang, Chunhong Pan

Convolutional neural networks (CNNs) are inherently subject to invariable filters that can only aggregate local inputs with the same topological structures.

Action Recognition Activity Detection +5
