1 code implementation • ECCV 2020 • Xuefeng Hu, Zhihan Zhang, Zhenye Jiang, Syomantak Chaudhuri, Zhenheng Yang, Ram Nevatia
Techniques for manipulating images are advancing rapidly; while helpful for many legitimate tasks, they also pose a threat to society through their ability to create believable misinformation.
Ranked #7 on Image Manipulation Detection on CocoGlide
no code implementations • 19 Sep 2024 • Xiaotian Han, Yiren Jian, Xuefeng Hu, Haogeng Liu, Yiqi Wang, Qihang Fan, Yuang Ai, Huaibo Huang, Ran He, Zhenheng Yang, Quanzeng You
Pre-training on large-scale, high-quality datasets is crucial for enhancing the reasoning capabilities of Large Language Models (LLMs), especially in specialized domains such as mathematics.
no code implementations • 17 Jun 2024 • Xuefeng Hu, Ke Zhang, Min Sun, Albert Chen, Cheng-Hao Kuo, Ram Nevatia
Large-scale pretrained vision-language models like CLIP have demonstrated remarkable zero-shot image classification capabilities across diverse domains.
1 code implementation • CVPR 2024 • Zhaoheng Zheng, Jingmin Wei, Xuefeng Hu, Haidong Zhu, Ram Nevatia
Thus, we propose LLaMP (Large Language Models as Prompt learners), which produces adaptive prompts for the CLIP text encoder, establishing the LLM as the connecting bridge.
1 code implementation • 4 Aug 2023 • Xuefeng Hu, Ke Zhang, Lu Xia, Albert Chen, Jiajia Luo, Yuyin Sun, Ken Wang, Nan Qiao, Xiao Zeng, Min Sun, Cheng-Hao Kuo, Ram Nevatia
Large-scale pre-trained vision-language models such as CLIP have demonstrated outstanding performance in zero-shot classification, e.g., achieving 76.3% top-1 accuracy on ImageNet without seeing any examples, which offers potential benefits to many tasks that have no labeled data.
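The zero-shot classification scheme the abstract refers to can be sketched as follows: embed the image and one text prompt per class into a shared space, then pick the class whose text embedding is most cosine-similar to the image embedding. This is a minimal illustration with precomputed embeddings, not CLIP's actual encoders; the function name and inputs are assumptions.

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs):
    """CLIP-style zero-shot classification sketch (hypothetical helper):
    cosine similarity between one image embedding and per-class text
    embeddings; returns the index of the best-matching class."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarities, one per class
    return int(np.argmax(sims)), sims
```

In practice the text embeddings come from prompts like "a photo of a {class}", which is what makes the classifier work without any labeled training examples.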
2 code implementations • 21 Mar 2023 • Zhuoming Liu, Xuefeng Hu, Ram Nevatia
We propose a new setting for detecting unseen objects called Zero-shot Annotation object Detection (ZAD).
no code implementations • 21 Oct 2021 • Xuefeng Hu, Gokhan Uzunbas, Sirius Chen, Rui Wang, Ashish Shah, Ram Nevatia, Ser-Nam Lim
We present a simple and effective way to estimate batch-norm statistics at test time in order to rapidly adapt a source model to target test samples.
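The core idea of test-time batch-norm adaptation can be sketched in a few lines: replace (or blend) the source model's stored batch-norm statistics with statistics estimated from the incoming test batch before normalizing. The blending rule and `alpha` parameter below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def adapt_bn_stats(source_mean, source_var, test_batch, alpha=0.5):
    """Hypothetical mixing rule: interpolate source-domain BN statistics
    with statistics estimated from the current test batch."""
    test_mean = test_batch.mean(axis=0)
    test_var = test_batch.var(axis=0)
    mean = alpha * source_mean + (1 - alpha) * test_mean
    var = alpha * source_var + (1 - alpha) * test_var
    return mean, var

def bn_forward(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch-norm transform using the adapted statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

With `alpha=0` the layer normalizes purely with test-batch statistics; with `alpha=1` it behaves exactly like the frozen source model.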
no code implementations • 29 Sep 2021 • Xuefeng Hu, Mustafa Uzunbas, Bor-Chun Chen, Rui Wang, Ashish Shah, Ram Nevatia, Ser-Nam Lim
We present a simple and effective way to estimate batch-norm statistics at test time in order to rapidly adapt a source model to target test samples.
no code implementations • 29 Sep 2021 • Zhengyu Yang, Zijian Hu, Xuefeng Hu, Ram Nevatia
With both entropy and rank maximization, our method surpasses the state-of-the-art on CIFAR-10 and Mini-ImageNet under the standard linear evaluation protocol.
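The two regularizers the abstract names can be illustrated on a feature matrix: an entropy term over softmax-normalized rows, and an effective-rank surrogate computed from the matrix's singular-value distribution. Both quantities below are common textbook formulations chosen for illustration; the paper's actual losses may differ.

```python
import numpy as np

def entropy_rank_objectives(features, eps=1e-8):
    """Sketch of entropy and rank measures on a (batch, dim) matrix.
    Simplification: rows are treated as logits for the entropy term."""
    # mean per-row entropy of the softmax distribution
    z = features - features.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + eps)).sum(axis=1).mean()
    # effective rank: exp of the entropy of normalized singular values
    s = np.linalg.svd(features, compute_uv=False)
    s_norm = s / (s.sum() + eps)
    effective_rank = np.exp(-(s_norm * np.log(s_norm + eps)).sum())
    return entropy, effective_rank
```

Maximizing the second quantity pushes the representation to use all feature dimensions rather than collapsing, which is the usual motivation for rank-based objectives in self-supervised learning.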
1 code implementation • CVPR 2021 • Zijian Hu, Zhengyu Yang, Xuefeng Hu, Ram Nevatia
Combining the Pair Loss with the techniques developed by the MixMatch family, our proposed SimPLE algorithm shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with the state-of-the-art methods on CIFAR-10 and SVHN.
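A pairwise consistency term in the spirit of the Pair Loss can be sketched as: for pairs of unlabeled predictions where one is confident and the two distributions are similar, penalize their divergence. The thresholds, the cosine similarity measure, and the squared-difference penalty below are all assumptions for illustration, not SimPLE's exact formulation.

```python
import numpy as np

def pair_loss(probs, conf_thresh=0.95, sim_thresh=0.9):
    """Hedged sketch of a pairwise consistency loss over a batch of
    predicted class distributions (rows of `probs` sum to 1)."""
    n = probs.shape[0]
    loss, count = 0.0, 0
    for i in range(n):
        if probs[i].max() < conf_thresh:  # anchor must be confident
            continue
        for j in range(n):
            if i == j:
                continue
            sim = probs[i] @ probs[j] / (
                np.linalg.norm(probs[i]) * np.linalg.norm(probs[j]))
            if sim >= sim_thresh:  # only pull together similar pairs
                loss += np.square(probs[i] - probs[j]).sum()
                count += 1
    return loss / count if count else 0.0
```

The double thresholding is what distinguishes this family of losses from plain consistency regularization: low-confidence or dissimilar pairs contribute nothing.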
no code implementations • 1 Sep 2020 • Xuefeng Hu, Zhihan Zhang, Zhenye Jiang, Syomantak Chaudhuri, Zhenheng Yang, Ram Nevatia
We present a novel framework, Spatial Pyramid Attention Network (SPAN), for detection and localization of multiple types of image manipulations.
no code implementations • 4 Mar 2019 • Svebor Karaman, Xudong Lin, Xuefeng Hu, Shih-Fu Chang
We propose an unsupervised hashing method which aims to produce binary codes that preserve the ranking induced by a real-valued representation.
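The property the abstract targets, binary codes preserving the ranking induced by a real-valued representation, can be checked with a simple pairwise-agreement score. The sign-projection baseline and the agreement metric below are illustrative assumptions; the paper learns the hash function rather than using a random projection.

```python
import numpy as np

def sign_hash(x, projection):
    """Baseline random-projection sign hashing (LSH-style); the learned
    method would replace this fixed projection."""
    return (x @ projection > 0).astype(np.uint8)

def ranking_agreement(query, items, code_q, codes):
    """Fraction of item pairs whose order under Hamming distance matches
    their order under Euclidean distance (hash-distance ties count as
    agreement)."""
    d_real = np.linalg.norm(items - query, axis=1)
    d_hash = (codes != code_q).sum(axis=1)
    agree = total = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if d_real[i] == d_real[j]:
                continue
            total += 1
            if (d_real[i] < d_real[j]) == (d_hash[i] <= d_hash[j]):
                agree += 1
    return agree / total if total else 1.0
```

An unsupervised hashing method of this kind would be trained to push this agreement score toward 1 without using any labels.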