no code implementations • CCL 2021 • Ning Yu, Jiangping Wang, Yu Shi, Jianyi Liu
"This paper leverages knowledge from HowNet and transfers the structure and ideas of the Word2vec model to the sememe representation learning process, proposing a word-vector representation method based on sememe representation learning. First, we use OpenHowNet to obtain all sememes, all Chinese words, and the mapping from each Chinese word to its corresponding sememe set as the experimental dataset. Then, based on the Skip-gram model, we train a sememe representation learning model and derive word vectors from it. Finally, we evaluate the resulting word vectors on word similarity, word sense disambiguation, word analogy, and nearest-neighbor sememe inspection. Compared with baseline models, the proposed method is both efficient and accurate: it requires neither a large-scale corpus nor a complex network structure with numerous parameters, yet it still improves accuracy on a variety of natural language processing tasks."
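The pipeline described above (sememe sets from OpenHowNet, Skip-gram training over sememes, word vectors derived from sememe vectors) can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in: the English words and sememes, the co-occurrence scheme (sememes attached to the same word treated as each other's context), and all hyperparameters are illustrative, not the paper's actual setup.

```python
import numpy as np

# Toy stand-in for the OpenHowNet word -> sememe-set mapping.
word2sememes = {
    "apple":  ["fruit", "food", "sweet"],
    "banana": ["fruit", "food", "yellow"],
    "car":    ["vehicle", "machine", "wheel"],
}
sememes = sorted({s for ss in word2sememes.values() for s in ss})
idx = {s: i for i, s in enumerate(sememes)}

rng = np.random.default_rng(0)
dim = 8
S = rng.normal(scale=0.1, size=(len(sememes), dim))  # sememe "input" vectors
C = rng.normal(scale=0.1, size=(len(sememes), dim))  # sememe "context" vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Skip-gram with one negative sample: sememes of the same word are
# treated as positive (center, context) pairs.
lr = 0.1
for _ in range(200):
    for ss in word2sememes.values():
        for center in ss:
            for ctx in ss:
                if ctx == center:
                    continue
                i, j = idx[center], idx[ctx]
                g = sigmoid(S[i] @ C[j]) - 1.0   # positive-pair gradient
                dS, dC = g * C[j], g * S[i]
                S[i] -= lr * dS
                C[j] -= lr * dC
                k = int(rng.integers(len(sememes)))  # negative sample
                g = sigmoid(S[i] @ C[k])
                dS, dC = g * C[k], g * S[i]
                S[i] -= lr * dS
                C[k] -= lr * dC

def word_vec(w):
    # A word vector is the mean of its sememe vectors.
    return S[[idx[s] for s in word2sememes[w]]].mean(axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words sharing sememes end up closer than words sharing none.
print(cos(word_vec("apple"), word_vec("banana")) > cos(word_vec("apple"), word_vec("car")))
```

Because word vectors are composed from a small sememe vocabulary, the method needs no large corpus, which matches the efficiency claim in the abstract.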
no code implementations • 10 Oct 2024 • Mingming He, Pascal Clausen, Ahmet Levent Taşel, Li Ma, Oliver Pilarski, Wenqi Xian, Laszlo Rikker, Xueming Yu, Ryan Burgert, Ning Yu, Paul Debevec
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
1 code implementation • 29 Sep 2024 • Chen Yeh, You-Ming Chang, Wei-Chen Chiu, Ning Yu
To address the risks of encountering inappropriate or harmful content, researchers have combined several harmful-content datasets with machine learning methods to detect harmful concepts.
no code implementations • 20 Aug 2024 • Yuan Xin, Zheng Li, Ning Yu, Dingfan Chen, Mario Fritz, Michael Backes, Yang Zhang
Despite being prevalent in the general field of Natural Language Processing (NLP), pre-trained language models inherently carry privacy and copyright concerns due to their nature of training on large-scale web-scraped data.
1 code implementation • 16 Aug 2024 • Le Xue, Manli Shu, Anas Awadalla, Jun Wang, An Yan, Senthil Purushwalkam, Honglu Zhou, Viraj Prabhu, Yutong Dai, Michael S Ryoo, Shrikant Kendre, Jieyu Zhang, Can Qin, Shu Zhang, Chia-Chih Chen, Ning Yu, Juntao Tan, Tulika Manoj Awalgaonkar, Shelby Heinecke, Huan Wang, Yejin Choi, Ludwig Schmidt, Zeyuan Chen, Silvio Savarese, Juan Carlos Niebles, Caiming Xiong, ran Xu
The framework comprises meticulously curated datasets, a training recipe, model architectures, and a resulting suite of LMMs.
no code implementations • 13 Aug 2024 • Zheng Li, Xinlei He, Ning Yu, Yang Zhang
Masked Image Modeling (MIM) has achieved significant success in the realm of self-supervised learning (SSL) for visual recognition.
no code implementations • 1 Aug 2024 • Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo
Atlas consists of two agents, namely the mutation agent and the selection agent, each comprising four key modules: a vision-language model (VLM) or LLM brain, planning, memory, and tool usage.
1 code implementation • 6 Jun 2024 • Minzhou Pan, Yi Zeng, Xue Lin, Ning Yu, Cho-Jui Hsieh, Peter Henderson, Ruoxi Jia
In this study, we investigate the vulnerability of image watermarks to diffusion-model-based image editing, a challenge exacerbated by the computational cost of accessing gradient information and the closed-source nature of many diffusion models.
1 code implementation • 25 May 2024 • Qian Wang, Chen Li, Yuchen Luo, Hefei Ling, Ping Li, Jiazhong Chen, Shijuan Huang, Ning Yu
By learning to distinguish this open covering from the distribution of natural data, we can develop a detector with strong generalization capabilities against all types of adversarial attacks.
no code implementations • 22 Apr 2024 • Si Chen, Feiyang Kang, Ning Yu, Ruoxi Jia
Existing approaches to fact tracing rely on assessing the similarity between each training sample and the query along a certain dimension, such as lexical similarity, gradient, or embedding space.
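The similarity-based fact tracing that the abstract describes can be sketched in a few lines: rank training samples by cosine similarity to the query in some chosen representation space. The embeddings below are synthetic, and the planted nearest neighbor (index 17) is purely illustrative.

```python
import numpy as np

# Synthetic "training sample" embeddings and a query placed near sample 17.
rng = np.random.default_rng(1)
train_emb = rng.normal(size=(100, 32))
query = train_emb[17] + 0.01 * rng.normal(size=32)

def cosine_rank(q, embs):
    # Rank training samples by cosine similarity to the query,
    # most similar first.
    q = q / np.linalg.norm(q)
    e = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    return np.argsort(-(e @ q))

ranking = cosine_rank(query, train_emb)
print(int(ranking[0]))  # → 17: the sample the fact most plausibly came from
```

The same ranking scheme applies whether the representation is lexical, gradient-based, or an embedding space; only the construction of `train_emb` and `query` changes.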
no code implementations • 7 Apr 2024 • Yimu Wang, Shuai Yuan, Xiangru Jian, Wei Pang, Mushi Wang, Ning Yu
While recent progress in video-text retrieval has been driven by the exploration of powerful model architectures and training strategies, the representation learning ability of video-text retrieval models is still limited due to low-quality and scarce training data annotations.
no code implementations • 4 Apr 2024 • Bahri Batuhan Bilecen, Yigit Yalin, Ning Yu, Aysegul Dundar
Generative Adversarial Networks (GANs) have emerged as powerful tools for high-quality image generation and real image editing by manipulating their latent spaces.
1 code implementation • 19 Mar 2024 • Zhuowen Yuan, Zidi Xiong, Yi Zeng, Ning Yu, Ruoxi Jia, Dawn Song, Bo Li
The innovative use of constrained optimization and a fusion-based guardrail approach represents a significant step forward in developing more secure and reliable LLMs, setting a new standard for content moderation frameworks in the face of evolving digital threats.
1 code implementation • 5 Feb 2024 • Mintong Kang, Nezihe Merve Gürel, Ning Yu, Dawn Song, Bo Li
Specifically, we provide conformal risk analysis for RAG models and certify an upper confidence bound of generation risks, which we refer to as conformal generation risk.
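The notion of an upper confidence bound on generation risk can be illustrated with a simplified stand-in. The paper's conformal risk analysis is more refined; this sketch uses a plain Hoeffding bound on bounded calibration losses, and the calibration data below is made up.

```python
import math

def conformal_risk_upper_bound(calibration_losses, delta=0.05):
    """Hoeffding-style upper confidence bound on expected risk.

    Simplified stand-in for a conformal generation risk bound: with
    probability >= 1 - delta, the true expected loss (values in [0, 1])
    lies below the returned bound.
    """
    n = len(calibration_losses)
    empirical = sum(calibration_losses) / n
    return empirical + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# e.g. 1000 calibration generations, 30 of which were judged unsafe/incorrect
losses = [1.0] * 30 + [0.0] * 970
print(round(conformal_risk_upper_bound(losses), 4))  # → 0.0687
```

Note how the bound exceeds the 3% empirical risk by a term that shrinks as the calibration set grows, which is the usual trade-off in such certificates.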
1 code implementation • 5 Feb 2024 • Yuancheng Xu, Jiarui Yao, Manli Shu, Yanchao Sun, Zichu Wu, Ning Yu, Tom Goldstein, Furong Huang
We show that Shadowcast is highly effective in achieving the attacker's intentions using as few as 50 poison samples.
1 code implementation • 15 Dec 2023 • Qian Wang, Yaoyao Liu, Hefei Ling, Yingwei Li, Qihao Liu, Ping Li, Jiazhong Chen, Alan Yuille, Ning Yu
In response to adversarial attacks against visual classifiers that evolve on a monthly basis, numerous defenses have been proposed to generalize against as many known attacks as possible.
2 code implementations • 30 Nov 2023 • Artemis Panagopoulou, Le Xue, Ning Yu, Junnan Li, Dongxu Li, Shafiq Joty, ran Xu, Silvio Savarese, Caiming Xiong, Juan Carlos Niebles
To enable this framework, we devise a scalable pipeline that automatically generates high-quality, instruction-tuning datasets from readily available captioning data across different modalities, and contribute 24K QA data for audio and 250K QA data for 3D.
1 code implementation • 30 Oct 2023 • Minxing Zhang, Ning Yu, Rui Wen, Michael Backes, Yang Zhang
Several membership inference attacks (MIAs) have been proposed to demonstrate the privacy vulnerability of generative models by classifying a query image as a training-set member or non-member.
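The member/non-member classification described above can be sketched with a toy reconstruction-distance attack: if the model can reproduce a query with unusually low error, the query is flagged as a likely training member. The "generative model" below is a stand-in that perfectly memorizes its training set, and the threshold is illustrative (a real attack would calibrate it on held-out data).

```python
import numpy as np

# Toy data: 50 "training" points and one non-member query.
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 8))
nonmember = rng.normal(size=8)

def reconstruction_error(x, model_samples):
    # Nearest-neighbor distance to samples produced by the model.
    return float(np.min(np.linalg.norm(model_samples - x, axis=1)))

threshold = 1.0  # illustrative; calibrated in a real attack
member_err = reconstruction_error(train[3], train)      # 0: memorized
nonmember_err = reconstruction_error(nonmember, train)  # typically larger
print(member_err < threshold, nonmember_err > member_err)
```

The attack's power comes entirely from the gap between the two error distributions; for well-generalizing models that gap narrows and membership inference becomes harder.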
1 code implementation • 26 Oct 2023 • You-Ming Chang, Chen Yeh, Wei-Chen Chiu, Ning Yu
Moreover, results demonstrate that (1) deepfake detection accuracy can be significantly and consistently improved (from 71.06% to 92.11% in average accuracy over unseen domains) using pretrained vision-language models with prompt tuning; (2) our superior performance comes at a lower cost in training data and trainable parameters, resulting in an effective and efficient solution for deepfake detection.
1 code implementation • 13 Jun 2023 • Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang
Graph generative models are becoming increasingly effective for data distribution approximation and data augmentation.
1 code implementation • NeurIPS 2023 • Can Qin, Shu Zhang, Ning Yu, Yihao Feng, Xinyi Yang, Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, Stefano Ermon, Yun Fu, ran Xu
Visual generative foundation models such as Stable Diffusion show promise in navigating these goals, especially when prompted with arbitrary languages.
no code implementations • 18 May 2023 • Peihua Ma, Yixin Wu, Ning Yu, Yang Zhang, Michael Backes, Qin Wang, Cheng-I Wei
Nutrition information is crucial in precision nutrition and the food industry.
1 code implementation • CVPR 2024 • Le Xue, Ning Yu, Shu Zhang, Artemis Panagopoulou, Junnan Li, Roberto Martín-Martín, Jiajun Wu, Caiming Xiong, ran Xu, Juan Carlos Niebles, Silvio Savarese
It achieves a new SOTA of 50.6% (top-1) on Objaverse-LVIS and 84.7% (top-1) on ModelNet40 in zero-shot classification.
Ranked #9 on 3D Point Cloud Classification on ScanObjectNN (using extra training data)
1 code implementation • 22 Apr 2023 • Qian Wang, Yongqin Xian, Hefei Ling, Jinyuan Zhang, Xiaorui Lin, Ping Li, Jiazhong Chen, Ning Yu
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples, posing potential threats to the security and robustness of facial recognition systems.
1 code implementation • 6 Apr 2023 • Tu Bui, Shruti Agarwal, Ning Yu, John Collomosse
Data hiding such as steganography and invisible watermarking has important applications in copyright protection, privacy-preserved communication and content provenance.
no code implementations • CVPR 2023 • Vibashan VS, Ning Yu, Chen Xing, Can Qin, Mingfei Gao, Juan Carlos Niebles, Vishal M. Patel, ran Xu
In summary, an OV method learns task-specific information using strong supervision from base annotations and novel category information using weak supervision from image-caption pairs.
1 code implementation • ICCV 2023 • Can Qin, Ning Yu, Chen Xing, Shu Zhang, Zeyuan Chen, Stefano Ermon, Yun Fu, Caiming Xiong, ran Xu
Empirical results show that GlueNet can be trained efficiently and enables various capabilities beyond previous state-of-the-art models: 1) multilingual language models such as XLM-Roberta can be aligned with existing T2I models, allowing for the generation of high-quality images from captions beyond English; 2) GlueNet can align multi-modal encoders such as AudioCLIP with the Stable Diffusion model, enabling sound-to-image generation; 3) it can also upgrade the current text encoder of the latent diffusion model for challenging case generation.
1 code implementation • CVPR 2024 • Shu Zhang, Xinyi Yang, Yihao Feng, Can Qin, Chia-Chih Chen, Ning Yu, Zeyuan Chen, Huan Wang, Silvio Savarese, Stefano Ermon, Caiming Xiong, ran Xu
Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences.
1 code implementation • 1 Feb 2023 • Saurabh Sharma, Yongqin Xian, Ning Yu, Ambuj Singh
In this work, we show that learning prototype classifiers addresses the biased softmax problem in LTR.
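A minimal sketch of a prototype classifier shows why it sidesteps the biased softmax head: each class is represented by a prototype (here simply the mean feature vector), and prediction is by nearest prototype, so a rare class is not penalized by a small linear-head weight norm. The long-tailed data below is synthetic and the setup is illustrative, not the paper's method.

```python
import numpy as np

# Synthetic long-tailed features: 500 head-class samples vs 10 tail-class
# samples, with well-separated class means.
rng = np.random.default_rng(0)
head = rng.normal(size=(500, 16)) + 3.0   # frequent class, mean near +3
tail = rng.normal(size=(10, 16)) - 3.0    # rare class, mean near -3
X = np.vstack([head, tail])
y = np.array([0] * 500 + [1] * 10)

# One prototype per class: the class-mean feature vector.
prototypes = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Nearest-prototype decision rule (Euclidean distance).
    d = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(d))

print(predict(np.full(16, -3.0)))  # tail-class query is classified correctly
```

Because each class contributes exactly one prototype regardless of its sample count, the decision rule is insensitive to class frequency, which is the intuition behind using prototypes in long-tailed recognition (LTR).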
Ranked #8 on Long-tail Learning on CIFAR-100-LT (ρ=10)
no code implementations • 6 Jan 2023 • Manli Shu, Le Xue, Ning Yu, Roberto Martín-Martín, Caiming Xiong, Tom Goldstein, Juan Carlos Niebles, ran Xu
By plugging our proposed modules into the state-of-the-art transformer-based 3D detectors, we improve the previous best results on both benchmarks, with more significant improvements on smaller objects.
1 code implementation • 19 Dec 2022 • Ning Yu, Chia-Chih Chen, Zeyuan Chen, Rui Meng, Gang Wu, Paul Josel, Juan Carlos Niebles, Caiming Xiong, ran Xu
Graphic layout designs play an essential role in visual communication.
1 code implementation • 17 Dec 2022 • Rui Meng, Ye Liu, Semih Yavuz, Divyansh Agarwal, Lifu Tu, Ning Yu, JianGuo Zhang, Meghana Bhat, Yingbo Zhou
In this study, we aim to develop unsupervised methods for improving dense retrieval models.
no code implementations • 13 Oct 2022 • Zeyang Sha, Zheng Li, Ning Yu, Yang Zhang
To tackle this problem, we pioneer a systematic study on the detection and attribution of fake images generated by text-to-image generation models.
1 code implementation • 10 Oct 2022 • Hossein Hajipour, Ning Yu, Cristian-Alexandru Staicu, Mario Fritz
In this paper, we contribute the first systematic approach that simulates various OOD scenarios along different dimensions of source code data properties and study the fine-tuned model behaviors in such scenarios.
no code implementations • 3 Oct 2022 • Yixin Wu, Ning Yu, Zheng Li, Michael Backes, Yang Zhang
The empirical results show that all of the proposed attacks achieve strong performance, in some cases approaching an accuracy of 1, and thus the corresponding risk is far more severe than that revealed by existing membership inference attacks.
1 code implementation • 3 Oct 2022 • Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, Yang Zhang
Extensive experiments on four popular GAN models trained on two benchmark face datasets show that UnGANable achieves remarkable effectiveness and utility performance, and outperforms multiple baseline methods.
no code implementations • 23 Aug 2022 • Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
Furthermore, we propose a hybrid attack that exploits the exit information to improve the performance of existing attacks.
1 code implementation • 5 Aug 2022 • Jitesh Jain, Yuqian Zhou, Ning Yu, Humphrey Shi
We claim that the performance of inpainting algorithms can be better judged by the generated structures and textures.
1 code implementation • ICLR 2022 • Dingfan Chen, Ning Yu, Mario Fritz
As a long-term threat to the privacy of training data, membership inference attacks (MIAs) emerge ubiquitously in machine learning models.
1 code implementation • 5 Jul 2022 • Tu Bui, Ning Yu, John Collomosse
Uniquely, we present a solution to this task capable of 1) matching images invariant to their semantic content; 2) robust to benign transformations (changes in quality, resolution, shape, etc.)
1 code implementation • CVPR 2023 • Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang
Self-supervised representation learning techniques have been developing rapidly to make full use of unlabeled images.
1 code implementation • 29 May 2021 • Yang He, Ning Yu, Margret Keuper, Mario Fritz
The rapid advances in deep generative models over the past years have led to highly realistic media, known as deepfakes, that are commonly indistinguishable from real images to the human eye.
1 code implementation • ICCV 2021 • Ning Yu, Guilin Liu, Aysegul Dundar, Andrew Tao, Bryan Catanzaro, Larry Davis, Mario Fritz
Lastly, we study different attention architectures in the discriminator, and propose a reference attention mechanism.
no code implementations • 28 Mar 2021 • Ning Yu, Timothy Haskins
Regional rainfall forecasting is an important issue in hydrology and meteorology.
no code implementations • 26 Jan 2021 • Peng Zhou, Ning Yu, Zuxuan Wu, Larry S. Davis, Abhinav Shrivastava, Ser-Nam Lim
This paper studies video inpainting detection, which localizes an inpainted region in a video both spatially and temporally.
1 code implementation • ICLR 2022 • Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry Davis, Mario Fritz
Over the past years, deep generative models have achieved a new level of performance.
1 code implementation • CVPR 2021 • Hui-Po Wang, Ning Yu, Mario Fritz
While Generative Adversarial Networks (GANs) show ever-increasing performance, with a level of realism that is becoming indistinguishable from natural images, this also comes with high demands on data and computation.
1 code implementation • ICCV 2021 • Ning Yu, Vladislav Skripniuk, Sahar Abdelnabi, Mario Fritz
Thus, we seek a proactive and sustainable solution on deepfake detection, that is agnostic to the evolution of generative models, by introducing artificial fingerprints into the models.
no code implementations • 8 Jun 2020 • Xuezhi Ma, Qiushi Liu, Ning Yu, Da Xu, Sanggon Kim, Zebin Liu, Kaili Jiang, Bryan M. Wong, Ruoxue Yan, Ming Liu
Optical hyperspectral imaging, based on the absorption and scattering of photons at visible and adjacent frequencies, is one of the most informative and inclusive characterization methods in materials research.
no code implementations • LREC 2020 • Mack Blackburn, Ning Yu, John Berrie, Brian Gordon, David Longfellow, William Tirrell, Mark Williams
Instead, we label the narrative and stance of tweets and YouTube comments about the White Helmets.
1 code implementation • ECCV 2020 • Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz
Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images.
1 code implementation • 7 Apr 2020 • Saurabh Sharma, Ning Yu, Mario Fritz, Bernt Schiele
Deep learning enables impressive performance in image recognition using large-scale artificially-balanced datasets.
Ranked #21 on Long-tail Learning on Places-LT
no code implementations • 26 Jan 2020 • Ning Yu, Zachary Tuttle, Carl Jake Thurnau, Emmanuel Mireku
Since the first Graphical User Interface (GUI) prototype was invented in the 1970s, GUI systems have been deployed into various personal computer systems and server platforms.
no code implementations • EMNLP 2019 • Graham Horwood, Ning Yu, Thomas Boggs, Changjiang Yang, Chad Holvenstot
Online Social Networks (OSNs) provide a wealth of intelligence to analysts in assisting tasks such as tracking cyber-attacks, human trafficking activities, and misinformation campaigns.
1 code implementation • 9 Sep 2019 • Dingfan Chen, Ning Yu, Yang Zhang, Mario Fritz
In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.
1 code implementation • CVPR 2019 • Ning Yu, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi, Michal Lukac
This paper addresses the problem of interpolating visual textures.
2 code implementations • ICCV 2019 • Ning Yu, Larry Davis, Mario Fritz
Our experiments show that (1) GANs carry distinct model fingerprints and leave stable fingerprints in their generated images, which support image attribution; (2) even minor differences in GAN training can result in different fingerprints, which enables fine-grained model authentication; (3) fingerprints persist across different image frequencies and patches and are not biased by GAN artifacts; (4) fingerprint finetuning is effective in immunizing against five types of adversarial image perturbations; and (5) comparisons also show our learned fingerprints consistently outperform several baselines in a variety of setups.
no code implementations • 22 Nov 2017 • Zeng Yu, Tianrui Li, Ning Yu, Xun Gong, Ke Chen, Yi Pan
This paper aims to develop a new architecture that can make full use of the feature maps of convolutional networks.
no code implementations • 8 Oct 2017 • Zeng Yu, Tianrui Li, Ning Yu, Yi Pan, Hongmei Chen, Bing Liu
We believe that minimizing the reconstruction error of the hidden representation is more robust than minimizing the Frobenius norm of the Jacobian matrix of the hidden representation.
1 code implementation • 6 Dec 2016 • Ning Yu, Xiaohui Shen, Zhe Lin, Radomir Mech, Connelly Barnes
Our new dataset enables us to formulate the problem as a multi-task learning problem and train a multi-column deep convolutional neural network (CNN) to simultaneously predict the severity of all the defects.