no code implementations • ECCV 2020 • Yueru Li, Shuyu Cheng, Hang Su, Jun Zhu
Based on our investigation, we further present a new robust learning algorithm that encourages a larger gradient component in the tangent space of the data manifold, thereby suppressing the gradient-leaking phenomenon.
2 code implementations • EMNLP 2021 • Bin Liang, Hang Su, Rongdi Yin, Lin Gui, Min Yang, Qin Zhao, Xiaoqi Yu, Ruifeng Xu
To be specific, we first regard each aspect as a pivot to derive aspect-aware words that are highly related to the aspect from external affective commonsense knowledge.
1 code implementation • 30 Apr 2024 • Luxi Chen, Zhengyi Wang, Chongxuan Li, Tingting Gao, Hang Su, Jun Zhu
In this paper, we introduce score-based iterative reconstruction (SIR), an efficient and general algorithm for 3D generation with a multi-view score-based diffusion model.
no code implementations • 18 Apr 2024 • Shouwei Ruan, Yinpeng Dong, Hanqing Liu, Yao Huang, Hang Su, Xingxing Wei
Vision-Language Pre-training (VLP) models like CLIP have achieved remarkable success in computer vision and particularly demonstrated superior robustness to distribution shifts of 2D images.
1 code implementation • 17 Apr 2024 • Yichi Zhang, Yinpeng Dong, Siyuan Zhang, Tianzan Min, Hang Su, Jun Zhu
To achieve this, we propose Transferable Visual Prompting (TVP), a simple and effective approach to generate visual prompts that can transfer to different models and improve their performance on downstream tasks after trained on only one model.
no code implementations • 14 Apr 2024 • Jiawei Chen, Xiao Yang, Yinpeng Dong, Hang Su, Jianteng Peng, Zhaoxia Yin
Motivated by the rich structural and detailed features of face generative models, we propose FaceCat which utilizes the face generative model as a pre-trained model to improve the performance of FAS and FAD.
no code implementations • 1 Apr 2024 • Ling Gao, Daniel Gehrig, Hang Su, Davide Scaramuzza, Laurent Kneip
To recover the full linear camera velocity we fuse observations from multiple lines with a novel velocity averaging scheme that relies on a geometrically-motivated residual, and thus solves the problem more efficiently than previous schemes which minimize an algebraic residual.
no code implementations • 31 Mar 2024 • Lingxuan Wu, Xiao Yang, Yinpeng Dong, Liuwei Xie, Hang Su, Jun Zhu
The vulnerability of deep neural networks to adversarial patches has motivated numerous defense strategies for boosting model robustness.
no code implementations • 8 Mar 2024 • Zhengyi Wang, Yikai Wang, Yifei Chen, Chendong Xiang, Shuo Chen, Dajiang Yu, Chongxuan Li, Hang Su, Jun Zhu
In this work, we present the Convolutional Reconstruction Model (CRM), a high-fidelity feed-forward single image-to-3D generative model.
no code implementations • 7 Mar 2024 • Yuwei Zhang, Siffi Singh, Sailik Sengupta, Igor Shalyminov, Hang Su, Hwanjun Song, Saab Mansour
The triplet task gauges the model's understanding of two semantic concepts paramount in real-world conversational systems: negation and implicature.
1 code implementation • 6 Mar 2024 • Jianfeng He, Hang Su, Jason Cai, Igor Shalyminov, Hwanjun Song, Saab Mansour
Semi-supervised dialogue summarization (SSDS) leverages model-generated summaries to reduce reliance on human-labeled data and improve the performance of summarization models.
1 code implementation • 6 Mar 2024 • Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu
Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings.
no code implementations • 5 Mar 2024 • Hossein Aboutalebi, Hwanjun Song, Yusheng Xie, Arshit Gupta, Justin Sun, Hang Su, Igor Shalyminov, Nikolaos Pappas, Siffi Singh, Saab Mansour
Development of multimodal interactive systems is hindered by the lack of rich, multimodal (text, images) conversational data, which is needed in large quantities for LLMs.
no code implementations • 23 Feb 2024 • Yu Tian, Xiao Yang, Yinpeng Dong, Heming Yang, Hang Su, Jun Zhu
It allows users to design specific prompts to generate realistic images through some black-box APIs.
1 code implementation • 20 Feb 2024 • Liyan Tang, Igor Shalyminov, Amy Wing-mei Wong, Jon Burnsky, Jake W. Vincent, Yu'an Yang, Siffi Singh, Song Feng, Hwanjun Song, Hang Su, Lijia Sun, Yi Zhang, Saab Mansour, Kathleen McKeown
We find that there are diverse errors and error distributions in model-generated summaries and that non-LLM based metrics can capture all error types better than LLM-based evaluators.
2 code implementations • 8 Feb 2024 • Huayu Chen, Guande He, Hang Su, Jun Zhu
Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given.
1 code implementation • 4 Feb 2024 • Huanran Chen, Yinpeng Dong, Shitong Shao, Zhongkai Hao, Xiao Yang, Hang Su, Jun Zhu
Diffusion models are recently employed as generative classifiers for robust classification.
no code implementations • 1 Feb 2024 • Songming Liu, Chang Su, Jiachen Yao, Zhongkai Hao, Hang Su, Youjia Wu, Jun Zhu
Physics-informed neural networks (PINNs) have shown promise in solving various partial differential equations (PDEs).
no code implementations • 24 Jan 2024 • Daniel Lichy, Hang Su, Abhishek Badki, Jan Kautz, Orazio Gallo
Unfortunately, most of the GT data is for pinhole cameras, making it impossible to properly train depth estimation models for large-FoV cameras.
no code implementations • 15 Dec 2023 • Yao Huang, Yinpeng Dong, Shouwei Ruan, Xiao Yang, Hang Su, Xingxing Wei
However, the field of transferable targeted 3D adversarial attacks remains vacant.
1 code implementation • 5 Dec 2023 • Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu
Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.
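The transition-matrix mechanism described above can be sketched as follows. This is a generic toy illustration of estimating class-confusion probabilities and picking the most probable noisy class, not the authors' implementation; the counting-based estimator and class indices are assumptions:

```python
import numpy as np

def estimate_transition_matrix(true_labels, observed_labels, num_classes):
    """Estimate T[i, j] = P(observed class j | true class i) from label counts."""
    T = np.zeros((num_classes, num_classes))
    for t, o in zip(true_labels, observed_labels):
        T[t, o] += 1
    # Normalize each row into a probability distribution.
    T /= T.sum(axis=1, keepdims=True)
    return T

def most_confusable_class(T, true_class):
    """Return the class most likely to be confused with true_class (excluding itself)."""
    row = T[true_class].copy()
    row[true_class] = -1.0  # ignore the diagonal (correct labels)
    return int(np.argmax(row))

# Toy data: class 0 is often mislabeled as class 2.
true_labels     = [0, 0, 0, 0, 1, 1, 2, 2]
observed_labels = [0, 2, 2, 0, 1, 1, 2, 2]
T = estimate_transition_matrix(true_labels, observed_labels, 3)
noisy = most_confusable_class(T, 0)
print(noisy)  # class 2 is the most probable noisy class for class 0
```

An instruction would then pair a correct exemplar of class 0 with an erroneous one drawn from class `noisy`.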
1 code implementation • 20 Nov 2023 • Yu Tian, Xiao Yang, Jingyuan Zhang, Yinpeng Dong, Hang Su
Rapid advancements in large language models (LLMs) have revitalized interest in LLM-based agents, which exhibit impressive human-like behaviors and cooperative capabilities in various scenarios.
1 code implementation • 9 Nov 2023 • Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang, Jianfeng Gao, Chunyuan Li
LLaVA-Plus is a general-purpose multimodal assistant that expands the capabilities of large multimodal models.
Ranked #1 on LMM real-life tasks on Leaderboard
no code implementations • 6 Nov 2023 • Qiuju Yang, Hang Su, Lili Liu, YiXuan Wang, Ze-Jun Hu
Finally, to highlight the discriminative information between auroral classes, we propose a lightweight attention feature enhancement module called LAFE.
no code implementations • 22 Oct 2023 • Ashkan Ganj, Yiqin Zhao, Hang Su, Tian Guo
In this paper, we investigate the challenges and opportunities of achieving accurate metric depth estimation in mobile AR.
1 code implementation • 21 Oct 2023 • Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu
In this work, we present a general framework for continual learning of sequentially arrived tasks with the use of pre-training, which has emerged as a promising direction for artificial intelligence systems to accommodate real-world dynamics.
no code implementations • 20 Oct 2023 • Hwanjun Song, Igor Shalyminov, Hang Su, Siffi Singh, Kaisheng Yao, Saab Mansour
Our experiments show that DisCal outperforms prior methods in abstractive summarization distillation, producing highly abstractive and informative summaries.
1 code implementation • 19 Oct 2023 • Zipeng Xiao, Zhongkai Hao, Bokai Lin, Zhijie Deng, Hang Su
Neural operators, as an efficient surrogate model for learning the solutions of PDEs, have received extensive attention in the field of scientific machine learning.
1 code implementation • NeurIPS 2023 • Yilin Lyu, Liyuan Wang, Xingxing Zhang, Zicheng Sun, Hang Su, Jun Zhu, Liping Jing
Continual learning entails learning a sequence of tasks and balancing their knowledge appropriately.
1 code implementation • NeurIPS 2023 • Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, Jun Zhu
Following these empirical and theoretical insights, we propose Hierarchical Decomposition (HiDe-)Prompt, an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics of both uninstructed and instructed representations, further with the coordination of a contrastive regularization strategy.
1 code implementation • 11 Oct 2023 • Huayu Chen, Cheng Lu, Zhengyi Wang, Hang Su, Jun Zhu
Recent developments in offline reinforcement learning have uncovered the immense potential of diffusion modeling, which excels at representing heterogeneous behavior policies.
no code implementations • ICCV 2023 • Ling Gao, Hang Su, Daniel Gehrig, Marco Cannici, Davide Scaramuzza, Laurent Kneip
Event-based cameras are ideal for line-based motion estimation, since they predominantly respond to edges in the scene.
1 code implementation • 21 Sep 2023 • Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu
By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard to output wrong image descriptions with a 22% success rate based solely on the transferability.
1 code implementation • 29 Aug 2023 • Liyuan Wang, Xingxing Zhang, Qian Li, Mingtian Zhang, Hang Su, Jun Zhu, Yi Zhong
Continual learning aims to empower artificial intelligence (AI) with strong adaptability to the real world.
no code implementations • 9 Aug 2023 • Changjian Chen, Yukai Guo, Fengyuan Tian, Shilong Liu, Weikai Yang, Zhaowei Wang, Jing Wu, Hang Su, Hanspeter Pfister, Shixia Liu
Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection.
no code implementations • 4 Aug 2023 • Jiawei Chen, Xiao Yang, Heng Yin, Mingzhi Ma, Bihui Chen, Jianteng Peng, Yandong Guo, Zhaoxia Yin, Hang Su
Ensuring the reliability of face recognition systems against presentation attacks necessitates the deployment of face anti-spoofing techniques.
1 code implementation • ICCV 2023 • Xiaofeng Mao, Yuefeng Chen, Yao Zhu, Da Chen, Hang Su, Rong Zhang, Hui Xue
To give a more comprehensive robustness assessment, we introduce COCO-O(ut-of-distribution), a test dataset based on COCO with 6 types of natural distribution shifts.
1 code implementation • 21 Jul 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool.
1 code implementation • ICCV 2023 • Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei
Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect the predictions given the same object.
1 code implementation • 28 Jun 2023 • Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su
In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual ones.
no code implementations • 15 Jun 2023 • Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Yubo Chen, Hang Su, Xingxing Wei
In this paper, we propose DIFFender, a novel defense method that leverages a text-guided diffusion model to defend against adversarial patches.
1 code implementation • 15 Jun 2023 • Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu
In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry.
no code implementations • 5 Jun 2023 • Jiachen Yao, Chang Su, Zhongkai Hao, Songming Liu, Hang Su, Jun Zhu
Physics-informed Neural Networks (PINNs) have recently achieved remarkable progress in solving Partial Differential Equations (PDEs) in various fields by minimizing a weighted sum of PDE loss and boundary loss.
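The weighted-sum objective mentioned above can be sketched schematically. This minimal example uses a finite-difference residual on a fixed grid instead of automatic differentiation, and the test problem ($u'' = -\pi^2 \sin(\pi x)$ with zero Dirichlet boundaries) and weights are assumptions for illustration only:

```python
import numpy as np

def pinn_style_loss(u, x, w_pde=1.0, w_bc=10.0):
    """Weighted sum of PDE residual loss and boundary loss for u'' = -pi^2 sin(pi x)
    with u(0) = u(1) = 0, using central finite differences on a uniform grid."""
    h = x[1] - x[0]
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2          # interior second derivative
    pde_residual = u_xx + np.pi**2 * np.sin(np.pi * x[1:-1])
    pde_loss = np.mean(pde_residual**2)
    bc_loss = u[0]**2 + u[-1]**2                          # Dirichlet boundary penalty
    return w_pde * pde_loss + w_bc * bc_loss

x = np.linspace(0.0, 1.0, 101)
exact = np.sin(np.pi * x)   # exact solution: near-zero loss
wrong = np.cos(np.pi * x)   # violates both the PDE and the boundaries
print(pinn_style_loss(exact, x) < pinn_style_loss(wrong, x))  # True
```

A real PINN replaces the grid values `u` with a neural network evaluated at collocation points and computes `u_xx` via autodiff; the loss structure is the same.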
1 code implementation • 30 May 2023 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Ze Cheng, Jun Zhu
The neural operator has emerged as a powerful tool in learning mappings between function spaces in PDEs.
2 code implementations • NeurIPS 2023 • Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu
In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., $7.5$).
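For reference, the standard classifier-free guidance (CFG) combination that the weight $w$ refers to can be written in a few lines. This is the widely used formula, not code from this paper:

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w):
    """Standard classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one with guidance weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eps_uncond = np.array([0.0, 1.0])
eps_cond = np.array([1.0, 1.0])
guided = cfg_combine(eps_uncond, eps_cond, 7.5)
print(guided)  # w > 1 extrapolates past the conditional prediction: [7.5, 1.0]
```

With $w = 1$ this recovers the conditional prediction exactly, and $w = 0$ recovers the unconditional one.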
3 code implementations • 24 May 2023 • Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu
As RDC does not require training on particular adversarial attacks, we demonstrate that it is more generalizable to defend against multiple unseen threats.
Ranked #2 on Adversarial Defense on CIFAR-10
1 code implementation • 23 May 2023 • Zhenshan Bing, Yuan Meng, Yuqi Yun, Hang Su, Xiaojie Su, Kai Huang, Alois Knoll
Generative model-based deep clustering frameworks excel in classifying complex data, but are limited in handling dynamic and complex features because they require prior knowledge of the number of clusters.
no code implementations • 15 May 2023 • Danni Yu, Luyang Li, Hang Su, Matteo Fuoli
We find that the Bing chatbot outperformed ChatGPT, with accuracy approaching that of a human coder.
no code implementations • 13 May 2023 • Zhaoxia Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao
Our method has several advantages: (1) the iterative update of samples is done in a decision-based black-box manner, relying solely on the predicted probability distribution of the target model, which reduces the risk of exposure to adversarial attacks; (2) the small-amplitude multiple-iteration approach keeps the fragile samples visually close to the originals, with a PSNR of 55 dB on TinyImageNet; (3) the fragile samples can detect changes in the model's overall parameters even at a magnitude of 1e-4; and (4) the method is independent of the specific model structure and dataset.
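The PSNR figure quoted above is the standard peak signal-to-noise ratio; a minimal sketch of how it is computed (the toy image and perturbation below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def psnr(original, modified, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    mse = np.mean((original.astype(np.float64) - modified.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
perturbed = img + 0.5   # tiny, visually imperceptible shift
print(psnr(img, perturbed))  # roughly 54 dB, i.e. near-invisible change
```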
no code implementations • 8 May 2023 • Zhaoxia Yin, Shaowei Zhu, Hang Su, Jianteng Peng, Wanli Lyu, Bin Luo
However, numerous studies have shown that previous methods provide detection or defense only against certain attacks, which renders them ineffective against the latest unknown attack methods.
1 code implementation • 7 May 2023 • Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su
To gain a better understanding of the training process and potential risks of text-to-image synthesis, we perform a systematic investigation of backdoor attacks on text-to-image diffusion models and propose BadT2I, a general multimodal backdoor attack framework that tampers with image synthesis at diverse semantic levels.
no code implementations • 29 Apr 2023 • Mingyang Wang, Zhenshan Bing, Xiangtong Yao, Shuai Wang, Hang Su, Chenguang Yang, Kai Huang, Alois Knoll
On MuJoCo and Meta-World benchmarks, MoSS outperforms prior works in terms of asymptotic performance, sample efficiency (3-50x faster), adaptation efficiency, and generalization robustness on broad and diverse task distributions.
3 code implementations • 25 Apr 2023 • Cheng Lu, Huayu Chen, Jianfei Chen, Hang Su, Chongxuan Li, Jun Zhu
The main challenge for this setting is that the intermediate guidance during the diffusion sampling procedure, which is jointly defined by the sampling distribution and the energy function, is unknown and is hard to estimate.
2 code implementations • ICCV 2023 • Shilong Liu, Tianhe Ren, Jiayu Chen, Zhaoyang Zeng, Hao Zhang, Feng Li, Hongyang Li, Jun Huang, Hang Su, Jun Zhu, Lei Zhang
We point out that the unstable matching in DETR is caused by a multi-optimization path problem, which is highlighted by the one-to-one matching design in DETR.
1 code implementation • 31 Mar 2023 • Chendong Xiang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu
Large-scale diffusion models like Stable Diffusion are powerful and find various real-world applications, while customizing such models via fine-tuning is inefficient in both memory and time.
1 code implementation • CVPR 2023 • Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, Jun Zhu
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
no code implementations • 20 Mar 2023 • Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
3D object detection is an important task in autonomous driving to perceive the surroundings.
2 code implementations • 16 Mar 2023 • Huanran Chen, Yichi Zhang, Yinpeng Dong, Xiao Yang, Hang Su, Jun Zhu
It is widely recognized that deep learning models lack robustness to adversarial examples.
3 code implementations • 12 Mar 2023 • Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu
Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model -- perturbs data in all modalities instead of a single modality, inputs individual timesteps in different modalities, and predicts the noise of all modalities instead of a single modality.
7 code implementations • 9 Mar 2023 • Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang
To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion.
Ranked #1 on Zero-Shot Object Detection on MSCOCO
no code implementations • 9 Mar 2023 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Songming Liu, Dong Yan, Jun Zhu
Extensive experiments in both image-based and state-based tasks show that TAD can significantly improve the performance of handling different tasks simultaneously, especially for those with high TDR, and display a strong generalization ability to unseen tasks.
no code implementations • 1 Mar 2023 • Yichi Zhang, Zijian Zhu, Hang Su, Jun Zhu, Shibao Zheng, Yuan He, Hui Xue
In this paper, we propose Adversarial Semantic Contour (ASC), an MAP estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
no code implementations • 28 Feb 2023 • Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng
In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.
2 code implementations • 28 Feb 2023 • Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, Jun Zhu
However, there are several challenges for learning operators in practical applications like the irregular mesh, multiple input functions, and complexity of the PDEs' solution.
no code implementations • 28 Feb 2023 • Chang Liu, Wenzhao Xiang, Yuan He, Hui Xue, Shibao Zheng, Hang Su
To address this issue, we proposed a novel method of Augmenting data with Adversarial examples via a Wavelet module (AdvWavAug), an on-manifold adversarial data augmentation technique that is simple to implement.
1 code implementation • 31 Jan 2023 • Liyuan Wang, Xingxing Zhang, Hang Su, Jun Zhu
To cope with real-world dynamics, an intelligent system needs to incrementally acquire, update, accumulate, and exploit knowledge throughout its lifetime.
1 code implementation • CVPR 2023 • Yinpeng Dong, Caixin Kang, Jinlai Zhang, Zijian Zhu, Yikai Wang, Xiao Yang, Hang Su, Xingxing Wei, Jun Zhu
3D object detection is an important task in autonomous driving to perceive the surroundings.
1 code implementation • 28 Nov 2022 • Shilong Liu, Yaoyuan Liang, Feng Li, Shijia Huang, Hao Zhang, Hang Su, Jun Zhu, Lei Zhang
As phrase extraction can be regarded as a $1$D text segmentation problem, we formulate PEG as a dual detection problem and propose a novel DQ-DETR model, which introduces dual queries to probe different features from image and text for object prediction and phrase mask prediction.
Ranked #7 on Referring Expression Comprehension on RefCOCO
1 code implementation • 15 Nov 2022 • Zhongkai Hao, Songming Liu, Yichi Zhang, Chengyang Ying, Yao Feng, Hang Su, Jun Zhu
Recent work shows that it provides potential benefits for machine learning models by incorporating the physical prior and collected data, making the intersection of machine learning and physics a prevailing paradigm.
no code implementations • 2 Nov 2022 • Yao Feng, Yuhong Jiang, Hang Su, Dong Yan, Jun Zhu
Model-based reinforcement learning usually suffers from a high sample complexity in training the world model, especially for the environments with complex dynamics.
1 code implementation • 8 Oct 2022 • Yinpeng Dong, Shouwei Ruan, Hang Su, Caixin Kang, Xingxing Wei, Jun Zhu
Recent studies have demonstrated that visual recognition models lack robustness to distribution shift.
1 code implementation • 6 Oct 2022 • Songming Liu, Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Ze Cheng
We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered.
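One common way to impose a Dirichlet BC exactly, rather than through a penalty, is the textbook ansatz of multiplying the network output by a function that vanishes on the boundary; this generic construction is shown below as an assumption-laden sketch, not necessarily the paper's formulation:

```python
import numpy as np

def hard_constrained_solution(x, net, g0=0.0, g1=1.0):
    """Ansatz u(x) = (1 - x) * g0 + x * g1 + x * (1 - x) * net(x) on [0, 1].
    The boundary values u(0) = g0 and u(1) = g1 hold exactly for ANY net."""
    return (1 - x) * g0 + x * g1 + x * (1 - x) * net(x)

# Any stand-in "network" works; here an arbitrary nonlinear function.
net = lambda x: np.sin(3.0 * x) + 2.0

x = np.array([0.0, 0.5, 1.0])
u = hard_constrained_solution(x, net)
print(u[0], u[-1])  # boundary conditions satisfied exactly: 0.0 1.0
```

Because the constraint is built into the ansatz, training only needs to minimize the PDE residual; no boundary-loss weight has to be tuned.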
1 code implementation • 29 Sep 2022 • Huayu Chen, Cheng Lu, Chengyang Ying, Hang Su, Jun Zhu
To address this problem, we adopt a generative approach by decoupling the learned policy into two parts: an expressive generative behavior model and an action evaluation model.
3 code implementations • CVPR 2023 • Fan Bao, Shen Nie, Kaiwen Xue, Yue Cao, Chongxuan Li, Hang Su, Jun Zhu
We evaluate U-ViT in unconditional and class-conditional image generation, as well as text-to-image generation tasks, where U-ViT is comparable if not superior to a CNN-based U-Net of a similar size.
Ranked #4 on Text-to-Image Generation on MS COCO
1 code implementation • 15 Sep 2022 • Chengyang Ying, Zhongkai Hao, Xinning Zhou, Hang Su, Dong Yan, Jun Zhu
In this paper, we reveal that the instability is also related to a new notion of Reuse Bias of IS -- the bias in off-policy evaluation caused by the reuse of the replay buffer for evaluation and optimization.
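For context, the ordinary importance sampling (IS) estimator that the Reuse Bias notion builds on can be sketched as follows; the Gaussian policies and reward are toy assumptions used only to show the reweighting:

```python
import numpy as np

def importance_sampling_estimate(rewards, logp_target, logp_behavior):
    """Off-policy estimate of E_target[r] from samples drawn under the behavior
    policy, reweighting each sample by the ratio p_target / p_behavior."""
    ratios = np.exp(logp_target - logp_behavior)
    return np.mean(ratios * rewards)

rng = np.random.default_rng(0)
# Behavior policy: N(0, 1); target policy: N(0.5, 1); reward r(a) = a.
a = rng.normal(0.0, 1.0, size=200_000)
logp_b = -0.5 * a**2                 # normalizing constants cancel in the ratio
logp_t = -0.5 * (a - 0.5) ** 2
est = importance_sampling_estimate(a, logp_t, logp_b)
print(est)  # close to the true target-policy mean of 0.5
```

The paper's point is that when the same replay buffer is used both to optimize the policy and to evaluate it with IS, the resulting estimate is biased; the estimator above is unbiased only when the samples are independent of the quantity being evaluated.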
no code implementations • 15 Sep 2022 • Zhongkai Hao, Chengyang Ying, Hang Su, Jun Zhu, Jian Song, Ze Cheng
In this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints.
1 code implementation • 12 Jun 2022 • Chengyang Ying, You Qiaoben, Xinning Zhou, Hang Su, Wenbo Ding, Jianyong Ai
Among different adversarial noises, universal adversarial perturbations (UAP), i.e., a constant image-agnostic perturbation applied to every input frame of the agent, play a critical role in Embodied Vision Navigation since they are computation-efficient and application-practical during the attack.
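Applying a UAP is mechanically simple, which is part of why it is practical: one fixed perturbation is added to every frame. A minimal sketch (the $L_\infty$ budget of $8/255$ and array shapes are illustrative assumptions):

```python
import numpy as np

def apply_uap(frames, delta, eps=8.0 / 255.0):
    """Apply one image-agnostic perturbation delta (clipped to an L-inf budget
    eps) to every input frame, then clip frames back to the valid [0, 1] range."""
    delta = np.clip(delta, -eps, eps)
    return np.clip(frames + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
frames = rng.random((4, 8, 8, 3))                # 4 frames share one perturbation
delta = rng.normal(0.0, 0.05, size=(8, 8, 3))    # constant across frames
adv = apply_uap(frames, delta)
budget = float(np.max(np.abs(adv - frames)))
print(budget <= 8.0 / 255.0 + 1e-9)  # perturbation stays within the budget
```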
no code implementations • 9 Jun 2022 • Zhongkai Hao, Chengyang Ying, Yinpeng Dong, Hang Su, Jun Zhu, Jian Song
Under the GSmooth framework, we present a scalable algorithm that uses a surrogate image-to-image network to approximate the complex transformation.
1 code implementation • 9 Jun 2022 • Chengyang Ying, Xinning Zhou, Hang Su, Dong Yan, Ning Chen, Jun Zhu
Though deep reinforcement learning (DRL) has obtained substantial success, it may encounter catastrophic failures due to the intrinsic uncertainty of both transition and observation.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • 13 Mar 2022 • Yinpeng Dong, Shuyu Cheng, Tianyu Pang, Hang Su, Jun Zhu
However, the existing methods inevitably suffer from low attack success rates or poor query efficiency since it is difficult to estimate the gradient in a high-dimensional input space with limited information.
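The gradient-estimation difficulty referred to here is usually attacked with zeroth-order (finite-difference) estimators over random directions. The sketch below shows the general recipe, not this paper's specific estimator; the quadratic test function is an assumption:

```python
import numpy as np

def estimate_gradient(f, x, sigma=1e-3, n_samples=2000, rng=None):
    """Zeroth-order gradient estimate: average finite differences of f along
    random Gaussian directions. Needs only function evaluations, no backprop."""
    if rng is None:
        rng = np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

f = lambda x: np.sum(x**2)      # true gradient at x is 2x
x = np.array([1.0, -2.0, 3.0])
g = estimate_gradient(f, x)
print(g)                        # approximately [2., -4., 6.]
```

The query cost grows with the input dimension, which is exactly why high-dimensional black-box attacks struggle with query efficiency.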
no code implementations • 13 Mar 2022 • Jialian Li, Tongzheng Ren, Dong Yan, Hang Su, Jun Zhu
Our goal is to identify a near-optimal robust policy for the perturbed testing environment, which introduces additional technical difficulties as we need to simultaneously estimate the training environment uncertainty from samples and find the worst-case perturbation for testing.
no code implementations • 9 Mar 2022 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
It is therefore imperative to develop a framework that can enable a comprehensive evaluation of the vulnerability of face recognition in the physical world.
15 code implementations • 7 Mar 2022 • Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, Heung-Yeung Shum
Compared to other models on the leaderboard, DINO significantly reduces its model size and pre-training data size while achieving better results.
Ranked #1 on Real-Time Object Detection on COCO 2017 val
7 code implementations • ICLR 2022 • Shilong Liu, Feng Li, Hao Zhang, Xiao Yang, Xianbiao Qi, Hang Su, Jun Zhu, Lei Zhang
We present in this paper a novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) and offer a deeper understanding of the role of queries in DETR.
Ranked #11 on 2D Object Detection on SARDet-100K
no code implementations • 21 Nov 2021 • Kaiyuan Liu, Xingyu Li, Yurui Lai, Ge Zhang, Hang Su, Jiachen Wang, Chunxu Guo, Jisong Guan, Yi Zhou
Despite its great success, deep learning severely suffers from a lack of robustness; that is, deep neural networks are very vulnerable to adversarial attacks, even the simplest ones.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
no code implementations • 13 Oct 2021 • Xiao Yang, Yinpeng Dong, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu
The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness.
no code implementations • 30 Sep 2021 • Zijian Zhu, Hang Su, Chang Liu, Wenzhao Xiang, Shibao Zheng
Fortunately, most existing adversarial patches can be outwitted, disabled and rejected by a simple classification network called an adversarial patch detector, which distinguishes adversarial patches from original images.
1 code implementation • ICML Workshop AML 2021 • Zhengyi Wang, Zhongkai Hao, Ziqiao Wang, Hang Su, Jun Zhu
In this work, we propose Cluster Attack -- a Graph Injection Attack (GIA) on node classification, which injects fake nodes into the original graph to degenerate the performance of graph neural networks (GNNs) on certain victim nodes while affecting the other nodes as little as possible.
no code implementations • 13 Sep 2021 • Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng
As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities.
1 code implementation • 29 Jul 2021 • Jiayi Weng, Huayu Chen, Dong Yan, Kaichao You, Alexis Duburcq, Minghao Zhang, Yi Su, Hang Su, Jun Zhu
In this paper, we present Tianshou, a highly modularized Python library for deep reinforcement learning (DRL) that uses PyTorch as its backend.
2 code implementations • 22 Jul 2021 • Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, Jun Zhu
The use of Transformer is rooted in the need of extracting local discriminative features adaptively for different labels, which is a strongly desired property due to the existence of multiple objects in one image.
Ranked #1 on Multi-Label Classification on PASCAL VOC 2012
no code implementations • 16 Jul 2021 • Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang
This workshop pays a special interest in theoretic foundations, limitations, and new application trends in the scope of XAI.
1 code implementation • ICML Workshop AML 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
Transfer-based adversarial attacks can evaluate model robustness in the black-box setting.
no code implementations • 30 Jun 2021 • You Qiaoben, Chengyang Ying, Xinning Zhou, Hang Su, Jun Zhu, Bo Zhang
In this paper, we provide a framework to better understand the existing methods by reformulating the problem of adversarial attacks on reinforcement learning in the function space.
1 code implementation • NeurIPS 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy.
1 code implementation • ICLR 2022 • Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models.
1 code implementation • CVPR 2022 • Tianyu Pang, Huishuai Zhang, Di He, Yinpeng Dong, Hang Su, Wei Chen, Jun Zhu, Tie-Yan Liu
Along with this routine, we find that confidence and a rectified confidence (R-Con) can form two coupled rejection metrics, which could provably distinguish wrongly classified inputs from correctly classified ones.
no code implementations • CVPR 2021 • Shilong Liu, Lei Zhang, Xiao Yang, Hang Su, Jun Zhu
We study the problem of unsupervised discovery and segmentation of object parts, which, as an intermediate local representation, are capable of finding intrinsic object structure and providing more explainable recognition results.
no code implementations • 9 May 2021 • Qi-An Fu, Yinpeng Dong, Hang Su, Jun Zhu
Deep learning models are vulnerable to adversarial examples, which can fool a target classifier by imposing imperceptible perturbations onto natural examples.
no code implementations • 6 Apr 2021 • Jay Mahadeokar, Yangyang Shi, Yuan Shangguan, Chunyang Wu, Alex Xiao, Hang Su, Duc Le, Ozlem Kalinli, Christian Fuegen, Michael L. Seltzer
To achieve better and more flexible accuracy-latency trade-offs, the following techniques are used.
Automatic Speech Recognition (ASR) +1
no code implementations • 6 Apr 2021 • Yuan Shangguan, Rohit Prabhavalkar, Hang Su, Jay Mahadeokar, Yangyang Shi, Jiatong Zhou, Chunyang Wu, Duc Le, Ozlem Kalinli, Christian Fuegen, Michael L. Seltzer
As speech-enabled devices such as smartphones and smart speakers become increasingly ubiquitous, there is growing interest in building automatic speech recognition (ASR) systems that can run directly on-device; end-to-end (E2E) speech recognition models such as recurrent neural network transducers and their variants have recently emerged as prime candidates for this task.
Automatic Speech Recognition (ASR) +1
1 code implementation • CVPR 2021 • Zhijie Deng, Xiao Yang, Shizhen Xu, Hang Su, Jun Zhu
Despite their appealing flexibility, deep neural networks (DNNs) are vulnerable against adversarial examples.
no code implementations • ICCV 2021 • Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
Although deep neural networks (DNNs) have made rapid progress in recent years, they are vulnerable in adversarial environments.
no code implementations • CVPR 2021 • Xiaodan Li, Jinfeng Li, Yuefeng Chen, Shaokai Ye, Yuan He, Shuhui Wang, Hang Su, Hui Xue
Comprehensive experiments show that the proposed attack achieves a high attack success rate with few queries against the image retrieval systems under the black-box setting.
no code implementations • 21 Jan 2021 • Yuan Fang, Ding Wang, Peng Li, Hang Su, Tian Le, Yi Wu, Guo-Wei Yang, Hua-Li Zhang, Zhi-Guang Xiao, Yan-Qiu Sun, Si-Yuan Hong, Yan-Wu Xie, Huan-Hua Wang, Chao Cao, Xin Lu, Hui-Qiu Yuan, Yang Liu
We report growth, electronic structure and superconductivity of ultrathin epitaxial CoSi2 films on Si(111).
Mesoscale and Nanoscale Physics
no code implementations • 1 Jan 2021 • Guan Wang, Dong Yan, Hang Su, Jun Zhu
In this work, we point out that the optimal value of n actually differs on each data point, while the fixed value n is a rough average of them.
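The n-step return that this work adapts per data point can be sketched as follows; the function and the toy reward/value arrays are illustrative, not from the paper:

```python
def n_step_return(rewards, values, t, n, gamma=0.99):
    """Standard n-step return used in n-step TD methods:
    G_t^(n) = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}).
    Choosing n per data point, rather than one fixed n, is the idea
    the paper investigates."""
    G = sum(gamma**k * rewards[t + k] for k in range(n))
    return G + gamma**n * values[t + n]

# Toy trajectory: 2-step return at t=0 with gamma = 0.5,
# bootstrapping from V(s_2) = 2.
rewards = [1.0, 1.0, 1.0]
values = [0.0, 0.0, 2.0, 0.0]
G = n_step_return(rewards, values, t=0, n=2, gamma=0.5)
# G = 1 + 0.5*1 + 0.25*2 = 2.0
```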
1 code implementation • 10 Dec 2020 • Xiaofeng Mao, Yuefeng Chen, Shuhui Wang, Hang Su, Yuan He, Hui Xue
Adversarial attack is a technique for deceiving Machine Learning (ML) models, which provides a way to evaluate adversarial robustness.
1 code implementation • 22 Nov 2020 • Xinzheng Zhang, Hang Su, Ce Zhang, Xiaowei Gu, Xiaoheng Tan, Peter M. Atkinson
In this paper, a robust unsupervised approach is proposed for small area change detection from multi-temporal SAR images using deep learning.
no code implementations • 5 Nov 2020 • Jay Mahadeokar, Yuan Shangguan, Duc Le, Gil Keren, Hang Su, Thong Le, Ching-Feng Yeh, Christian Fuegen, Michael L. Seltzer
There is a growing interest in the speech community in developing Recurrent Neural Network Transducer (RNN-T) models for automatic speech recognition (ASR) applications.
Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
1 code implementation • NeurIPS 2020 • Fan Bao, Chongxuan Li, Kun Xu, Hang Su, Jun Zhu, Bo Zhang
This paper presents a bi-level score matching (BiSM) method to learn EBLVMs with general structures by reformulating SM as a bi-level optimization problem.
2 code implementations • ICLR 2021 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
Adversarial training (AT) is one of the most effective strategies for promoting model robustness.
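The inner maximization at the core of adversarial training can be sketched with multi-step projected gradient ascent on a toy linear classifier; this is a minimal illustration of the general AT recipe, not the paper's training pipeline:

```python
import numpy as np

def pgd_inner_max(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of adversarial training: projected gradient
    ascent on the logistic loss log(1 + exp(-y * w.x)) for a linear
    classifier with weights w and label y in {-1, +1}."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (x_adv @ w)
        grad_x = -y * w / (1.0 + np.exp(margin))  # d(loss)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)   # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
    return x_adv

w = np.array([1.0, 1.0])
x, y = np.array([0.5, 0.5]), 1.0
x_adv = pgd_inner_max(x, y, w)
loss = lambda z: np.log1p(np.exp(-y * (z @ w)))
```

The outer minimization then trains on `x_adv` instead of `x`; the "tricks" studied in the paper concern settings such as step size, schedules, and normalization around this loop.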
1 code implementation • ECCV 2020 • Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Most existing works attempt post-hoc interpretation on a pre-trained model, while neglecting to reduce the entanglement underlying the model.
2 code implementations • 8 Jul 2020 • Xiao Yang, Dingcheng Yang, Yinpeng Dong, Hang Su, Wenjian Yu, Jun Zhu
Based on large-scale evaluations, the commercial FR API services fail to exhibit acceptable performance on robustness evaluation, and we also draw several important conclusions for understanding the adversarial robustness of FR models and providing insights for the design of robust FR models.
1 code implementation • ICCV 2021 • Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, Yuefeng Chen, Hui Xue
As billions of personal data records are shared through social media and networks, data privacy and security have drawn increasing attention.
1 code implementation • 6 Mar 2020 • Liyuan Wang, Bo Lei, Qian Li, Hang Su, Jun Zhu, Yi Zhong
Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks but limited by catastrophic forgetting.
no code implementations • 3 Mar 2020 • Xinzheng Zhang, Hang Su, Ce Zhang, Peter M. Atkinson, Xiaoheng Tan, Xiaoping Zeng, Xin Jian
Parallel FCM clustering is applied to these two mapped DDIs to obtain three types of pseudo-label pixels, namely changed, unchanged, and intermediate pixels.
no code implementations • 29 Feb 2020 • Kang Wei, Jun Li, Ming Ding, Chuan Ma, Hang Su, Bo Zhang, H. Vincent Poor
According to our analysis, the UDP framework can realize $(\epsilon_{i}, \delta_{i})$-LDP for the $i$-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes.
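The per-client noise addition behind such $(\epsilon, \delta)$-LDP guarantees can be sketched with the standard Gaussian mechanism (clip to bound sensitivity, then add calibrated noise); the function name and calibration shown are the textbook version, not the paper's exact UDP scheme:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, eps=1.0, delta=1e-5, rng=None):
    """Gaussian-mechanism sketch for client-side (eps, delta)-LDP:
    clip the update so its L2 norm (hence sensitivity) is bounded by
    clip_norm, then add N(0, sigma^2) noise with the standard
    calibration sigma = clip_norm * sqrt(2 ln(1.25/delta)) / eps."""
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped + rng.normal(0.0, sigma, size=update.shape), sigma

u = np.array([3.0, 4.0])              # norm 5 -> clipped to norm 1
noisy, sigma = privatize_update(u, rng=0)
```

Varying `sigma` per client is what lets each MT pick its own $(\epsilon_i, \delta_i)$ level.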
1 code implementation • NeurIPS 2020 • Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
1 code implementation • NeurIPS 2020 • Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
no code implementations • 8 Feb 2020 • Changjian Chen, Jun Yuan, Yafeng Lu, Yang Liu, Hang Su, Songtao Yuan, Shixia Liu
To better analyze and understand the OoD samples in context, we have developed a novel kNN-based grid layout algorithm motivated by Hall's theorem.
no code implementations • 26 Jan 2020 • Kelei Cao, Mengchen Liu, Hang Su, Jing Wu, Jun Zhu, Shixia Liu
The key is to compare and analyze the datapaths of both the adversarial and normal examples.
no code implementations • ICLR 2020 • Zhiyang Chen, Hang Su
From this viewpoint, training neural nets is equivalent to finding an optimal control of the discrete dynamical system, which allows one to utilize methods of successive approximations, an optimal control algorithm based on Pontryagin's maximum principle, to train neural nets.
no code implementations • ICLR 2020 • Shiyu Huang, Hang Su, Jun Zhu, Ting Chen
Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions.
no code implementations • 26 Dec 2019 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.
1 code implementation • 30 Nov 2019 • Shervin Minaee, Amirali Abdolrashidi, Hang Su, Mohammed Bennamoun, David Zhang
Deep learning-based models have been very successful in achieving state-of-the-art results in many of the computer vision, speech recognition, and natural language processing tasks in the last few years.
no code implementations • 7 Oct 2019 • Yulong Wang, Xiaolin Hu, Hang Su
We also apply extracted subnetworks in visual explanation and adversarial example detection tasks by merely replacing the original full model with class-specific subnetworks.
1 code implementation • 27 Sep 2019 • Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
Network pruning is an important research field aiming at reducing computational costs of neural networks.
no code implementations • 25 Sep 2019 • Haoyu Liang, Zhihao Ouyang, Hang Su, Yuyuan Zeng, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang
Convolutional neural networks (CNNs) have often been treated as “black-box” and successfully used in a range of tasks.
no code implementations • 5 Sep 2019 • Dekai Zhu, Jinhu Dong, Zhongcong Xu, Canbo Ye, Yinbai Hu, Hang Su, Zhengfa Liu, Guang Chen
The neuromorphic camera is a brand new vision sensor that has emerged in recent years.
Robotics
2 code implementations • NeurIPS 2019 • Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
We consider the black-box adversarial setting, where the adversary has to generate adversarial perturbations without access to the target models to compute gradients.
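The query-based gradient estimation underlying this setting can be sketched with a random gradient-free (RGF) finite-difference estimator, the baseline that prior-guided methods improve on; the function and toy loss here are illustrative, not the paper's algorithm:

```python
import numpy as np

def rgf_gradient(f, x, num_queries=50, delta=1e-3, rng=None):
    """Random gradient-free (RGF) estimator: average finite-difference
    slopes of the black-box loss f along random unit directions.
    For unit directions the mean is proportional to the true gradient
    (scaled by 1/d), which is enough for sign- or direction-based attacks."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x)) / delta * u
    return g / num_queries

f = lambda x: float(x @ x)            # toy loss with known gradient 2x
x0 = np.array([1.0, -2.0, 3.0])
g_est = rgf_gradient(f, x0, num_queries=2000, rng=0)
```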
1 code implementation • 11 May 2019 • Fan Bao, Hang Su, Jun Zhu
Besides, our framework can be extended to semi-supervised boosting, where the boosted model learns a joint distribution of data and labels.
no code implementations • ICLR 2019 • Jialian Li, Hang Su, Jun Zhu
We can solve these tasks by first building models for other agents and then finding the optimal policy with these models.
2 code implementations • CVPR 2019 • Hang Su, Varun Jampani, Deqing Sun, Orazio Gallo, Erik Learned-Miller, Jan Kautz
In addition, we also demonstrate that PAC can be used as a drop-in replacement for convolution layers in pre-trained networks, resulting in consistent performance improvements.
no code implementations • CVPR 2019 • Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, Jun Zhu
In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model.
2 code implementations • CVPR 2019 • Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
In this paper, we propose a translation-invariant attack method to generate more transferable adversarial examples against the defense models.
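The core trick of translation-invariant attacks is to smooth the input gradient with a kernel before taking its sign, which approximates averaging gradients over small image translations; the sketch below is a minimal 2-D version with an assumed Gaussian kernel, not the paper's full attack:

```python
import numpy as np

def translation_invariant_grad(grad, kernel_size=5, sigma=1.0):
    """Convolve a 2-D input gradient with a normalized Gaussian kernel,
    approximating the gradient averaged over small translations."""
    ax = np.arange(kernel_size) - kernel_size // 2
    k1d = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(k1d, k1d)
    kernel /= kernel.sum()                      # normalize to preserve mass
    H, W = grad.shape
    pad = kernel_size // 2
    padded = np.pad(grad, pad, mode="edge")
    out = np.zeros_like(grad)
    for i in range(H):                          # naive convolution, for clarity
        for j in range(W):
            out[i, j] = np.sum(padded[i:i+kernel_size, j:j+kernel_size] * kernel)
    return out

g = np.zeros((8, 8)); g[4, 4] = 1.0             # a single-pixel gradient spike
smoothed = translation_invariant_grad(g)        # spike spread over neighbors
```

The attack then uses `sign(smoothed)` in the usual iterative update, which transfers better to defended models.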
no code implementations • 27 Jan 2019 • Haosheng Zou, Tongzheng Ren, Dong Yan, Hang Su, Jun Zhu
Reward shaping is one of the most effective methods to tackle the crucial yet challenging problem of credit assignment in Reinforcement Learning (RL).
no code implementations • 25 Jan 2019 • Yinpeng Dong, Fan Bao, Hang Su, Jun Zhu
3) We propose to improve the consistency of neurons on adversarial example subset by an adversarial training algorithm with a consistent loss.
no code implementations • 9 Oct 2018 • Mengchen Liu, Shixia Liu, Hang Su, Kelei Cao, Jun Zhu
Deep neural networks (DNNs) are vulnerable to maliciously generated adversarial examples.
no code implementations • 10 Jul 2018 • Kun Xu, Haoyu Liang, Jun Zhu, Hang Su, Bo Zhang
Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures.
2 code implementations • 5 Jul 2018 • Hang Su, Xiatian Zhu, Shaogang Gong
In this work, we introduce a more realistic and challenging logo detection setting, called Open Logo Detection.
1 code implementation • CVPR 2018 • Juzheng Li, Hang Su, Jun Zhu, Siyu Wang, Bo Zhang
The machine thus acts as an instructor, extracting the essay-level contradictions as the Guidance.
no code implementations • CVPR 2018 • Yulong Wang, Hang Su, Bo Zhang, Xiaolin Hu
Interpretability of a deep neural network aims to explain the rationale behind its decisions and enable users to understand the intelligent agent, which has become an important issue in practical applications.
no code implementations • 15 May 2018 • Qin Zhou, Heng Fan, Hua Yang, Hang Su, Shibao Zheng, Shuang Wu, Haibin Ling
To address this problem, in this paper, we present a robust and efficient graph correspondence transfer (REGCT) approach for explicit spatial alignment in Re-ID.
no code implementations • 1 Apr 2018 • Qin Zhou, Heng Fan, Shibao Zheng, Hang Su, Xinzhe Li, Shuang Wu, Haibin Ling
In this paper, we propose a graph correspondence transfer (GCT) approach for person re-identification.
2 code implementations • 30 Mar 2018 • Hang Su, Shaogang Gong, Xiatian Zhu
Existing logo detection methods usually consider a small number of logo classes and limited images per class with a strong assumption of requiring tedious object bounding box annotations, therefore not scalable to real-world dynamic applications.
no code implementations • 22 Mar 2018 • Zhigang Chang, Qin Zhou, Heng Fan, Hang Su, Hua Yang, Shibao Zheng, Haibin Ling
Meanwhile, a weighting scheme is applied on the bilinear coding to adaptively adjust the weights of local features at different locations based on their importance in recognition, further improving the discriminability of feature aggregation.
1 code implementation • 7 Mar 2018 • Xingxing Wei, Jun Zhu, Hang Su
Although adversarial samples of deep neural networks (DNNs) have been intensively studied on static images, their extensions to videos have rarely been explored.
2 code implementations • CVPR 2018 • Hang Su, Varun Jampani, Deqing Sun, Subhransu Maji, Evangelos Kalogerakis, Ming-Hsuan Yang, Jan Kautz
We present a network architecture for processing point clouds that directly operates on a collection of points represented as a sparse set of samples in a high-dimensional lattice.
Ranked #30 on Semantic Segmentation on ScanNet
no code implementations • 25 Jan 2018 • Haosheng Zou, Hang Su, Shihong Song, Jun Zhu
Crowd behavior understanding is crucial yet challenging across a wide range of applications, since crowd behavior is inherently determined by a sequential decision-making process based on various factors, such as the pedestrians' own destinations, interaction with nearby pedestrians and anticipation of upcoming events.
no code implementations • TACL 2018 • Vinodkumar Prabhakaran, Camilla Griffiths, Hang Su, Prateek Verma, Nelson Morgan, Jennifer L. Eberhardt, Dan Jurafsky
We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops.
no code implementations • 6 Dec 2017 • Danyang Sun, Tongzheng Ren, Chongxun Li, Hang Su, Jun Zhu
Automatically writing stylized Chinese characters is an attractive yet challenging task due to its wide applicability.
no code implementations • 3 Dec 2017 • Guohao Li, Hang Su, Wenwu Zhu
To address this issue, we propose a novel framework which endows the model capabilities in answering more complex questions by leveraging massive external knowledge with dynamic memory networks.
7 code implementations • CVPR 2018 • Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.
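The momentum iterative method at the heart of this attack can be sketched as follows; the toy linear "model" and the L1 normalization shown are a minimal illustration of the published MI-FGSM update, not a full implementation:

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch: accumulate normalized gradients
    in a momentum buffer, step in the sign direction, and project back
    into the eps-ball around the clean input."""
    alpha = eps / steps            # per-step size so the total stays within eps
    g = np.zeros_like(x)           # momentum accumulator
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalized update
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project to eps-ball
    return x_adv

# Toy example: maximize a linear score w.x starting from the origin.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = mi_fgsm(x0, grad_fn=lambda x: w, eps=0.3)
```

For the ensemble black-box setting described above, `grad_fn` would average gradients over several surrogate models instead of one.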
no code implementations • ICCV 2017 • SouYoung Jin, Hang Su, Chris Stauffer, Erik Learned-Miller
We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme.
no code implementations • 7 Sep 2017 • SouYoung Jin, Hang Su, Chris Stauffer, Erik Learned-Miller
We introduce a novel verification method, rank-1 counts verification, that has this property, and use it in a link-based clustering scheme.
no code implementations • 18 Aug 2017 • Yinpeng Dong, Hang Su, Jun Zhu, Fan Bao
We find that: (1) the neurons in DNNs do not truly detect semantic objects/parts, but respond to objects/parts only as recurrent discriminative patches; (2) deep visual representations are not robust distributed codes of visual concepts because the representations of adversarial images are largely not consistent with those of real images, although they have similar visual appearance, both of which are different from previous findings.
1 code implementation • 3 Aug 2017 • Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su
This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.
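The quantization error being compensated for can be sketched with plain uniform symmetric quantization; the helper below is an illustrative baseline, not the paper's compensation procedure:

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Uniform symmetric weight quantization: map weights onto a
    (2^(bits-1) - 1)-level-per-side grid and back, returning the
    dequantized weights and the residual quantization error."""
    scale = np.abs(w).max() / (2**(bits - 1) - 1)
    q = np.round(w / scale)        # integer grid indices
    w_hat = q * scale              # dequantized weights
    return w_hat, w - w_hat        # error shrinks as bits grow

w = np.array([0.7, -0.35, 0.1, 0.02])
w4, err4 = quantize_weights(w, bits=4)
w2, err2 = quantize_weights(w, bits=2)
```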
no code implementations • 1 Jul 2017 • Wenbo Hu, Lifeng Hua, Lei LI, Hang Su, Tian Wang, Ning Chen, Bo Zhang
This paper presents a Semantic Attribute Modulation (SAM) for language modeling and style variation.
no code implementations • CVPR 2017 • Yinpeng Dong, Hang Su, Jun Zhu, Bo Zhang
Interpretability of deep neural networks (DNNs) is essential since it enables users to understand the overall strengths and weaknesses of the models, conveys an understanding of how the models will behave in the future, and how to diagnose and correct potential problems.
no code implementations • 29 Dec 2016 • Hang Su, Xiatian Zhu, Shaogang Gong
Logo detection in unconstrained images is challenging, particularly when only very sparse labelled training images are accessible due to high labelling costs.
1 code implementation • 20 Feb 2016 • Yuyu Zhang, Mohammad Taha Bahadori, Hang Su, Jimeng Sun
To achieve the best performance, it is often critical to select optimal algorithms and to set appropriate hyperparameters, which requires large computational efforts.
1 code implementation • 5 Jul 2015 • Hang Su, Haoyu Chen
Data is partitioned and distributed to different nodes for local model updates, and model averaging across nodes is done every few minibatches.
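The periodic model-averaging step described here can be sketched in a few lines; the function name and toy parameter vectors are illustrative:

```python
import numpy as np

def average_models(node_params):
    """Periodic model averaging: each node holds its own parameter
    vector; every few minibatches all nodes are replaced by the
    elementwise mean of the current parameters."""
    mean = np.mean(node_params, axis=0)
    return [mean.copy() for _ in node_params]

# Two nodes with diverged local parameters, resynchronized.
nodes = [np.array([1.0, 2.0]), np.array([3.0, 6.0])]
synced = average_models(nodes)
```

Between averaging rounds, each node runs ordinary SGD on its own data partition.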
no code implementations • CVPR 2015 • Hang Su, Zhaozheng Yin, Takeo Kanade, Seungil Huh
When data have a complex manifold structure or the characteristics of data evolve over time, it is unrealistic to expect a graph-based semi-supervised learning method to achieve flawless classification given a small number of initial annotations.
no code implementations • ICCV 2015 • Hang Su, Subhransu Maji, Evangelos Kalogerakis, Erik Learned-Miller
A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors?
Ranked #94 on 3D Point Cloud Classification on ModelNet40