no code implementations • 24 May 2018 • Burak Kakillioglu, Yantao Lu, Senem Velipasalar
Our proposed approach can be used to autonomously refine the parameters and improve the accuracy of different deep neural network architectures.
no code implementations • 24 Sep 2018 • Yantao Lu, Burak Kakillioglu, Senem Velipasalar
The choice of parameters and the design of the network architecture are important factors affecting the performance of deep neural networks.
1 code implementation • 8 May 2019 • Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.
1 code implementation • 27 May 2019 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.
no code implementations • 28 May 2019 • Yantao Lu, Senem Velipasalar
For instance, the sitting activity can be detected from IMU data, but it cannot be determined whether the subject has sat on a chair or a sofa, or where the subject is.
2 code implementations • CVPR 2020 • Yantao Lu, Yunhan Jia, Jian-Yu Wang, Bai Li, Weiheng Chai, Lawrence Carin, Senem Velipasalar
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other models.
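The transferability claim above can be illustrated with a minimal sketch. This is not the paper's method; it is a toy example with two hypothetical linear classifiers and an FGSM-style perturbation, showing that a perturbation crafted against one model can also flip the prediction of a similar but different model.

```python
import numpy as np

# Two hypothetical linear classifiers for the same binary task; their weight
# vectors differ slightly, as if trained in independent runs.
w_source = np.array([1.0, 2.0, -1.0])
w_target = np.array([0.8, 2.3, -1.2])

x = np.array([0.5, -0.2, 0.3])      # input, classified as sign(w . x)
label = np.sign(w_source @ x)       # both models agree on this input

# FGSM-style perturbation crafted ONLY against the source model: step the
# input against the sign of the source model's gradient (here, w_source).
eps = 0.6
x_adv = x - eps * label * np.sign(w_source)

# The perturbation transfers: it flips the target model's prediction too.
print(np.sign(w_source @ x_adv), np.sign(w_target @ x_adv), label)
```

Because similar models tend to learn correlated decision boundaries, a perturbation aligned against one boundary often crosses the other as well, which is the intuition the transferability literature builds on.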
1 code implementation • ICLR 2020 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.
1 code implementation • 6 Mar 2020 • Bai Li, Shiqi Wang, Yunhan Jia, Yantao Lu, Zhenyu Zhong, Lawrence Carin, Suman Jana
Recent research has proposed the lottery ticket hypothesis, suggesting that for a deep neural network, there exist trainable sub-networks performing equally or better than the original model with commensurate training steps.
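The lottery ticket procedure referenced above can be sketched in a few lines. This is a toy illustration with random matrices, not the paper's implementation: prune the smallest-magnitude trained weights, then rewind the surviving weights to their original initialization so the sparse sub-network can be retrained from the same starting point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny dense layer standing in for the "original" network.
w_init = rng.normal(size=(8, 8))                          # weights at init
w_trained = w_init + rng.normal(scale=0.5, size=(8, 8))   # after "training"

# Magnitude pruning: keep only the largest-magnitude trained weights.
prune_fraction = 0.8
threshold = np.quantile(np.abs(w_trained), prune_fraction)
mask = np.abs(w_trained) >= threshold    # binary mask defining the sub-network

# Rewind: the surviving connections get their ORIGINAL initial values,
# yielding the sparse "winning ticket" to be retrained.
ticket = w_init * mask

print(mask.mean())   # fraction of weights kept, roughly 1 - prune_fraction
```

The key design point is the rewind step: the hypothesis is that the sub-network succeeds because of its initialization, not merely its structure, so the mask is applied to `w_init` rather than to the trained weights.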
1 code implementation • 18 Nov 2021 • Yantao Lu, Xuetao Hao, Yilan Li, Weiheng Chai, Shiqi Sun, Senem Velipasalar
It is worth noting that our proposed RAA convolution is lightweight and can be integrated into any CNN architecture used for detection from a BEV.