no code implementations • FL4NLP (ACL) 2022 • Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang
Inspired by Bayesian hierarchical models, we develop ActPerFL, a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients’ training.
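The balance ActPerFL describes can be illustrated with a minimal sketch. This is not the paper's actual algorithm: `alpha` here is a fixed mixing constant, whereas the self-aware method learns the local/global balance automatically; the function name and signature are hypothetical.

```python
def personalized_update(w_personal, w_global, grad, lr=0.1, alpha=0.5):
    """Hypothetical sketch of a personalized FL client step.

    Blend the server's global weight into the client's personal weight
    (alpha = trust in the global model, fixed here for illustration),
    then take one local gradient step on the blended value.
    """
    w_mixed = alpha * w_global + (1 - alpha) * w_personal
    return w_mixed - lr * grad
```

In the real method the mixing level would adapt per client to how well the global objective matches the local one.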
no code implementations • 20 May 2023 • Qimao Yang, Huili Chen, Qiwei Dong
This report presents a comprehensive study on deep learning models for brand logo classification in real-world scenarios.
no code implementations • 28 Dec 2022 • Yubin Kim, Huili Chen, Sharifa Alghowinem, Cynthia Breazeal, Hae Won Park
This work is a first step toward unlocking the potential of end-to-end video understanding models that are pre-trained on large public datasets and enhanced with data augmentation and visualization techniques, for affect recognition in multi-person human-robot interaction in the wild.
no code implementations • 8 Aug 2022 • Diego Garcia-soto, Huili Chen, Farinaz Koushanfar
Deep Neural Networks (DNNs) have been shown to be susceptible to Trojan attacks.
no code implementations • 17 Apr 2022 • Huili Chen, Jie Ding, Eric Tramel, Shuang Wu, Anit Kumar Sahu, Salman Avestimehr, Tao Zhang
In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned.
no code implementations • 12 Apr 2022 • Huili Chen, Xinqiao Zhang, Ke Huang, Farinaz Koushanfar
This paper proposes AdaTest, a novel adaptive test pattern generation framework for efficient and reliable Hardware Trojan (HT) detection.
no code implementations • 8 Apr 2022 • Xinqiao Zhang, Huili Chen, Ke Huang, Farinaz Koushanfar
Deep Neural Networks (DNNs) have demonstrated unprecedented performance across various fields such as medical diagnosis and autonomous driving.
no code implementations • 21 Feb 2022 • Yein Kim, Huili Chen, Farinaz Koushanfar
The goal of federated learning (FL) is to train one global model by aggregating model parameters updated independently on edge devices without accessing users' private data.
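The aggregation step described above is commonly realized as federated averaging (FedAvg): the server combines client parameter vectors weighted by local dataset size. A minimal sketch (plain Python lists stand in for model tensors):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation sketch.

    client_weights: one flat parameter list per client.
    client_sizes:   number of local training samples per client,
                    used to weight each client's contribution.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w
```

With equal client sizes this reduces to a plain element-wise mean of the client models.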
no code implementations • 23 Mar 2021 • Oliver Lutz, Huili Chen, Hossein Fereidooni, Christoph Sendner, Alexandra Dmitrienko, Ahmad-Reza Sadeghi, Farinaz Koushanfar
When extended to new vulnerability types, ESCORT yields an average F1-score of 93%.
no code implementations • 3 Feb 2021 • Xinqiao Zhang, Huili Chen, Farinaz Koushanfar
While DNNs are widely employed in security-sensitive fields, they are known to be vulnerable to Neural Trojan (NT) attacks that are controlled and activated by a stealthy trigger.
no code implementations • ICCV 2021 • Huili Chen, Cheng Fu, Jishen Zhao, Farinaz Koushanfar
In this work, we present ProFlip, the first targeted Trojan attack framework that can divert the prediction of the DNN to the target class by progressively identifying and flipping a small set of bits in model parameters.
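The core primitive behind such bit-flip attacks can be sketched independently of ProFlip's search procedure. The function below only shows what flipping one bit of an 8-bit quantized weight does to its value; the progressive identification of *which* bits to flip is the paper's contribution and is not reproduced here.

```python
def flip_bit(weight_int8, bit_index):
    """Flip one bit of a signed 8-bit (two's-complement) quantized weight.

    Flipping a high-order bit can change a small weight into a large
    negative/positive one, which is why few flips suffice to redirect
    a DNN's prediction.
    """
    raw = weight_int8 & 0xFF          # view the weight as a raw byte
    raw ^= (1 << bit_index)           # flip the chosen bit
    return raw - 256 if raw >= 128 else raw  # reinterpret as signed int8
```

For example, flipping the sign bit (index 7) of the weight 1 turns it into -127, a drastic change from a single-bit fault.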
no code implementations • 20 Aug 2020 • Huili Chen, Yue Zhang, Felix Weninger, Rosalind Picard, Cynthia Breazeal, Hae Won Park
Automatic speech-based affect recognition of individuals in dyadic conversation is a challenging task, in part because of its heavy reliance on manual pre-processing.
no code implementations • 10 Aug 2020 • Rosario Cammarota, Matthias Schunter, Anand Rajan, Fabian Boemer, Ágnes Kiss, Amos Treiber, Christian Weinert, Thomas Schneider, Emmanuel Stapf, Ahmad-Reza Sadeghi, Daniel Demmler, Joshua Stock, Huili Chen, Siam Umar Hussain, Sadegh Riazi, Farinaz Koushanfar, Saransh Gupta, Tajana Simunic Rosing, Kamalika Chaudhuri, Hamid Nejatollahi, Nikil Dutt, Mohsen Imani, Kim Laine, Anuj Dubey, Aydin Aysu, Fateme Sadat Hosseini, Chengmo Yang, Eric Wallace, Pamela Norton
Additionally, such systems should also use Privacy-Enhancing Technologies (PETs) to protect customers' data at all times.
no code implementations • NeurIPS 2019 • Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, Jishen Zhao
Furthermore, Coda outperforms the attention-based sequence-to-sequence model by a 70% margin in program accuracy.
no code implementations • 28 Jun 2019 • Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, Jishen Zhao
Reverse engineering of binary executables is a critical problem in the computer security domain.
no code implementations • ICLR 2019 • Huili Chen, Bita Darvish Rouhani, Farinaz Koushanfar
To extract the WM, BlackMarks queries the model with the WM key images and decodes the owner’s signature from the corresponding predictions using the designed encoding scheme.
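The extraction step described here can be sketched as a decode-and-compare loop. This is a simplified illustration, not BlackMarks' actual encoding scheme: the class-to-bit mapping and the error tolerance are hypothetical placeholders.

```python
def decode_signature(predictions, class_to_bit):
    """Map each prediction on a WM key image to a bit via the encoding table."""
    return [class_to_bit[p] for p in predictions]

def verify_watermark(predictions, class_to_bit, signature, max_errors=0):
    """Ownership check sketch: decode bits from the predictions and
    accept if they match the owner's signature within max_errors."""
    decoded = decode_signature(predictions, class_to_bit)
    errors = sum(d != s for d, s in zip(decoded, signature))
    return errors <= max_errors
```

In practice a small error tolerance (`max_errors > 0`) would absorb the prediction noise introduced by fine-tuning or compression of the watermarked model.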
2 code implementations • 2 Apr 2018 • Bita Darvish Rouhani, Huili Chen, Farinaz Koushanfar
The resulting models are therefore considered to be the IP of the model builder and need to be protected to preserve the owner's competitive advantage.