1 code implementation • 7 May 2023 • Venkat Nemani, Luca Biggio, Xun Huan, Zhen Hu, Olga Fink, Anh Tran, Yan Wang, Xiaoping Du, Xiaoge Zhang, Chao Hu
In this tutorial, we aim to provide a holistic lens on emerging UQ methods for ML models, with a particular focus on neural networks, and on the applications of these UQ methods in tackling engineering design as well as prognostics and health management problems.
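As a hedged illustration of the family of methods such a tutorial typically surveys (not the authors' code), the sketch below estimates predictive uncertainty with a deep ensemble: several small networks trained from different random seeds, with the spread of their predictions used as an uncertainty proxy. The toy data, model sizes, and seeds are illustrative assumptions.

```python
# Minimal deep-ensemble sketch: disagreement across independently trained
# networks serves as a proxy for predictive (epistemic) uncertainty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Ensemble members differ only in their random initialization.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                 random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-4, 4, 100).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in ensemble])
mean = preds.mean(axis=0)   # point prediction
std = preds.std(axis=0)     # uncertainty estimate (larger outside [-3, 3])
```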
no code implementations • CVPR 2023 • Bang-Dang Pham, Phong Tran, Anh Tran, Cuong Pham, Rang Nguyen, Minh Hoai
We consider the challenging task of training models for image-to-video deblurring, which aims to recover a sequence of sharp images corresponding to a given blurry image input.
1 code implementation • 27 Mar 2023 • Thanh Van Le, Hao Phung, Thuan Hoang Nguyen, Quan Dao, Ngoc Tran, Anh Tran
Despite the complicated formulation of DreamBooth and Diffusion-based text-to-image models, our methods effectively defend users from the malicious use of those models.
1 code implementation • CVPR 2023 • Thuan Hoang Nguyen, Thanh Van Le, Anh Tran
Any-scale image synthesis offers an efficient and scalable solution to synthesize photo-realistic images at any scale, even going beyond 2K resolution.
no code implementations • 14 Mar 2023 • Zhihao Chen, Yang Zhou, Anh Tran, Junting Zhao, Liang Wan, Gideon Ooi, Lionel Cheng, Choon Hua Thng, Xinxing Xu, Yong Liu, Huazhu Fu
To enable MedRPG to locate nuanced medical findings with better region-phrase correspondences, we further propose Tri-attention Context contrastive alignment (TaCo).
2 code implementations • 18 Dec 2022 • Tao Yang, Zhichao Xu, Zhenduo Wang, Anh Tran, Qingyao Ai
In MCFair, we first develop a ranking objective that includes uncertainty, fairness, and user utility.
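The snippet above only names the three ingredients of the objective; the sketch below is a purely hypothetical illustration of how utility, fairness, and uncertainty terms can be combined into one ranking score, not the actual MCFair formulation. All weights and component definitions are assumptions.

```python
# Hypothetical combined ranking objective: DCG-style utility minus penalties
# for unequal group exposure and for relying on uncertain relevance estimates.
import numpy as np

def ranking_objective(relevance, exposure, group_ids, uncertainty,
                      w_fair=0.5, w_unc=0.1):
    # User utility: discounted gain of relevance in the current ranked order.
    discounts = 1.0 / np.log2(np.arange(2, len(relevance) + 2))
    utility = float(np.sum(relevance * discounts))

    # Fairness: variance of average exposure across item groups.
    group_exposure = [exposure[group_ids == g].mean()
                      for g in np.unique(group_ids)]
    fairness_penalty = float(np.var(group_exposure))

    # Uncertainty: mean estimation noise of the ranked items.
    uncertainty_penalty = float(np.mean(uncertainty))

    return utility - w_fair * fairness_penalty - w_unc * uncertainty_penalty
```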
1 code implementation • CVPR 2023 • Hao Phung, Quan Dao, Anh Tran
Diffusion models are emerging as a powerful solution for high-fidelity image generation, exceeding GANs in quality in many circumstances.
Ranked #1 on Image Generation on CelebA-HQ 512x512
no code implementations • 31 Aug 2022 • Anh Tran, Kathryn Maupin, Theron Rodgers
The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets.
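As a generic reminder of why GPs suit the small-data regime (this is textbook usage, not the paper's code), the sketch below fits a GP to five observations and obtains a predictive mean and standard deviation from a single call; the kernel choice and data are illustrative.

```python
# GP regression on a tiny dataset: smooth interpolation plus a predictive
# standard deviation, with no extra machinery.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.array([[0.1], [0.8], [1.7], [2.4], [3.3]])   # only 5 observations
y = np.sin(X).ravel()

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.linspace(0, 4, 50).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)      # prediction + uncertainty
```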
no code implementations • 16 Aug 2022 • Zhichao Xu, Anh Tran, Tao Yang, Qingyao Ai
The results on a simulated coarse-grained labeled dataset show that, while using coarse-grained labels to train an RL model for LTR tasks still cannot outperform traditional approaches that use fine-grained labels, it achieves somewhat promising results and is potentially helpful for future research in LTR.
1 code implementation • 10 Jun 2022 • Zhichao Xu, Yi Han, Tao Yang, Anh Tran, Qingyao Ai
To address this gap, we propose a model named Semantic-Enhanced Bayesian Personalized Explanation Ranking (SE-BPER) that effectively combines interaction information and semantic information.
1 code implementation • 22 Apr 2022 • Sarthak Pati, Ujjwal Baid, Brandon Edwards, Micah Sheller, Shih-han Wang, G Anthony Reina, Patrick Foley, Alexey Gruzdev, Deepthi Karkada, Christos Davatzikos, Chiharu Sako, Satyam Ghodasara, Michel Bilello, Suyash Mohan, Philipp Vollmuth, Gianluca Brugnara, Chandrakanth J Preetha, Felix Sahm, Klaus Maier-Hein, Maximilian Zenk, Martin Bendszus, Wolfgang Wick, Evan Calabrese, Jeffrey Rudie, Javier Villanueva-Meyer, Soonmee Cha, Madhura Ingalhalikar, Manali Jadhav, Umang Pandey, Jitender Saini, John Garrett, Matthew Larson, Robert Jeraj, Stuart Currie, Russell Frood, Kavi Fatania, Raymond Y Huang, Ken Chang, Carmen Balana, Jaume Capellades, Josep Puig, Johannes Trenkler, Josef Pichler, Georg Necker, Andreas Haunschmidt, Stephan Meckel, Gaurav Shukla, Spencer Liem, Gregory S Alexander, Joseph Lombardo, Joshua D Palmer, Adam E Flanders, Adam P Dicker, Haris I Sair, Craig K Jones, Archana Venkataraman, Meirui Jiang, Tiffany Y So, Cheng Chen, Pheng Ann Heng, Qi Dou, Michal Kozubek, Filip Lux, Jan Michálek, Petr Matula, Miloš Keřkovský, Tereza Kopřivová, Marek Dostál, Václav Vybíhal, Michael A Vogelbaum, J Ross Mitchell, Joaquim Farinhas, Joseph A Maldjian, Chandan Ganesh Bangalore Yogananda, Marco C Pinho, Divya Reddy, James Holcomb, Benjamin C Wagner, Benjamin M Ellingson, Timothy F Cloughesy, Catalina Raymond, Talia Oughourlian, Akifumi Hagiwara, Chencai Wang, Minh-Son To, Sargam Bhardwaj, Chee Chong, Marc Agzarian, Alexandre Xavier Falcão, Samuel B Martins, Bernardo C A Teixeira, Flávia Sprenger, David Menotti, Diego R Lucio, Pamela Lamontagne, Daniel Marcus, Benedikt Wiestler, Florian Kofler, Ivan Ezhov, Marie Metz, Rajan Jain, Matthew Lee, Yvonne W Lui, Richard McKinley, Johannes Slotboom, Piotr Radojewski, Raphael Meier, Roland Wiest, Derrick Murcia, Eric Fu, Rourke Haas, John Thompson, David Ryan Ormond, Chaitra Badve, Andrew E Sloan, Vachan Vadmal, Kristin Waite, Rivka R Colen, Linmin Pei, Murat AK, Ashok Srinivasan, J Rajiv Bapuraj, Arvind Rao, Nicholas Wang, Ota Yoshiaki, Toshio Moritani, Sevcan Turk, Joonsang Lee, Snehal Prabhudesai, Fanny Morón, Jacob Mandel, Konstantinos Kamnitsas, Ben Glocker, Luke V M Dixon, Matthew Williams, Peter Zampakis, Vasileios Panagiotopoulos, Panagiotis Tsiganos, Sotiris Alexiou, Ilias Haliassos, Evangelia I Zacharaki, Konstantinos Moustakas, Christina Kalogeropoulou, Dimitrios M Kardamakis, Yoon Seong Choi, Seung-Koo Lee, Jong Hee Chang, Sung Soo Ahn, Bing Luo, Laila Poisson, Ning Wen, Pallavi Tiwari, Ruchika Verma, Rohan Bareja, Ipsa Yadav, Jonathan Chen, Neeraj Kumar, Marion Smits, Sebastian R van der Voort, Ahmed Alafandi, Fatih Incekara, Maarten MJ Wijnenga, Georgios Kapsas, Renske Gahrmann, Joost W Schouten, Hendrikus J Dubbink, Arnaud JPE Vincent, Martin J van den Bent, Pim J French, Stefan Klein, Yading Yuan, Sonam Sharma, Tzu-Chi Tseng, Saba Adabi, Simone P Niclou, Olivier Keunen, Ann-Christin Hau, Martin Vallières, David Fortin, Martin Lepage, Bennett Landman, Karthik Ramadass, Kaiwen Xu, Silky Chotai, Lola B Chambless, Akshitkumar Mistry, Reid C Thompson, Yuriy Gusev, Krithika Bhuvaneshwar, Anousheh Sayah, Camelia Bencheqroun, Anas Belouali, Subha Madhavan, Thomas C Booth, Alysha Chelliah, Marc Modat, Haris Shuaib, Carmen Dragos, Aly Abayazeed, Kenneth Kolodziej, Michael Hill, Ahmed Abbassy, Shady Gamal, Mahmoud Mekhaimar, Mohamed Qayati, Mauricio Reyes, Ji Eun Park, Jihye Yun, Ho Sung Kim, Abhishek Mahajan, Mark Muzi, Sean Benson, Regina G H Beets-Tan, Jonas Teuwen, Alejandro Herrera-Trujillo, Maria 
Trujillo, William Escobar, Ana Abello, Jose Bernal, Jhon Gómez, Joseph Choi, Stephen Baek, Yusung Kim, Heba Ismael, Bryan Allen, John M Buatti, Aikaterini Kotrotsou, Hongwei Li, Tobias Weiss, Michael Weller, Andrea Bink, Bertrand Pouymayou, Hassan F Shaykh, Joel Saltz, Prateek Prasanna, Sampurna Shrestha, Kartik M Mani, David Payne, Tahsin Kurc, Enrique Pelaez, Heydy Franco-Maldonado, Francis Loayza, Sebastian Quevedo, Pamela Guevara, Esteban Torche, Cristobal Mendoza, Franco Vera, Elvis Ríos, Eduardo López, Sergio A Velastin, Godwin Ogbole, Dotun Oyekunle, Olubunmi Odafe-Oyibotha, Babatunde Osobu, Mustapha Shu'aibu, Adeleye Dorcas, Mayowa Soneye, Farouk Dako, Amber L Simpson, Mohammad Hamghalam, Jacob J Peoples, Ricky Hu, Anh Tran, Danielle Cutler, Fabio Y Moraes, Michael A Boss, James Gimpel, Deepak Kattil Veettil, Kendall Schmidt, Brian Bialecki, Sailaja Marella, Cynthia Price, Lisa Cimino, Charles Apgar, Prashant Shah, Bjoern Menze, Jill S Barnholtz-Sloan, Jason Martin, Spyridon Bakas
Although machine learning (ML) has shown promise in numerous domains, there are concerns about generalizability to out-of-sample data.
no code implementations • NeurIPS 2021 • Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung
Domain adaptation (DA) benefits from rigorous theoretical works that study its insightful characteristics and various aspects, e.g., learning domain-invariant representations and the trade-offs involved.
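For context, one classical result of the kind this line alludes to (background, not the paper's own bound) is the Ben-David et al. target-risk bound for a hypothesis $h \in \mathcal{H}$:

$$\epsilon_T(h) \;\le\; \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,$$

where $\epsilon_S$ and $\epsilon_T$ are the source and target risks, $d_{\mathcal{H}\Delta\mathcal{H}}$ measures the divergence between the two feature distributions, and $\lambda$ is the risk of the best joint hypothesis. Shrinking the divergence term by learning domain-invariant representations can inflate $\lambda$, which is the trade-off referenced above.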
no code implementations • 12 Aug 2021 • Anh Tran
As a result, the proposed Scalable$^3$-BO framework is scalable from three independent perspectives: data size, dimensionality, and computational resources on HPC.
no code implementations • 11 Aug 2021 • Anh Tran, Tao Yang, Qingyao Ai
Our toolbox is an important resource for researchers to conduct experiments on ULTR algorithms with different configurations, as well as to test their own algorithms using the supported features.
1 code implementation • CVPR 2021 • Chuong Huynh, Anh Tran, Khoa Luu, Minh Hoai
In this work, we present MagNet, a multi-scale framework that resolves local ambiguity by looking at the image at multiple magnification levels.
Ranked #4 on Land Cover Classification on DeepGlobe
1 code implementation • CVPR 2021 • Thao Nguyen, Anh Tran, Minh Hoai
However, existing works overlooked the latter components and confined makeup transfer to color manipulation, focusing only on light makeup styles.
Ranked #1 on Facial Makeup Transfer on CPM-Synt-2
1 code implementation • 1 Apr 2021 • Phong Tran, Anh Tran, Quynh Phung, Minh Hoai
This paper introduces a method to encode the blur operators of an arbitrary dataset of sharp-blur image pairs into a blur kernel space.
Blind Image Deblurring • Facial Expression Recognition (FER) • +1
no code implementations • 1 Mar 2021 • Phong Tran, Anh Tran, Thao Nguyen, Minh Hoai
The objective of this work is to deblur face videos.
1 code implementation • 20 Feb 2021 • Anh Nguyen, Anh Tran
With the rise of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become a growing security threat that has drawn considerable research interest in recent years.
1 code implementation • NeurIPS 2020 • Anh Nguyen, Anh Tran
In recent years, neural backdoor attacks have been considered a potential security threat to deep learning systems.
no code implementations • 7 Jul 2020 • Anh Tran, Mike Eldred, Scott McCann, Yan Wang
Finally, we couple the third GP with the classical BO framework to promote the richness and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration.
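For readers unfamiliar with how an acquisition function mixes exploitation and exploration, the sketch below is the textbook single-objective expected improvement on a GP surrogate (for minimization); it is background, not the paper's multi-objective acquisition.

```python
# Expected improvement: exploitation enters through the predicted gap to the
# incumbent, exploration through the predictive standard deviation.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """mu, sigma: GP posterior mean/std at candidates; f_best: best value so far."""
    sigma = np.maximum(sigma, 1e-12)       # guard against zero variance
    gap = f_best - mu - xi                 # exploitation term
    z = gap / sigma
    return gap * norm.cdf(z) + sigma * norm.pdf(z)   # + exploration term
```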
no code implementations • 20 Mar 2020 • Anh Tran, Mike Eldred, Tim Wildey, Scott McCann, Jing Sun, Robert J. Visintainer
First, the efficiency of the Bayesian optimization is improved by evaluating multiple input locations in a massively parallel, asynchronous manner, accelerating the optimization convergence with respect to physical runtime.
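The asynchronous-batch idea can be sketched generically as below: whenever any worker finishes, its result is folded into the history and a new candidate is launched immediately, so no worker waits for the slowest job. The functions `propose_next` and `expensive_simulation` are placeholders, not components of the paper.

```python
# Asynchronous parallel evaluation loop: refill workers one at a time as
# results arrive instead of waiting for a full synchronous batch.
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def run_async_loop(propose_next, expensive_simulation, n_workers=4, budget=32):
    history = []                                     # list of (x, y) results
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        running = {pool.submit(expensive_simulation, propose_next(history))
                   for _ in range(n_workers)}
        launched = n_workers
        while running:
            done, running = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                history.append(fut.result())         # worker returns (x, y)
                if launched < budget:
                    running.add(pool.submit(expensive_simulation,
                                            propose_next(history)))
                    launched += 1
    return history
```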
no code implementations • 2 Sep 2014 • Anh Tran, Jinyan Guan, Thanima Pilantanakitti, Paul Cohen
In this paper, we describe a simple strategy for mitigating variability in temporal data series by shifting focus onto long-term, frequency domain features that are less susceptible to variability.
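A minimal sketch of the general idea (the band edges, sample series, and function name are illustrative, not from the paper): summarize a time series by the spectral energy in a few low-frequency bands, which is far more stable under short-term variability than the raw samples.

```python
# Frequency-domain features: energy in a handful of low-frequency bands.
import numpy as np

def frequency_features(series, sample_rate=1.0, n_bands=4):
    spectrum = np.abs(np.fft.rfft(series - np.mean(series))) ** 2
    freqs = np.fft.rfftfreq(len(series), d=1.0 / sample_rate)
    edges = np.linspace(0.0, freqs[-1], n_bands + 1)
    # Total spectral energy inside each band -> short, stable feature vector.
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Example: a 90-day seasonal signal with noise; low-frequency bands dominate.
t = np.arange(365)
series = np.sin(2 * np.pi * t / 90) + 0.3 * np.random.default_rng(1).standard_normal(365)
features = frequency_features(series)
```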