1 code implementation • 10 Apr 2024 • Masoud Monajatipoor, Jiaxin Yang, Joel Stremmel, Melika Emami, Fazlolah Mohaghegh, Mozhdeh Rouhsedaghat, Kai-Wei Chang
Large Language Models (LLMs) demonstrate remarkable versatility across NLP tasks but face distinct challenges in the biomedical domain due to the complexity of its language and the scarcity of data.
no code implementations • 2 Jun 2023 • Masoud Monajatipoor, Liunian Harold Li, Mozhdeh Rouhsedaghat, Lin F. Yang, Kai-Wei Chang
In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the vision-and-language (VL) domain?
1 code implementation • 23 Sep 2022 • Mozhdeh Rouhsedaghat, Masoud Monajatipoor, C.-C. Jay Kuo, Iacopo Masi
We offer a method for one-shot mask-guided image synthesis that allows controlling manipulations of a single image by inverting a quasi-robust classifier equipped with strong regularizers.
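As a rough illustration of the classifier-inversion idea (not the paper's exact method or losses), the sketch below optimizes only the masked region of an image under a total-variation regularizer so that a frozen classifier assigns it a chosen label; the model choice, loss weight, and regularizer are placeholder assumptions.

    # Illustrative sketch: generic mask-guided classifier inversion with a
    # total-variation regularizer (the paper's quasi-robust training and
    # exact regularizers are not reproduced here).
    import torch
    import torchvision.models as models

    model = models.resnet50(weights=None).eval()  # untrained stand-in; use pretrained/robust weights in practice
    for p in model.parameters():
        p.requires_grad_(False)

    def total_variation(x):
        # Smoothness prior: penalize differences between neighboring pixels.
        return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
               (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

    def invert(source, mask, target_class, steps=200, lr=0.05, tv_weight=1e-2):
        # Optimize the masked region of `source` so the classifier assigns
        # it to `target_class`, keeping the unmasked region fixed.
        x = source.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            blended = mask * x + (1 - mask) * source
            logits = model(blended)
            loss = torch.nn.functional.cross_entropy(
                logits, torch.tensor([target_class])) + tv_weight * total_variation(x)
            loss.backward()
            opt.step()
        return (mask * x + (1 - mask) * source).detach()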
1 code implementation • 10 Aug 2021 • Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, Aichi Chien, C.-C. Jay Kuo, Fabien Scalzo, Kai-Wei Chang
Vision-and-language (V&L) models take an image and text as input and learn to capture the associations between them.
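For readers unfamiliar with the setup, here is a minimal toy sketch of a V&L model that encodes each modality and scores image-text alignment; the encoders and fusion head are illustrative stand-ins, not the architecture studied in the paper.

    # Minimal illustrative V&L fusion model: encode each modality,
    # concatenate the embeddings, and score image-text alignment.
    import torch
    import torch.nn as nn

    class TinyVLModel(nn.Module):
        def __init__(self, vocab_size=30522, dim=256):
            super().__init__()
            self.image_encoder = nn.Sequential(            # toy image encoder
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
            self.text_embed = nn.EmbeddingBag(vocab_size, dim)  # toy text encoder
            self.head = nn.Linear(2 * dim, 1)              # image-text match score

        def forward(self, image, token_ids):
            v = self.image_encoder(image)                  # (B, dim)
            t = self.text_embed(token_ids)                 # (B, dim)
            return self.head(torch.cat([v, t], dim=-1))    # (B, 1) alignment logit

    model = TinyVLModel()
    score = model(torch.randn(2, 3, 224, 224), torch.randint(0, 30522, (2, 16)))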
1 code implementation • 11 Mar 2021 • Hong-Shuo Chen, Mozhdeh Rouhsedaghat, Hamza Ghani, Shuowen Hu, Suya You, C.-C. Jay Kuo
A light-weight high-performance Deepfake detection method, called DefakeHop, is proposed in this work.
no code implementations • 27 Feb 2021 • Mozhdeh Rouhsedaghat, Masoud Monajatipoor, Zohreh Azizi, C.-C. Jay Kuo
Successive Subspace Learning (SSL) offers a light-weight unsupervised feature learning method based on inherent statistical properties of data units (e.g., image pixels and points in point cloud sets).
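A minimal sketch of the label-free, statistics-driven feature learning that SSL builds on is shown below, assuming a plain PCA over non-overlapping image patches; the actual Saab transform used in SSL additionally handles the DC/bias component, which is omitted here for brevity.

    # Illustrative PCA-style, label-free patch feature extraction.
    import numpy as np
    from sklearn.decomposition import PCA

    def extract_patches(images, patch=4):
        # images: (N, H, W); cut non-overlapping patch x patch blocks.
        N, H, W = images.shape
        blocks = images.reshape(N, H // patch, patch, W // patch, patch)
        return blocks.transpose(0, 1, 3, 2, 4).reshape(-1, patch * patch)

    def fit_subspace(images, num_kernels=8):
        # Learn kernels purely from patch statistics, with no labels;
        # PCA centers the data internally before fitting.
        pca = PCA(n_components=num_kernels)
        pca.fit(extract_patches(images))
        return pca

    images = np.random.rand(100, 32, 32)
    pca = fit_subspace(images)
    features = pca.transform(extract_patches(images))   # per-patch responses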
no code implementations • 23 Nov 2020 • Mozhdeh Rouhsedaghat, Yifan Wang, Shuowen Hu, Suya You, C.-C. Jay Kuo
A non-parametric low-resolution face recognition model for resource-constrained environments with limited networking and computing is proposed in this work.
no code implementations • 18 Jul 2020 • Mozhdeh Rouhsedaghat, Yifan Wang, Xiou Ge, Shuowen Hu, Suya You, C.-C. Jay Kuo
For gray-scale face images of resolution $32 \times 32$ in the LFW and the CMU Multi-PIE datasets, FaceHop achieves correct gender classification rates of 94.63% and 95.12% with model sizes of 16.9K and 17.6K parameters, respectively.
no code implementations • 8 Feb 2020 • Yueru Chen, Mozhdeh Rouhsedaghat, Suya You, Raghuveer Rao, C.-C. Jay Kuo
In PixelHop++, one can control the learning model size at a fine granularity, offering a flexible tradeoff between model size and classification performance.
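As a hedged sketch of how such a size knob can work, the snippet below keeps only the subspace components whose explained energy exceeds a threshold, so raising the threshold shrinks the model and lowering it grows the model; the threshold name and values are illustrative assumptions, not PixelHop++'s exact hyperparameters.

    # Illustrative energy-threshold knob for controlling model size.
    import numpy as np
    from sklearn.decomposition import PCA

    def select_components(patches, energy_threshold=0.02):
        # patches: (num_patches, patch_dim); keep components whose explained
        # variance ratio ("energy") is at least energy_threshold.
        pca = PCA().fit(patches)
        keep = pca.explained_variance_ratio_ >= energy_threshold
        print(f"keeping {keep.sum()} of {len(keep)} components")
        return pca.components_[keep]

    patches = np.random.rand(10000, 25)            # e.g. flattened 5x5 neighborhoods
    small_model = select_components(patches, energy_threshold=0.05)   # fewer kernels
    large_model = select_components(patches, energy_threshold=0.005)  # more kernels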