no code implementations • 25 Mar 2024 • Zizhao Hu, Shaochong Jia, Mohammad Rostami
Diffusion models have been widely used for conditional cross-modal generation tasks such as text-to-image and text-to-video.
no code implementations • 28 Nov 2023 • Zizhao Hu, Shaochong Jia, Mohammad Rostami
Recently, diffusion models have been used successfully to fit distributions for cross-modal data translation and multimodal data generation.
no code implementations • 5 Oct 2023 • Zizhao Hu, Mohammad Rostami
Learning new tasks sequentially without forgetting previously learned ones remains a critical challenge in continual learning.
no code implementations • 28 May 2023 • Zizhao Hu, Mohammad Rostami
Most existing cross-modal generative methods based on diffusion models use guidance to provide control over the latent space to enable conditional generation across different modalities.
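The "guidance" mentioned here is commonly implemented as classifier-free guidance, which blends conditional and unconditional noise predictions at each denoising step. The sketch below is a minimal illustration of that blending rule, not the authors' specific method; the function name and stand-in arrays are assumptions for demonstration.

```python
import numpy as np

def guided_noise_estimate(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise estimate toward the conditional one; w > 0 strengthens
    the conditioning signal (w = 0 recovers unconditional sampling)."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy stand-ins for a denoiser's noise predictions at one step.
eps_u = np.zeros(4)   # unconditional prediction
eps_c = np.ones(4)    # text-conditioned prediction
print(guided_noise_estimate(eps_u, eps_c, 2.0))  # -> [2. 2. 2. 2.]
```

In practice `eps_uncond` and `eps_cond` come from the same network evaluated with and without the conditioning input, and the guided estimate replaces the raw prediction inside the sampling loop.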
1 code implementation • 22 Mar 2023 • Zizhao Hu, Mohammad Rostami
We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders.
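The abstract does not give the exact form of the binarized regularization, but a common way to push autoencoder latents toward binary codes is a penalty that vanishes at 0 and 1 and peaks at 0.5. The sketch below shows that standard penalty as an assumed illustration, not the paper's actual loss term.

```python
import numpy as np

def binarization_penalty(z):
    """Mean of z * (1 - z) over latent units in [0, 1]:
    zero when every unit is exactly 0 or 1, maximal (0.25 per
    unit) at 0.5, so gradient descent pushes latents toward
    binary values."""
    return np.mean(z * (1.0 - z))

z = np.array([0.0, 1.0, 0.5])
print(binarization_penalty(z))  # (0 + 0 + 0.25) / 3 ≈ 0.0833
```

Such a term would typically be added to the reconstruction loss with a weighting coefficient, trading off reconstruction fidelity against how binary the learned concepts are.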
no code implementations • 3 Dec 2021 • Zizhao Hu, Ravikiran Chanumolu, Xingyu Lin, Nayela Ayaz, Vincent Chi
The cloze task is widely used to evaluate an NLP system's language understanding ability.
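A cloze evaluation asks a model to fill a blanked-out word in a sentence and scores it against the gold answer. The harness below is a minimal sketch of that setup; the toy predictor (which just picks the longest candidate) is a deliberately naive stand-in for a real language model's scoring function.

```python
def cloze_accuracy(examples, predict):
    """examples: list of (sentence_with_blank, candidates, gold_word);
    predict: fn(sentence, candidates) -> chosen word.
    Returns the fraction of blanks filled correctly."""
    correct = sum(predict(s, cands) == gold for s, cands, gold in examples)
    return correct / len(examples)

def toy_predict(sentence, candidates):
    # Stand-in heuristic: prefer the longest candidate.
    return max(candidates, key=len)

examples = [
    ("The cat sat on the ___.", ["mat", "quantum"], "mat"),
    ("She drank a cup of ___.", ["refrigerator", "tea"], "tea"),
]
print(cloze_accuracy(examples, toy_predict))  # -> 0.0 (toy heuristic fails both)
```

A real system would replace `toy_predict` with a masked language model that scores each candidate in context and returns the highest-probability fill.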
no code implementations • 29 Oct 2021 • Sumedh A Sontakke, Stephen Iota, Zizhao Hu, Arash Mehrjou, Laurent Itti, Bernhard Schölkopf
Extending the successes of supervised learning methods to the reinforcement learning (RL) setting, however, is difficult due to the data-generating process: RL agents actively query their environment for data, and the data they receive are a function of the policy the agent follows.