no code implementations • Findings (EMNLP) 2021 • Zhan Shi, Hui Liu, Martin Renqiang Min, Christopher Malon, Li Erran Li, Xiaodan Zhu
Image captioning systems are expected to have the ability to combine individual concepts when describing scenes with concept combinations that are not observed during training.
no code implementations • 19 Dec 2024 • Haoran Liu, Youzhi Luo, Tianxiao Li, James Caverlee, Martin Renqiang Min
We consider the conditional generation of 3D drug-like molecules with explicit control over molecular properties such as drug-likeness (e.g., Quantitative Estimate of Druglikeness or Synthetic Accessibility score) and effective binding to specific protein sites.
1 code implementation • 17 Nov 2024 • Wentao Bao, Kai Li, Yuxiao Chen, Deep Patel, Martin Renqiang Min, Yu Kong
Existing approaches focus on the closed-set setting where an action detector is trained and tested on videos from a fixed set of action categories.
no code implementations • 14 Nov 2024 • Jonathan Warrell, Francesco Alesiani, Cameron Smith, Anja Mösch, Martin Renqiang Min
Levels of selection and multilevel evolutionary processes are essential concepts in evolutionary theory, and yet there is a lack of common mathematical models for these core ideas.
no code implementations • 11 Oct 2024 • Zi'ou Zheng, Christopher Malon, Martin Renqiang Min, Xiaodan Zhu
When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving models' explainability.
no code implementations • 10 Oct 2024 • Xiaoxiao He, Ligong Han, Quan Dao, Song Wen, Minhao Bai, Di Liu, Han Zhang, Martin Renqiang Min, Felix Juefei-Xu, Chaowei Tan, Bo Liu, Kang Li, Hongdong Li, Junzhou Huang, Faez Ahmed, Akash Srivastava, Dimitris Metaxas
Discrete diffusion models have achieved success in tasks like image generation and masked language modeling but face limitations in controlled content editing.
no code implementations • 22 Sep 2024 • Yuxiao Chen, Kai Li, Wentao Bao, Deep Patel, Yu Kong, Martin Renqiang Min, Dimitris N. Metaxas
Learning to localize temporal boundaries of procedure steps in instructional videos is challenging due to the limited availability of annotated large-scale training videos.
no code implementations • 19 Mar 2024 • Yao Wei, Martin Renqiang Min, George Vosselman, Li Erran Li, Michael Ying Yang
Recent progress has been made in object shape generation with generative models such as diffusion models, which increases shape fidelity.
4 code implementations • CVPR 2024 • Kumaranage Ravindu Yasas Nagasinghe, Honglu Zhou, Malitha Gunawardhana, Martin Renqiang Min, Daniel Harari, Muhammad Haris Khan
This knowledge, sourced from training procedure plans and structured as a directed weighted graph, equips the agent to better navigate the complexities of step sequencing and its potential variations.
no code implementations • 25 Apr 2023 • Changhao Shi, Haomiao Ni, Kai Li, Shaobo Han, Mingfu Liang, Martin Renqiang Min
We show that this paradigm based on latent classifier guidance is agnostic to pre-trained generative models, and present competitive results for both image generation and sequential manipulation of real and synthetic images.
1 code implementation • CVPR 2023 • Haomiao Ni, Changhao Shi, Kai Li, Sharon X. Huang, Martin Renqiang Min
In this paper, we propose an approach for cI2V using novel latent flow diffusion models (LFDM) that synthesize an optical flow sequence in the latent space based on the given condition to warp the given image.
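The flow-based warping step such a model relies on can be pictured with a small, generic sketch; this is not the paper's LFDM code, and the tensor shapes, PyTorch usage, and function name below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Warp an image batch (N, C, H, W) with a dense flow field (N, 2, H, W).

    The flow holds per-pixel (dx, dy) offsets; they are converted to the
    normalized [-1, 1] sampling grid that grid_sample expects.
    """
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(image)  # (1, 2, H, W)
    coords = base + flow                                      # displaced pixel coordinates
    coords[:, 0] = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0   # x to [-1, 1]
    coords[:, 1] = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0   # y to [-1, 1]
    grid = coords.permute(0, 2, 3, 1)                         # (N, H, W, 2), (x, y) last
    return F.grid_sample(image, grid, align_corners=True)
```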
no code implementations • 2 Mar 2023 • Ziqi Chen, Martin Renqiang Min, Hongyu Guo, Chao Cheng, Trevor Clancy, Xia Ning
This process is known as TCR recognition and constitutes a key step in the immune response.
no code implementations • 4 Jan 2023 • Yuren Cong, Martin Renqiang Min, Li Erran Li, Bodo Rosenhahn, Michael Ying Yang
We further propose an attribute-centric contrastive loss to avoid overfitting to overrepresented attribute compositions.
no code implementations • CVPR 2023 • Kai Li, Deep Patel, Erik Kruus, Martin Renqiang Min
Source-free domain adaptation (SFDA) is an emerging research topic that studies how to adapt a pretrained source model using unlabeled target data.
Source-Free Domain Adaptation • Unsupervised Domain Adaptation
no code implementations • ICCV 2023 • Haifeng Xia, Kai Li, Martin Renqiang Min, Zhengming Ding
This operation maximizes the contribution of discriminative frames to further capture the similarity of support and query samples from the same category.
1 code implementation • CVPR 2022 • Zhiheng Li, Martin Renqiang Min, Kai Li, Chenliang Xu
Based on the identified latent directions of attributes, we propose Compositional Attribute Adjustment to adjust the latent code, resulting in better compositionality of image synthesis.
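Editing with identified latent directions reduces to moving the latent code along an attribute direction before decoding; a generic sketch in which the direction, the strength value, and the generator call are all hypothetical:

```python
import numpy as np

def adjust_attribute(latent_code, direction, strength):
    """Shift a latent code along a unit-normalized attribute direction."""
    direction = direction / np.linalg.norm(direction)
    return latent_code + strength * direction

# Hypothetical usage: strengthen one attribute, then decode with the generator.
# z_edited = adjust_attribute(z, smiling_direction, strength=1.5)
# image = generator(z_edited)
```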
1 code implementation • ICLR 2022 • Tingfeng Li, Shaobo Han, Martin Renqiang Min, Dimitris N. Metaxas
We propose a reinforcement learning based approach to query object localization, for which an agent is trained to localize objects of interest specified by a small exemplary set.
1 code implementation • 17 Oct 2021 • Ligong Han, Sri Harsha Musunuri, Martin Renqiang Min, Ruijiang Gao, Yu Tian, Dimitris Metaxas
StyleGANs have shown impressive results on data generation and manipulation in recent years, thanks to their disentangled style latent spaces.
1 code implementation • ICCV 2021 • Ligong Han, Martin Renqiang Min, Anastasis Stathopoulos, Yu Tian, Ruijiang Gao, Asim Kadav, Dimitris Metaxas
We then propose an improved cGAN model with Auxiliary Classification that directly aligns the fake and real conditionals $P(\text{class}|\text{image})$ by minimizing their $f$-divergence.
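One way to picture the alignment objective is as an f-divergence between the class posteriors an auxiliary classifier assigns to real and generated images; the sketch below uses KL divergence as one member of that family and is not the paper's exact loss.

```python
import torch.nn.functional as F

def class_posterior_alignment(classifier, real_images, fake_images):
    """KL(P(class|real) || P(class|fake)) as one illustrative f-divergence.

    `classifier` is assumed to map a batch of images to class logits.
    """
    p_real = F.softmax(classifier(real_images), dim=1)
    log_p_fake = F.log_softmax(classifier(fake_images), dim=1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(log_p_fake, p_real, reduction="batchmean")
```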
1 code implementation • ICLR 2021 • Honglu Zhou, Asim Kadav, Farley Lai, Alexandru Niculescu-Mizil, Martin Renqiang Min, Mubbasir Kapadia, Hans Peter Graf
We evaluate on the CATER dataset and find that Hopper achieves 73.2% Top-1 accuracy at just 1 FPS by hopping through only a few critical frames.
Ranked #5 on Video Object Tracking on CATER
no code implementations • ICLR 2021 • Jun Han, Martin Renqiang Min, Ligong Han, Li Erran Li, Xuan Zhang
Learning disentangled representations leads to interpretable models and facilitates data generation with style transfer, which has been extensively studied on static data such as images in an unsupervised learning framework.
no code implementations • 1 Jan 2021 • Bingyuan Liu, Yogesh Balaji, Lingzhou Xue, Martin Renqiang Min
Attention mechanisms have advanced state-of-the-art deep learning models in many machine learning tasks.
no code implementations • ICCV 2021 • Yao Li, Martin Renqiang Min, Thomas Lee, Wenchao Yu, Erik Kruus, Wei Wang, Cho-Jui Hsieh
Recent studies have demonstrated the vulnerability of deep neural networks against adversarial examples.
2 code implementations • 8 Dec 2020 • Ziqi Chen, Martin Renqiang Min, Srinivasan Parthasarathy, Xia Ning
A pipeline of multiple, identical Modof models, Modof-pipe, is implemented to modify an input molecule at multiple disconnection sites.
1 code implementation • 4 Dec 2020 • Ziqi Chen, Martin Renqiang Min, Xia Ning
T-cell receptors can recognize foreign peptides bound to major histocompatibility complex (MHC) class-I proteins, and thus trigger the adaptive immune response.
no code implementations • ACL 2020 • Pengyu Cheng, Martin Renqiang Min, Dinghan Shen, Christopher Malon, Yizhe Zhang, Yitong Li, Lawrence Carin
Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, personalized dialogue systems, etc.
no code implementations • CVPR 2020 • Yizhe Zhu, Martin Renqiang Min, Asim Kadav, Hans Peter Graf
We propose a sequential variational autoencoder to learn disentangled representations of sequential data (e.g., videos and audio) under self-supervision.
no code implementations • 25 Sep 2019 • Bingyuan Liu, Yogesh Balaji, Lingzhou Xue, Martin Renqiang Min
Attention mechanisms have advanced the state of the art in several machine learning tasks.
no code implementations • 25 Sep 2019 • Yao Li, Martin Renqiang Min, Wenchao Yu, Cho-Jui Hsieh, Thomas Lee, Erik Kruus
Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.
1 code implementation • ICCV 2019 • Kai Li, Martin Renqiang Min, Yun Fu
We instead reformulate ZSL as a conditioned visual classification problem, i.e., classifying visual features based on the classifiers learned from the semantic descriptions.
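One way to read "conditioned visual classification" is that a small network turns each class's semantic description into a classifier weight for visual features; the module below is a sketch under assumed dimensions and names, not the paper's architecture.

```python
import torch.nn as nn

class ConditionedVisualClassifier(nn.Module):
    """Score visual features against classifiers generated from class semantics."""

    def __init__(self, semantic_dim, visual_dim, hidden_dim=512):
        super().__init__()
        self.weight_generator = nn.Sequential(
            nn.Linear(semantic_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, visual_dim),
        )

    def forward(self, visual_features, class_semantics):
        # (num_classes, visual_dim): one classifier per class description
        classifiers = self.weight_generator(class_semantics)
        # (batch, num_classes): logits via dot products with the visual features
        return visual_features @ classifiers.t()
```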
no code implementations • 13 May 2019 • Xiaoyuan Liang, Guiling Wang, Martin Renqiang Min, Yi Qi, Zhu Han
In spite of its importance, passenger demand prediction is a highly challenging problem, because the demand is simultaneously influenced by the complex interactions among many spatial and temporal factors and other external factors such as weather.
no code implementations • 27 Feb 2019 • Zhenyu Duan, Martin Renqiang Min, Li Erran Li, Mingbo Cai, Yi Xu, Bingbing Ni
In spite of achieving revolutionary successes in machine learning, deep convolutional neural networks have been recently found to be vulnerable to adversarial attacks and difficult to generalize to novel test images with reasonably large geometric transformations.
no code implementations • 19 Nov 2018 • Yao Li, Martin Renqiang Min, Wenchao Yu, Cho-Jui Hsieh, Thomas C. M. Lee, Erik Kruus
Recent studies have demonstrated the vulnerability of deep convolutional neural networks against adversarial examples.
1 code implementation • ICML 2018 • Ting Chen, Martin Renqiang Min, Yizhou Sun
Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a "one-hot" encoding of the discrete symbols.
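The stated equivalence between a table lookup and a one-hot linear transformation is easy to verify directly; a small check with an arbitrary random embedding table:

```python
import numpy as np

vocab_size, dim = 5, 3
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, dim))  # embedding table

symbol = 2
one_hot = np.zeros(vocab_size)
one_hot[symbol] = 1.0

# A lookup and a one-hot linear transformation give the same vector.
assert np.allclose(E[symbol], one_hot @ E)
```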
2 code implementations • ACL 2018 • Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations.
Ranked #1 on Named Entity Recognition (NER) on CoNLL 2000
2 code implementations • ICLR 2018 • Bo Zong, Qi Song, Martin Renqiang Min, Wei Cheng, Cristian Lumezanu, Daeki Cho, Haifeng Chen
In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection.
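The name suggests a compression (autoencoder) network feeding an estimation network that outputs soft Gaussian-mixture memberships; the sketch below uses placeholder layer sizes and is not the published DAGMM architecture.

```python
import torch
import torch.nn as nn

class DAGMMSketch(nn.Module):
    """Autoencoder plus an estimation network producing soft GMM memberships."""

    def __init__(self, in_dim, latent_dim=2, n_components=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.Tanh(), nn.Linear(16, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.Tanh(), nn.Linear(16, in_dim))
        # The estimation network sees the latent code plus a reconstruction-error feature.
        self.estimation = nn.Sequential(
            nn.Linear(latent_dim + 1, 10), nn.Tanh(), nn.Linear(10, n_components), nn.Softmax(dim=1)
        )

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        recon_err = ((x - x_hat) ** 2).mean(dim=1, keepdim=True)
        gamma = self.estimation(torch.cat([z, recon_err], dim=1))  # soft component memberships
        return x_hat, z, gamma
```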
no code implementations • ICLR 2018 • Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Ricardo Henao, Lawrence Carin
In this paper, we conduct an extensive comparative study of Simple Word Embeddings-based Models (SWEMs), which have no compositional parameters, against word embeddings employed within RNN/CNN-based models.
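"No compositional parameters" means the sentence representation comes from parameter-free pooling over word embeddings; two such poolings, sketched for an assumed (seq_len, dim) input:

```python
import numpy as np

def swem_aver(word_embeddings):
    """Average-pool word embeddings (seq_len, dim) into a sentence vector."""
    return word_embeddings.mean(axis=0)

def swem_max(word_embeddings):
    """Max-pool each embedding dimension over the sequence."""
    return word_embeddings.max(axis=0)
```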
no code implementations • 8 Nov 2017 • Ting Chen, Martin Renqiang Min, Yizhou Sun
Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a "one-hot" encoding of the discrete symbols.
no code implementations • 23 Oct 2017 • Feipeng Zhao, Martin Renqiang Min, Chen Shen, Amit Chakraborty
In this paper, we try to learn more complex connections between entities and relationships.
no code implementations • 14 Oct 2017 • Martin Renqiang Min, Hongyu Guo, Dinghan Shen
Parametric embedding methods such as parametric t-SNE (pt-SNE) have been widely adopted for data visualization and out-of-sample data embedding without further computationally expensive optimization or approximation.
no code implementations • EMNLP 2018 • Dinghan Shen, Martin Renqiang Min, Yitong Li, Lawrence Carin
The role of meta network is to abstract the contextual information of a sentence or document into a set of input-aware filters.
Ranked #13 on Text Classification on DBpedia
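The input-aware filters mentioned above can be sketched as a meta (hyper) network that maps a sentence summary to a bank of convolution filters, which are then applied back to that sentence; all sizes and the choice of summary below are illustrative assumptions, not the paper's design.

```python
import torch.nn as nn
import torch.nn.functional as F

class InputAwareConv(nn.Module):
    """Generate 1D convolution filters from the input and apply them to it."""

    def __init__(self, emb_dim, n_filters=8, width=3):
        super().__init__()
        self.n_filters, self.width = n_filters, width
        self.meta = nn.Linear(emb_dim, n_filters * emb_dim * width)

    def forward(self, embeddings):            # embeddings: (seq_len, emb_dim)
        context = embeddings.mean(dim=0)      # simple sentence summary
        filters = self.meta(context).view(self.n_filters, embeddings.size(1), self.width)
        x = embeddings.t().unsqueeze(0)       # (1, emb_dim, seq_len)
        return F.conv1d(x, filters)           # (1, n_filters, seq_len - width + 1)
```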
no code implementations • 21 Feb 2017 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing.
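The linear-complexity claim follows from comparing each point only against a small, fixed set of exemplars instead of against all other points; a minimal sketch under assumed array shapes:

```python
import numpy as np

def exemplar_squared_distances(embedded_points, exemplars):
    """Squared distances of n embedded points to k exemplars: O(n * k) work."""
    diff = embedded_points[:, None, :] - exemplars[None, :, :]  # (n, k, dim)
    return (diff ** 2).sum(axis=-1)                             # (n, k)
```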
no code implementations • 22 Dec 2016 • Huayu Li, Martin Renqiang Min, Yong Ge, Asim Kadav
Employing these attention mechanisms, our model accurately understands when it can output an answer or when it requires generating a supplementary question for additional input depending on different contexts.
no code implementations • 23 Nov 2016 • Yunchen Pu, Martin Renqiang Min, Zhe Gan, Lawrence Carin
Previous models for video captioning often use the output from a specific layer of a Convolutional Neural Network (CNN) as video features.
no code implementations • 16 Aug 2016 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
These exemplars in combination with the feature mapping learned by HOPE effectively capture essential data variations.
no code implementations • 17 Mar 2016 • Linnan Wang, Yi Yang, Martin Renqiang Min, Srimat Chakradhar
We then study how the ISGD batch size relates to the learning rate, parallelism, synchronization cost, system saturation, and scalability.
no code implementations • 29 Apr 2015 • Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min
Many real-world applications involve structured data, where not only the inputs but also the outputs exhibit interdependencies.