1 code implementation • ACL 2022 • Mu-Chun Wang, Zixuan Liu, Sheng Wang
We further illustrate how Textomics can be used to advance other applications, including evaluating scientific paper embeddings and generating masked templates for scientific paper understanding.
1 code implementation • 1 Oct 2024 • Can Sam Chen, Christopher Beckham, Zixuan Liu, Xue Liu, Christopher Pal
Offline black-box optimization aims to maximize a black-box function using an offline dataset of designs and their measured properties.
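A common baseline in this setting (not necessarily this paper's method) is to fit a surrogate model on the offline dataset and then optimize candidate designs against the frozen surrogate; a minimal sketch with placeholder data and dimensions is shown below.

```python
# Hedged sketch of generic offline black-box optimization with a learned
# surrogate; data, dimensions, and hyperparameters are placeholders.
import torch
import torch.nn as nn

# Offline dataset of designs x and their measured properties y.
x_data = torch.randn(512, 16)
y_data = -x_data.pow(2).sum(dim=1, keepdim=True)

# 1) Fit a surrogate f(x) ~ y on the offline data.
surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(x_data), y_data).backward()
    opt.step()

# 2) Gradient-ascend a candidate design against the frozen surrogate,
#    starting from the best design observed in the offline dataset.
x = x_data[y_data.argmax()].clone().requires_grad_(True)
design_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(100):
    design_opt.zero_grad()
    (-surrogate(x).sum()).backward()   # maximize the predicted property
    design_opt.step()
```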
no code implementations • 1 Oct 2024 • Siavash H. Khajavi, Mehdi Moshtaghi, Dikai Yu, Zixuan Liu, Kary Främling, Jan Holmström
Our findings illustrate that when synthetic and real imagery are combined in a mixed training set, the resulting ML model outperforms both models trained only on real imagery and models trained only on synthetic imagery for detecting a broad spectrum of fires.

no code implementations • 20 Aug 2024 • Zixuan Liu, Hanwen Xu, Addie Woicik, Linda G. Shapiro, Marian Blazes, Yue Wu, Verena Steffen, Catherine Cukras, Cecilia S. Lee, Miao Zhang, Aaron Y. Lee, Sheng Wang
It then exploits a novel multi-modal contrastive learning framework COEP to integrate other retinal imaging modalities, such as fundus autofluorescence and infrared retinal imaging, into OCTCube, efficiently extending it into multi-modal foundation models.
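The details of COEP are only named in this summary; as a hedged illustration of the general idea, the sketch below shows a generic CLIP-style cross-modal contrastive (InfoNCE) loss that aligns paired embeddings from two retinal imaging modalities. The function and argument names are placeholders, not the paper's API.

```python
# Generic cross-modal contrastive (InfoNCE) objective for paired scans from
# two modalities (e.g., OCT and fundus autofluorescence). Illustrative only.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(oct_emb, other_emb, temperature=0.07):
    """oct_emb, other_emb: (batch, dim) embeddings of paired scans."""
    oct_emb = F.normalize(oct_emb, dim=-1)
    other_emb = F.normalize(other_emb, dim=-1)
    logits = oct_emb @ other_emb.t() / temperature          # pairwise similarities
    targets = torch.arange(oct_emb.size(0), device=oct_emb.device)  # matches on diagonal
    # Symmetric loss: OCT -> other modality and other modality -> OCT.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```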
no code implementations • 11 May 2024 • Zhixuan Xu, Chongkai Gao, Zixuan Liu, Gang Yang, Chenrui Tie, Haozhuo Zheng, Haoyu Zhou, Weikun Peng, Debang Wang, Tianrun Hu, Tianyi Chen, Zhouliang Yu, Lin Shao
Our work introduces a comprehensive framework to develop a foundation model for general robotic manipulation that formalizes a manipulation task as contact synthesis.
no code implementations • 4 Mar 2024 • Zixuan Liu, Xiaolin Sun, Zizhan Zheng
Empirically, our approach provides a safety guarantee to LLMs that is missing in DPO while achieving significantly higher rewards under the same safety constraint compared to a recently proposed safe RLHF approach.
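The exact objective is not reproduced in this summary; as a rough, assumption-laden sketch, constrained alignment methods of this kind are often written as a Lagrangian that trades off reward against a safety cost, with a dual variable that grows whenever the constraint is violated.

```python
# Hedged sketch of a generic Lagrangian relaxation of reward maximization
# under a safety-cost constraint; not the paper's exact formulation.
import torch

def constrained_objective(reward, safety_cost, lam, cost_limit=0.0):
    """reward, safety_cost: per-example tensors; lam: nonnegative dual variable."""
    # Maximize mean reward while penalizing mean safety cost above the limit.
    return reward.mean() - lam * (safety_cost.mean() - cost_limit)

def dual_update(lam, safety_cost, cost_limit=0.0, lr=0.05):
    """Gradient ascent on the dual: increase lam when the constraint is violated."""
    lam = lam + lr * (safety_cost.mean().item() - cost_limit)
    return max(lam, 0.0)
```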
1 code implementation • 26 Jan 2024 • Yifeng Liu, Hanwen Xu, Tangqi Fang, Haocheng Xi, Zixuan Liu, Sheng Zhang, Hoifung Poon, Sheng Wang
As a fundamental task in computational chemistry, retrosynthesis prediction aims to identify a set of reactants to synthesize a target molecule.
1 code implementation • 11 Oct 2023 • Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, DaCheng Tao
Plasticity, the ability of a neural network to evolve with new data, is crucial for high-performance and sample-efficient visual reinforcement learning (VRL).
1 code implementation • 8 Oct 2023 • Cong Duan, Zixuan Liu, Jiahao Xia, Minghai Zhang, Jiacai Liao, Libo Cao
The experiments indicate that the Score-Softmax classifier reduces the interference of background noise, enhancing the robustness of the model.
no code implementations • 7 Oct 2023 • Zixuan Liu, Gaurush Hiranandani, Kun Qian, Eddie W. Huang, Yi Xu, Belinda Zeng, Karthik Subbian, Sheng Wang
ForeSeer transfers reviews from similar products on a large product graph and exploits these reviews to predict aspects that might emerge in future reviews.
2 code implementations • 14 Sep 2023 • Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang
In this paper, we address the limitation above by 1) introducing a vision-language model with Multi-Modal In-Context Learning (MMICL), a new approach that allows the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; and 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts.
Ranked #16 on Visual Reasoning on Winoground
1 code implementation • 21 Aug 2023 • Zixuan Liu, Liu Liu, Xueqian Wang, Peilin Zhao
Differentiable optimization has received significant attention due to its foundational role in neural-network-based machine learning.
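As a minimal, generic illustration (not this paper's algorithm), the sketch below unrolls an inner gradient-descent solver inside the autograd graph so that an outer parameter, here a regularization weight, receives gradients through the optimization itself.

```python
# Differentiable optimization by unrolling: inner gradient steps stay on the
# autograd graph, so the outer variable log_lam gets gradients through them.
import torch

log_lam = torch.zeros(1, requires_grad=True)   # outer variable (log reg. weight)

def inner_solve(A, b, steps=50, lr=0.1):
    """Approximately minimize ||Ax - b||^2 + lam * ||x||^2 with unrolled GD."""
    lam = log_lam.exp()
    x = torch.zeros(A.size(1), requires_grad=True)
    for _ in range(steps):
        loss = ((A @ x - b) ** 2).sum() + lam * (x ** 2).sum()
        grad, = torch.autograd.grad(loss, x, create_graph=True)  # keep the graph
        x = x - lr * grad
    return x

A, b = torch.randn(20, 5), torch.randn(20)
x_star = inner_solve(A, b)
outer_loss = ((A @ x_star - b) ** 2).sum()   # outer objective on the inner solution
outer_loss.backward()                        # gradient flows back to log_lam
print(log_lam.grad)
```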
no code implementations • 2 Mar 2023 • Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao
Mixup, which creates synthetic training instances by linearly interpolating random sample pairs, is a simple and yet effective regularization technique to boost the performance of deep models trained with SGD.
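The mixup operation itself (which this paper analyzes rather than introduces) can be sketched in a few lines; `alpha` is the usual Beta-distribution mixing parameter, and the names below are placeholders.

```python
# Minimal sketch of mixup: each synthetic example is a convex combination of
# two randomly paired training examples and of their labels.
import torch

def mixup(x, y, alpha=0.2):
    """x: (batch, ...) inputs; y: (batch, num_classes) one-hot or soft labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))           # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]      # interpolate inputs
    y_mix = lam * y + (1 - lam) * y[perm]      # interpolate labels identically
    return x_mix, y_mix
```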
no code implementations • 24 Jul 2022 • Can Chen, Xi Chen, Chen Ma, Zixuan Liu, Xue Liu
In this survey, we first give a formal definition of gradient-based bi-level optimization.
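The survey's own notation is not reproduced here, but the standard gradient-based bi-level formulation it refers to is typically written as follows, with outer variables λ and inner variables θ.

```latex
% Standard bi-level formulation (notation assumed; the survey's may differ):
\begin{aligned}
\min_{\lambda} \;& F\bigl(\lambda, \theta^{*}(\lambda)\bigr)
  && \text{(outer / upper-level objective)} \\
\text{s.t.}\;& \theta^{*}(\lambda) \in \arg\min_{\theta} f(\lambda, \theta)
  && \text{(inner / lower-level problem)}
\end{aligned}
```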
1 code implementation • 11 Jul 2022 • Mehmet Saygın Seyfioğlu, Zixuan Liu, Pranav Kamath, Sadjyot Gangolli, Sheng Wang, Thomas Grabowski, Linda Shapiro
On top of BAR, we propose a soft-label-capable supervised contrastive loss, aiming to learn relative similarities of representations that reflect, via our soft labels, how mixed the synthetic MRIs are.
no code implementations • 23 Nov 2021 • Xin Zhang, Zixuan Liu, Kaiwen Xiao, Tian Shen, Junzhou Huang, Wei Yang, Dimitris Samaras, Xiao Han
Labels are costly and sometimes unreliable.
Ranked #5 on Image Classification on mini WebVision 1.0
1 code implementation • NAACL (maiworkshop) 2021 • Weixin Liang, Yanhao Jiang, Zixuan Liu
Images are more than a collection of objects or attributes -- they represent a web of relationships among interconnected objects.
Ranked #1 on Graph Question Answering on GQA
no code implementations • 16 Feb 2021 • Zixuan Liu, Ehsan Adeli, Kilian M. Pohl, Qingyu Zhao
Interpretability is a critical factor in applying complex deep learning models to advance the understanding of brain disorders in neuroimaging studies.
no code implementations • 7 Dec 2020 • Giulio Chiribella, Zixuan Liu
A fundamental question is whether it is possible to conceive a broader set of operations that probe quantum processes in the backward direction, from the future to the past, or more generally, in a combination of the forward and backward directions.
Quantum Physics • Mathematical Physics
no code implementations • 12 Jun 2020 • Qingyu Zhao, Zixuan Liu, Ehsan Adeli, Kilian M. Pohl
Machine learning analysis of longitudinal neuroimaging data is typically based on supervised learning, which requires a large number of ground-truth labels to be informative.
no code implementations • 2 Jan 2020 • Weixin Liang, Zixuan Liu, Can Liu
Based on DAWSON, we also propose MUSIC MATINEE, which is the first few-shot music generation model.