1 code implementation • 7 Apr 2024 • Xingyu Su, Xiaojie Zhu, Yang Li, Yong Li, Chi Chen, Paulo Esteves-Veríssimo
Amidst the surge in deep learning-based password guessing models, the challenges of generating high-quality password guesses and reducing duplicate guesses persist.
1 code implementation • 4 Apr 2024 • Houzhe Wang, Xiaojie Zhu, Chi Chen, Paulo Esteves-Veríssimo
To address the challenge of low validity in existing machine unlearning algorithms, we propose a novel loss function.
no code implementations • 21 Feb 2024 • Fuwen Luo, Chi Chen, Zihao Wan, Zhaolu Kang, Qidong Yan, Yingjie Li, Xiaolong Wang, Siyu Wang, Ziyue Wang, Xiaoyue Mi, Peng Li, Ning Ma, Maosong Sun, Yang Liu
Multimodal large language models (MLLMs) have demonstrated promising results in a variety of tasks that combine vision and language.
no code implementations • 20 Feb 2024 • Chi Chen, Yiyang Du, Zheng Fang, Ziyue Wang, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu
In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
1 code implementation • 19 Feb 2024 • Ziyue Wang, Chi Chen, Yiqi Zhu, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu
With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks.
1 code implementation • 20 Nov 2023 • Ziyue Wang, Chi Chen, Peng Li, Yang Liu
Large Language Models (LLMs) demonstrate impressive reasoning ability and retention of world knowledge not only in natural language tasks, but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA).
1 code implementation • 25 Aug 2023 • Chi Chen, Ruoyu Qin, Fuwen Luo, Xiaoyue Mi, Peng Li, Maosong Sun, Yang Liu
However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking a more fine-grained cross-modal alignment.
no code implementations • 8 Jun 2023 • Shuxin Zheng, Jiyan He, Chang Liu, Yu Shi, Ziheng Lu, Weitao Feng, Fusong Ju, Jiaxi Wang, Jianwei Zhu, Yaosen Min, He Zhang, Shidi Tang, Hongxia Hao, Peiran Jin, Chi Chen, Frank Noé, Haiguang Liu, Tie-Yan Liu
In this paper, we introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.
no code implementations • 24 May 2023 • Chi Chen, Peng Li, Maosong Sun, Yang Liu
Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks.
1 code implementation • 24 Feb 2022 • Ruiqi Ma, Chi Chen, Bisheng Yang, Deren Li, Haiping Wang, Yangzi Cong, Zongtian Hu
At present, both anchor-based and anchor-free models for 3D object detection on LiDAR point clouds rely on a center assigner strategy to infer the 3D bounding boxes.
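To make the center assigner strategy concrete, the toy sketch below shows the common pattern: each ground-truth box is assigned to the bird's-eye-view grid cell that contains its center, and that cell becomes responsible for predicting the box. The grid size, cell resolution, and box centers are all hypothetical illustrations, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical bird's-eye-view grid: 8x8 cells of 1 m resolution.
grid_size, cell = 8, 1.0
# Hypothetical ground-truth box centers (x, y) in meters.
centers = np.array([[1.2, 3.7], [5.5, 5.5]])

# Center assignment: mark the cell containing each box center as positive.
target = np.zeros((grid_size, grid_size), dtype=int)
for x, y in centers:
    i, j = int(x // cell), int(y // cell)
    target[i, j] = 1  # this cell is responsible for predicting the box

print(target.sum())  # 2 positive cells, one per ground-truth box
```

Real detectors add regression targets (offsets, size, heading) per positive cell; this sketch only shows the assignment step itself.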
no code implementations • 11 Mar 2021 • Wan-Ping Chan, Jyun-Hong Chen, Wei-Lun Chou, Wen-Yuan Chen, Hao-Yu Liu, Hsiao-Ching Hu, Chien-Chung Jeng, Jie-Ren Li, Chi Chen, Shiuan-Yeh Chen
Strong coupling between light and matter is the foundation of promising quantum photonic devices such as deterministic single photon sources, single atom lasers and photonic quantum gates, which consist of an atom and a photonic cavity.
Optics Quantum Physics
1 code implementation • 4 Feb 2021 • Chi Chen, Shyue Ping Ong
Here we leverage the transfer learning concept and the graph network deep learning framework to develop AtomSets, a machine learning framework that achieves consistently high model accuracy on both small and large materials datasets.
Feature Engineering • Transfer Learning • Materials Science
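The transfer learning idea behind this kind of framework can be sketched in a few lines: a featurizer pretrained on a large dataset is frozen, and only a lightweight head is fit on the small downstream dataset. Everything here is an assumption for illustration (a random projection stands in for the pretrained network, and the data are synthetic); it is not the AtomSets architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a featurizer pretrained on a large dataset: a fixed
# (frozen) nonlinear projection from raw inputs to learned embeddings.
W_pretrained = rng.normal(size=(10, 4))

def featurize(x):
    return np.tanh(x @ W_pretrained)  # frozen: never updated below

# Small downstream dataset (e.g. a few dozen labeled materials).
X_small = rng.normal(size=(30, 10))
y_small = X_small.sum(axis=1) + 0.1 * rng.normal(size=30)

# Transfer step: fit only a linear head on the frozen features.
Z = featurize(X_small)
head, *_ = np.linalg.lstsq(np.c_[Z, np.ones(len(Z))], y_small, rcond=None)

def predict(x):
    z = featurize(x)
    return z @ head[:-1] + head[-1]

print(predict(X_small[:1]))
```

Because only the small head is trained, the model has far fewer free parameters to fit on the scarce downstream data, which is what makes the small-data regime tractable.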
1 code implementation • ACL 2021 • Chi Chen, Maosong Sun, Yang Liu
Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks.
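A minimal way to see what word alignment produces is to extract links from a source-target similarity matrix with bidirectional argmax and keep only the links both directions agree on. The matrix values below are made up for illustration; this is a generic baseline, not the method proposed in the paper.

```python
import numpy as np

# Toy similarity matrix between 3 source words (rows) and 3 target
# words (columns); the values are hypothetical.
sim = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.0, 0.3, 0.7],
])

src_best = sim.argmax(axis=1)  # best target for each source word
tgt_best = sim.argmax(axis=0)  # best source for each target word

# Keep a link (i, j) only when both directions agree (intersection).
alignment = {(i, j) for i, j in enumerate(src_best) if tgt_best[j] == i}
print(sorted(alignment))  # [(0, 0), (1, 1), (2, 2)]
```

The intersection heuristic trades recall for precision; alignment models differ mainly in how the similarity scores are produced and how links are extracted from them.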
no code implementations • 15 Oct 2020 • Chi Chen, Xin Peng, Zhenchang Xing, Jun Sun, Xin Wang, Yifan Zhao, Wenyun Zhao
APIRec-CST is a deep learning model that combines API usage with the text information in source code, building on an API Context Graph Network and a Code Token Network that simultaneously learn structural and textual features for API recommendation.
3 code implementations • 9 May 2020 • Chi Chen, Yunxing Zuo, Weike Ye, Xiangguo Li, Shyue Ping Ong
Predicting the properties of a material from the arrangement of its atoms is a fundamental goal in materials science.
Materials Science • Disordered Systems and Neural Networks
3 code implementations • Chem. Mater. 2018 • Chi Chen, Weike Ye, Yunxing Zuo, Chen Zheng, Shyue Ping Ong
Similarly, we show that MEGNet models trained on $\sim 60,000$ crystals in the Materials Project substantially outperform prior ML models in the prediction of the formation energies, band gaps and elastic moduli of crystals, achieving better than DFT accuracy over a much larger data set.
Ranked #4 on Formation Energy on Materials Project
Drug Discovery • Formation Energy • Materials Science • Computational Physics
no code implementations • 6 Nov 2017 • Chen Zheng, Kiran Mathew, Chi Chen, Yiming Chen, Hanmei Tang, Alan Dozier, Joshua J. Kas, Fernando D. Vila, John J. Rehr, Louis F. J. Piper, Kristin Persson, Shyue Ping Ong
We report the development of XASdb, a large database of computed reference X-ray absorption spectra (XAS), and a novel Ensemble-Learned Spectra IdEntification (ELSIE) algorithm for the matching of spectra.
Materials Science
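The core loop of matching an observed spectrum against a database of computed reference spectra can be sketched as below: normalize each spectrum on a shared energy grid and rank references by a similarity score. The Gaussian "spectra", the phase names, and the single cosine metric are all illustrative assumptions; ELSIE itself ensembles multiple preprocessing and similarity choices rather than relying on one.

```python
import numpy as np

# Toy reference database of computed spectra on a shared energy grid.
energies = np.linspace(0.0, 10.0, 200)

def peak(center, width):
    return np.exp(-((energies - center) ** 2) / (2 * width ** 2))

reference_db = {
    "phase_A": peak(3.0, 0.5),  # hypothetical reference phases
    "phase_B": peak(5.0, 0.8),
    "phase_C": peak(7.0, 0.4),
}

def similarity(a, b):
    # Cosine similarity between L2-normalized spectra.
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return float(a @ b)

# A noisy "measured" spectrum resembling phase_B.
query = peak(5.1, 0.8) + 0.02 * np.random.default_rng(1).normal(size=energies.size)

scores = {name: similarity(query, ref) for name, ref in reference_db.items()}
best = max(scores, key=scores.get)
print(best)  # phase_B
```

An ensemble version would repeat this ranking under several preprocessing/metric combinations and vote, which makes the identification robust to any single metric's failure mode.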