Search Results for author: Chi Chen

Found 17 papers, 10 papers with code

PagPassGPT: Pattern Guided Password Guessing via Generative Pretrained Transformer

1 code implementation • 7 Apr 2024 • Xingyu Su, Xiaojie Zhu, Yang Li, Yong Li, Chi Chen, Paulo Esteves-Veríssimo

Amidst the surge in deep learning-based password guessing models, the challenges of generating high-quality passwords and reducing duplicate passwords persist.

Goldfish: An Efficient Federated Unlearning Framework

1 code implementation • 4 Apr 2024 • Houzhe Wang, Xiaojie Zhu, Chi Chen, Paulo Esteves-Veríssimo

To address the challenge of low validity in existing machine unlearning algorithms, we propose a novel loss function.

Knowledge Distillation • Machine Unlearning
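
As a purely illustrative sketch of the ideas suggested by the tags above (knowledge distillation combined with machine unlearning), the snippet below mixes a distillation term on retained data with an entropy-raising term on data to be forgotten. This hypothetical loss is a stand-in for exposition only, not the loss function proposed in Goldfish.

```python
import torch
import torch.nn.functional as F

def unlearning_loss(student_logits_retain, teacher_logits_retain,
                    student_logits_forget, temperature=2.0, beta=1.0):
    """Hypothetical distillation-style unlearning objective (illustration only)."""
    # Retained data: match the original (teacher) model's softened predictions.
    kd = F.kl_div(
        F.log_softmax(student_logits_retain / temperature, dim=-1),
        F.softmax(teacher_logits_retain / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Forgotten data: push predictions toward a uniform distribution.
    num_classes = student_logits_forget.size(-1)
    uniform = torch.full_like(student_logits_forget, 1.0 / num_classes)
    forget = F.kl_div(
        F.log_softmax(student_logits_forget, dim=-1),
        uniform,
        reduction="batchmean",
    )
    return kd + beta * forget
```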

Model Composition for Multimodal Large Language Models

no code implementations • 20 Feb 2024 • Chi Chen, Yiyang Du, Zheng Fang, Ziyue Wang, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu

In this paper, we propose a new paradigm through the model composition of existing MLLMs to create a new model that retains the modal understanding capabilities of each original model.
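
To make the idea of model composition concrete, here is a minimal sketch of one simple composition scheme: weighted parameter averaging of two models that share an architecture. It is only an illustration of the general notion; the paper's actual composition method for MLLMs is not reproduced here.

```python
import torch

def average_parameters(model_a, model_b, alpha=0.5):
    """Merge two models with identical architectures by weighted parameter averaging.

    Generic illustration of model composition; not the method proposed in the paper.
    """
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged = {key: alpha * state_a[key] + (1.0 - alpha) * state_b[key] for key in state_a}
    # Reuse model_a's architecture to host the merged parameters.
    model_a.load_state_dict(merged)
    return model_a
```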

Browse and Concentrate: Comprehending Multimodal Content via prior-LLM Context Fusion

1 code implementation • 19 Feb 2024 • Ziyue Wang, Chi Chen, Yiqi Zhu, Fuwen Luo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Maosong Sun, Yang Liu

With the bloom of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) that incorporate LLMs with pre-trained vision models have recently demonstrated impressive performance across diverse vision-language tasks.

Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions

1 code implementation • 20 Nov 2023 • Ziyue Wang, Chi Chen, Peng Li, Yang Liu

Large Language Models (LLMs) demonstrate impressive reasoning ability and the maintenance of world knowledge not only in natural language tasks, but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA).

Question Answering • Visual Question Answering • +1

Position-Enhanced Visual Instruction Tuning for Multimodal Large Language Models

1 code implementation • 25 Aug 2023 • Chi Chen, Ruoyu Qin, Fuwen Luo, Xiaoyue Mi, Peng Li, Maosong Sun, Yang Liu

However, existing visual instruction tuning methods only utilize image-language instruction data to align the language and image modalities, lacking a more fine-grained cross-modal alignment.

Position

Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning

no code implementations • 8 Jun 2023 • Shuxin Zheng, Jiyan He, Chang Liu, Yu Shi, Ziheng Lu, Weitao Feng, Fusong Ju, Jiaxi Wang, Jianwei Zhu, Yaosen Min, He Zhang, Shidi Tang, Hongxia Hao, Peiran Jin, Chi Chen, Frank Noé, Haiguang Liu, Tie-Yan Liu

In this paper, we introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.

Weakly Supervised Vision-and-Language Pre-training with Relative Representations

no code implementations • 24 May 2023 • Chi Chen, Peng Li, Maosong Sun, Yang Liu

Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks.

Retrieval

CG-SSD: Corner Guided Single Stage 3D Object Detection from LiDAR Point Cloud

1 code implementation • 24 Feb 2022 • Ruiqi Ma, Chi Chen, Bisheng Yang, Deren Li, Haiping Wang, Yangzi Cong, Zongtian Hu

At present, anchor-based and anchor-free models that use LiDAR point clouds for 3D object detection rely on the center assigner strategy to infer 3D bounding boxes.

3D Object Detection • Object • +1
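
For readers unfamiliar with the center assigner strategy mentioned in the abstract, a minimal sketch is shown below: each ground-truth box center is mapped to its nearest bird's-eye-view grid cell, which becomes a positive training location. Real detectors add Gaussian heatmaps and per-class channels; none of that (nor CG-SSD's corner-guided alternative) is reproduced here.

```python
import numpy as np

def center_assign(gt_centers_xy, grid_origin, voxel_size, grid_shape):
    """Toy center assigner: mark the BEV cell containing each ground-truth box center.

    gt_centers_xy: iterable of (x, y) box centers in metres.
    grid_origin:   (x0, y0) of the BEV grid; voxel_size: (dx, dy) cell size.
    grid_shape:    (rows, cols) of the BEV grid.
    """
    heatmap = np.zeros(grid_shape, dtype=np.float32)
    for cx, cy in gt_centers_xy:
        col = int((cx - grid_origin[0]) / voxel_size[0])
        row = int((cy - grid_origin[1]) / voxel_size[1])
        if 0 <= row < grid_shape[0] and 0 <= col < grid_shape[1]:
            heatmap[row, col] = 1.0  # positive location for this object
    return heatmap
```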

Efficient DNA-driven nanocavities for approaching quasi-deterministic strong coupling to a few fluorophores

no code implementations • 11 Mar 2021 • Wan-Ping Chan, Jyun-Hong Chen, Wei-Lun Chou, Wen-Yuan Chen, Hao-Yu Liu, Hsiao-Ching Hu, Chien-Chung Jeng, Jie-Ren Li, Chi Chen, Shiuan-Yeh Chen

Strong coupling between light and matter is the foundation of promising quantum photonic devices such as deterministic single photon sources, single atom lasers and photonic quantum gates, which consist of an atom and a photonic cavity.

Optics • Quantum Physics

AtomSets -- A Hierarchical Transfer Learning Framework for Small and Large Materials Datasets

1 code implementation • 4 Feb 2021 • Chi Chen, Shyue Ping Ong

Here we leverage the transfer learning concept and the graph network deep learning framework to develop the AtomSets machine learning framework, which achieves consistently high model accuracy on both small and large materials datasets.

Feature Engineering • Transfer Learning • Materials Science
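
The transfer-learning idea referenced above can be illustrated with a generic setup: freeze a pretrained featurizer (such as a graph-network encoder that produces fixed-length structure features) and train only a small readout on the target dataset. This is a sketch under those assumptions, not the AtomSets implementation.

```python
import torch
import torch.nn as nn

class TransferHead(nn.Module):
    """Generic transfer-learning head: frozen pretrained featurizer + small trainable readout."""

    def __init__(self, featurizer: nn.Module, feature_dim: int):
        super().__init__()
        self.featurizer = featurizer
        for p in self.featurizer.parameters():
            p.requires_grad = False  # keep pretrained weights fixed
        self.readout = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        with torch.no_grad():
            features = self.featurizer(x)  # fixed-length structure features
        return self.readout(features)      # property prediction
```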

Mask-Align: Self-Supervised Neural Word Alignment

1 code implementation • ACL 2021 • Chi Chen, Maosong Sun, Yang Liu

Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks.

Machine Translation • Translation • +1
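
As an illustration of the word alignment task described above, the sketch below extracts alignment links from a source-by-target similarity matrix using the classic bidirectional-argmax intersection heuristic. Mask-Align itself derives alignments from masked-prediction attention, so treat this purely as a toy example of the task, not the method.

```python
import numpy as np

def extract_alignment(sim):
    """Return (source, target) links where both directions agree on the best match."""
    src_best = sim.argmax(axis=1)  # best target word for each source word
    tgt_best = sim.argmax(axis=0)  # best source word for each target word
    return [(i, int(j)) for i, j in enumerate(src_best) if tgt_best[j] == i]

# Example: a 3-source x 4-target similarity matrix
sim = np.array([[0.9, 0.1, 0.0, 0.0],
                [0.2, 0.8, 0.1, 0.0],
                [0.0, 0.1, 0.2, 0.7]])
print(extract_alignment(sim))  # [(0, 0), (1, 1), (2, 3)]
```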

Holistic Combination of Structural and Textual Code Information for Context based API Recommendation

no code implementations • 15 Oct 2020 • Chi Chen, Xin Peng, Zhenchang Xing, Jun Sun, Xin Wang, Yifan Zhao, Wenyun Zhao

APIRec-CST is a deep learning model that combines API usage with the text information in source code, based on an API Context Graph Network and a Code Token Network that simultaneously learn structural and textual features for API recommendation.
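
A generic sketch of the dual-encoder idea described above: encode the structural context and the textual context separately, concatenate the two representations, and score candidate APIs. The encoder internals and training details of APIRec-CST are not reproduced; `graph_encoder`, `token_encoder`, and the linear classifier below are placeholders.

```python
import torch
import torch.nn as nn

class DualEncoderRecommender(nn.Module):
    """Toy dual-encoder recommender: structural + textual context -> API logits."""

    def __init__(self, graph_encoder, token_encoder, graph_dim, token_dim, num_apis):
        super().__init__()
        self.graph_encoder = graph_encoder   # placeholder for a graph-based encoder
        self.token_encoder = token_encoder   # placeholder for a token-based encoder
        self.classifier = nn.Linear(graph_dim + token_dim, num_apis)

    def forward(self, graph_input, token_input):
        structural = self.graph_encoder(graph_input)
        textual = self.token_encoder(token_input)
        context = torch.cat([structural, textual], dim=-1)
        return self.classifier(context)      # logits over candidate APIs
```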

Learning Properties of Ordered and Disordered Materials from Multi-fidelity Data

3 code implementations • 9 May 2020 • Chi Chen, Yunxing Zuo, Weike Ye, Xiangguo Li, Shyue Ping Ong

Predicting the properties of a material from the arrangement of its atoms is a fundamental goal in materials science.

Materials Science • Disordered Systems and Neural Networks

Graph Networks as a Universal Machine Learning Framework for Molecules and Crystals

3 code implementations • Chem. Mater. 2018 • Chi Chen, Weike Ye, Yunxing Zuo, Chen Zheng, Shyue Ping Ong

Similarly, we show that MEGNet models trained on $\sim 60,000$ crystals in the Materials Project substantially outperform prior ML models in the prediction of the formation energies, band gaps and elastic moduli of crystals, achieving better than DFT accuracy over a much larger data set.

Drug Discovery • Formation Energy • Materials Science • Computational Physics
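
To illustrate the kind of graph-network building block such models rely on, here is a toy message-passing layer: edges are updated from their endpoint nodes, then nodes are updated from aggregated incoming edges. Global-state updates and the full MEGNet architecture are omitted; this is not the published model.

```python
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    """Toy edge-then-node message-passing update in the spirit of graph-network models."""

    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, nodes, edges, senders, receivers):
        # nodes: (N, node_dim); edges: (E, edge_dim); senders/receivers: (E,) long indices
        edge_inputs = torch.cat([nodes[senders], nodes[receivers], edges], dim=-1)
        edges = self.edge_mlp(edge_inputs)                 # update edges from endpoints
        agg = torch.zeros(nodes.size(0), edges.size(-1), device=nodes.device)
        agg.index_add_(0, receivers, edges)                # sum messages at receiving nodes
        nodes = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return nodes, edges
```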

Automated Generation and Ensemble-Learned Matching of X-ray Absorption Spectra

no code implementations • 6 Nov 2017 • Chen Zheng, Kiran Mathew, Chi Chen, Yiming Chen, Hanmei Tang, Alan Dozier, Joshua J. Kas, Fernando D. Vila, John J. Rehr, Louis F. J. Piper, Kristin Persson, Shyue Ping Ong

We report the development of XASdb, a large database of computed reference X-ray absorption spectra (XAS), and a novel Ensemble-Learned Spectra IdEntification (ELSIE) algorithm for the matching of spectra.

Materials Science
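
A minimal sketch of the spectra-matching step: interpolate each reference spectrum onto the query's energy grid and rank references by a single similarity score (cosine similarity here). ELSIE ensembles several preprocessing and similarity choices, so treat this only as the basic idea rather than the published algorithm.

```python
import numpy as np

def rank_references(query_energy, query_intensity, references):
    """Rank reference XAS spectra against a query by cosine similarity.

    `references` maps labels to (energy, intensity) arrays; energies must be increasing
    so that np.interp behaves as expected.
    """
    q = query_intensity / np.linalg.norm(query_intensity)
    scores = {}
    for label, (energy, intensity) in references.items():
        interp = np.interp(query_energy, energy, intensity)  # resample onto query grid
        r = interp / np.linalg.norm(interp)
        scores[label] = float(np.dot(q, r))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```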
