1 code implementation • 20 Feb 2025 • Félix Therrien, Jamal Abou Haibeh, Divya Sharma, Rhiannon Hendley, Alex Hernández-García, Sun Sun, Alain Tchagang, Jiang Su, Samuel Huberman, Yoshua Bengio, Hongyu Guo, Homin Shin
The goal of this work is to facilitate the use of machine learning for solid-state electrolyte materials discovery.
1 code implementation • 26 Jan 2025 • Yadong Li, Jun Liu, Tao Zhang, Song Chen, Tianpeng Li, Zehuan Li, Lijun Liu, Lingfeng Ming, Guosheng Dong, Da Pan, Chong Li, Yuanbo Fang, Dongdong Kuang, Mingrui Wang, Chenglin Zhu, Youwei Zhang, Hongyu Guo, Fengyu Zhang, Yuran Wang, Bowen Ding, Wei Song, Xu Li, Yuqi Huo, Zheng Liang, Shusen Zhang, Xin Wu, Shuai Zhao, Linchu Xiong, Yozhen Wu, Jiahui Ye, Wenhao Lu, Bowen Li, Yan Zhang, Yaqi Zhou, Xin Chen, Lei Su, Hongda Zhang, Fuzhong Chen, Xuezhen Dong, Na Nie, Zhiying Wu, Bin Xiao, Ting Li, Shunya Dang, Ping Zhang, Yijia Sun, Jincheng Wu, Jinjie Yang, Xionghai Lin, Zhi Ma, Kegeng Wu, Jia Li, Aiyuan Yang, Hui Liu, Jianqiang Zhang, Xiaoxi Chen, Guangwei Ai, Wentao Zhang, Yicong Chen, Xiaoqin Huang, Kun Li, Wenjing Luo, Yifei Duan, Lingling Zhu, Ran Xiao, Zhe Su, Jiani Pu, Dian Wang, Xu Jia, Tianyu Zhang, Mengyu Ai, Mang Wang, Yujing Qiao, Lei Zhang, Yanjun Shen, Fan Yang, Miao Zhen, Yijie Zhou, Mingyang Chen, Fei Li, Chenzheng Zhu, Keer Lu, Yaqi Zhao, Hao Liang, Youquan Li, Yanzhao Qin, Linzhuang Sun, Jianhua Xu, Haoze Sun, MingAn Lin, Zenan Zhou, WeiPeng Chen
We introduce Baichuan-Omni-1.5, an omni-modal model that not only has omni-modal understanding capabilities but also provides end-to-end audio generation capabilities.
no code implementations • 26 Nov 2024 • Mingjing Li, Huihui Zhou, Xiaofeng Xu, Zhiwei Zhong, Puli Quan, Xueke Zhu, Yanyu Lin, Wenjie Lin, Hongyu Guo, Junchao Zhang, Yunhao Ma, Wei Wang, Qingyan Meng, Zhengyu Ma, Guoqi Li, Xiaoxin Cui, Yonghong Tian
There is a growing necessity for edge training to adapt to dynamically changing environments.
no code implementations • 24 Oct 2024 • Jiarui Lu, Xiaoyin Chen, Stephen Zhewen Lu, Chence Shi, Hongyu Guo, Yoshua Bengio, Jian Tang
In this paper, we introduce Structure Language Modeling (SLM) as a novel framework for efficient protein conformation generation.
no code implementations • 16 Sep 2024 • Shengchao Liu, Divin Yan, Weitao Du, Weiyang Liu, Zhuoxinran Li, Hongyu Guo, Christian Borgs, Jennifer Chayes, Anima Anandkumar
Artificial intelligence models have shown great potential in structure-based drug design, generating ligands with high binding affinities.
no code implementations • 15 Feb 2024 • Yiwei Lu, Guojun Zhang, Sun Sun, Hongyu Guo, YaoLiang Yu
In self-supervised contrastive learning, a widely-adopted objective function is InfoNCE, which uses the heuristic cosine similarity for the representation comparison, and is closely related to maximizing the Kullback-Leibler (KL)-based mutual information.
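For illustration, a minimal NumPy sketch of the InfoNCE objective with cosine similarity (variable and function names here are illustrative, not from the paper):

```python
import numpy as np

def infonce_loss(z1, z2, tau=0.5):
    """InfoNCE with cosine similarity: row i of z1 is an anchor, row i
    of z2 is its positive, and the other rows of z2 are negatives."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) temperature-scaled similarity matrix
    # cross-entropy with the diagonal entries as the positive pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Aligned representation pairs yield a lower loss than mismatched ones, which is the contrastive behavior the objective is designed to induce.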
1 code implementation • 26 Jan 2024 • Shengchao Liu, Weitao Du, Hannan Xu, Yanjing Li, Zhuoxinran Li, Vignesh Bhethanabotla, Divin Yan, Christian Borgs, Anima Anandkumar, Hongyu Guo, Jennifer Chayes
We demonstrate the efficiency and effectiveness of NeuralMD, achieving over 1K$\times$ speedup compared to standard numerical MD simulations.
1 code implementation • 5 Jan 2024 • Stephen Obadinma, Xiaodan Zhu, Hongyu Guo
In this work, we highlight and perform a comprehensive study of calibration attacks, a form of adversarial attack that aims to leave victim models heavily miscalibrated without altering their predicted labels, thereby endangering the trustworthiness of the models and the follow-up decisions made based on their confidence.
1 code implementation • NeurIPS 2023 • Shengchao Liu, Weitao Du, Yanjing Li, Zhuoxinran Li, Zhiling Zheng, Chenru Duan, ZhiMing Ma, Omar Yaghi, Anima Anandkumar, Christian Borgs, Jennifer Chayes, Hongyu Guo, Jian Tang
Artificial intelligence for scientific discovery has recently generated significant interest within the machine learning and scientific communities, particularly in the domains of chemistry, biology, and material discovery.
1 code implementation • 29 May 2023 • Shengchao Liu, Jiongxiao Wang, Yijin Yang, Chengpeng Wang, Ling Liu, Hongyu Guo, Chaowei Xiao
This research sheds light on the potential of ChatGPT and conversational LLMs for drug editing.
1 code implementation • 28 May 2023 • Shengchao Liu, Weitao Du, ZhiMing Ma, Hongyu Guo, Jian Tang
Meanwhile, existing molecule multi-modal pretraining approaches approximate MI based on the representation space encoded from the topology and geometry, thus resulting in the loss of critical structural information of molecules.
no code implementations • 19 Mar 2023 • Hongyu Guo
The WHCNets are composed of two major components: a convolutional neural network (CNN) as the front-end for wheat head image feature extraction and a CNN with skip connections for the back-end to generate high-quality density maps.
no code implementations • 5 Mar 2023 • Stephen Obadinma, Hongyu Guo, Xiaodan Zhu
In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter-efficient tuning methods, P-tuning v2 and LoRA, under data scarcity.
no code implementations • 2 Mar 2023 • Ziqi Chen, Martin Renqiang Min, Hongyu Guo, Chao Cheng, Trevor Clancy, Xia Ning
This process is known as TCR recognition and constitutes a key step for immune response.
no code implementations • 2 Mar 2023 • Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao
Mixup, which creates synthetic training instances by linearly interpolating random sample pairs, is a simple yet effective regularization technique to boost the performance of deep models trained with SGD.
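The interpolation described above can be sketched in a few lines (a generic Mixup sketch under the usual one-hot/soft-label assumption, not the specific variant studied in the paper):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Create synthetic training instances by linearly interpolating
    a batch (x, y) with a randomly permuted copy of itself."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in (0, 1)
    perm = rng.permutation(len(x))         # random partner for each sample
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]  # y assumed one-hot / soft labels
    return x_mix, y_mix
```

Because the same coefficient mixes both inputs and labels, the mixed labels remain valid probability distributions.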
1 code implementation • 23 Feb 2023 • Fang Sun, Zhihao Zhan, Hongyu Guo, Ming Zhang, Jian Tang
In particular, GraphVF represents the first controllable geometry-aware, protein-specific molecule generation method, which can generate binding 3D molecules with tailored sub-structures and physicochemical properties.
3 code implementations • 9 Feb 2023 • Shengchao Liu, Yanjing Li, Zhuoxinran Li, Anthony Gitter, Yutao Zhu, Jiarui Lu, Zhao Xu, Weili Nie, Arvind Ramanathan, Chaowei Xiao, Jian Tang, Hongyu Guo, Anima Anandkumar
Current AI-assisted protein design mainly utilizes protein sequential and structural information.
no code implementations • 29 Sep 2022 • Hongyu Guo, Kun Xie, Mehdi Keyvan-Ekbatani
Lane changes are complex driving behaviors and frequently involve safety-critical situations.
2 code implementations • 27 Jun 2022 • Shengchao Liu, Hongyu Guo, Jian Tang
Further by leveraging an SE(3)-invariant score matching method, we propose GeoSSL-DDM in which the coordinate denoising proxy task is effectively boiled down to denoising the pairwise atomic distances in a molecule.
no code implementations • 21 Apr 2022 • Hongyu Guo, Sun Sun
Augmented graphs play a vital role in regularizing Graph Neural Networks (GNNs), which leverage information exchange along edges in graphs, in the form of message passing, for learning.
no code implementations • 18 Oct 2021 • Hongyu Guo, Yongyi Mao
We present a simple yet effective interpolation-based regularization technique, aiming to improve the generalization of Graph Neural Networks (GNNs) on supervised graph classification.
1 code implementation • ICLR 2022 • Shengchao Liu, Hanchen Wang, Weiyang Liu, Joan Lasenby, Hongyu Guo, Jian Tang
However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation.
no code implementations • 29 Sep 2021 • Hongyu Guo, Yongyi Mao
We present a simple yet effective interpolation-based regularization technique to improve the generalization of Graph Neural Networks (GNNs).
no code implementations • 29 Sep 2021 • Guojun Zhang, Yiwei Lu, Sun Sun, Hongyu Guo, YaoLiang Yu
Self-supervised contrastive learning is an emerging field due to its power in providing good data representations.
no code implementations • 29 Sep 2021 • Stephen Obadinma, Xiaodan Zhu, Hongyu Guo
Our studies suggest the following: most of the time curriculum learning has a negligible effect on calibration, but in certain cases under the context of limited training time and noisy data, curriculum learning can substantially reduce calibration error in a manner that cannot be explained by dynamically sampling the dataset.
no code implementations • 26 Jun 2021 • Hongyu Guo
PLS first creates midpoint samples by averaging random sample pairs, then learns a smoothing distribution during training for each of these midpoint samples, resulting in midpoints with high-uncertainty labels for training.
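The first step, midpoint construction, can be sketched as follows (only the averaging step; the per-midpoint smoothing distribution is learned during training and is model-specific, so it is not shown here):

```python
import numpy as np

def make_midpoints(x, y, rng=None):
    """Pair each sample with a random partner and average the pair,
    producing midpoint samples with uniformly averaged soft labels."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(len(x))
    x_mid = 0.5 * (x + x[perm])
    y_mid = 0.5 * (y + y[perm])  # initial soft labels before learned smoothing
    return x_mid, y_mid
```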
1 code implementation • 24 Jun 2021 • Sun Sun, Hongyu Guo
With the symmetric treatment of the data and the latent representation, the algorithm implicitly preserves the local structure of the data in the latent space.
no code implementations • 8 Jun 2021 • Hangrui Bi, Hengyi Wang, Chence Shi, Connor Coley, Jian Tang, Hongyu Guo
Reliably predicting the products of chemical reactions presents a fundamental challenge in synthetic chemistry.
1 code implementation • 8 Jun 2021 • Minghao Xu, Hang Wang, Bingbing Ni, Hongyu Guo, Jian Tang
This paper studies unsupervised/self-supervised whole-graph representation learning, which is critical in many tasks such as molecule properties prediction in drug and material discovery.
no code implementations • 1 Jan 2021 • Hongyu Guo
Mixup and its variants have prompted a surge of interest due to their capability of boosting the accuracy of deep models.
no code implementations • 2 Dec 2020 • Hongyu Guo
Label Smoothing (LS) is an effective regularizer to improve the generalization of state-of-the-art deep models.
no code implementations • 2 Sep 2020 • Ziqiao Wang, Yongyi Mao, Hongyu Guo, Richong Zhang
SkipGram word embedding models with negative sampling, or SGN for short, form an elegant family of word embedding models.
no code implementations • ICML 2020 • Chence Shi, Minkai Xu, Hongyu Guo, Ming Zhang, Jian Tang
A fundamental problem in computational chemistry is to find a set of reactants to synthesize a target molecule, a.k.a.
Ranked #29 on Single-step retrosynthesis on USPTO-50k
1 code implementation • 12 Jan 2020 • Masoumeh Soflaei, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, Richong Zhang
We show that IB learning is, in fact, equivalent to a special class of the quantization problem.
no code implementations • 7 Oct 2019 • Hongyu Guo, Khalique Newaz, Scott Emrich, Tijana Milenkovic, Jun Li
We develop a weighted network that depicts the protein structures, and more importantly, we propose the first graphlet-based measure that applies to weighted networks.
1 code implementation • IJCNLP 2019 • Junfan Chen, Richong Zhang, Yongyi Mao, Hongyu Guo, Jie Xu
Distant supervision for relation extraction enables one to effectively acquire structured relations out of very large text corpora with less human effort.
no code implementations • ICLR 2020 • Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang
We prove that the family of Untied MixUp schemes is equivalent to the entire class of DAT schemes.
3 code implementations • 22 May 2019 • Hongyu Guo, Yongyi Mao, Richong Zhang
Mixup, a recently proposed data augmentation method that linearly interpolates the inputs and modeling targets of random sample pairs, has demonstrated its capability of significantly improving the predictive accuracy of state-of-the-art networks for image classification.
no code implementations • EMNLP 2018 • Richong Zhang, Zhiyuan Hu, Hongyu Guo, Yongyi Mao
We propose a novel strategy to encode the syntax parse tree of sentence into a learnable distributed representation.
2 code implementations • 7 Sep 2018 • Hongyu Guo, Yongyi Mao, Richong Zhang
To address this issue, we propose a novel adaptive version of MixUp, where the mixing policies are automatically learned from the data using an additional network and objective function designed to avoid manifold intrusion.
no code implementations • 26 Jul 2018 • Hongyu Guo, Yongyi Mao, Ali Al-Bashabsheh, Richong Zhang
Based on the notion of information bottleneck (IB), we formulate a quantization problem called "IB quantization".
no code implementations • 14 Oct 2017 • Martin Renqiang Min, Hongyu Guo, Dinghan Shen
Parametric embedding methods such as parametric t-SNE (pt-SNE) have been widely adopted for data visualization and out-of-sample data embedding without further computationally expensive optimization or approximation.
no code implementations • ACL 2017 • Hongyu Guo
While natural languages are compositional, how state-of-the-art neural models achieve compositionality is still unclear.
no code implementations • 19 Apr 2017 • Hongyu Guo, Colin Cherry, Jiang Su
For a bag-of-words representation, each view focuses on a different subset of the text's words.
no code implementations • 21 Feb 2017 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing.
no code implementations • 16 Aug 2016 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
These exemplars in combination with the feature mapping learned by HOPE effectively capture essential data variations.
no code implementations • 30 Oct 2015 • Hongyu Guo
The newly modified output sequence is subsequently used as the input to the DQN for the next decoding iteration.
no code implementations • 29 Apr 2015 • Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min
Many real-world applications are associated with structured data, where not only input but also output has interplay.
no code implementations • 16 Mar 2015 • Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo
The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation.