1 code implementation • 4 Mar 2024 • Chao Xu, Yang Liu, Jiazheng Xing, Weida Wang, Mingze Sun, Jun Dan, Tianxin Huang, Siyuan Li, Zhi-Qi Cheng, Ying Tai, Baigui Sun
In this paper, we abstract the process by which people hear speech, extract meaningful cues, and create various dynamically audio-consistent talking faces, termed Listening and Imagining, into the task of generating high-fidelity, diverse talking faces from a single audio input.
3 code implementations • 27 May 2022 • Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Stan Z. Li
We then propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way.
1 code implementation • 17 Oct 2017 • Li Yi, Lin Shao, Manolis Savva, Haibin Huang, Yang Zhou, Qirui Wang, Benjamin Graham, Martin Engelcke, Roman Klokov, Victor Lempitsky, Yuan Gan, Pengyu Wang, Kun Liu, Fenggen Yu, Panpan Shui, Bingyang Hu, Yan Zhang, Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Minki Jeong, Jaehoon Choi, Changick Kim, Angom Geetchandra, Narasimha Murthy, Bhargava Ramu, Bharadwaj Manda, M. Ramanathan, Gautam Kumar, P Preetham, Siddharth Srivastava, Swati Bhugra, Brejesh Lall, Christian Haene, Shubham Tulsiani, Jitendra Malik, Jared Lafer, Ramsey Jones, Siyuan Li, Jie Lu, Shi Jin, Jingyi Yu, Qi-Xing Huang, Evangelos Kalogerakis, Silvio Savarese, Pat Hanrahan, Thomas Funkhouser, Hao Su, Leonidas Guibas
We introduce a large-scale 3D shape understanding benchmark using data and annotation from ShapeNet 3D object database.
18 code implementations • 28 Feb 2019 • Xiaojie Guo, Siyuan Li, Jinke Yu, Jiawan Zhang, Jiayi Ma, Lin Ma, Wei Liu, Haibin Ling
Being accurate, efficient, and compact is essential to a facial landmark detector for practical use.
2 code implementations • CVPR 2023 • Cheng Tan, Zhangyang Gao, Lirong Wu, Yongjie Xu, Jun Xia, Siyuan Li, Stan Z. Li
Spatiotemporal predictive learning aims to generate future frames by learning from historical frames.
Ranked #12 on Video Prediction on Moving MNIST
6 code implementations • 7 Nov 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, ZhiYuan Chen, Jiangbin Zheng, Stan Z. Li
Notably, MogaNet hits 80.0% and 87.8% accuracy with 5.2M and 181M parameters on ImageNet-1K, outperforming ParC-Net and ConvNeXt-L, while saving 59% FLOPs and 17M parameters, respectively.
Ranked #1 on Pose Estimation on COCO val2017
2 code implementations • 22 Nov 2022 • Cheng Tan, Zhangyang Gao, Siyuan Li, Stan Z. Li
Without introducing any extra tricks and strategies, SimVP can achieve superior performance on various benchmark datasets.
Ranked #1 on Video Prediction on Moving MNIST
2 code implementations • NeurIPS 2023 • Cheng Tan, Siyuan Li, Zhangyang Gao, Wenfei Guan, Zedong Wang, Zicheng Liu, Lirong Wu, Stan Z. Li
Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner.
2 code implementations • 24 Mar 2021 • Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, ZhiYuan Chen, Lirong Wu, Stan Z. Li
Specifically, AutoMix reformulates the mixup classification into two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework.
Ranked #8 on Image Classification on Places205
1 code implementation • 30 Jun 2021 • Di Wu, Siyuan Li, Zelin Zang, Stan Z. Li
Self-supervised contrastive learning has demonstrated great potential in learning visual representations.
Ranked #22 on Fine-Grained Image Classification on NABirds
1 code implementation • 27 Oct 2021 • Siyuan Li, Zicheng Liu, Zelin Zang, Di Wu, ZhiYuan Chen, Stan Z. Li
For example, dimension reduction methods such as t-SNE and UMAP optimize pairwise data relationships by preserving the global geometric structure, while self-supervised learning methods such as SimCLR and BYOL focus on mining the local statistics of instances under specific augmentations.
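To make the contrastive side of this comparison concrete, here is a minimal NumPy sketch of an InfoNCE-style objective of the kind SimCLR optimizes (a generic illustration, not code from the paper): row i of each view embedding matrix is a positive pair, all other rows are negatives.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss: row i of z1 and row i of z2 are
    two augmented views of the same instance (positives); every other
    pairing is treated as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                     # (N, N) cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # positives sit on the diagonal
```

When the two views are perfectly aligned the loss approaches zero; shuffling one view's rows (breaking the positive pairs) drives it up.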
1 code implementation • 30 Nov 2021 • Siyuan Li, Zicheng Liu, Zedong Wang, Di Wu, Zihan Liu, Stan Z. Li
Accordingly, we propose $\eta$-balanced mixup loss for complementary learning of the two sub-objectives.
Ranked #7 on Image Classification on Places205
2 code implementations • 7 Jul 2022 • Zelin Zang, Siyuan Li, Di Wu, Ge Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
To overcome the underconstrained embedding problem, we design a loss and theoretically demonstrate that it leads to a more suitable embedding based on the local flatness.
Ranked #2 on Image Classification on ImageNet-100
1 code implementation • 11 Sep 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li
Data mixing, or mixup, is a data-dependent augmentation technique that has greatly enhanced the generalizability of modern deep neural networks.
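For readers unfamiliar with the technique, this is a minimal sketch of vanilla mixup, the generic augmentation the line above describes (not this paper's specific variant); `alpha` parameterizes the Beta distribution from which the mixing coefficient is drawn.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla mixup: convex combination of two inputs and their
    one-hot labels with a Beta(alpha, alpha)-distributed coefficient."""
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # mixing coefficient in [0, 1]
    x_mixed = lam * x1 + (1 - lam) * x2    # blended input
    y_mixed = lam * y1 + (1 - lam) * y2    # soft label, same weights
    return x_mixed, y_mixed, lam
```

Training on (x_mixed, y_mixed) pairs encourages linear behavior between training examples, which is the source of the regularization effect.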
1 code implementation • NeurIPS 2023 • Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li
However, we found that the extra optimizing step may be redundant because label-mismatched mixed samples are informative hard mixed samples for deep models to localize discriminative features.
2 code implementations • 14 Feb 2024 • Siyuan Li, Zicheng Liu, Juanxi Tian, Ge Wang, Zedong Wang, Weiyang Jin, Di Wu, Cheng Tan, Tao Lin, Yang Liu, Baigui Sun, Stan Z. Li
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization to learn flat optima for better generalizations without extra cost in deep neural network (DNN) optimization.
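As a hedged sketch of the generic EMA weight averaging this entry builds on (not the paper's proposed variant): after each optimizer step, a shadow copy of the weights is updated toward the current weights with decay rate `decay`.

```python
def ema_update(ema_weights, weights, decay=0.999):
    """One EMA step over flat lists of parameters:
    ema <- decay * ema + (1 - decay) * current."""
    return [decay * e + (1 - decay) * w for e, w in zip(ema_weights, weights)]

# Typical usage: keep a shadow copy of the model weights, call ema_update
# after every training step, and evaluate with the shadow weights, which
# tend to lie in flatter, better-generalizing regions of the loss surface.
```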
1 code implementation • 27 Mar 2024 • Luigi Piccinelli, Yung-Hsu Yang, Christos Sakaridis, Mattia Segu, Siyuan Li, Luc van Gool, Fisher Yu
However, the remarkable accuracy of recent MMDE methods is confined to their training domains.
Ranked #2 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)
1 code implementation • 17 Apr 2023 • Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, Chunsai Du
Large language models have unlocked strong multi-task capabilities by reading instructive prompts.
Ranked #2 on Zero-shot Named Entity Recognition (NER) on CrossNER (using extra training data)
1 code implementation • 31 Dec 2023 • Siyuan Li, Luyuan Zhang, Zedong Wang, Di Wu, Lirong Wu, Zicheng Liu, Jun Xia, Cheng Tan, Yang Liu, Baigui Sun, Stan Z. Li
As the deep learning revolution marches on, self-supervised learning has garnered increasing attention in recent years thanks to its remarkable representation learning ability and the low dependence on labeled data.
1 code implementation • 25 Nov 2022 • Siyuan Li, Li Sun, Qingli Li
The key idea is to fully exploit the cross-modal description ability in CLIP through a set of learnable text tokens for each ID and give them to the text encoder to form ambiguous descriptions.
Ranked #1 on Person Re-Identification on MSMT17
1 code implementation • CVPR 2022 • Ye Liu, Siyuan Li, Yang Wu, Chang Wen Chen, Ying Shan, XiaoHu Qie
Finding relevant moments and highlights in videos according to natural language queries is a natural and highly valuable need in the current era of exploding video content.
Ranked #3 on Highlight Detection on YouTube Highlights
1 code implementation • CVPR 2022 • Xueqi Hu, Qiusheng Huang, Zhengyi Shi, Siyuan Li, Changxin Gao, Li Sun, Qingli Li
Existing GAN inversion methods fail to provide latent codes for reliable reconstruction and flexible editing simultaneously.
1 code implementation • CVPR 2019 • Siyuan Li, Iago Breno Araujo, Wenqi Ren, Zhangyang Wang, Eric K. Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, Xiaochun Cao
We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images. This dataset highlights diverse data sources and image contents, and is divided into three subsets (rain streak, rain drop, rain and mist), each serving different training or evaluation purposes.
1 code implementation • 22 Feb 2024 • Lirong Wu, Yijun Tian, Yufei Huang, Siyuan Li, Haitao Lin, Nitesh V Chawla, Stan Z. Li
In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the "vocabulary" is usually extremely small.
1 code implementation • 26 Jul 2022 • Siyuan Li, Martin Danelljan, Henghui Ding, Thomas E. Huang, Fisher Yu
Our experiments show that TETA evaluates trackers more comprehensively, and TETer achieves significant improvements on the challenging large-scale datasets BDD100K and TAO compared to the state-of-the-art.
Ranked #4 on Multi-Object Tracking on TAO
1 code implementation • ICCV 2023 • Mingqiao Ye, Lei Ke, Siyuan Li, Yu-Wing Tai, Chi-Keung Tang, Martin Danelljan, Fisher Yu
While dominating on the COCO benchmark, recent Transformer-based detection methods are not competitive in diverse domains.
1 code implementation • CVPR 2023 • Jiangbin Zheng, Yile Wang, Cheng Tan, Siyuan Li, Ge Wang, Jun Xia, Yidong Chen, Stan Z. Li
In this work, we propose a novel contrastive visual-textual transformation for SLR, CVT-SLR, to fully explore the pretrained knowledge of both the visual and language modalities.
1 code implementation • CVPR 2023 • Siyuan Li, Tobias Fischer, Lei Ke, Henghui Ding, Martin Danelljan, Fisher Yu
This leaves contemporary MOT methods limited to a small set of pre-defined object categories.
1 code implementation • 11 Dec 2023 • Jiangbin Zheng, Siyuan Li, Yufei Huang, Zhangyang Gao, Cheng Tan, Bozhen Hu, Jun Xia, Ge Wang, Stan Z. Li
Protein design involves generating protein sequences based on their corresponding protein backbones.
1 code implementation • 4 Oct 2023 • Siyuan Li, Weiyang Jin, Zedong Wang, Fang Wu, Zicheng Liu, Cheng Tan, Stan Z. Li
The main challenge is how to distinguish high-quality pseudo labels against the confirmation bias.
2 code implementations • 12 Nov 2022 • Ziyi Zhang, Weikai Chen, Hui Cheng, Zhen Li, Siyuan Li, Liang Lin, Guanbin Li
We investigate a practical domain adaptation task, called source-free unsupervised domain adaptation (SFUDA), where the source-pretrained model is adapted to the target domain without access to the source data.
Ranked #4 on Source-Free Domain Adaptation on VisDA-2017
1 code implementation • 15 May 2022 • Fang Wu, Siyuan Li, Lirong Wu, Dragomir Radev, Stan Z. Li
Graph neural networks (GNNs) mainly rely on the message-passing paradigm to propagate node features and build interactions, and different graph learning tasks require different ranges of node interactions.
1 code implementation • 25 Jan 2023 • Cheng Tan, Yijie Zhang, Zhangyang Gao, Bozhen Hu, Siyuan Li, Zicheng Liu, Stan Z. Li
We crafted a large, well-curated benchmark dataset and designed a comprehensive structural modeling approach to represent the complex RNA tertiary structure.
1 code implementation • NeurIPS 2019 • Siyuan Li, Rui Wang, Minxue Tang, Chongjie Zhang
In addition, we also theoretically prove that optimizing low-level skills with this auxiliary reward will increase the task return for the joint policy.
1 code implementation • CVPR 2022 • Cheng Tan, Zhangyang Gao, Lirong Wu, Siyuan Li, Stan Z. Li
Though this scheme benefits from both feature-dependent information from self-supervised learning and label-dependent information from supervised learning, it still suffers from classifier bias.
1 code implementation • 21 Sep 2020 • Lirong Wu, Zicheng Liu, Zelin Zang, Jun Xia, Siyuan Li, Stan Z. Li
Though manifold-based clustering has become a popular research topic, we observe that one important factor has been omitted by these works, namely that the defined clustering loss may corrupt the local and global structure of the latent space.
1 code implementation • 20 Apr 2022 • Di Wu, Siyuan Li, Jie Yang, Mohamad Sawan
Extensive data labeling on neurophysiological signals is often prohibitively expensive or impractical, as it may require particular infrastructure or domain expertise.
1 code implementation • NeurIPS 2021 • Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang
These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset.
1 code implementation • 8 May 2023 • Rushuai Yang, Chenjia Bai, Hongyi Guo, Siyuan Li, Bin Zhao, Zhen Wang, Peng Liu, Xuelong Li
Under mild assumptions, our objective maximizes the MI between different behaviors based on the same skill, which serves as an upper bound of the previous MI objective.
1 code implementation • 6 Jul 2023 • Ruiqi Zhu, Siyuan Li, Tianhong Dai, Chongjie Zhang, Oya Celiktutan
Our method endows agents with the ability to explore and acquire the required prior behaviours and then connect them to the task-specific behaviours in the demonstration, solving sparse-reward tasks without requiring additional demonstrations of the prior behaviours.
1 code implementation • 9 Dec 2021 • Saeed Saadatnejad, Siyuan Li, Taylor Mordan, Alexandre Alahi
We build on successful cGAN models to propose a new semantically-aware discriminator that better guides the generator.
1 code implementation • 8 Apr 2023 • Fang Wu, Huiling Qin, Siyuan Li, Stan Z. Li, Xianyuan Zhan, Jinbo Xu
In the field of artificial intelligence for science, a persistent and essential challenge is the limited amount of labeled data available for real-world problems.
1 code implementation • 24 Jul 2023 • Jingxuan Wei, Cheng Tan, Zhangyang Gao, Linzhuang Sun, Siyuan Li, Bihui Yu, Ruifeng Guo, Stan Z. Li
Multimodal reasoning is a critical component in the pursuit of artificial intelligence systems that exhibit human-like intelligence, especially when tackling complex tasks.
1 code implementation • ICLR 2022 • Siyuan Li, Jin Zhang, Jianhao Wang, Yang Yu, Chongjie Zhang
Although GCHRL possesses superior exploration ability by decomposing tasks via subgoals, existing GCHRL methods struggle in temporally extended tasks with sparse external rewards, since the high-level policy learning relies on external rewards.
1 code implementation • 7 Jan 2023 • Fang Wu, Siyuan Li, Xurui Jin, Yinghui Jiang, Dragomir Radev, Zhangming Niu, Stan Z. Li
It takes advantage of MatchExplainer to fix the most informative portion of the graph and applies graph augmentations only to the remaining, less informative part.
1 code implementation • 9 Aug 2023 • Siyuan Li, Lei Cheng, Ting Zhang, Hangfang Zhao, Jianlong Li
Accurately reconstructing a three-dimensional ocean sound speed field (3D SSF) is essential for various ocean acoustic applications, but the sparsity and uncertainty of sound speed samples across a vast ocean region make it a challenging task.
1 code implementation • 23 Nov 2023 • Cheng Tan, Jingxuan Wei, Zhangyang Gao, Linzhuang Sun, Siyuan Li, Xihong Yang, Stan Z. Li
Remarkably, we show that even smaller base models, when equipped with our proposed approach, can achieve results comparable to those of larger models, illustrating the potential of our approach in harnessing the power of rationales for improved multimodal reasoning.
1 code implementation • 27 Apr 2021 • Zelin Zang, Siyuan Li, Di Wu, Jianzhu Guo, Yongjie Xu, Stan Z. Li
Unsupervised attributed graph representation learning is challenging since both structural and feature information are required to be represented in the latent space.
Ranked #2 on Node Clustering on Pubmed
1 code implementation • 15 Oct 2022 • Jin Zhang, Siyuan Li, Chongjie Zhang
The ability to reuse previous policies is an important aspect of human intelligence.
1 code implementation • 7 Oct 2020 • Siyuan Li, Haitao Lin, Zelin Zang, Lirong Wu, Jun Xia, Stan Z. Li
Dimension reduction (DR) aims to learn low-dimensional representations of high-dimensional data with the preservation of essential information.
1 code implementation • 7 Aug 2022 • Zihan Liu, Yun Luo, Lirong Wu, Siyuan Li, Zicheng Liu, Stan Z. Li
These errors arise from rough gradient usage due to the discreteness of the graph structure and from the unreliability in the meta-gradient on the graph structure.
no code implementations • 11 Jun 2018 • Siyuan Li, Fangda Gu, Guangxiang Zhu, Chongjie Zhang
Transfer learning can greatly speed up reinforcement learning for a new task by leveraging policies of relevant tasks.
no code implementations • 8 Apr 2018 • Siyuan Li, Wenqi Ren, Jiawan Zhang, Jinke Yu, Xiaojie Guo
Rain effects in images are typically detrimental to many multimedia and computer vision tasks.
no code implementations • 29 Nov 2017 • Xinqing Guo, Zhang Chen, Siyuan Li, Yang Yang, Jingyi Yu
We then construct three individual networks: a Focus-Net to extract depth from a single focal stack, an EDoF-Net to obtain the extended depth of field (EDoF) image from the focal stack, and a Stereo-Net to conduct stereo matching.
no code implementations • 2 Aug 2017 • Zhang Chen, Xinqing Guo, Siyuan Li, Xuan Cao, Jingyi Yu
Depth from defocus (DfD) and stereo matching are two of the most studied passive depth sensing schemes.
no code implementations • 24 Sep 2017 • Siyuan Li, Chongjie Zhang
In this paper, we develop an optimal online method to select source policies for reinforcement learning.
no code implementations • CVPR 2020 • Siyuan Li, Semih Günel, Mirela Ostrek, Pavan Ramdya, Pascal Fua, Helge Rhodin
We compare our approach with existing domain transfer methods and demonstrate improved pose estimation accuracy on Drosophila melanogaster (fruit fly), Caenorhabditis elegans (worm) and Danio rerio (zebrafish), without requiring any manual annotation on the target domain and despite using simplistic off-the-shelf animal characters for simulation, or simple geometric shapes as models.
no code implementations • 1 Jan 2021 • Jun Xia, Haitao Lin, Yongjie Xu, Lirong Wu, Zhangyang Gao, Siyuan Li, Stan Z. Li
A pseudo label is computed from the neighboring labels for each node in the training set using LP; meta learning is utilized to learn a proper aggregation of the original and pseudo label as the final label.
no code implementations • ICLR 2021 • Siyuan Li, Lulu Zheng, Jianhao Wang, Chongjie Zhang
In goal-conditioned Hierarchical Reinforcement Learning (HRL), a high-level policy periodically sets subgoals for a low-level policy, and the low-level policy is trained to reach those subgoals.
no code implementations • 29 Sep 2021 • Siyuan Li, Zicheng Liu, Di Wu, Stan Z. Li
In this paper, we decompose mixup into two sub-tasks of mixup generation and classification and formulate it for discriminative representations as class- and instance-level mixup.
no code implementations • 28 Sep 2020 • Lirong Wu, Zicheng Liu, Zelin Zang, Jun Xia, Siyuan Li, Stan Z. Li
To overcome the problem that clustering-oriented losses may deteriorate the geometric structure of embeddings in the latent space, an isometric loss is proposed for preserving intra-manifold structure locally and a ranking loss for inter-manifold structure globally.
no code implementations • 9 May 2022 • Junwen Ding, Liangcai Song, Siyuan Li, Chen Wu, Ronghua He, Zhouxing Su, Zhipeng Lü
Computing workflows in heterogeneous multiprocessor systems are frequently modeled as directed acyclic graphs of tasks and data blocks, which represent computational modules and their dependencies in the form of data produced by a task and used by others.
no code implementations • International Joint Conference on Artificial Intelligence 2020 • Siyuan Li, Zhi Zhang, Ziyu Liu, Anna Wang, Linglong Qiu, Feng Du
Target localization and proposal generation are two essential subtasks in generic visual tracking, and addressing both efficiently remains a challenge.
no code implementations • 1 Sep 2022 • Hui Niu, Siyuan Li, Jian Li
We evaluate the proposed approach on three real-world index datasets and compare it to state-of-the-art baselines.
1 code implementation • 24 Oct 2022 • Shijie Han, Siyuan Li, Bo An, Wei Zhao, Peng Liu
In this work, we develop a novel identity detection reinforcement learning (IDRL) framework that allows an agent to dynamically infer the identities of nearby agents and select an appropriate policy to accomplish the task.
no code implementations • 1 Nov 2022 • Jiangbin Zheng, Siyuan Li, Cheng Tan, Chong Wu, Yidong Chen, Stan Z. Li
Therefore, we propose to introduce additional word-level semantic knowledge of sign language linguistics to assist in improving current end-to-end neural SLT models.
no code implementations • 2 Dec 2022 • Yiqin Yang, Hao Hu, Wenzhe Li, Siyuan Li, Jun Yang, Qianchuan Zhao, Chongjie Zhang
We show that such lossless primitives can drastically improve the performance of hierarchical policies.
no code implementations • 9 Dec 2022 • Haitao Lin, Lirong Wu, Yongjie Xu, Yufei Huang, Siyuan Li, Guojiang Zhao, Stan Z. Li
Solving partial differential equations is difficult.
no code implementations • 19 Mar 2023 • Jiangbin Zheng, Ge Wang, Yufei Huang, Bozhen Hu, Siyuan Li, Cheng Tan, Xinwen Fan, Stan Z. Li
In this work, we introduce a novel unsupervised protein structure representation pretraining with a robust protein language model.
no code implementations • NeurIPS 2023 • Haitao Lin, Yufei Huang, Odin Zhang, Lirong Wu, Siyuan Li, ZhiYuan Chen, Stan Z. Li
In this way, however, it is hard to generate realistic fragments with complicated structures.
no code implementations • 14 Aug 2023 • Siyuan Li, Hao Li, Jin Zhang, Zhen Wang, Peng Liu, Chongjie Zhang
Humans have the ability to reuse previously learned policies to solve new tasks quickly, and reinforcement learning (RL) agents can do the same by transferring knowledge from source policies to a related target task.
no code implementations • 17 Aug 2023 • Hui Niu, Siyuan Li, Jiahao Zheng, Zhouchi Lin, Jian Li, Jian Guo, Bo An
Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity.
no code implementations • 9 Oct 2023 • Cheng Tan, Jue Wang, Zhangyang Gao, Siyuan Li, Lirong Wu, Jun Xia, Stan Z. Li
In this paper, we re-examine the two dominant temporal modeling approaches within the realm of spatio-temporal predictive learning, offering a unified perspective.
no code implementations • 9 Oct 2023 • Chan Wu, Hanxiao Zhang, Lin Ju, Jinjing Huang, Youshao Xiao, ZhaoXin Huan, Siyuan Li, Fanzhuang Meng, Lei Liang, Xiaolu Zhang, Jun Zhou
In this paper, we rethink the impact of memory consumption and communication costs on the training speed of large language models, and propose a memory-communication balanced strategy set Partial Redundancy Optimizer (PaRO).
no code implementations • 14 Oct 2023 • Yufei Huang, Siyuan Li, Jin Su, Lirong Wu, Odin Zhang, Haitao Lin, Jingqi Qi, Zihan Liu, Zhangyang Gao, Yuyang Liu, Jiangbin Zheng, Stan Z. Li
To study this problem, we identify a Protein 3D Graph Structure Learning Problem for Robust Protein Property Prediction (PGSL-RP3), collect benchmark datasets, and present a protein Structure embedding Alignment Optimization framework (SAO) to mitigate the problem of structure embedding bias between the predicted and experimental protein structures.
no code implementations • 22 Oct 2023 • Siyuan Li, Xun Wang, Rongchang Zuo, Kewu Sun, Lingfei Cui, Jishiyu Ding, Peng Liu, Zhe Ma
Imitation learning (IL) has achieved considerable success in solving complex sequential decision-making problems.
no code implementations • 12 Feb 2024 • Siyuan Li, Shijie Han, Yingnan Zhao, By Liang, Peng Liu
To achieve automatic auxiliary reward generation, we propose a novel representation learning approach that can measure the "transition distance" between states.
no code implementations • 18 Feb 2024 • Yufei Huang, Odin Zhang, Lirong Wu, Cheng Tan, Haitao Lin, Zhangyang Gao, Siyuan Li, Stan Z. Li
Accurate prediction of protein-ligand binding structures, a task known as molecular docking, is crucial for drug design but remains challenging.
no code implementations • 28 Mar 2024 • Siyuan Li
Unmanned aerial vehicle (UAV) techniques have developed rapidly within the past few decades.
no code implementations • 15 Apr 2024 • Siyuan Li, Youshao Xiao, Fanzhuang Meng, Lin Ju, Lei Liang, Lin Wang, Jun Zhou
Offline batch inference is a common task in the industry for deep learning applications, but it can be challenging to ensure stability and performance when dealing with large amounts of data and complicated inference pipelines.
no code implementations • 15 Apr 2024 • Youshao Xiao, Lin Ju, Zhenglei Zhou, Siyuan Li, ZhaoXin Huan, Dalong Zhang, Rujie Jiang, Lin Wang, Xiaolu Zhang, Lei Liang, Jun Zhou
Previous works address only some types of stragglers and cannot adaptively handle the various stragglers encountered in practice.
no code implementations • 17 Apr 2024 • Zicheng Liu, Li Wang, Siyuan Li, Zedong Wang, Haitao Lin, Stan Z. Li
Transformer models have been successful in various sequence processing tasks, but the self-attention mechanism's computational cost limits its practicality for long sequences.
no code implementations • 5 May 2024 • Siyuan Li, Xi Lin, Hansong Xu, Kun Hua, Xiaomin Jin, Gaolei Li, Jianhua Li
In this paper, we focus on the edge optimization of AIGC task execution and propose GMEL, a generative model-driven industrial AIGC collaborative edge learning framework.
no code implementations • 9 May 2024 • Siyuan Li, Xi Lin, Yaju Liu, Jianhua Li
We believe that TrustGAIN is a necessary paradigm for intelligent and trustworthy 6G networks to support AIGC services, ensuring the security, privacy, and fairness of AIGC network services.