Search Results for author: Cong Li

Found 29 papers, 3 papers with code

Nyonic Technical Report

no code implementations24 Apr 2024 Junfeng Tian, Rui Wang, Cong Li, Yudong Zhou, Jun Liu, Jun Wang

This report details the development and key achievements of our latest language model, designed to serve as a foundation for custom large language models.

Language Modelling

scInterpreter: Training Large Language Models to Interpret scRNA-seq Data for Cell Type Annotation

no code implementations18 Feb 2024 Cong Li, Meng Xiao, Pengfei Wang, Guihai Feng, Xin Li, Yuanchun Zhou

Despite the inherent limitations of existing Large Language Models in directly reading and interpreting single-cell omics data, they demonstrate significant potential and flexibility as foundation models.

Language Modelling, Large Language Model

Noise-induced stochastic Nash equilibrium

no code implementations25 Oct 2023 Cong Li, Tianjiao Feng, Xiudeng Zheng, Sabin Lessard, Yi Tao

In order to better understand the impact of environmental stochastic fluctuations on the evolution of animal behavior, we introduce the concept of a stochastic Nash equilibrium (SNE) that extends the classical concept of a Nash equilibrium (NE).
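The classical Nash equilibrium that the SNE extends can be illustrated with a small sketch: checking a symmetric mixed equilibrium in a Hawk-Dove game, a standard model of animal conflict. All payoff values below are illustrative, not taken from the paper.

```python
# Payoffs for a symmetric Hawk-Dove game (illustrative numbers): V = 2
# (resource value), C = 4 (fight cost). Strategy 0 = Hawk, strategy 1 = Dove;
# A[i][j] is the payoff to strategy i playing against strategy j.
V, C = 2.0, 4.0
A = [[(V - C) / 2, V],
     [0.0,         V / 2]]

def payoff(x, y):
    """Expected payoff of mixed strategy x against mixed strategy y."""
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def is_symmetric_ne(x, tol=1e-9):
    """x is a symmetric NE if no pure strategy earns more against x
    than x earns against itself."""
    pure = [[1.0, 0.0], [0.0, 1.0]]
    return all(payoff(e, x) <= payoff(x, x) + tol for e in pure)

x_star = [V / C, 1 - V / C]   # classical mixed NE: play Hawk with prob V/C
print(is_symmetric_ne(x_star))        # True
print(is_symmetric_ne([1.0, 0.0]))    # False: pure Hawk is not an NE when C > V
```

With V < C, the only symmetric equilibrium is mixed, which is the deterministic baseline against which noise-induced effects can be compared.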

Intelligent machines work in unstructured environments by differential neuromorphic computing

no code implementations16 Sep 2023 Shengbo Wang, Shuo Gao, Chenyu Tang, Edoardo Occhipinti, Cong Li, Shurui Wang, Jiaqi Wang, Hubin Zhao, Guohua Hu, Arokia Nathan, Ravinder Dahiya, Luigi Occhipinti

By mimicking the intrinsic nature of human low-level perception mechanisms, the memristive neuromorphic circuit-based method presented here shows the potential to adapt to diverse sensing technologies and to help intelligent machines make smart high-level decisions in the real world.

Autonomous Driving, Decision Making

Enhancing Depth Completion with Multi-View Monitored Distillation

no code implementations28 Mar 2023 Jia-Wei Guo, Cong Li, Sen-Hua Zhu, Chang-Zheng Zhang, Ming Ouyang, Ning Ding, Hung-Chyun Chou

Our approach builds upon the state-of-the-art ensemble distillation method, in which we introduce a stereo-based model as a teacher model to improve the accuracy of the student model for depth completion.

Depth Completion
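The teacher-student idea underlying distillation-based depth completion can be sketched as follows. This is a generic distillation loss, not the paper's multi-view monitored scheme; the blending weight and toy data are hypothetical.

```python
# Generic teacher-student distillation loss for a regression task such as
# depth completion: the student is supervised both by sparse ground truth
# and by the dense predictions of a stronger teacher model.
def distillation_loss(student_pred, teacher_pred, ground_truth, alpha=0.5):
    """Blend supervision from sparse ground truth with dense teacher output.
    ground_truth entries may be None where no depth measurement exists."""
    sup = [(s - g) ** 2 for s, g in zip(student_pred, ground_truth) if g is not None]
    dis = [(s - t) ** 2 for s, t in zip(student_pred, teacher_pred)]
    sup_term = sum(sup) / len(sup) if sup else 0.0
    dis_term = sum(dis) / len(dis)
    return alpha * sup_term + (1 - alpha) * dis_term

student = [1.0, 2.0, 3.0]
teacher = [1.1, 2.1, 2.9]
gt      = [1.0, None, 3.2]   # depth ground truth is typically sparse
print(distillation_loss(student, teacher, gt))
```

The teacher term provides a training signal at every pixel, which is exactly what makes distillation attractive when ground-truth depth is sparse.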

Evolutionary rationality of risk preference

no code implementations20 Jun 2022 Songjia Fan, Yi Tao, Cong Li

We highlight the importance of selection intensity and fitness, as well as their counterparts in the human mind, termed attention degree and meta-fitness, in the decision-making process.

Decision Making

Data Informed Residual Reinforcement Learning for High-Dimensional Robotic Tracking Control

no code implementations28 Oct 2021 Cong Li, Fangzhou Liu, Yongchao Wang, Martin Buss

The learning inefficiency of reinforcement learning (RL) from scratch hinders its practical application to continuous robotic tracking control, especially for high-dimensional robots.

Reinforcement Learning (RL)

Model-Free Incremental Adaptive Dynamic Programming Based Approximate Robust Optimal Regulation

no code implementations4 May 2021 Cong Li, Yongchao Wang, Fangzhou Liu, Qingchen Liu, Martin Buss

This paper presents a new formulation for model-free robust optimal regulation of continuous-time nonlinear systems.

Deep Attributed Network Representation Learning via Attribute Enhanced Neighborhood

no code implementations12 Apr 2021 Cong Li, Min Shi, Bo Qu, Xiang Li

In this paper, we propose a deep attributed network representation learning via attribute enhanced neighborhood (DANRL-ANE) model to improve the robustness and effectiveness of node representations.

Attribute, Link Prediction +2

Origin of the Electronic Structure in Single-Layer FeSe/SrTiO3 Films

no code implementations16 Dec 2020 Defa Liu, Xianxin Wu, Fangsen Li, Yong Hu, Jianwei Huang, Yu Xu, Cong Li, Yunyi Zang, Junfeng He, Lin Zhao, Shaolong He, Chenjia Tang, Zhi Li, Lili Wang, Qingyan Wang, Guodong Liu, Zuyan Xu, Xu-Cun Ma, Qi-Kun Xue, Jiangping Hu, X. J. Zhou

These observations not only show the first direct evidence that the electronic structure of single-layer FeSe/SrTiO3 films originates from bulk FeSe through a combined effect of an electronic phase transition and an interfacial charge transfer, but also provide a quantitative basis for theoretical models in describing the electronic structure and understanding the superconducting mechanism in single-layer FeSe/SrTiO3 films.

Band Gap, Superconductivity, Strongly Correlated Electrons

TSAM: Temporal Link Prediction in Directed Networks based on Self-Attention Mechanism

no code implementations23 Aug 2020 Jinsong Li, Jianhua Peng, Shuxin Liu, Lintianran Weng, Cong Li

In this paper, we address the problem of temporal link prediction in directed networks and propose a deep learning model based on GCN and self-attention mechanism, namely TSAM.

Link Prediction

Off-Policy Risk-Sensitive Reinforcement Learning Based Constrained Robust Optimal Control

no code implementations10 Jun 2020 Cong Li, Qingchen Liu, Zhehua Zhou, Martin Buss, Fangzhou Liu

By introducing pseudo controls and risk-sensitive input and state penalty terms, the constrained robust stabilization problem of the original system is converted into an equivalent optimal control problem of an auxiliary system.

Reinforcement Learning (RL)

Neural Input Search for Large Scale Recommendation Models

no code implementations10 Jul 2019 Manas R. Joglekar, Cong Li, Jay K. Adams, Pranav Khaitan, Quoc V. Le

During training we use reinforcement learning to find the optimal vocabulary size for each feature and embedding dimension for each value of the feature.

Retrieval

Electronic structure and $H$-$T$ phase diagram of Eu(Fe$_{1-x}$Rh$_x$)$_2$As$_2$

no code implementations28 May 2019 Shaozhu Xiao, Darren C. Peets, Wei Liu, Shiju Zhang, Ya Feng, Wen-He Jiao, Guang-Han Cao, Eike F. Schwier, Kenya Shimada, Cong Li, Xingjiang Zhou, Shaolong He

The iron-based superconductors represent a promising platform for high-temperature superconductivity, but the interactions underpinning their pairing present a puzzle.

Superconductivity, Strongly Correlated Electrons

R$^2$-CNN: Fast Tiny Object Detection in Large-Scale Remote Sensing Images

no code implementations16 Feb 2019 Jiangmiao Pang, Cong Li, Jianping Shi, Zhihai Xu, Huajun Feng

To tackle these problems, we propose a unified and self-reinforced network called remote sensing region-based convolutional neural network ($\mathcal{R}^2$-CNN), composed of a Tiny-Net backbone, an intermediate global attention block, and a final classifier and detector.

Object Detection

Multi-Task Learning Using Neighborhood Kernels

no code implementations11 Jul 2017 Niloofar Yousefi, Cong Li, Mansooreh Mollaghasemi, Georgios Anagnostopoulos, Michael Georgiopoulos

As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms, such as the uniform combination solution and convex combinations of base kernels, as well as some kernel alignment-based models that have been shown to give promising results in the past.

Multi-Task Learning

Conic Multi-Task Classification

1 code implementation20 Aug 2014 Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos

Traditionally, Multi-task Learning (MTL) models optimize the average of task-related objective functions, an intuitive approach that we will refer to as Average MTL.

Classification, General Classification +1
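The Average MTL objective described above can be sketched in a few lines: the joint objective is simply the mean of per-task losses, which couples the tasks through shared parameters. The toy tasks and the single shared scalar parameter below are hypothetical stand-ins for the task-related objective functions.

```python
# Minimal sketch of "Average MTL": optimize the average of per-task losses.
def task_loss(w, data):
    """Squared-error loss of a shared linear model y = w * x on one task."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def average_mtl_objective(w, tasks):
    """Average MTL objective: the mean of the task-related objectives."""
    return sum(task_loss(w, t) for t in tasks) / len(tasks)

tasks = [
    [(1.0, 2.0), (2.0, 4.1)],   # task 1: roughly y = 2x
    [(1.0, 1.9), (3.0, 6.2)],   # task 2: roughly y = 2x as well
]

# A coarse grid search over the shared parameter w shows how minimizing
# the average forces a single compromise solution across all tasks.
best_w = min((w / 100 for w in range(0, 400)),
             key=lambda w: average_mtl_objective(w, tasks))
print(round(best_w, 2))
```

The Conic MTL paper's point of departure is that this plain average is only one way to aggregate task objectives, and other aggregations can weight tasks differently.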

Pareto-Path Multi-Task Multiple Kernel Learning

no code implementations11 Apr 2014 Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos

A traditional and intuitively appealing Multi-Task Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing amongst tasks.

Multi-Task Learning

A Unifying Framework for Typical Multi-Task Multiple Kernel Learning Problems

no code implementations21 Jan 2014 Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos

Over the past few years, Multi-Kernel Learning (MKL) has received significant attention among data-driven feature selection techniques in the context of kernel-based learning.

Feature Selection, Multi-Task Learning

Multi-Task Classification Hypothesis Space with Improved Generalization Bounds

no code implementations9 Dec 2013 Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos

This paper presents an RKHS, in general of vector-valued functions, intended to be used as a hypothesis space for multi-task classification.

Classification, General Classification +2
