Search Results for author: Long Li

Found 8 papers, 4 papers with code

How Do Humans Write Code? Large Models Do It the Same Way Too

no code implementations · 24 Feb 2024 · Long Li

We observe that when LLMs solve mathematical problems using code, they tend to generate more incorrect reasoning than when using natural language.

Code Generation · Natural Language Inference

Agents: An Open-source Framework for Autonomous Language Agents

1 code implementation · 14 Sep 2023 · Wangchunshu Zhou, Yuchen Eleanor Jiang, Long Li, Jialong Wu, Tiannan Wang, Shi Qiu, Jintian Zhang, Jing Chen, Ruipu Wu, Shuai Wang, Shiding Zhu, Jiyu Chen, Wentao Zhang, Xiangru Tang, Ningyu Zhang, Huajun Chen, Peng Cui, Mrinmaya Sachan

Recent advances in large language models (LLMs) enable researchers and developers to build autonomous language agents that can automatically solve various tasks and interact with environments, humans, and other agents through natural language interfaces.

Discriminative Co-Saliency and Background Mining Transformer for Co-Salient Object Detection

1 code implementation · CVPR 2023 · Long Li, Junwei Han, Ni Zhang, Nian Liu, Salman Khan, Hisham Cholakkal, Rao Muhammad Anwer, Fahad Shahbaz Khan

We use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules.

Computational Efficiency · Co-Salient Object Detection · +3
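The pixel-to-token correlation described in this entry can be pictured as dot-product attention between pixel features and a pre-defined token. The sketch below is a generic illustration under that assumption, not the paper's actual module; the function name, shapes, and plain-Python representation are all hypothetical stand-ins.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def pixel_to_token_correlation(pixel_feats, token):
    """Correlate each pixel feature vector with one pre-defined token.

    pixel_feats: list of N feature vectors, each a list of C floats
    token:       one C-dimensional token vector
    Returns a softmax-normalized attention map over the N pixels --
    a generic stand-in for the correlation the paper computes.
    """
    scores = [sum(p * t for p, t in zip(pix, token)) for pix in pixel_feats]
    return softmax(scores)
```

Pixels whose features align with the token get higher attention weight, which is the general idea behind mining co-saliency (or background) regions with a dedicated token.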

Nebula-I: A General Framework for Collaboratively Training Deep Learning Models on Low-Bandwidth Cloud Clusters

1 code implementation · 19 May 2022 · Yang Xiang, Zhihua Wu, Weibao Gong, Siyu Ding, Xianjie Mo, Yuang Liu, Shuohuan Wang, Peng Liu, Yongshuai Hou, Long Li, Bin Wang, Shaohuai Shi, Yaqian Han, Yue Yu, Ge Li, Yu Sun, Yanjun Ma, Dianhai Yu

We take natural language processing (NLP) as an example to show how Nebula-I works across different training phases: a) pre-training a multilingual language model using two remote clusters, and b) fine-tuning a machine translation model with knowledge distilled from pre-trained models, covering the most popular paradigms of recent deep learning.

Cross-Lingual Natural Language Inference · Distributed Computing · +2
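The fine-tuning phase mentioned above uses knowledge distilled from pre-trained models. A minimal sketch of the standard distillation loss (Hinton et al.'s temperature-softened KL divergence) is shown below; this is a generic illustration, not Nebula-I's actual training code, and the function names are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Scaled by T^2, as in the standard formulation, so gradient
    magnitudes stay comparable to the hard-label loss.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

The loss is zero when the student matches the teacher's softened distribution and grows as the two diverge, which is what lets a smaller translation model inherit behavior from a larger pre-trained one.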

Fast Electromagnetic Validations of Large-Scale Digital Coding Metasurfaces Accelerated by Recurrence Rebuild and Retrieval Method

no code implementations · 4 Dec 2021 · Yu Zhao, Shang Xiang, Long Li

The recurrence rebuild and retrieval method (R3M) is proposed in this paper to accelerate the electromagnetic (EM) validations of large-scale digital coding metasurfaces (DCMs).

Retrieval

Instance-Level Relative Saliency Ranking with Graph Reasoning

no code implementations · 8 Jul 2021 · Nian Liu, Long Li, Wangbo Zhao, Junwei Han, Ling Shao

Conventional salient object detection models cannot differentiate the importance of different salient objects.

Image Retargeting · Object Detection · +2

Weakly Supervised Video Salient Object Detection

1 code implementation · CVPR 2021 · Wangbo Zhao, Jing Zhang, Long Li, Nick Barnes, Nian Liu, Junwei Han

Fully supervised video salient object detection has achieved significant performance improvements using pixel-wise labeled training datasets, which are time-consuming and expensive to obtain.

Object · Object Detection · +4
