Search Results for author: Junlei Zhang

Found 7 papers, 5 papers with code

AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents

2 code implementations • 24 Jan 2024 • Chang Ma, Junlei Zhang, Zhihao Zhu, Cheng Yang, Yujiu Yang, Yaohui Jin, Zhenzhong Lan, Lingpeng Kong, Junxian He

Evaluating large language models (LLMs) as general-purpose agents is essential for understanding their capabilities and facilitating their integration into practical applications.

Benchmarking

PsyBench: a balanced and in-depth Psychological Chinese Evaluation Benchmark for Foundation Models

no code implementations • 16 Nov 2023 • Junlei Zhang, Hongliang He, Nirui Song, Shuyuan He, Shuai Zhang, Huachuan Qiu, Anqi Li, Lizhi Ma, Zhenzhong Lan

As Large Language Models (LLMs) become prevalent in various fields, there is an urgent need for improved NLP benchmarks that encompass all the necessary knowledge of individual disciplines.

Multiple-choice

Contrastive Learning of Sentence Embeddings from Scratch

2 code implementations • 24 May 2023 • Junlei Zhang, Zhenzhong Lan, Junxian He

Contrastive learning has been the dominant approach to train state-of-the-art sentence embeddings.

Contrastive Learning Natural Language Inference +3
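For context on the objective these sentence-embedding methods share: contrastive training typically uses an InfoNCE loss with in-batch negatives. A minimal NumPy sketch of that standard loss (illustrative only; the function name, temperature value, and toy data are assumptions, not the paper's code):

```python
import numpy as np

def info_nce(z1, z2, tau=0.05):
    """InfoNCE with in-batch negatives: z1[i] and z2[i] embed the same
    sentence (the positive pair); every other row of z2 serves as a
    negative. Rows are assumed L2-normalized; tau is the temperature."""
    sim = z1 @ z2.T / tau                       # (batch, batch) cosine / tau
    sim -= sim.max(axis=1, keepdims=True)       # numerically stable softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))   # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))                     # toy batch of embeddings
z /= np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
# Matched views give a lower loss than mismatched ones:
low, high = info_nce(z, z), info_nce(z, z[::-1])
```

Minimizing this loss pulls each positive pair together while pushing it away from the other sentences in the batch.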

C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models

1 code implementation • NeurIPS 2023 • Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, Junxian He

We present C-Eval, the first comprehensive Chinese evaluation suite designed to assess advanced knowledge and reasoning abilities of foundation models in a Chinese context.

Multiple-choice

Instance Smoothed Contrastive Learning for Unsupervised Sentence Embedding

1 code implementation • 12 May 2023 • Hongliang He, Junlei Zhang, Zhenzhong Lan, Yue Zhang

Contrastive learning-based methods, such as unsup-SimCSE, have achieved state-of-the-art (SOTA) performances in learning unsupervised sentence embeddings.

Contrastive Learning Semantic Similarity +6

S-SimCSE: Sampled Sub-networks for Contrastive Learning of Sentence Embedding

no code implementations • 23 Nov 2021 • Junlei Zhang, Zhenzhong Lan

The corresponding outputs, two sentence embeddings derived from the same sentence with different dropout masks, can be used to build a positive pair.

Contrastive Learning Data Augmentation +4
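The dropout trick described in the snippet above can be sketched in a few lines: passing the same input twice through an encoder with dropout active yields two distinct embeddings that form a positive pair. This is a toy NumPy encoder (the weights, dropout rate, and function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, drop_p=0.3):
    """Toy encoder: a fixed linear map followed by dropout.
    A fresh dropout mask is sampled on every forward pass, so two
    passes over the same input generally give different embeddings."""
    W = np.arange(32, dtype=float).reshape(4, 8)  # fixed toy "weights"
    h = x @ W
    mask = rng.random(h.shape) > drop_p           # fresh dropout mask
    return h * mask / (1.0 - drop_p)              # inverted-dropout scaling

x = np.ones(4)                  # one toy "sentence" representation
z1, z2 = encode(x), encode(x)   # same sentence, two dropout masks -> positive pair
```

With `drop_p=0.0` the two passes coincide exactly; with dropout on, `z1` and `z2` differ yet remain similar, which is what makes them usable as a positive pair in a contrastive objective.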

Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts

1 code implementation • NeurIPS 2020 • Guilin Li, Junlei Zhang, Yunhe Wang, Chuanjian Liu, Matthias Tan, Yunfeng Lin, Wei Zhang, Jiashi Feng, Tong Zhang

In particular, we propose a novel joint-training framework to train plain CNN by leveraging the gradients of the ResNet counterpart.
