Search Results for author: Binbin Lin

Found 42 papers, 19 papers with code

ChemAgent: Enhancing LLMs for Chemistry and Materials Science through Tree-Search Based Tool Learning

no code implementations · 9 Jun 2025 · Mengsong Wu, Yafei Wang, Yidong Ming, Yuqi An, Yuwei Wan, Wenliang Chen, Binbin Lin, Yuqiang Li, Tong Xie, Dongzhan Zhou

Large language models (LLMs) have recently demonstrated promising capabilities in chemistry tasks while still facing challenges due to outdated pretraining knowledge and the difficulty of incorporating specialized chemical expertise.

Information Retrieval

InsQABench: Benchmarking Chinese Insurance Domain Question Answering with Large Language Models

1 code implementation · 19 Jan 2025 · Jing Ding, Kai Feng, Binbin Lin, Jiarui Cai, Qiushi Wang, Yu Xie, Xiaojin Zhang, Zhongyu Wei, Wei Chen

The application of large language models (LLMs) has achieved remarkable success in various fields, but their effectiveness in specialized domains like the Chinese insurance industry remains underexplored.

Benchmarking · Question Answering +1

Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control

no code implementations · 4 Nov 2024 · Yuxin Xiao, Chaoqun Wan, Yonggang Zhang, Wenxiao Wang, Binbin Lin, Xiaofei He, Xu Shen, Jieping Ye

This technique leverages semantic features to control the representation of LLM's intermediate hidden states, enabling the model to meet specific requirements such as increased honesty or heightened safety awareness.
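The control described in the snippet above can be sketched as adding a scaled semantic direction to selected components of an intermediate hidden state. Everything below (the function name, the mask, the scale `alpha`) is a hypothetical illustration of activation steering, not the paper's actual implementation.

```python
import numpy as np

def sparse_steer(hidden, direction, mask, alpha=4.0):
    """Hypothetical sketch of sparse activation control: add a scaled
    semantic feature direction (e.g. an 'honesty' feature) to a hidden
    state, but only at the components selected by a sparse mask."""
    direction = direction / (np.linalg.norm(direction) + 1e-9)  # unit-norm feature
    return hidden + alpha * mask * direction

hidden = np.zeros(8)                            # toy hidden state
direction = np.ones(8)                          # toy semantic direction
mask = np.array([1, 0, 0, 1, 0, 0, 0, 0])       # steer only two components
steered = sparse_steer(hidden, direction, mask)
```

Because the mask is sparse, only the selected components move, leaving the rest of the representation untouched.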

SciPIP: An LLM-based Scientific Paper Idea Proposer

1 code implementation · 30 Oct 2024 · Wenxiao Wang, Lihui Gu, Liye Zhang, Yunxiang Luo, Yi Dai, Chen Shen, Liang Xie, Binbin Lin, Xiaofei He, Jieping Ye

Based on a user-provided research background, SciPIP retrieves helpful papers from a literature database while leveraging the capabilities of LLMs to generate more novel and feasible ideas.

Retrieval
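The retrieval step in the snippet above can be sketched as a cosine-similarity search over embedded papers. The function and variable names below are illustrative assumptions, not SciPIP's actual API.

```python
import numpy as np

def retrieve_papers(background_vec, paper_vecs, k=2):
    """Hypothetical sketch of literature retrieval: embed the user's
    research background, rank papers by cosine similarity, return top-k."""
    sims = paper_vecs @ background_vec / (
        np.linalg.norm(paper_vecs, axis=1) * np.linalg.norm(background_vec) + 1e-9
    )
    return np.argsort(-sims)[:k]

papers = np.array([[1.0, 0.0],   # toy paper embeddings
                   [0.0, 1.0],
                   [0.7, 0.7]])
query = np.array([1.0, 0.1])     # toy background embedding
top = retrieve_papers(query, papers)
```

The retrieved papers would then be fed to the LLM as context for idea generation.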

Delving into the Reversal Curse: How Far Can Large Language Models Generalize?

1 code implementation · 24 Oct 2024 · Zhengkai Lin, Zhihang Fu, Kai Liu, Liang Xie, Binbin Lin, Wenxiao Wang, Deng Cai, Yue Wu, Jieping Ye

(2) This generalization ability is highly correlated to the structure of the fact "A is B" in the training documents.

Multiple-choice

Depth Any Video with Scalable Synthetic Data

1 code implementation · 14 Oct 2024 · Honghui Yang, Di Huang, Wei Yin, Chunhua Shen, Haifeng Liu, Xiaofei He, Binbin Lin, Wanli Ouyang, Tong He

Video depth estimation has long been hindered by the scarcity of consistent and scalable ground truth data, leading to inconsistent and unreliable results.

Depth Estimation

From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning

no code implementations · 3 Sep 2024 · Wei Chen, Zhen Huang, Liang Xie, Binbin Lin, Houqiang Li, Le Lu, Xinmei Tian, Deng Cai, Yonggang Zhang, Wenxiao Wang, Xu Shen, Jieping Ye

Recent works propose supervised fine-tuning (SFT) to mitigate the sycophancy issue, but it typically degrades LLMs' general capability.

Semi-supervised 3D Object Detection with PatchTeacher and PillarMix

1 code implementation · 13 Jul 2024 · Xiaopei Wu, Liang Peng, Liang Xie, Yuenan Hou, Binbin Lin, Xiaoshui Huang, Haifeng Liu, Deng Cai, Wanli Ouyang

In this paper, we propose PatchTeacher, which focuses on partial scene 3D object detection to provide high-quality pseudo labels for the student.

3D Object Detection · Data Augmentation +2

AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning

1 code implementation · 25 May 2024 · Minghao Chen, Yihang Li, Yanting Yang, Shiyu Yu, Binbin Lin, Xiaofei He

We introduce AutoManual, a framework enabling LLM agents to autonomously build their understanding through interaction and adapt to new environments.

G2LTraj: A Global-to-Local Generation Approach for Trajectory Prediction

1 code implementation · 30 Apr 2024 · Zhanwei Zhang, Zishuo Hua, Minghao Chen, Wei Lu, Binbin Lin, Deng Cai, Wenxiao Wang

Finally, to ensure the optimal granularity of key steps, we design a selectable granularity strategy that caters to each predicted trajectory.

Autonomous Driving · Trajectory Prediction

NeRF-Det++: Incorporating Semantic Cues and Perspective-aware Depth Supervision for Indoor Multi-View 3D Detection

1 code implementation · 22 Feb 2024 · Chenxi Huang, Yuenan Hou, Weicai Ye, Di Huang, Xiaoshui Huang, Binbin Lin, Deng Cai, Wanli Ouyang

We project the freely available 3D segmentation annotations onto the 2D plane and leverage the corresponding 2D semantic maps as the supervision signal, significantly enhancing the semantic awareness of multi-view detectors.

Depth Estimation · Depth Prediction +2
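The projection of 3D annotations onto the 2D plane described in the snippet above can be sketched with a standard pinhole camera model. The intrinsics and point coordinates below are toy values, not the paper's setup.

```python
import numpy as np

def project_to_image(points_3d, K):
    """Sketch of projecting labeled 3D points (already in camera
    coordinates) onto the image plane with pinhole intrinsics K."""
    uv = (K @ points_3d.T).T       # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

K = np.array([[500.0,   0.0, 320.0],   # toy intrinsics: focal 500, center (320, 240)
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0],       # point on the optical axis, 2 m away
                [1.0, 0.0, 2.0]])      # point 1 m to the right
pix = project_to_image(pts, K)
```

Rasterizing the projected segmentation labels at these pixel locations would yield the 2D semantic maps used as a supervision signal.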

Model Compression and Efficient Inference for Large Language Models: A Survey

no code implementations · 15 Feb 2024 · Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He

However, large language models have two prominent characteristics compared to smaller models: (1) most compression algorithms require fine-tuning or even retraining the model after compression.

Knowledge Distillation · Model Compression +1

Efficient Long-Short Temporal Attention Network for Unsupervised Video Object Segmentation

no code implementations · 21 Sep 2023 · Ping Li, Yu Zhang, Li Yuan, Huaxin Xiao, Binbin Lin, Xianghua Xu

Unsupervised Video Object Segmentation (VOS) aims at identifying the contours of primary foreground objects in videos without any prior knowledge.

Semantic Segmentation · Unsupervised Video Object Segmentation +1

NormKD: Normalized Logits for Knowledge Distillation

1 code implementation · 1 Aug 2023 · Zhihao Chi, Tu Zheng, Hengjia Li, Zheng Yang, Boxi Wu, Binbin Lin, Deng Cai

In this paper, we revisit the hyper-parameter temperature and show that, as a single value, it cannot sufficiently distill the knowledge from every sample.

Image Classification +1
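The idea of replacing a single distillation temperature with a per-sample one can be sketched by scaling each sample's temperature with the spread of its logits. The exact normalization below is an assumption for illustration, not NormKD's published formula.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def normkd_soft_targets(logits, base_temp=1.0):
    """Give every sample its own temperature, proportional to the spread
    of its logits, so each sample is softened to a comparable degree."""
    std = logits.std(axis=-1, keepdims=True) + 1e-9  # avoid division by zero
    return softmax(logits / (base_temp * std), axis=-1)

teacher_logits = np.array([[8.0, 2.0, 1.0],    # confident sample
                           [3.0, 2.9, 2.8]])   # near-flat sample
soft = normkd_soft_targets(teacher_logits)
```

With a single fixed temperature, the confident sample would stay near one-hot while the flat one would carry almost no signal; normalizing by the per-sample spread softens both to a similar degree.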

PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer

1 code implementation · CVPR 2023 · Honghui Yang, Wenxiao Wang, Minghao Chen, Binbin Lin, Tong He, Hua Chen, Xiaofei He, Wanli Ouyang

The key to associating the two different representations is our introduced input-dependent Query Initialization module, which efficiently generates reference points and content queries.

Autonomous Driving · Quantization

Neural Collapse Inspired Federated Learning with Non-iid Data

no code implementations · 27 Mar 2023 · Chenxi Huang, Liang Xie, Yibo Yang, Wenxiao Wang, Binbin Lin, Deng Cai

One of the challenges in federated learning is the non-independent and identically distributed (non-iid) characteristics between heterogeneous devices, which cause significant differences in local updates and affect the performance of the central server.

Federated Learning

CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale Attention

1 code implementation · 13 Mar 2023 · Wenxiao Wang, Wei Chen, Qibo Qiu, Long Chen, Boxi Wu, Binbin Lin, Xiaofei He, Wei Liu

On the one hand, CEL blends each token with multiple patches of different scales, providing the self-attention module itself with cross-scale features.

Image Classification +4
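The cross-scale embedding layer (CEL) described in the snippet above can be sketched as several patch embeddings with different kernel sizes but a shared stride, concatenated channel-wise so every token mixes features from multiple scales. The class name, channel splits, and kernel sizes below are illustrative assumptions, not CrossFormer++'s actual configuration.

```python
import torch
import torch.nn as nn

class CrossScaleEmbedding(nn.Module):
    """Sketch of a cross-scale embedding layer: each token concatenates
    patch embeddings sampled with several kernel sizes; the stride is
    shared so the spatial grids of all scales align."""
    def __init__(self, in_ch=3, dims=(32, 16, 16), kernels=(4, 8, 16), stride=4):
        super().__init__()
        self.projs = nn.ModuleList(
            # padding keeps every branch on the same output grid
            nn.Conv2d(in_ch, d, kernel_size=k, stride=stride, padding=(k - stride) // 2)
            for d, k in zip(dims, kernels)
        )

    def forward(self, x):
        return torch.cat([p(x) for p in self.projs], dim=1)

cel = CrossScaleEmbedding()
tokens = cel(torch.randn(1, 3, 64, 64))  # token grid with channels from all scales
```

A 64x64 input yields a 16x16 token grid whose 64 channels are split across the three kernel sizes, giving the self-attention module cross-scale features per token.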

OBMO: One Bounding Box Multiple Objects for Monocular 3D Object Detection

1 code implementation · 20 Dec 2022 · Chenxi Huang, Tong He, Haidong Ren, Wenxiao Wang, Binbin Lin, Deng Cai

Unfortunately, the network cannot accurately distinguish different depths from such non-discriminative visual features, resulting in unstable depth training.

Monocular 3D Object Detection · Object Detection

GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds

1 code implementation · CVPR 2023 · Honghui Yang, Tong He, Jiaheng Liu, Hua Chen, Boxi Wu, Binbin Lin, Xiaofei He, Wanli Ouyang

In contrast to previous 3D MAE frameworks, which either design a complex decoder to infer masked information from maintained regions or adopt sophisticated masking strategies, we instead propose a much simpler paradigm.

Decoder

Boosting Semi-Supervised 3D Object Detection with Semi-Sampling

no code implementations · 14 Nov 2022 · Xiaopei Wu, Yang Zhao, Liang Peng, Hua Chen, Xiaoshui Huang, Binbin Lin, Haifeng Liu, Deng Cai, Wanli Ouyang

When training a teacher-student semi-supervised framework, we randomly paste ground-truth samples and pseudo samples into both labeled and unlabeled frames, providing strong data augmentation for both.

3D Object Detection · Data Augmentation +2
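The sampling scheme in the snippet above can be sketched as drawing objects from two banks, one of ground-truth samples and one of pseudo-labeled samples, and pasting them into a frame regardless of whether the frame itself is labeled. All names and the list-based scene representation below are simplifying assumptions.

```python
import random

def semi_sample(frame_objects, gt_bank, pseudo_bank, num_each=2, seed=0):
    """Hypothetical sketch of semi-sampling: paste objects from both a
    ground-truth bank and a pseudo-label bank into a frame. A real
    pipeline would also merge the objects' LiDAR points into the scene
    and reject samples that collide with existing boxes."""
    rng = random.Random(seed)
    pasted = rng.sample(gt_bank, min(num_each, len(gt_bank)))
    pasted += rng.sample(pseudo_bank, min(num_each, len(pseudo_bank)))
    return frame_objects + pasted

augmented = semi_sample(frame_objects=["car_0"],
                        gt_bank=["gt_car_1", "gt_ped_2", "gt_car_3"],
                        pseudo_bank=["pseudo_car_4", "pseudo_ped_5"])
```

Applying the same pasting to labeled and unlabeled frames is what makes the augmentation uniform across the teacher-student split.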

SkipNode: On Alleviating Performance Degradation for Deep Graph Convolutional Networks

1 code implementation · 22 Dec 2021 · Weigang Lu, Yibing Zhan, Binbin Lin, Ziyu Guan, Liu Liu, Baosheng Yu, Wei Zhao, Yaming Yang, DaCheng Tao

In this paper, we conduct theoretical and experimental analysis to explore the fundamental causes of performance degradation in deep GCNs: over-smoothing and gradient vanishing have a mutually reinforcing effect that causes the performance to deteriorate more quickly in deep GCNs.

Link Prediction · Node Classification

CrossFormer: A Versatile Vision Transformer Hinging on Cross-scale Attention

4 code implementations · ICLR 2022 · Wenxiao Wang, Lu Yao, Long Chen, Binbin Lin, Deng Cai, Xiaofei He, Wei Liu

On the one hand, CEL blends each embedding with multiple patches of different scales, providing the self-attention module itself with cross-scale features.

Image Classification +5

Stochastic Coordinate Coding and Its Application for Drosophila Gene Expression Pattern Annotation

no code implementations · 30 Jul 2014 · Binbin Lin, Qingyang Li, Qian Sun, Ming-Jun Lai, Ian Davidson, Wei Fan, Jieping Ye

The effectiveness of gene expression pattern annotation relies on the quality of feature representation.

Geodesic Distance Function Learning via Heat Flow on Vector Fields

no code implementations · 1 May 2014 · Binbin Lin, Ji Yang, Xiaofei He, Jieping Ye

Based on our theoretical analysis, we propose to first learn the gradient field of the distance function and then learn the distance function itself.
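The two-stage idea in the snippet above, learning the gradient field first and then the distance function itself, can be sketched via the standard property that a geodesic distance function has a unit-norm gradient. The recovery objective written below is an illustrative formulation, not necessarily the paper's exact one.

```latex
\|\nabla d\| = 1
\quad\text{(a geodesic distance function has unit-norm gradient)}

\min_{d}\int_{M}\big\|\nabla d - V\big\|^{2}\,dx
\quad\text{(recover } d \text{ from a learned field } V \approx \nabla d\text{)}
```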

Multi-task Vector Field Learning

no code implementations · NeurIPS 2012 · Binbin Lin, Sen Yang, Chiyuan Zhang, Jieping Ye, Xiaofei He

MTVFL has the following key properties: (1) the vector fields we learned are close to the gradient fields of the prediction functions; (2) within each task, the vector field is required to be as parallel as possible which is expected to span a low dimensional subspace; (3) the vector fields from all tasks share a low dimensional subspace.

Multi-Task Learning

Semi-supervised Regression via Parallel Field Regularization

no code implementations · NeurIPS 2011 · Binbin Lin, Chiyuan Zhang, Xiaofei He

To achieve this goal, we show that the second order smoothness measures the linearity of the function, and the gradient field of a linear function has to be a parallel vector field.

Regression
