1 code implementation • Findings (NAACL) 2022 • Siyu Ren, Kenny Zhu
Pretrained masked language models (PLMs) have been shown to inherit a considerable amount of relational knowledge from their source corpora.
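As a rough illustration of how such relational knowledge is typically probed (the model name and prompt below are my assumptions, not from the paper), one can query a masked LM with a LAMA-style fill-in-the-blank relation:

```python
# Minimal sketch of probing relational knowledge in a masked LM.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Relational prompt: subject + relation, with the object masked out.
for pred in unmasker("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10s}  {pred['score']:.3f}")
```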
no code implementations • 18 Mar 2024 • Yuxin Yao, Siyu Ren, Junhui Hou, Zhi Deng, Juyong Zhang, Wenping Wang
Furthermore, we propose a learnable deformation representation based on learnable control points and blending weights, which can deform the template surface non-rigidly while preserving local shape consistency.
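A minimal sketch of the control-point idea (not the authors' exact non-rigid formulation; the RBF blending weights and function names here are assumptions for illustration):

```python
import torch

def deform(vertices, ctrl_pts, ctrl_offsets, sigma=0.1):
    """Deform template vertices by blending learnable control-point offsets.

    vertices:     (V, 3) template surface points
    ctrl_pts:     (K, 3) learnable control-point positions
    ctrl_offsets: (K, 3) learnable per-control-point displacements
    """
    # Blending weights from Gaussian RBFs on vertex-to-control distances;
    # each row sums to 1, so nearby control points dominate smoothly.
    d2 = torch.cdist(vertices, ctrl_pts) ** 2          # (V, K)
    w = torch.softmax(-d2 / (2 * sigma ** 2), dim=-1)
    return vertices + w @ ctrl_offsets                 # (V, 3)
```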
no code implementations • 2 Mar 2024 • Yiming Zeng, Junhui Hou, Qijian Zhang, Siyu Ren, Wenping Wang
The structured nature of our SPCV representation allows for the seamless adaptation of well-established 2D image/video techniques, enabling efficient and effective processing and analysis of 3D point cloud sequences.
1 code implementation • 9 Feb 2024 • Siyu Ren, Kenny Q. Zhu
Despite the recent success of Large Language Models (LLMs), they remain cost-prohibitive to deploy in resource-constrained environments due to their excessive memory and computational demands.
no code implementations • 23 Jan 2024 • Yifan Zhang, Siyu Ren, Junhui Hou, Jinjian Wu, Guangming Shi
First, we propose the learnable transformation alignment to bridge the domain gap between image and point cloud data, converting features into a unified representation space for effective comparison and matching.
no code implementations • 18 Jan 2024 • Siyu Ren, Junhui Hou, Xiaodong Chen, Hongkai Xiong, Wenping Wang
We then cast the discrepancy between two 3D geometric models as the discrepancy between their DDFs defined on an identical domain, which naturally establishes model correspondence.
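In the simplest form, such a discrepancy can be estimated by sampling a shared (point, direction) domain and comparing the two fields there; the sketch below assumes the DDFs are already available as callables (a simplification of the paper's setup):

```python
import numpy as np

def ddf_discrepancy(ddf_a, ddf_b, n_samples=4096, seed=0):
    """Mean absolute DDF gap over a shared (point, direction) domain.

    ddf_a, ddf_b: callables mapping (points (N,3), dirs (N,3)) -> (N,) distances
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, 3))    # common query points
    dirs = rng.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    return np.mean(np.abs(ddf_a(pts, dirs) - ddf_b(pts, dirs)))
```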
no code implementations • 15 Nov 2023 • Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu
Although Large Language Models (LLMs) demonstrate a remarkable ability to process and generate human-like text, they have limitations in comprehending and expressing world knowledge that extends beyond the boundaries of natural language (e.g., chemical molecular formulas).
1 code implementation • 18 Oct 2023 • Qi Jia, Siyu Ren, Yizhu Liu, Kenny Q. Zhu
Despite tremendous improvements in natural language generation, summarization models still suffer from the unfaithfulness issue.
1 code implementation • 12 Oct 2023 • Siyu Ren, Qi Jia, Kenny Q. Zhu
The quadratic complexity of the attention module causes it to gradually become the bulk of the compute in Transformer-based LLMs during generation.
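A back-of-the-envelope calculation makes the quadratic scaling concrete (head sizes below are illustrative, not from the paper):

```python
# The attention score matrix is L x L per head, so both its compute and its
# memory grow quadratically with sequence length L.
d_head, n_heads = 64, 32
for L in (1024, 4096, 16384):
    flops = 2 * n_heads * L * L * d_head    # QK^T matmul only, per layer
    mem_mb = n_heads * L * L * 2 / 2**20    # fp16 score matrix, in MiB
    print(f"L={L:6d}  QK^T FLOPs={flops:.2e}  scores={mem_mb:8.1f} MiB")
```

Going from L=1024 to L=16384 multiplies both quantities by 256, which is why long-context generation is dominated by attention.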
1 code implementation • 7 Oct 2023 • Siyu Ren, Zhiyong Wu, Kenny Q. Zhu
In this paper, we propose Earth Mover Distance Optimization (EMO) for auto-regressive language modeling.
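The paper's EMO objective is not reproduced here; purely as background, the sketch below shows a generic entropy-regularized earth mover distance (Sinkhorn) between a predicted token distribution and a target, with a cost matrix that could come from, e.g., pairwise embedding distances:

```python
import torch

def sinkhorn_emd(p, q, cost, eps=0.1, iters=50):
    """Entropy-regularized EMD between probability vectors p, q of size V.

    cost: (V, V) transport cost matrix, e.g. distances between token embeddings.
    """
    K = torch.exp(-cost / eps)          # Gibbs kernel
    u = torch.ones_like(p)
    for _ in range(iters):              # Sinkhorn fixed-point updates
        v = q / (K.t() @ u)
        u = p / (K @ v)
    plan = u[:, None] * K * v[None, :]  # approximate optimal transport plan
    return (plan * cost).sum()
```

Unlike cross-entropy, such a loss credits predictions that put mass on tokens close to the target rather than only on the exact target.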
no code implementations • 25 Jun 2023 • Siyu Ren, Kenny Q. Zhu
The components underpinning PLMs -- large weight matrices -- were shown to bear considerable redundancy.
1 code implementation • 1 Jun 2023 • Siyu Ren, Junhui Hou
Each reference point is associated with the two given point clouds by computing its directional distances to them; the difference between an identical reference point's directional distances to the two clouds then characterizes the geometric difference between their corresponding local regions.
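A simplified version of this idea, with the paper's directional distances replaced by plain unsigned nearest-neighbor distances (an assumption for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_difference(ref_pts, cloud_a, cloud_b):
    """Per-reference-point geometric gap between two point clouds, using
    nearest-neighbor distances as a stand-in for directional distances."""
    da = cKDTree(cloud_a).query(ref_pts)[0]  # distance to nearest point in A
    db = cKDTree(cloud_b).query(ref_pts)[0]  # distance to nearest point in B
    return np.abs(da - db)                   # large gap => local shapes differ
```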
1 code implementation • 21 May 2023 • Siyu Ren, Kenny Q. Zhu
Iterative pruning is one of the most effective compression methods for pre-trained language models.
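The generic iterative-pruning recipe alternates pruning with fine-tuning; the sketch below uses plain L1 magnitude pruning (not the paper's specific criterion) and assumes a user-supplied `train_one_epoch`:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def iterative_prune(model, train_one_epoch, rounds=5, amount=0.2):
    """Alternate magnitude pruning and fine-tuning. Each round removes
    `amount` of the currently surviving weights, then recovers accuracy
    by further training."""
    linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
    for _ in range(rounds):
        for m in linears:
            prune.l1_unstructured(m, name="weight", amount=amount)
        train_one_epoch(model)  # fine-tune to recover lost accuracy
```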
1 code implementation • ICCV 2023 • Siyu Ren, Junhui Hou, Xiaodong Chen, Ying He, Wenping Wang
We present a learning-based method, namely GeoUDF, to tackle the long-standing and challenging problem of reconstructing a discrete surface from a sparse point cloud. Specifically, we propose a geometry-guided learning method for UDF and its gradient estimation that explicitly formulates the unsigned distance of a query point as the learnable affine averaging of its distances to the tangent planes of its neighboring points on the surface.
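A minimal sketch of this tangent-plane formulation; in the paper the affine-averaging weights are predicted by a network, whereas here they are taken as given (any (Q, k) tensor whose rows sum to 1):

```python
import torch

def udf_estimate(query, surf_pts, normals, weights, k=8):
    """Unsigned distance of `query` (Q,3) as a weighted average of its
    distances to the tangent planes of the k nearest surface points."""
    _, idx = torch.cdist(query, surf_pts).topk(k, largest=False)
    p = surf_pts[idx]                                    # (Q, k, 3)
    n = normals[idx]                                     # (Q, k, 3)
    # Distance from the query to each neighbor's tangent plane: |(q - p) . n|
    plane_d = ((query[:, None, :] - p) * n).sum(-1).abs()
    return (weights * plane_d).sum(-1)                   # (Q,)
```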
no code implementations • 18 Oct 2022 • Qi Jia, Yizhu Liu, Siyu Ren, Kenny Q. Zhu
Abstractive dialogue summarization aims to generate a concise and fluent summary that covers the salient information in a dialogue among two or more interlocutors.
1 code implementation • 12 Jul 2022 • Siyu Ren, Yiming Zeng, Junhui Hou, Xiaodong Chen
Motivated by the intuition that the critical step in localizing a 2D image within the corresponding 3D point cloud is establishing 2D-3D correspondence between them, we propose the first feature-based dense correspondence framework for the image-to-point cloud registration problem, dubbed CorrI2P, which consists of three modules, i.e., feature embedding, symmetric overlapping region detection, and pose estimation through the established correspondence.
Ranked #1 on Image to Point Cloud Registration on KITTI
1 code implementation • NAACL 2022 • Siyu Ren, Kenny Q. Zhu
Current text-image approaches (e.g., CLIP) typically adopt a dual-encoder architecture using pre-trained vision-language representations.
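For reference, a dual encoder scores text-image pairs by embedding each side independently and comparing the embeddings; the public CLIP checkpoint makes this concrete (the image path below is a placeholder):

```python
# Dual-encoder scoring with CLIP via Hugging Face transformers.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=Image.open("photo.jpg"),  # placeholder path
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
```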
1 code implementation • EMNLP 2020 • Qi Jia, Yizhu Liu, Siyu Ren, Kenny Q. Zhu, Haifeng Tang
In this paper, we propose a dialogue extraction algorithm to transform a dialogue history into threads based on their dependency relations.
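A toy version of thread extraction, with the paper's learned dependency relations simplified to explicit reply links (an assumption for illustration):

```python
def extract_threads(utterances, reply_to):
    """Split a dialogue history into threads by following reply links.

    utterances: list of strings; reply_to[i]: index replied to, or None.
    """
    threads, owner = [], {}
    for i, u in enumerate(utterances):
        parent = reply_to[i]
        if parent is None:            # no dependency: start a new thread
            owner[i] = len(threads)
            threads.append([u])
        else:                         # attach to the parent's thread
            owner[i] = owner[parent]
            threads[owner[i]].append(u)
    return threads
```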
no code implementations • 21 Apr 2020 • Siyu Ren, Kenny Q. Zhu
In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions, which incorporates a general-purpose knowledge base to effectively create a small distractor candidate set, and a feature-rich learning-to-rank model to select distractors that are both plausible and reliable.
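Schematically, the two-stage pipeline looks like the following hypothetical interface (the `kb_siblings` and `score` functions stand in for the knowledge-base retrieval and the learning-to-rank model, respectively):

```python
def generate_distractors(answer, kb_siblings, score, k=3):
    """Two-stage distractor selection: a knowledge base proposes a small
    candidate set of terms related to the answer, and a scoring function
    orders them by plausibility."""
    candidates = [c for c in kb_siblings(answer) if c != answer]
    return sorted(candidates, key=score, reverse=True)[:k]

# e.g. for the answer "Paris", kb_siblings might return other capital
# cities, and `score` would combine plausibility and reliability features.
```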