Search Results for author: Zhouhan Lin

Found 51 papers, 37 papers with code

Cluster-wise Graph Transformer with Dual-granularity Kernelized Attention

1 code implementation · 9 Oct 2024 · Siyuan Huang, Yunchong Song, Jiayue Zhou, Zhouhan Lin

In the realm of graph learning, there is a category of methods that conceptualize graphs as hierarchical structures, utilizing node clustering to capture broader structural information.

Graph Learning · Node Clustering

Enhancing SPARQL Generation by Triplet-order-sensitive Pre-training

1 code implementation · 8 Oct 2024 · Chang Su, Jiexing Qi, He Yan, Kai Zou, Zhouhan Lin

Semantic parsing that translates natural language queries to SPARQL is of great importance for Knowledge Graph Question Answering (KGQA) systems.

Graph Question Answering · Language Modelling +5

Training-free LLM-generated Text Detection by Mining Token Probability Sequences

no code implementations · 8 Oct 2024 · Yihuai Xu, Yongwei Wang, Yifei Bi, Huangsen Cao, Zhouhan Lin, Yu Zhao, Fei Wu

Despite this, existing training-free detection methods typically rely on global text sequence statistics, neglecting the modeling of local discriminative features, thereby limiting their detection efficacy.

LLM-generated Text Detection · Text Detection +1

Mirror-Consistency: Harnessing Inconsistency in Majority Voting

no code implementations · 7 Oct 2024 · Siyuan Huang, Zhiyuan Ma, Jintao Du, Changhua Meng, Weiqiang Wang, Zhouhan Lin

Self-Consistency, a widely-used decoding strategy, significantly boosts the reasoning capabilities of Large Language Models (LLMs).
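
Self-Consistency itself is simple to state: sample several reasoning paths and keep the most frequent final answer, which Mirror-Consistency then refines by examining the disagreements rather than discarding them. A minimal sketch of the majority-voting baseline, where `sample_answer` is a hypothetical stand-in for a temperature-sampled LLM call:

```python
from collections import Counter

def self_consistency(sample_answer, n_samples=10):
    """Majority voting over independently sampled answers.

    `sample_answer` is a hypothetical callable that queries an LLM with
    temperature sampling and returns its final answer as a string.
    """
    answers = [sample_answer() for _ in range(n_samples)]
    tally = Counter(answers)
    answer, votes = tally.most_common(1)[0]
    # Mirror-Consistency additionally inspects the disagreeing minority
    # votes instead of discarding them outright.
    return answer, votes / n_samples
```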

Autoregressive Enzyme Function Prediction with Multi-scale Multi-modality Fusion

no code implementations · 11 Aug 2024 · Dingyi Rong, Wenzhuo Zheng, Bozitao Zhong, Zhouhan Lin, Liang Hong, Ning Liu

Accurate prediction of enzyme function is crucial for elucidating biological mechanisms and driving innovation across various sectors.

Protein Function Prediction

Graph Parsing Networks

1 code implementation · 22 Feb 2024 · Yunchong Song, Siyuan Huang, Xinbing Wang, Chenghu Zhou, Zhouhan Lin

GPN benefits from the discrete assignments generated by the graph parsing algorithm, enabling good memory efficiency while keeping node information intact.

Graph Classification · Graph Reconstruction +2

Critical Data Size of Language Models from a Grokking Perspective

no code implementations · 19 Jan 2024 · Xuekai Zhu, Yao Fu, BoWen Zhou, Zhouhan Lin

We formalize the phase transition under the grokking configuration into the Data Efficiency Hypothesis and identify data insufficiency, sufficiency, and surplus regimes in language model training dynamics.

Language Modelling · Memorization

SH2: Self-Highlighted Hesitation Helps You Decode More Truthfully

2 code implementations · 11 Jan 2024 · Jushi Kai, Tianhang Zhang, Hai Hu, Zhouhan Lin

Therefore, we propose to "highlight" the factual information by selecting the tokens with the lowest probabilities and concatenating them to the original context, thus forcing the model to repeatedly read and hesitate on these tokens before generation.

Hallucination · Text Generation
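
The snippet above states the mechanism directly: find the context tokens the model finds least likely and prepend them so they get read twice. A rough sketch of that idea, assuming a Hugging-Face-style `model` that returns `.logits`; the choice of `k` and other details are assumptions, not the authors' exact recipe:

```python
import torch

def highlight_hesitation_tokens(model, input_ids, k=5):
    """Pick the k lowest-probability context tokens and prepend them, so
    the model re-reads ("hesitates on") them before generating.
    A sketch of the idea in SH2, not the authors' exact procedure."""
    with torch.no_grad():
        logits = model(input_ids).logits            # (1, seq_len, vocab)
    probs = torch.softmax(logits[0, :-1], dim=-1)   # position t predicts t+1
    # probability the model assigned to each actually-observed token
    token_probs = probs.gather(-1, input_ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    lowest = torch.topk(-token_probs, k).indices + 1  # positions in context
    highlighted = input_ids[0, lowest.sort().values]
    # concatenate the highlighted tokens in front of the original context
    return torch.cat([highlighted, input_ids[0]]).unsqueeze(0)
```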

Towards Controlled Table-to-Text Generation with Scientific Reasoning

no code implementations · 8 Dec 2023 · Zhixin Guo, Jianping Zhou, Jiexing Qi, Mingxuan Yan, Ziwei He, Guanjie Zheng, Zhouhan Lin, Xinbing Wang, Chenghu Zhou

The sheer volume of scientific experimental results and complex technical statements, often presented in tabular formats, presents a formidable barrier to individuals seeking the information they need.

Table-to-Text Generation

HuRef: HUman-REadable Fingerprint for Large Language Models

1 code implementation · 8 Dec 2023 · Boyi Zeng, Lizheng Wang, Yuncong Hu, Yi Xu, Chenghu Zhou, Xinbing Wang, Yu Yu, Zhouhan Lin

In this study, we introduce HuRef, a human-readable fingerprint for LLMs that uniquely identifies the base model without interfering with training or exposing model parameters to the public.

I2SRM: Intra- and Inter-Sample Relationship Modeling for Multimodal Information Extraction

1 code implementation · 10 Oct 2023 · Yusheng Huang, Zhouhan Lin

Multimodal information extraction, which requires aggregating representations from different modalities, is attracting growing research attention.

named-entity-recognition · Named Entity Recognition +1

Tailoring Self-Attention for Graph via Rooted Subtrees

1 code implementation · NeurIPS 2023 · Siyuan Huang, Yunchong Song, Jiayue Zhou, Zhouhan Lin

Attention mechanisms have made significant strides in graph learning, yet they still exhibit notable limitations: local attention faces challenges in capturing long-range information due to the inherent problems of the message-passing scheme, while global attention cannot reflect the hierarchical neighborhood structure and fails to capture fine-grained local information.

Graph Attention · Graph Learning +2

Document-Level Relation Extraction with Relation Correlation Enhancement

1 code implementation · 6 Oct 2023 · Yusheng Huang, Zhouhan Lin

Document-level relation extraction (DocRE) is a task that focuses on identifying relations between entities within a document.

Document-level Relation Extraction · Graph Attention +1

Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator

1 code implementation · 24 May 2023 · Ziwei He, Meng Yang, Minwei Feng, Jingcheng Yin, Xinbing Wang, Jingwen Leng, Zhouhan Lin

Many researchers have focused on designing new forms of self-attention or introducing new parameters to overcome this limitation; however, a large portion of these approaches prevents the model from inheriting weights from large pretrained models.

Abstractive Text Summarization · Document Summarization +2
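
The title names the mechanism: sequence redundancy is removed with an FFT operator. One plausible reading, sketched below purely as an illustration (keeping low-frequency components along the sequence axis and inverting; not necessarily the paper's exact operator):

```python
import torch

def spectral_downsample(hidden, keep_ratio=0.5):
    """Shorten (batch, seq, dim) hidden states along the sequence axis by
    truncating the FFT spectrum. A generic sketch of FFT-based redundancy
    removal, not necessarily the paper's exact operator."""
    spec = torch.fft.rfft(hidden, dim=1)          # (batch, seq//2 + 1, dim)
    keep = max(2, int(spec.shape[1] * keep_ratio))
    truncated = spec[:, :keep]                    # keep low frequencies only
    new_len = 2 * (keep - 1)                      # real length after inverse
    return torch.fft.irfft(truncated, n=new_len, dim=1)
```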

PaD: Program-aided Distillation Can Teach Small Models Reasoning Better than Chain-of-thought Fine-tuning

1 code implementation · 23 May 2023 · Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xinwei Long, Zhouhan Lin, BoWen Zhou

While large language models (LLMs) excel in various natural language processing tasks, their huge size and the inaccessibility of parameters present challenges for practical deployment.

Arithmetic Reasoning · GSM8K +1

Asymmetric Polynomial Loss For Multi-Label Classification

1 code implementation · 10 Apr 2023 · Yusheng Huang, Jiexing Qi, Xinbing Wang, Zhouhan Lin

We further employ an asymmetric focusing mechanism to decouple the gradient contributions of the negative and positive samples.

Image Classification · Multi-Label Classification +3
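
Asymmetric focusing, as in asymmetric focal-style losses, uses separate down-weighting exponents for positive and negative samples so their gradient contributions can be tuned independently. A sketch of that component alone; the paper's polynomial expansion terms are omitted, and the gamma values are assumptions:

```python
import torch

def asymmetric_focusing_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0):
    """Binary multi-label loss with decoupled focusing parameters.
    A sketch of asymmetric focusing only; the paper's additional
    polynomial terms are not reproduced here."""
    p = torch.sigmoid(logits)
    # positives modulated by (1-p)^gamma_pos, negatives by p^gamma_neg
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p.pow(gamma_neg) * torch.log((1 - p).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()
```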

Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset

1 code implementation · 19 Feb 2023 · Jiexing Qi, Shuhao Li, Zhixin Guo, Yusheng Huang, Chenghu Zhou, Weinan Zhang, Xinbing Wang, Zhouhan Lin

In this work, we first collect a large-scale institution name normalization dataset, LoT-insts, which contains over 25k classes exhibiting a naturally long-tailed distribution.

Long-tail Learning · open-set classification +4

Ordered GNN: Ordering Message Passing to Deal with Heterophily and Over-smoothing

1 code implementation · 3 Feb 2023 · Yunchong Song, Chenghu Zhou, Xinbing Wang, Zhouhan Lin

This is achieved by aligning the hierarchy of the rooted-tree of a central node with the ordered neurons in its node representation.

Node Classification
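
One way to picture the alignment is that messages from hop k may only write into the k-th ordered block of neurons, so nearer and farther neighborhoods occupy distinct, ordered parts of the node vector. The toy sketch below illustrates that reading with hard blocks; the paper's actual gating is learned and soft, so treat this as an analogy only:

```python
import torch

def ordered_update(h, messages_per_hop):
    """h: (dim,) node vector; messages_per_hop: list of (dim,) messages
    from hops 1..K. Each hop writes only to its own ordered slice of
    neurons, a loose illustration of hierarchy-aligned ordering rather
    than the paper's gating mechanism."""
    dim = h.shape[0]
    k = len(messages_per_hop)
    block = dim // k
    out = h.clone()
    for i, msg in enumerate(messages_per_hop):
        lo = i * block
        hi = dim if i == k - 1 else (i + 1) * block
        out[lo:hi] = msg[lo:hi]   # hop i occupies its own neuron block
    return out
```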

Syntax-guided Localized Self-attention by Constituency Syntactic Distance

1 code implementation · 21 Oct 2022 · Shengyuan Hou, Jushi Kai, Haotian Xue, Bingyu Zhu, Bo Yuan, Longtao Huang, Xinbing Wang, Zhouhan Lin

Recent works have revealed that Transformers implicitly learn syntactic information in their lower layers from data, albeit in a manner highly dependent on the quality and scale of the training data.

Machine Translation · Translation

Text Editing as Imitation Game

1 code implementation · 21 Oct 2022 · Ning Shi, Bin Tang, Bo Yuan, Longtao Huang, Yewen Pu, Jie Fu, Zhouhan Lin

Text editing, such as grammatical error correction, arises naturally from imperfect textual data.

Action Generation · Grammatical Error Correction +1

INFINITY: A Simple Yet Effective Unsupervised Framework for Graph-Text Mutual Conversion

no code implementations · 22 Sep 2022 · Yi Xu, Luoyi Fu, Zhouhan Lin, Jiexing Qi, Xinbing Wang

As a fully unsupervised framework, INFINITY is empirically verified to outperform state-of-the-art baselines for G2T and T2G tasks.

Knowledge Graphs

Transkimmer: Transformer Learns to Layer-wise Skim

1 code implementation · ACL 2022 · Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, Minyi Guo

To address the above limitations, we propose the Transkimmer architecture, which learns to identify hidden state tokens that are not required by each layer.

Computational Efficiency
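
The skimming decision can be illustrated as a small per-token gate in front of each layer; the sketch below uses a Gumbel-softmax for the hard keep/skip choice, with the gate's exact placement and training details left as assumptions:

```python
import torch.nn as nn

class SkimGate(nn.Module):
    """Per-token binary keep/skip decision before a Transformer layer.
    A sketch of layer-wise skimming; the actual Transkimmer gate and its
    reparameterized training are more involved."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 2)   # logits for {skip, keep}

    def forward(self, hidden):            # hidden: (batch, seq, dim)
        logits = self.scorer(hidden)
        # hard decision, with gradient flowing through the soft sample
        keep = nn.functional.gumbel_softmax(logits, hard=True)[..., 1]
        return keep  # (batch, seq) 0/1 mask; skipped tokens bypass the layer
```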

RASAT: Integrating Relational Structures into Pretrained Seq2Seq Model for Text-to-SQL

1 code implementation · 14 May 2022 · Jiexing Qi, Jingyao Tang, Ziwei He, Xiangpeng Wan, Yu Cheng, Chenghu Zhou, Xinbing Wang, Quanshi Zhang, Zhouhan Lin

Our model can incorporate almost all types of existing relations in the literature, and in addition, we propose introducing co-reference relations for the multi-turn scenario.

Dialogue State Tracking · Text-To-SQL

Leveraging Unimodal Self-Supervised Learning for Multimodal Audio-Visual Speech Recognition

1 code implementation · ACL 2022 · Xichen Pan, Peiyu Chen, Yichen Gong, Helong Zhou, Xinbing Wang, Zhouhan Lin

In particular, audio and visual front-ends are trained on large-scale unimodal datasets; we then integrate components of both front-ends into a larger multimodal framework that learns to transcribe parallel audio-visual data into characters through a combination of CTC and seq2seq decoding.

Audio-Visual Speech Recognition · Automatic Speech Recognition (ASR) +7
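
The "combination of CTC and seq2seq decoding" commonly corresponds to a weighted hybrid training objective; a generic sketch of that recipe (the weight `lam` is an assumed hyperparameter, not a value from the paper):

```python
import torch.nn.functional as F

def hybrid_ctc_attention_loss(ctc_log_probs, ctc_targets, input_lengths,
                              target_lengths, att_logits, att_targets,
                              lam=0.3):
    """Weighted sum of a CTC loss on encoder outputs and a cross-entropy
    loss on the attention (seq2seq) decoder, the standard hybrid recipe;
    lam is an assumed hyperparameter."""
    # ctc_log_probs: (T, batch, vocab); att_logits: (batch, len, vocab)
    ctc = F.ctc_loss(ctc_log_probs, ctc_targets, input_lengths, target_lengths)
    att = F.cross_entropy(att_logits.transpose(1, 2), att_targets)
    return lam * ctc + (1.0 - lam) * att
```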

Block-Skim: Efficient Question Answering for Transformer

1 code implementation · 16 Dec 2021 · Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, Minyi Guo, Yuhao Zhu

We further prune the hidden states corresponding to the unnecessary positions early in lower layers, achieving significant inference-time speedup.

Extractive Question-Answering · Question Answering

Annotation Inconsistency and Entity Bias in MultiWOZ

no code implementations · SIGDIAL (ACL) 2021 · Kun Qian, Ahmad Beirami, Zhouhan Lin, Ankita De, Alborz Geramifard, Zhou Yu, Chinnadhurai Sankar

In this work, we identify an overlooked issue with dialog state annotation inconsistencies in the dataset, where a slot type is tagged inconsistently across similar dialogs, leading to confusion for DST modeling.

dialog state tracking · Memorization +1

Reciprocal Supervised Learning Improves Neural Machine Translation

1 code implementation · 5 Dec 2020 · Minkai Xu, Mingxuan Wang, Zhouhan Lin, Hao Zhou, Weinan Zhang, Lei LI

Despite the recent success on image classification, self-training has only achieved limited gains on structured prediction tasks such as neural machine translation (NMT).

Image Classification · Knowledge Distillation +4

Ordered Memory

1 code implementation · NeurIPS 2019 · Yikang Shen, Shawn Tan, Arian Hosseini, Zhouhan Lin, Alessandro Sordoni, Aaron Courville

Inspired by Ordered Neurons (Shen et al., 2018), we introduce a new attention-based mechanism and use its cumulative probability to control the writing and erasing operations of the memory.

ListOps
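
The cumulative-probability control can be pictured as a cumsum over an attention distribution, which yields monotone gates across memory slots. A toy illustration under that reading; the actual write/erase wiring in Ordered Memory is more involved:

```python
import torch

def cumulative_gates(attn_logits):
    """attn_logits: (slots,) scores over memory slots.
    Returns monotone erase/write gates derived from the cumulative
    probability, in the spirit of the ordered-memory mechanism; the
    exact gate placement here is a toy choice, not the paper's."""
    p = torch.softmax(attn_logits, dim=-1)
    cp = torch.cumsum(p, dim=-1)     # monotonically increasing in slot index
    erase_gate = cp                  # slots past the attended point erased
    write_gate = 1.0 - cp + p        # slots at/below it written
    return erase_gate, write_gate
```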

Neural Language Modeling by Jointly Learning Syntax and Lexicon

1 code implementation · ICLR 2018 · Yikang Shen, Zhouhan Lin, Chin-wei Huang, Aaron Courville

In this paper, we propose a novel neural language model, called Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model.

Constituency Grammar Induction · Language Modelling

Recurrent Neural Networks With Limited Numerical Precision

1 code implementation · 21 Nov 2016 · Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on memory and computational power are often high.

Quantization

Recurrent Neural Networks With Limited Numerical Precision

1 code implementation · 24 Aug 2016 · Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

We present results from the use of different stochastic and deterministic reduced precision training methods applied to three major RNN types which are then tested on several datasets.

Binarization
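
A representative stochastic scheme from this line of work is stochastic binarization, where a weight is rounded to +1 with probability given by a hard sigmoid of its value; a minimal sketch (this is the textbook BinaryConnect-style rule, not necessarily the exact variant evaluated in the paper):

```python
import torch

def stochastic_binarize(w):
    """Round weights to {-1, +1} stochastically: the probability of +1 is
    hard_sigmoid((w + 1) / 2). A textbook sketch of one of the stochastic
    reduced-precision schemes studied in this line of work."""
    p = torch.clamp((w + 1.0) / 2.0, 0.0, 1.0)   # hard sigmoid
    return torch.where(torch.rand_like(w) < p,
                       torch.ones_like(w), -torch.ones_like(w))
```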

Theano: A Python framework for fast computation of mathematical expressions

1 code implementation · 9 May 2016 · The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements.

BIG-bench Machine Learning · Clustering +2
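
Theano's core workflow is exactly what the snippet describes: build a symbolic expression graph, then compile it into a fast CPU or GPU callable. A minimal example using the standard documented API:

```python
import theano
import theano.tensor as T

# declare symbolic inputs
x = T.dvector('x')
W = T.dmatrix('W')

# build a symbolic expression; nothing is computed yet
y = T.tanh(T.dot(W, x))

# compile the graph into an optimized callable for CPU or GPU
f = theano.function(inputs=[x, W], outputs=y)
```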

Towards Biologically Plausible Deep Learning

no code implementations · 14 Feb 2015 · Yoshua Bengio, Dong-Hyun Lee, Jorg Bornschein, Thomas Mesnard, Zhouhan Lin

Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology.

Deep Learning · Denoising +2
