Search Results for author: Fangyu Liu

Found 44 papers, 30 papers with code

Fine-Grained Controllable Text Generation Using Non-Residual Prompting

1 code implementation ACL 2022 Fredrik Carlsson, Joey Öhman, Fangyu Liu, Severine Verlinden, Joakim Nivre, Magnus Sahlgren

We propose a resource-efficient method for converting a pre-trained CLM into this architecture, and demonstrate its potential on various experiments, including the novel task of contextualized word inclusion.

Text Generation

Integrating Transformers and Knowledge Graphs for Twitter Stance Detection

no code implementations WNUT (ACL) 2021 Thomas Clark, Costanza Conforti, Fangyu Liu, Zaiqiao Meng, Ehsan Shareghi, Nigel Collier

Stance detection (SD) entails classifying the sentiment of a text towards a given target, and is a relevant sub-task for opinion mining and social media analysis.

Knowledge Graphs Knowledge Probing +2

SAT: Size-Aware Transformer for 3D Point Cloud Semantic Segmentation

no code implementations 17 Jan 2023 Junjie Zhou, Yongping Xiong, Chinwai Chiu, Fangyu Liu, Xiangyang Gong

In this paper, we propose the Size-Aware Transformer (SAT) that can tailor effective receptive fields for objects of different sizes.

Point Cloud Segmentation Semantic Segmentation

DePlot: One-shot visual language reasoning by plot-to-table translation

1 code implementation 20 Dec 2022 Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun

Compared with a SOTA model fine-tuned on more than 28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over the fine-tuned SOTA on human-written queries from the chart QA task.

Chart Question Answering Language Modelling +1

Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems

1 code implementation 7 Nov 2022 Songbo Hu, Ivan Vulić, Fangyu Liu, Anna Korhonen

At training, the high-scoring partition comprises all generated responses whose similarity to the gold response is higher than the similarity of the greedy response to the gold response.
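The partitioning rule in this snippet can be expressed directly in code. This is a hypothetical sketch, not the authors' implementation: `similarity` is a stand-in for whatever scoring function is used (e.g. an embedding or overlap score), and the toy token-overlap measure below is purely illustrative.

```python
# Partition generated responses: a response is "high-scoring" if its
# similarity to the gold response exceeds the similarity of the greedy
# response to the gold response (the threshold described above).
def partition_responses(generated, greedy, gold, similarity):
    threshold = similarity(greedy, gold)
    high = [r for r in generated if similarity(r, gold) > threshold]
    low = [r for r in generated if similarity(r, gold) <= threshold]
    return high, low

# Toy Jaccard token-overlap similarity (illustrative stand-in only).
def overlap(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

high, low = partition_responses(
    ["book a table for two", "play some music"],
    "book a table",                    # greedy response
    "book a table for two people",     # gold response
    overlap,
)
```

With this toy scorer, the candidate closer to the gold than the greedy response lands in the high-scoring partition, and the unrelated one in the low-scoring partition.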

Task-Oriented Dialogue Systems

Improving Bilingual Lexicon Induction with Cross-Encoder Reranking

1 code implementation 30 Oct 2022 Yaoyiran Li, Fangyu Liu, Ivan Vulić, Anna Korhonen

This crucial step is done via 1) creating a word similarity dataset, comprising positive word pairs (i.e., true translations) and hard negative pairs induced from the original CLWE space, and then 2) fine-tuning an mPLM (e.g., mBERT or XLM-R) in a cross-encoder manner to predict the similarity scores.
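Step 1 of this recipe, mining positive pairs and hard negatives from an aligned CLWE space, can be sketched as follows. All names, the nearest-neighbour routine, and the data layout are assumptions for illustration, not the authors' code; cosine similarity is assumed via dot products over normalised vectors.

```python
import numpy as np

def build_pairs(seed_dict, src_emb, tgt_emb, tgt_words, k=3):
    """Build a word-pair dataset for cross-encoder fine-tuning.

    seed_dict: list of (src_word, tgt_word) true translations (positives).
    src_emb / tgt_emb: dicts mapping words to aligned CLWE vectors.
    tgt_words: target vocabulary to mine hard negatives from.
    """
    examples = []
    tgt_matrix = np.stack([tgt_emb[w] for w in tgt_words])
    for src, tgt in seed_dict:
        examples.append((src, tgt, 1.0))      # positive pair
        sims = tgt_matrix @ src_emb[src]      # cosine if vectors normalised
        for idx in np.argsort(-sims)[:k]:     # nearest target neighbours
            cand = tgt_words[idx]
            if cand != tgt:                   # hard negative: close in CLWE
                examples.append((src, cand, 0.0))  # space, wrong translation
    return examples
```

The resulting (source, target, label) triples would then feed the cross-encoder fine-tuning of step 2.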

Bilingual Lexicon Induction Cross-Lingual Word Embeddings +7

How to tackle an emerging topic? Combining strong and weak labels for Covid news NER

1 code implementation 29 Sep 2022 Aleksander Ficek, Fangyu Liu, Nigel Collier

Being able to train Named Entity Recognition (NER) models for emerging topics is crucial for many real-world applications, especially in the medical domain, where new topics continuously evolve beyond the scope of existing models and datasets.

Named Entity Recognition +2

WinoDict: Probing language models for in-context word acquisition

no code implementations 25 Sep 2022 Julian Martin Eisenschlos, Jeremy R. Cole, Fangyu Liu, William W. Cohen

We introduce a new in-context learning paradigm to measure Large Language Models' (LLMs) ability to learn novel words during inference.

Probing Language Models

On Reality and the Limits of Language Data: Aligning LLMs with Human Norms

no code implementations 25 Aug 2022 Nigel H. Collier, Fangyu Liu, Ehsan Shareghi

Recent advancements in Large Language Models (LLMs) harness linguistic associations in vast natural language data for practical applications.

Common Sense Reasoning

Language Models Can See: Plugging Visual Controls in Text Generation

1 code implementation 5 May 2022 Yixuan Su, Tian Lan, Yahui Liu, Fangyu Liu, Dani Yogatama, Yan Wang, Lingpeng Kong, Nigel Collier

MAGIC is a flexible framework and is theoretically compatible with any text generation tasks that incorporate image grounding.

Image Captioning Story Generation +1

Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders

no code implementations 30 Apr 2022 Ivan Vulić, Goran Glavaš, Fangyu Liu, Nigel Collier, Edoardo Maria Ponti, Anna Korhonen

In this work, we probe SEs for the amount of cross-lingual lexical knowledge stored in their parameters, and compare them against the original multilingual LMs.

Contrastive Learning Cross-Lingual Entity Linking +5

Modality-Balanced Embedding for Video Retrieval

no code implementations 18 Apr 2022 Xun Wang, Bingqing Ke, Xuanping Li, Fangyu Liu, Mingyu Zhang, Xiao Liang, Qiushi Xiao, Cheng Luo, Yue Yu

This modality imbalance results from a) modality gap: the relevance between a query and a video text is much easier to learn, as the query is also a piece of text with the same modality as the video text; b) data bias: most training samples can be solved solely by text matching.

Retrieval Text Matching +1

Improving Word Translation via Two-Stage Contrastive Learning

1 code implementation ACL 2022 Yaoyiran Li, Fangyu Liu, Nigel Collier, Anna Korhonen, Ivan Vulić

At Stage C1, we propose to refine standard cross-lingual linear maps between static word embeddings (WEs) via a contrastive learning objective; we also show how to integrate it into the self-learning procedure for even more refined cross-lingual maps.

Bilingual Lexicon Induction Contrastive Learning +8

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?

1 code implementation 16 Feb 2022 Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang

Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs).

IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages

2 code implementations 27 Jan 2022 Emanuele Bugliarello, Fangyu Liu, Jonas Pfeiffer, Siva Reddy, Desmond Elliott, Edoardo Maria Ponti, Ivan Vulić

Our benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups.

Cross-Modal Retrieval Few-Shot Learning +5

Sharpness-Aware Minimization with Dynamic Reweighting

no code implementations 16 Dec 2021 Wenxuan Zhou, Fangyu Liu, Huan Zhang, Muhao Chen

Deep neural networks are often overparameterized and may not easily achieve model generalization.

Natural Language Understanding

Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models

1 code implementation ACL 2022 Zaiqiao Meng, Fangyu Liu, Ehsan Shareghi, Yixuan Su, Charlotte Collins, Nigel Collier

To catalyse the research in this direction, we release a well-curated biomedical knowledge probing benchmark, MedLAMA, which is constructed based on the Unified Medical Language System (UMLS) Metathesaurus.

Knowledge Probing Transfer Learning

Visually Grounded Reasoning across Languages and Cultures

2 code implementations EMNLP 2021 Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, Desmond Elliott

The design of widespread vision-and-language datasets and pre-trained encoders directly adopts, or draws inspiration from, the concepts and images of ImageNet.

Visual Reasoning Zero-Shot Learning

MirrorWiC: On Eliciting Word-in-Context Representations from Pretrained Language Models

1 code implementation CoNLL (EMNLP) 2021 Qianchu Liu, Fangyu Liu, Nigel Collier, Anna Korhonen, Ivan Vulić

Recent work indicated that pretrained language models (PLMs) such as BERT and RoBERTa can be transformed into effective sentence and word encoders even via simple self-supervised techniques.

Contextualised Word Representations Contrastive Learning

Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking

1 code implementation ACL 2021 Fangyu Liu, Ivan Vulić, Anna Korhonen, Nigel Collier

To this end, we propose and evaluate a series of cross-lingual transfer methods for the XL-BEL task, and demonstrate that general-domain bitext helps propagate the available English knowledge to languages with little to no in-domain data.

Cross-Lingual Transfer Entity Linking

Self-Alignment Pretraining for Biomedical Entity Representations

1 code implementation NAACL 2021 Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, Nigel Collier

Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge.

Benchmarking Entity Linking +2

COMETA: A Corpus for Medical Entity Linking in the Social Media

1 code implementation EMNLP 2020 Marco Basaldella, Fangyu Liu, Ehsan Shareghi, Nigel Collier

Whilst there has been growing progress in Entity Linking (EL) for general language, existing datasets fail to address the complex nature of health terminology in layman's language.

Entity Linking

Visual Pivoting for (Unsupervised) Entity Alignment

2 code implementations 28 Sep 2020 Fangyu Liu, Muhao Chen, Dan Roth, Nigel Collier

This work studies the use of visual semantic representations to align entities in heterogeneous knowledge graphs (KGs).

Ranked #11 on Entity Alignment on dbp15k ja-en (using extra training data)

Entity Alignment Knowledge Graphs

HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs

1 code implementation 22 Nov 2019 Fangyu Liu, Rongtian Ye, Xun Wang, Shuaipeng Li

The hubness problem widely exists in high-dimensional embedding space and is a fundamental source of error for cross-modal matching tasks.

A Strong and Robust Baseline for Text-Image Matching

no code implementations ACL 2019 Fangyu Liu, Rongtian Ye

We review the current schemes of text-image matching models and propose improvements for both training and inference.

Auto-Classification of Retinal Diseases in the Limit of Sparse Data Using a Two-Streams Machine Learning Model

1 code implementation 16 Aug 2018 C.-H. Huck Yang, Fangyu Liu, Jia-Hong Huang, Meng Tian, Hiromasa Morikawa, I-Hung Lin, Yi-Chieh Liu, Hao-Hsiang Yang, Jesper Tegner

Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists.

General Classification

3D Depthwise Convolution: Reducing Model Parameters in 3D Vision Tasks

no code implementations 5 Aug 2018 Rongtian Ye, Fangyu Liu, Liqiang Zhang

Standard 3D convolution operations require much larger amounts of memory and computation cost than 2D convolution operations.

General Classification

A Novel Hybrid Machine Learning Model for Auto-Classification of Retinal Diseases

1 code implementation 17 Jun 2018 C.-H. Huck Yang, Jia-Hong Huang, Fangyu Liu, Fang-Yi Chiu, Mengya Gao, Weifeng Lyu, I-Hung Lin M.D., Jesper Tegner

Automatic clinical diagnosis of retinal diseases has emerged as a promising approach to facilitate discovery in areas with limited access to specialists.

BIG-bench Machine Learning General Classification
