Search Results for author: Ting Liu

Found 341 papers, 138 papers with code

CogBERT: Cognition-Guided Pre-trained Language Models

1 code implementation COLING 2022 Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu

To fill the gap, we propose CogBERT, a framework that can induce fine-grained cognitive features from cognitive data and incorporate cognitive features into BERT by adaptively adjusting the weight of cognitive features for different NLP tasks.

EEG

Neural Natural Logic Inference for Interpretable Question Answering

1 code implementation EMNLP 2021 Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin

Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.

Multiple-choice Natural Language Inference +1

All Information is Valuable: Question Matching over Full Information Transmission Network

no code implementations Findings (NAACL) 2022 Le Qi, Yu Zhang, Qingyu Yin, Guidong Zheng, Wen Junjie, Jinlong Li, Ting Liu

In this process, there are two kinds of critical information that are commonly employed: the representation information of original questions and the interactive information between pairs of questions.

DuReader_vis: A Chinese Dataset for Open-domain Document Visual Question Answering

1 code implementation Findings (ACL) 2022 Le Qi, Shangwen Lv, Hongyu Li, Jing Liu, Yu Zhang, Qiaoqiao She, Hua Wu, Haifeng Wang, Ting Liu

Open-domain question answering has been used in a wide range of applications, such as web search and enterprise search, which usually takes clean texts extracted from various formats of documents (e.g., web pages, PDFs, or Word documents) as the information source.

document understanding Open-Domain Question Answering +1

Technical Report on Shared Task in DialDoc21

no code implementations ACL (dialdoc) 2021 Jiapeng Li, Mingda Li, Longxuan Ma, Wei-Nan Zhang, Ting Liu

The task requires identifying the grounding knowledge, in the form of a document span, for the next dialogue turn.

Data Augmentation

Weakly Supervised Semantic Parsing by Learning from Mistakes

1 code implementation Findings (EMNLP) 2021 Jiaqi Guo, Jian-Guang Lou, Ting Liu, Dongmei Zhang

Using only 10% of utterance-denotation pairs, the parser achieves 84.2 denotation accuracy on WikiSQL, which is competitive with the previous state-of-the-art approaches using 100% labeled data.

Semantic Parsing

2nd Place Solution for MOSE Track in CVPR 2024 PVUW workshop: Complex Video Object Segmentation

no code implementations 12 Jun 2024 Zhensong Xu, Jiangtao Yao, Chengjing Wu, Ting Liu, Luoqi Liu

Our method ranked 2nd in the MOSE track of PVUW 2024, with a $\mathcal{J}$ of 0.8007, a $\mathcal{F}$ of 0.8683 and a $\mathcal{J}$\&$\mathcal{F}$ of 0.8345.

Instance Segmentation Semantic Segmentation +4

3rd Place Solution for PVUW Challenge 2024: Video Panoptic Segmentation

no code implementations 6 Jun 2024 Ruipu Wu, Jifei Che, Han Li, Chengjing Wu, Ting Liu, Luoqi Liu

Video panoptic segmentation is an advanced task that extends panoptic segmentation by applying its concept to video sequences.

Segmentation Video Panoptic Segmentation +1

Sparse-Tuning: Adapting Vision Transformers with Efficient Fine-tuning and Inference

1 code implementation 23 May 2024 Ting Liu, Xuyang Liu, Liangtao Shi, Zunnan Xu, Siteng Huang, Yi Xin, Quanjun Yin

Sparse-Tuning efficiently fine-tunes the pre-trained ViT by sparsely preserving the informative tokens and merging redundant ones, enabling the ViT to focus on the foreground while reducing computational costs on background regions in the images.

Dynamic Loss Decay based Robust Oriented Object Detection on Remote Sensing Images with Noisy Labels

no code implementations 15 May 2024 Guozhang Liu, Ting Liu, Mengke Yuan, Tao Pang, Guangxing Yang, Hao Fu, Tao Wang, Tongkui Liao

The ambiguous appearance, tiny scale, and fine-grained classes of objects in remote sensing imagery inevitably lead to noisy annotations in the category labels of detection datasets.

Memorization object-detection +2

DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding

1 code implementation 10 May 2024 Ting Liu, Xuyang Liu, Siteng Huang, Honggang Chen, Quanjun Yin, Long Qin, Donglin Wang, Yue Hu

Specifically, we propose DARA, a novel PETL method comprising Domain-aware Adapters (DA Adapters) and Relation-aware Adapters (RA Adapters) for VG.

Relation Transfer Learning +1

F5C-finder: An Explainable and Ensemble Biological Language Model for Predicting 5-Formylcytidine Modifications on mRNA

no code implementations 20 Apr 2024 Guohao Wang, Ting Liu, Hongqiang Lyu, Ze Liu

The result highlights the effectiveness of the biological language model in capturing both the order (sequential) and functional meaning (semantics) within genomes.

Ensemble Learning Language Modelling

Ensemble Learning for Heterogeneous Large Language Models with Deep Parallel Collaboration

1 code implementation 19 Apr 2024 Yichong Huang, Xiaocheng Feng, Baohang Li, Yang Xiang, Hui Wang, Bing Qin, Ting Liu

To address this challenge, DeePEn maps the probability distribution of each model from its own probability space to a universal relative space based on the relative representation theory, and performs aggregation.

Ensemble Learning

Towards Generalizable and Faithful Logic Reasoning over Natural Language via Resolution Refutation

1 code implementation 2 Apr 2024 Zhouhao Sun, Xiao Ding, Li Du, Bibo Cai, Jinglong Gao, Ting Liu, Qin Bing

To address this issue, we propose a novel framework, named Generalizable and Faithful Reasoner (GFaiR), which introduces the paradigm of resolution refutation.

RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict

1 code implementation 25 Mar 2024 Yirong Zeng, Xiao Ding, Yi Zhao, Xiangyu Li, Jie Zhang, Chao Yao, Ting Liu, Bing Qin

Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset of 16K samples on the 2022 Russia-Ukraine conflict, each containing real-world claims, optimized evidence, and a referenced explanation.

16k Claim Verification +4

iDAT: inverse Distillation Adapter-Tuning

1 code implementation 23 Mar 2024 Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Daize Dong, Suncheng Xiang, Ting Liu, Yuzhuo Fu

The Adapter-Tuning (AT) method involves freezing a pre-trained model and introducing trainable adapter modules to acquire downstream knowledge, thereby calibrating the model for better adaptation to downstream tasks.

Image Classification Knowledge Distillation

Meaningful Learning: Advancing Abstract Reasoning in Large Language Models via Generic Fact Guidance

no code implementations 14 Mar 2024 Kai Xiong, Xiao Ding, Ting Liu, Bing Qin, Dongliang Xu, Qing Yang, Hongtao Liu, Yixin Cao

Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence.

Memorization

AS-ES Learning: Towards Efficient CoT Learning in Small Models

no code implementations 4 Mar 2024 Nuwa Xi, Yuhan Chen, Sendong Zhao, Haochun Wang, Bing Qin, Ting Liu

Chain-of-Thought (CoT) serves as a critical emerging ability in LLMs, especially when it comes to logical reasoning.

Data Augmentation Logical Reasoning

Deciphering the Impact of Pretraining Data on Large Language Models through Machine Unlearning

no code implementations 18 Feb 2024 Yang Zhao, Li Du, Xiao Ding, Kai Xiong, Zhouhao Sun, Jun Shi, Ting Liu, Bing Qin

Through pretraining on corpora from various sources, Large Language Models (LLMs) have gained impressive performance.

Machine Unlearning

Exploring Hybrid Question Answering via Program-based Prompting

no code implementations 16 Feb 2024 Qi Shi, Han Cui, Haofeng Wang, Qingfu Zhu, Wanxiang Che, Ting Liu

Question answering over heterogeneous data requires reasoning over diverse sources of data, which is challenging due to the large scale of information and organic coupling of heterogeneous data.

Code Generation Question Answering

Beyond the Answers: Reviewing the Rationality of Multiple Choice Question Answering for the Evaluation of Large Language Models

no code implementations 2 Feb 2024 Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu

In the field of natural language processing (NLP), Large Language Models (LLMs) have precipitated a paradigm shift, markedly enhancing performance in natural language generation tasks.

Multiple-choice Multiple Choice Question Answering (MCQA) +1

Beyond Direct Diagnosis: LLM-based Multi-Specialist Agent Consultation for Automatic Diagnosis

no code implementations 29 Jan 2024 Haochun Wang, Sendong Zhao, Zewen Qiang, Nuwa Xi, Bing Qin, Ting Liu

Automatic diagnosis is a significant application of AI in healthcare, where diagnoses are generated based on the symptom description of patients.

Natural Language Understanding

Aligning Translation-Specific Understanding to General Understanding in Large Language Models

no code implementations 10 Jan 2024 Yichong Huang, Xiaocheng Feng, Baohang Li, Chengpeng Fu, Wenshuai Huo, Ting Liu, Bing Qin

To align translation-specific understanding with general understanding, we propose a novel translation process, xIoD (Cross-Lingual Interpretation of Difficult words), which explicitly applies the general understanding to the content that incurs inconsistent understanding, in order to guide the translation.

Machine Translation Translation

Oceanship: A Large-Scale Dataset for Underwater Audio Target Recognition

1 code implementation 4 Jan 2024 Zeyu Li, Suncheng Xiang, Tong Yu, Jingsheng Gao, Jiacheng Ruan, Yanping Hu, Ting Liu, Yuzhuo Fu

While audio retrieval tasks are well-established in general audio classification, they have not been explored in the context of underwater audio recognition.

Attribute Audio Classification +3

Length Extrapolation of Transformers: A Survey from the Perspective of Positional Encoding

no code implementations 28 Dec 2023 Liang Zhao, Xiaocheng Feng, Xiachong Feng, Dongliang Xu, Qing Yang, Hongtao Liu, Bing Qin, Ting Liu

In this survey, we present these advances towards length extrapolation in a unified notation from the perspective of PE.

Position

LAMM: Label Alignment for Multi-Modal Prompt Learning

1 code implementation 13 Dec 2023 Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu

We conduct experiments on 11 downstream vision datasets and demonstrate that our method significantly improves the performance of existing multi-modal prompt learning models in few-shot scenarios, exhibiting an average accuracy improvement of 2.31% compared to the state-of-the-art methods on 16 shots.

Continual Learning

GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction

1 code implementation 12 Dec 2023 Jiacheng Ruan, Jingsheng Gao, Mingye Xie, Suncheng Xiang, Zefang Yu, Ting Liu, Yuzhuo Fu

2) They neglect the interaction between the intrinsic task-agnostic knowledge of pre-trained models and the task-specific knowledge in downstream tasks.

Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications

no code implementations 10 Nov 2023 Zhangyin Feng, Weitao Ma, Weijiang Yu, Lei Huang, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

In this paper, we present a review discussing the trends in the integration of knowledge and large language models, including a taxonomy of methods, benchmarks, and applications.

knowledge editing Retrieval

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

1 code implementation 9 Nov 2023 Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu

The emergence of large language models (LLMs) has marked a significant breakthrough in natural language processing (NLP), leading to remarkable advancements in text understanding and generation.

Hallucination

Efficient Cloud Pipelines for Neural Radiance Fields

no code implementations 3 Nov 2023 Derek Jacoby, Donglin Xu, Weder Ribas, Minyi Xu, Ting Liu, Vishwanath Jayaraman, Mengdi Wei, Emma De Blois, Yvonne Coady

Since their introduction in 2020, Neural Radiance Fields (NeRFs) have taken the computer vision community by storm.

Change Detection

Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese

1 code implementation 8 Sep 2023 Haochun Wang, Sendong Zhao, Zewen Qiang, Zijian Li, Nuwa Xi, Yanrui Du, MuZhen Cai, Haoqiang Guo, Yuhan Chen, Haoming Xu, Bing Qin, Ting Liu

To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases for the LLMs to grasp domain knowledge efficiently and facilitate reliable response generation.

Domain Adaptation Hallucination +2

Manifold-based Verbalizer Space Re-embedding for Tuning-free Prompt-based Classification

1 code implementation 8 Sep 2023 Haochun Wang, Sendong Zhao, Chi Liu, Nuwa Xi, MuZhen Cai, Bing Qin, Ting Liu

Experimental results indicate that even without tuning any parameters, our LLE-INC is on par with automated verbalizers with parameter tuning.

Prompt-based Context- and Domain-aware Pretraining for Vision and Language Navigation

no code implementations 7 Sep 2023 Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin

In the indoor-aware stage, we apply an efficient tuning paradigm to learn deep visual prompts from an indoor dataset, in order to augment pretrained models with inductive biases towards indoor environments.

Contrastive Learning Vision and Language Navigation +1

Deep Deformable Models: Learning 3D Shape Abstractions with Part Consistency

no code implementations 2 Sep 2023 Di Liu, Long Zhao, Qilong Zhangli, Yunhe Gao, Ting Liu, Dimitris N. Metaxas

The task of shape abstraction with semantic part consistency is challenging due to the complex geometries of natural objects.

Class Binarization to NeuroEvolution for Multiclass Classification

1 code implementation 26 Aug 2023 Gongjin Lan, Zhenyu Gao, Lingyao Tong, Ting Liu

In this paper, we apply class binarization techniques to a neuroevolution algorithm, NeuroEvolution of Augmenting Topologies (NEAT), that is used to generate neural networks for multiclass classification.

Binarization Binary Classification +1

A Parse-Then-Place Approach for Generating Graphic Layouts from Textual Descriptions

no code implementations ICCV 2023 Jiawei Lin, Jiaqi Guo, Shizhao Sun, Weijiang Xu, Ting Liu, Jian-Guang Lou, Dongmei Zhang

To model combined and incomplete constraints, we use a Transformer-based layout generation model and carefully design a way to represent constraints and layouts as sequences.

Learning from Semantic Alignment between Unpaired Multiviews for Egocentric Video Recognition

1 code implementation ICCV 2023 Qitong Wang, Long Zhao, Liangzhe Yuan, Ting Liu, Xi Peng

To facilitate the data efficiency of multiview learning, we further perform video-text alignment for first-person and third-person videos, to fully leverage the semantic knowledge to improve video representations.

Multiview Learning Video Recognition

Through the Lens of Core Competency: Survey on Evaluation of Large Language Models

no code implementations 15 Aug 2023 Ziyu Zhuang, Qiguang Chen, Longxuan Ma, Mingda Li, Yi Han, Yushan Qian, Haopeng Bai, Zixian Feng, Weinan Zhang, Ting Liu

From pre-trained language model (PLM) to large language model (LLM), the field of natural language processing (NLP) has witnessed steep performance gains and wide practical uses.

Language Modelling Large Language Model

HGDNet: A Height-Hierarchy Guided Dual-Decoder Network for Single View Building Extraction and Height Estimation

no code implementations 10 Aug 2023 Chaoran Lu, Ningning Cao, Pan Zhang, Ting Liu, Baochai Peng, Guozhang Liu, Mengke Yuan, Sen Zhang, Simin Huang, Tao Wang

Unifying the correlated single-view satellite image building extraction and height estimation tasks indicates a promising way to share representations and acquire a generalist model for large-scale urban 3D reconstruction.

3D Reconstruction Decoder

Fine-grained building roof instance segmentation based on domain adapted pretraining and composite dual-backbone

no code implementations 10 Aug 2023 Guozhang Liu, Baochai Peng, Ting Liu, Pan Zhang, Mengke Yuan, Chaoran Lu, Ningning Cao, Sen Zhang, Simin Huang, Tao Wang

The diversity of building architecture styles of global cities situated on various landforms, the degraded optical imagery affected by clouds and shadows, and the significant inter-class imbalance of roof types pose challenges for designing a robust and accurate building roof instance segmentor.

Data Augmentation Instance Segmentation +1

EGE-UNet: an Efficient Group Enhanced UNet for skin lesion segmentation

1 code implementation 17 Jul 2023 Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Ting Liu, Yuzhuo Fu

Moreover, to the best of our knowledge, this is the first model with a parameter count limited to just 50KB.

Decoder Image Segmentation +3

VideoGLUE: Video General Understanding Evaluation of Foundation Models

1 code implementation 6 Jul 2023 Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao, Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia, Tobias Weyand, Luke Friedman, Mikhail Sirotenko, Huisheng Wang, Florian Schroff, Hartwig Adam, Ming-Hsuan Yang, Ting Liu, Boqing Gong

We evaluate the video understanding capabilities of existing foundation models using a carefully designed experiment protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring a foundation model (FM) for a downstream task.

Action Recognition Temporal Localization +1

I run as fast as a rabbit, can you? A Multilingual Simile Dialogue Dataset

1 code implementation 9 Jun 2023 Longxuan Ma, Weinan Zhang, Shuhan Zhou, Churui Sun, Changxin Ke, Ting Liu

Meanwhile, the MSD data can also be used on dialogue tasks to test the ability of dialogue systems when using similes.

Retrieval Sentence

Examining Inter-Consistency of Large Language Models Collaboration: An In-depth Analysis via Debate

1 code implementation 19 May 2023 Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, Bing Qin

Through extensive experiments on various datasets, LLMs can effectively collaborate to reach a consensus despite noticeable inter-inconsistencies, but imbalances in their abilities can lead to domination by superior LLMs.

Decision Making

NoisywikiHow: A Benchmark for Learning with Real-world Noisy Labels in Natural Language Processing

1 code implementation 18 May 2023 Tingting Wu, Xiao Ding, Minji Tang, Hao Zhang, Bing Qin, Ting Liu

To mitigate the effects of label noise, learning with noisy labels (LNL) methods are designed to achieve better generalization performance.

Learning with noisy labels

CSED: A Chinese Semantic Error Diagnosis Corpus

no code implementations 9 May 2023 Bo Sun, Baoxin Wang, YiXuan Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Ting Liu

Our experiments show that powerful pre-trained models perform poorly on this corpus.

Lyapunov-Stable Deep Equilibrium Models

no code implementations 25 Apr 2023 Haoyu Chu, Shikui Wei, Ting Liu, Yao Zhao, Yuto Miyatake

Deep equilibrium (DEQ) models have emerged as a promising class of implicit layer models, which abandon traditional depth by solving for the fixed points of a single nonlinear layer.

Adversarial Defense Adversarial Robustness

Learning Robust Visual-Semantic Embedding for Generalizable Person Re-identification

1 code implementation 19 Apr 2023 Suncheng Xiang, Jingsheng Gao, Mengyuan Guan, Jiacheng Ruan, Chengfeng Zhou, Ting Liu, Dahong Qian, Yuzhuo Fu

In this paper, we propose a Multi-Modal Equivalent Transformer called MMET for more robust visual-semantic embedding learning on visual, textual and visual-textual tasks respectively.

Generalizable Person Re-identification Representation Learning

HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge

1 code implementation 14 Apr 2023 Haochun Wang, Chi Liu, Nuwa Xi, Zewen Qiang, Sendong Zhao, Bing Qin, Ting Liu

Large Language Models (LLMs), such as the LLaMA model, have demonstrated their effectiveness in various general-domain natural language processing (NLP) tasks.

Monte Carlo Linear Clustering with Single-Point Supervision is Enough for Infrared Small Target Detection

1 code implementation ICCV 2023 Boyang Li, Yingqian Wang, Longguang Wang, Fei Zhang, Ting Liu, Zaiping Lin, Wei An, Yulan Guo

The core idea of this work is to recover the per-pixel mask of each target from the given single-point label using clustering approaches, which looks simple but is indeed challenging, since targets are always inconspicuous and accompanied by background clutter.

Clustering

Structured Video-Language Modeling with Temporal Grouping and Spatial Grounding

no code implementations 28 Mar 2023 Yuanhao Xiong, Long Zhao, Boqing Gong, Ming-Hsuan Yang, Florian Schroff, Ting Liu, Cho-Jui Hsieh, Liangzhe Yuan

Existing video-language pre-training methods primarily focus on instance-level alignment between video clips and captions via global contrastive learning but neglect rich fine-grained local information in both videos and text, which is of importance to downstream tasks requiring temporal localization and semantic reasoning.

Action Recognition Contrastive Learning +7

Unified Visual Relationship Detection with Vision and Language Models

1 code implementation ICCV 2023 Long Zhao, Liangzhe Yuan, Boqing Gong, Yin Cui, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, Ting Liu

To address this challenge, we propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models (VLMs).

Human-Object Interaction Detection Relationship Detection +2

Steering Prototypes with Prompt-tuning for Rehearsal-free Continual Learning

2 code implementations 16 Mar 2023 Zhuowei Li, Long Zhao, Zizhao Zhang, Han Zhang, Di Liu, Ting Liu, Dimitris N. Metaxas

In the context of continual learning, prototypes, as representative class embeddings, offer advantages in memory conservation and the mitigation of catastrophic forgetting.

Class Incremental Learning Contrastive Learning +1

CluCDD:Contrastive Dialogue Disentanglement via Clustering

1 code implementation 16 Feb 2023 Jingsheng Gao, Zeyu Li, Suncheng Xiang, Ting Liu, Yuzhuo Fu

A huge number of multi-participant dialogues happen online every day, which leads to difficulty in understanding the nature of dialogue dynamics for both humans and machines.

Clustering Contrastive Learning +1

Learning to Generate Image Embeddings with User-level Differential Privacy

1 code implementation CVPR 2023 Zheng Xu, Maxwell Collins, Yuxiao Wang, Liviu Panait, Sewoong Oh, Sean Augenstein, Ting Liu, Florian Schroff, H. Brendan McMahan

Small on-device models have been successfully trained with user-level differential privacy (DP) for next word prediction and image classification tasks in the past.

Federated Learning Image Classification

LERT: A Linguistically-motivated Pre-trained Language Model

1 code implementation 10 Nov 2022 Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu

We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy.

Language Modelling Stock Market Prediction +1

An Empirical Study on Clustering Pretrained Embeddings: Is Deep Strictly Better?

no code implementations 9 Nov 2022 Tyler R. Scott, Ting Liu, Michael C. Mozer, Andrew C. Gallagher

Recent research in clustering face embeddings has found that unsupervised, shallow, heuristic-based methods -- including $k$-means and hierarchical agglomerative clustering -- underperform supervised, deep, inductive methods.

Clustering

MALUNet: A Multi-Attention and Light-weight UNet for Skin Lesion Segmentation

1 code implementation 3 Nov 2022 Jiacheng Ruan, Suncheng Xiang, Mingye Xie, Ting Liu, Yuzhuo Fu

To address this challenge, we propose a light-weight model to achieve competitive performances for skin lesion segmentation at the lowest cost of parameters and computational complexity so far.

Image Segmentation Lesion Segmentation +3

Deep Multimodal Fusion for Generalizable Person Re-identification

1 code implementation 2 Nov 2022 Suncheng Xiang, Hao Chen, Wei Ran, Zefang Yu, Ting Liu, Dahong Qian, Yuzhuo Fu

Person re-identification plays a significant role in realistic scenarios due to its various applications in public security and video surveillance.

Domain Generalization Generalizable Person Re-identification +2

MEW-UNet: Multi-axis representation learning in frequency domain for medical image segmentation

1 code implementation 25 Oct 2022 Jiacheng Ruan, Mingye Xie, Suncheng Xiang, Ting Liu, Yuzhuo Fu

Specifically, our block performs a Fourier transform on the three axes of the input feature and assigns the external weight in the frequency domain, which is generated by our Weights Generator.

Image Segmentation Medical Image Segmentation +2

MTU-Net: Multi-level TransUNet for Space-based Infrared Tiny Ship Detection

1 code implementation 28 Sep 2022 Tianhao Wu, Boyang Li, Yihang Luo, Yingqian Wang, Chao Xiao, Ting Liu, Jungang Yang, Wei An, Yulan Guo

Due to the extremely large image coverage area (e.g., thousands of square kilometers), candidate targets in these images are much smaller, dimmer, and more changeable than targets observed by aerial-based and land-based imaging devices.

Data Augmentation

Prompt Combines Paraphrase: Teaching Pre-trained Models to Understand Rare Biomedical Words

1 code implementation COLING 2022 Haochun Wang, Chi Liu, Nuwa Xi, Sendong Zhao, Meizhi Ju, Shiwei Zhang, Ziheng Zhang, Yefeng Zheng, Bing Qin, Ting Liu

Prompt-based fine-tuning for pre-trained models has proven effective for many natural language processing tasks under few-shot settings in general domain.

Natural Language Inference

DiscrimLoss: A Universal Loss for Hard Samples and Incorrect Samples Discrimination

no code implementations 21 Aug 2022 Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu

To relieve this issue, curriculum learning is proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy-to-hard) sequence.

Image Classification regression

Text Difficulty Study: Do machines behave the same as humans regarding text difficulty?

no code implementations 14 Aug 2022 Bowen Chen, Xiao Ding, Li Du, Qin Bing, Ting Liu

Given a task, humans learn from easy to hard, whereas models learn in a random order.

Multi-stage Moving Target Defense: A Security-enhanced D-FACTS Implementation Approach

no code implementations 2 Jun 2022 Jiazhou Wang, Jue Tian, Yang Liu, Xiaohong Guan, Dong Yang, Ting Liu

We prove that a designed MMTD can significantly improve the detection capability compared to existing one-stage MTDs.

A Graph Enhanced BERT Model for Event Prediction

no code implementations Findings (ACL) 2022 Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin

To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.

Explanation-Guided Fairness Testing through Genetic Algorithm

1 code implementation 16 May 2022 Ming Fan, Wenying Wei, Wuxia Jin, Zijiang Yang, Ting Liu

ExpGA employs the explanation results generated by interpretable methods to collect high-quality initial seeds, which are prone to derive discriminatory samples by slightly modifying feature values.

Attribute Fairness +1

e-CARE: a New Dataset for Exploring Explainable Causal Reasoning

1 code implementation ACL 2022 Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin

Understanding causality has vital importance for various Natural Language Processing (NLP) applications.

valid

Surrogate Gap Minimization Improves Sharpness-Aware Training

2 code implementations ICLR 2022 Juntang Zhuang, Boqing Gong, Liangzhe Yuan, Yin Cui, Hartwig Adam, Nicha Dvornek, Sekhar Tatikonda, James Duncan, Ting Liu

Instead, we define a \textit{surrogate gap}, a measure equivalent to the dominant eigenvalue of the Hessian at a local minimum when the radius of the neighborhood (used to derive the perturbed loss) is small.

PERT: Pre-training BERT with Permuted Language Model

1 code implementation 14 Mar 2022 Yiming Cui, Ziqing Yang, Ting Liu

We permute a proportion of the input text, and the training objective is to predict the position of the original token.

Language Modelling Natural Language Understanding +1

CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment

no code implementations ACL 2022 Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei

We first evaluate CLIP's zero-shot performance on a typical visual question answering task and demonstrate a zero-shot cross-modality transfer capability of CLIP on the visual entailment task.

Question Answering Visual Entailment +1

LEMON: Language-Based Environment Manipulation via Execution-Guided Pre-training

2 code implementations 20 Jan 2022 Qi Shi, Qian Liu, Bei Chen, Yu Zhang, Ting Liu, Jian-Guang Lou

In this work, we propose LEMON, a general framework for language-based environment manipulation tasks.

Language Modelling

A Semantic Web Technology Index

no code implementations 14 Jan 2022 Gongjin Lan, Ting Liu, Xu Wang, Xueli Pan, Zhisheng Huang

In this paper, we propose a Semantic Web (SW) technology index to standardize development, ensure that SW technology work is well designed, and quantitatively evaluate its quality.

Multi-Source Uncertainty Mining for Deep Unsupervised Saliency Detection

no code implementations CVPR 2022 Yifan Wang, Wenbo Zhang, Lijun Wang, Ting Liu, Huchuan Lu

We design an Uncertainty Mining Network (UMNet) which consists of multiple Merge-and-Split (MS) modules to recursively analyze the commonality and difference among multiple noisy labels and infer pixel-wise uncertainty map for each label.

object-detection Object Detection +2

Multi-modal 3D Human Pose Estimation with 2D Weak Supervision in Autonomous Driving

no code implementations 22 Dec 2021 Jingxiao Zheng, Xinwei Shi, Alexander Gorban, Junhua Mao, Yang Song, Charles R. Qi, Ting Liu, Visesh Chari, Andre Cornman, Yin Zhou, CongCong Li, Dragomir Anguelov

3D human pose estimation (HPE) in autonomous vehicles (AV) differs from other use cases in many factors, including the 3D resolution and range of data, absence of dense depth maps, failure modes for LiDAR, relative location between the camera and LiDAR, and a high bar for estimation accuracy.

3D Human Pose Estimation Autonomous Driving

Exploring Temporal Granularity in Self-Supervised Video Representation Learning

no code implementations 8 Dec 2021 Rui Qian, Yeqing Li, Liangzhe Yuan, Boqing Gong, Ting Liu, Matthew Brown, Serge Belongie, Ming-Hsuan Yang, Hartwig Adam, Yin Cui

The training objective consists of two parts: a fine-grained temporal learning objective to maximize the similarity between corresponding temporal embeddings in the short clip and the long clip, and a persistent temporal learning objective to pull together global embeddings of the two clips.

Representation Learning Self-Supervised Learning

One to Multiple Mapping Dual Learning: Learning Multiple Sources from One Mixed Signal

no code implementations 13 Oct 2021 Ting Liu, Wenwu Wang, Xiaofei Zhang, Zhenyin Gong, Yina Guo

Single-channel blind source separation (SCBSS) refers to separating multiple sources from a mixed signal collected by a single sensor.

blind source separation Generative Adversarial Network

Less is More: Learning from Synthetic Data with Fine-grained Attributes for Person Re-Identification

1 code implementation22 Sep 2021 Suncheng Xiang, Guanjie You, Mengyuan Guan, Hao Chen, Binjie Yan, Ting Liu, Yuzhuo Fu

Moreover, to fully exploit the potential of FineGPR and promote efficient training from millions of synthetic images, we propose an attribute analysis pipeline called AOST, which dynamically learns the attribute distribution in the real domain, then eliminates the gap between synthetic and real-world data, and can thus be freely deployed to new scenarios.

Attribute Person Re-Identification +1

Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training

2 code implementations EMNLP 2021 Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei

We find that many languages are under-represented in recent cross-lingual language models due to the limited vocabulary capacity.

Language Modelling

Logic-level Evidence Retrieval and Graph-based Verification Network for Table-based Fact Verification

1 code implementation EMNLP 2021 Qi Shi, Yu Zhang, Qingyu Yin, Ting Liu

Specifically, we first retrieve logic-level program-like evidence from the given table and statement as supplementary evidence for the table.

Fact Verification Retrieval +1

Multilingual Multi-Aspect Explainability Analyses on Machine Reading Comprehension Models

no code implementations26 Aug 2021 Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang

Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).

Machine Reading Comprehension Question Answering +1

Verb Metaphor Detection via Contextual Relation Learning

no code implementations ACL 2021 Wei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, Lizhen Liu

Correct natural language understanding requires computers to distinguish the literal and metaphorical senses of a word.

Natural Language Understanding Relation +1

Neural Stylistic Response Generation with Disentangled Latent Variables

no code implementations ACL 2021 Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang

Generating open-domain conversational responses in the desired style usually suffers from the lack of parallel data in the style.

Response Generation Sentence

Learning Event Graph Knowledge for Abductive Reasoning

1 code implementation ACL 2021 Li Du, Xiao Ding, Ting Liu, Bing Qin

Abductive reasoning aims at inferring the most plausible explanation for observed events, which plays a critical role in various NLP applications, such as reading comprehension and question answering.

Question Answering Reading Comprehension

Chase: A Large-Scale and Pragmatic Chinese Dataset for Cross-Database Context-Dependent Text-to-SQL

no code implementations ACL 2021 Jiaqi Guo, Ziliang Si, Yu Wang, Qian Liu, Ming Fan, Jian-Guang Lou, Zijiang Yang, Ting Liu

However, we identify two biases in existing datasets for XDTS: (1) a high proportion of context-independent questions and (2) a high proportion of easy SQL queries.

Text-To-SQL

ExCAR: Event Graph Knowledge Enhanced Explainable Causal Reasoning

1 code implementation ACL 2021 Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin

ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.

Representation Learning

Guided Generation of Cause and Effect

no code implementations21 Jul 2021 Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme

We present a conditional text generation framework that posits sentential expressions of possible causes and effects.

Conditional Text Generation Knowledge Graphs

CausalBERT: Injecting Causal Knowledge Into Pre-trained Models with Minimal Supervision

no code implementations21 Jul 2021 Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu

Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.

Causal Inference

Non-Convex Tensor Low-Rank Approximation for Infrared Small Target Detection

1 code implementation31 May 2021 Ting Liu, Jungang Yang, Boyang Li, Chao Xiao, Yang Sun, Yingqian Wang, Wei An

Considering that different singular values have different importance and should be treated discriminatively, in this paper, we propose a non-convex tensor low-rank approximation (NTLA) method for infrared small target detection.

Language Model as an Annotator: Exploring DialoGPT for Dialogue Summarization

1 code implementation ACL 2021 Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu

Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.

Conversational Response Generation Language Modelling +1

Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling

no code implementations Findings (ACL) 2021 Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, Ting Liu

However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from jointly learning the two tasks.

Few-Shot Learning Intent Detection +2

ExpMRC: Explainability Evaluation for Machine Reading Comprehension

1 code implementation10 May 2021 Yiming Cui, Ting Liu, Wanxiang Che, Zhigang Chen, Shijin Wang

Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).

Machine Reading Comprehension Multi-Choice MRC +2

DADgraph: A Discourse-aware Dialogue Graph Neural Network for Multiparty Dialogue Machine Reading Comprehension

no code implementations26 Apr 2021 Jiaqi Li, Ming Liu, Zihao Zheng, Heng Zhang, Bing Qin, Min-Yen Kan, Ting Liu

Multiparty Dialogue Machine Reading Comprehension (MRC) differs from traditional MRC as models must handle the complex dialogue discourse structure, previously unconsidered in traditional MRC.

Graph Neural Network Machine Reading Comprehension +1

Learning to Share by Masking the Non-shared for Multi-domain Sentiment Classification

no code implementations17 Apr 2021 Jianhua Yuan, Yanyan Zhao, Bing Qin, Ting Liu

To this end, we propose the BertMasker network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.

General Classification Multi-Domain Sentiment Classification +3
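The masking step described above can be sketched as follows. Note this is a simplified illustration under stated assumptions: the actual BertMasker learns which words to mask, whereas this sketch uses a fixed, hypothetical domain lexicon.

```python
def mask_domain_words(tokens, domain_lexicon, mask_token="[MASK]"):
    """Replace domain-related words with a mask token, yielding a
    domain-agnostic token sequence, and collect the masked words
    separately so they can still inform a domain-aware representation."""
    agnostic, masked = [], []
    for tok in tokens:
        if tok.lower() in domain_lexicon:
            agnostic.append(mask_token)
            masked.append(tok)
        else:
            agnostic.append(tok)
    return agnostic, masked
```

The masked-out text feeds the domain-invariant sentiment encoder, while the collected words form the domain-aware side of the representation.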

Learning from Self-Discrepancy via Multiple Co-teaching for Cross-Domain Person Re-Identification

1 code implementation6 Apr 2021 Suncheng Xiang, Yuzhuo Fu, Mengyuan Guan, Ting Liu

Employing a clustering strategy to assign unlabeled target images pseudo labels has become a trend for person re-identification (re-ID) algorithms in domain adaptation.

Clustering Domain Adaptation +2

A Survey on Spoken Language Understanding: Recent Advances and New Frontiers

1 code implementation4 Mar 2021 Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu

Spoken Language Understanding (SLU) aims to extract the semantic frame of user queries, which is a core component in a task-oriented dialog system.

Spoken Language Understanding

Memory Augmented Sequential Paragraph Retrieval for Multi-hop Question Answering

no code implementations7 Feb 2021 Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu

To deal with this challenge, most of the existing works consider paragraphs as nodes in a graph and propose graph-based methods to retrieve them.

Information Retrieval Multi-hop Question Answering +2

Discovering Dialog Structure Graph for Open-Domain Dialog Generation

no code implementations31 Dec 2020 Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu

Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.

Graph Neural Network Open-Domain Dialog

Co-GAT: A Co-Interactive Graph Attention Network for Joint Dialog Act Recognition and Sentiment Classification

1 code implementation24 Dec 2020 Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu

The dialog context information (contextual information) and the mutual interaction information are two key factors that contribute to the two related tasks.

Graph Attention Sentiment Analysis +1

C2C-GenDA: Cluster-to-Cluster Generation for Data Augmentation of Slot Filling

1 code implementation13 Dec 2020 Yutai Hou, Sanyuan Chen, Wanxiang Che, Cheng Chen, Ting Liu

Slot filling, a fundamental module of spoken language understanding, often suffers from insufficient quantity and diversity of training data.

Data Augmentation slot-filling +2

Learning View-Disentangled Human Pose Representation by Contrastive Cross-View Mutual Information Maximization

1 code implementation CVPR 2021 Long Zhao, Yuxiao Wang, Jiaping Zhao, Liangzhe Yuan, Jennifer J. Sun, Florian Schroff, Hartwig Adam, Xi Peng, Dimitris Metaxas, Ting Liu

To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition.

Action Recognition Contrastive Learning +1

Biomedical Knowledge Graph Refinement with Embedding and Logic Rules

no code implementations2 Dec 2020 Sendong Zhao, Bing Qin, Ting Liu, Fei Wang

This paper proposes BioGRER, a method to improve the BioKG's quality, which comprehensively combines knowledge graph embedding with logic rules that support or negate triplets in the BioKG.

Knowledge Graph Embedding Knowledge Graphs

TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching

1 code implementation COLING 2020 Heng Gong, Yawei Sun, Xiaocheng Feng, Bing Qin, Wei Bi, Xiaojiang Liu, Ting Liu

Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from an insufficient learning problem with limited training data.

Few-Shot Learning Language Modelling +2

Unsupervised Explanation Generation for Machine Reading Comprehension

no code implementations13 Nov 2020 Yiming Cui, Ting Liu, Shijin Wang, Guoping Hu

With the blooming of various Pre-trained Language Models (PLMs), Machine Reading Comprehension (MRC) has seen significant improvements on various benchmarks and even surpassed human performance.

Explanation Generation Machine Reading Comprehension +1

CharBERT: Character-aware Pre-trained Language Model

1 code implementation COLING 2020 Wentao Ma, Yiming Cui, Chenglei Si, Ting Liu, Shijin Wang, Guoping Hu

Most pre-trained language models (PLMs) construct word representations at the subword level with Byte-Pair Encoding (BPE) or its variations, by which OOV (out-of-vocabulary) words can almost always be avoided.

Language Modelling Question Answering +3

HIT-SCIR at MRP 2020: Transition-based Parser and Iterative Inference Parser

no code implementations CONLL 2020 Longxu Dou, Yunlong Feng, Yuqiu Ji, Wanxiang Che, Ting Liu

This paper describes our submission system (HIT-SCIR) for the CoNLL 2020 shared task: Cross-Framework and Cross-Lingual Meaning Representation Parsing.

Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks

1 code implementation CCL 2021 Xiachong Feng, Xiaocheng Feng, Bing Qin, Ting Liu

In detail, we consider utterance and commonsense knowledge as two different types of data and design a Dialogue Heterogeneous Graph Network (D-HGN) for modeling both information.

Abstractive Dialogue Summarization dialogue summary +1

Taking A Closer Look at Synthesis: Fine-grained Attribute Analysis for Person Re-Identification

no code implementations15 Oct 2020 Suncheng Xiang, Yuzhuo Fu, Guanjie You, Ting Liu

Person re-identification (re-ID) plays an important role in applications such as public security and video surveillance.

Attribute GPR +1

A Co-Interactive Transformer for Joint Slot Filling and Intent Detection

1 code implementation8 Oct 2020 Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu

Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module to consider the cross-impact by building a bidirectional connection between the two related tasks.

Intent Detection slot-filling +2
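The bidirectional connection described in this last entry can be sketched as scaled dot-product cross-attention in each direction. This is a minimal, hypothetical sketch, not the paper's architecture: the function names and the single-head, projection-free attention are illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attend(queries, keys, values):
    """Scaled dot-product attention: each query attends over the other
    task's keys/values, mixing in information from the related task."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

def co_interact(slot_reprs, intent_reprs):
    """Bidirectional connection: slot representations attend to intent
    representations and vice versa, so each task conditions on the other."""
    slot_updated = cross_attend(slot_reprs, intent_reprs, intent_reprs)
    intent_updated = cross_attend(intent_reprs, slot_reprs, slot_reprs)
    return slot_updated, intent_updated
```

In a real model both directions would use learned projections and the updated representations would feed the slot-filling and intent-detection heads.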