Search Results for author: Jiangtong Li

Found 19 papers, 9 papers with code

Multi-Patch Prediction: Adapting LLMs for Time Series Representation Learning

no code implementations 7 Feb 2024 Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu

In this study, we present aLLM4TS, an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning.

Contrastive Learning · Representation Learning +3

DreamCom: Finetuning Text-guided Inpainting Model for Image Composition

no code implementations27 Sep 2023 Lingxiao Lu, Jiangtong Li, Bo Zhang, Li Niu

The goal of image composition is to merge a foreground object into a background image to obtain a realistic composite image.

Image Inpainting · Object +1

CFGPT: Chinese Financial Assistant with Large Language Model

1 code implementation 19 Sep 2023 Jiangtong Li, Yuxuan Bian, Guoxuan Wang, Yang Lei, Dawei Cheng, Zhijun Ding, Changjun Jiang

The CFAPP is centered on large language models (LLMs) and augmented with additional modules to ensure multifaceted functionality in real-world applications.

Decision Making · Language Modelling +2

Deep Image Harmonization in Dual Color Spaces

1 code implementation 5 Aug 2023 Linfeng Tan, Jiangtong Li, Li Niu, Liqing Zhang

The network comprises an $RGB$ harmonization backbone, a $Lab$ encoding module, and a $Lab$ control module.

Image Harmonization

Painterly Image Harmonization using Diffusion Model

1 code implementation 4 Aug 2023 Lingxiao Lu, Jiangtong Li, Junyan Cao, Li Niu, Liqing Zhang

Painterly image harmonization aims to insert photographic objects into paintings and obtain artistically coherent composite images.

Generative Adversarial Network · Image Harmonization +1

Knowledge Proxy Intervention for Deconfounded Video Question Answering

no code implementations ICCV 2023 Jiangtong Li, Li Niu, Liqing Zhang

To tackle the challenge that the confounder in VideoQA is unobserved and non-enumerable in general, we propose a model-agnostic framework called Knowledge Proxy Intervention (KPI), which introduces an extra knowledge proxy variable in the causal graph to cut the backdoor path and remove the confounder.

Question Answering · Video Question Answering

Fast Object Placement Assessment

1 code implementation 28 May 2022 Li Niu, Qingyang Liu, Zhenchen Liu, Jiangtong Li

However, given a pair of scaled foreground and background, to enumerate all the reasonable locations, the existing OPA model must place the foreground at each location on the background and pass each resulting composite image through the model one at a time, which is very time-consuming.
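The exhaustive per-location scoring described above can be sketched as follows (an illustrative Python sketch, not the actual OPA model; `score_composite`, the toy scorer, and the grid stride are hypothetical stand-ins):

```python
# Illustrative sketch of the exhaustive placement evaluation the abstract
# calls time-consuming: every candidate location produces one composite,
# and each composite needs its own forward pass through the model.

def enumerate_placements(bg_w, bg_h, fg_w, fg_h, stride, score_composite):
    """Score the scaled foreground at every grid location on the background."""
    scores = {}
    for y in range(0, bg_h - fg_h + 1, stride):
        for x in range(0, bg_w - fg_w + 1, stride):
            # One model forward pass per candidate location.
            scores[(x, y)] = score_composite(x, y)
    return scores

# Hypothetical stand-in for the OPA model: prefers locations near the centre.
def toy_scorer(x, y):
    return -abs(x - 50) - abs(y - 50)

scores = enumerate_placements(100, 100, 20, 20, 10, toy_scorer)
best = max(scores, key=scores.get)  # 81 forward passes for this small grid
```

The sketch only makes concrete why exhaustive enumeration scales with the number of candidate locations; a fast OPA model, as the paper proposes, avoids the per-location forward pass.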

Object

OPA: Object Placement Assessment Dataset

3 code implementations 5 Jul 2021 Liu Liu, Zhenchen Liu, Bo Zhang, Jiangtong Li, Li Niu, Qingyang Liu, Liqing Zhang

Image composition aims to generate a realistic composite image by inserting an object from one image into a background image, where the placement (e.g., location, size, occlusion) of the inserted object may be unreasonable, which would significantly degrade the quality of the composite image.

Object

Zero-Shot Sketch-Based Image Retrieval with Structure-aware Asymmetric Disentanglement

no code implementations29 Nov 2019 Jiangtong Li, Zhixin Ling, Li Niu, Liqing Zhang

The goal of Sketch-Based Image Retrieval (SBIR) is to use free-hand sketches to retrieve images of the same category from a natural-image gallery.

Disentanglement · Retrieval +2

Subword ELMo

no code implementations 18 Sep 2019 Jiangtong Li, Hai Zhao, Zuchao Li, Wei Bi, Xiaojiang Liu

Embedding from Language Models (ELMo) has been shown to be effective for improving many natural language processing (NLP) tasks; ELMo composes word representations from character information to train language models. However, the character is an insufficient and unnatural linguistic unit for word representation. Thus, we introduce Embedding from Subword-aware Language Models (ESuLMo), which learns word representations from subwords obtained by unsupervised segmentation over words. We show that ESuLMo enhances four benchmark NLP tasks more effectively than ELMo: syntactic dependency parsing, semantic role labeling, implicit discourse relation recognition, and textual entailment.
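Subword segmentation of the kind ESuLMo builds on can be illustrated with a generic byte-pair-encoding (BPE) style sketch; this is not the paper's actual unsupervised segmenter, and the toy corpus and merge count are hypothetical:

```python
from collections import Counter

def learn_bpe_merges(words, num_merges):
    """Learn merge rules from a {word: frequency} dict (generic BPE sketch)."""
    vocab = {tuple(w): f for w, f in words.items()}  # words as symbol tuples
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere, fusing the pair into one subword symbol.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = learn_bpe_merges({"lower": 2, "lowest": 3, "newer": 1}, 3)
```

After a few merges, frequent character sequences such as "lowe" emerge as subword units, which is the kind of unit a subword-aware language model can embed in place of raw characters.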

Dependency Parsing · Natural Language Inference +1

Lattice-Based Transformer Encoder for Neural Machine Translation

no code implementations ACL 2019 Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen

To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representations automatically during training.

Machine Translation · NMT +1

Judging Chemical Reaction Practicality From Positive Sample only Learning

no code implementations 22 Apr 2019 Shu Jiang, Zhuosheng Zhang, Hai Zhao, Jiangtong Li, Yang Yang, Bao-liang Lu, Ning Xia

Chemical reaction practicality is the core task in symbol-intelligence-based chemical information processing; for example, it provides an indispensable clue for further automatic synthesis route inference.

Fast Neural Chinese Word Segmentation for Long Sentences

no code implementations 6 Nov 2018 Sufeng Duan, Jiangtong Li, Hai Zhao

Rapidly developed neural models have achieved performance in Chinese word segmentation (CWS) competitive with their traditional counterparts.

Chinese Word Segmentation · Segmentation +1

Modeling Multi-turn Conversation with Deep Utterance Aggregation

1 code implementation COLING 2018 Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu

In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.

Conversational Response Selection · Retrieval

SJTU-NLP at SemEval-2018 Task 9: Neural Hypernym Discovery with Term Embeddings

no code implementations SEMEVAL 2018 Zhuosheng Zhang, Jiangtong Li, Hai Zhao, Bingjie Tang

This paper describes a hypernym discovery system for our participation in the SemEval-2018 Task 9, which aims to discover the best (set of) candidate hypernyms for input concepts or entities, given the search space of a pre-defined vocabulary.

Hypernym Discovery
