Search Results for author: Jiachen Liu

Found 33 papers, 11 papers with code

SgSum: Transforming Multi-document Summarization into Sub-graph Selection

1 code implementation · EMNLP 2021 · Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang

Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.

Document Summarization · Multi-Document Summarization · +1

Empowering Backbone Models for Visual Text Generation with Input Granularity Control and Glyph-Aware Training

no code implementations · 6 Oct 2024 · Wenbo Li, Guohao Li, Zhibin Lan, Xue Xu, Wanru Zhuang, Jiachen Liu, Xinyan Xiao, Jinsong Su

Diffusion-based text-to-image models have demonstrated impressive achievements in diversity and aesthetics but struggle to generate images with legible visual texts.

Diversity · Image Generation · +1

The USTC-NERCSLIP Systems for The ICMC-ASR Challenge

no code implementations · 2 Jul 2024 · Minghui Wu, Luzhen Xu, Jie Zhang, Haitao Tang, Yanyan Yue, Ruizhi Liao, Jintao Zhao, Zhengzhe Zhang, Yichi Wang, Haoyin Yan, Hongliang Yu, Tongle Ma, Jiachen Liu, Chongliang Wu, Yongchao Li, Yanyong Zhang, Xin Fang, Yue Zhang

This report describes the system submitted to the In-Car Multi-Channel Automatic Speech Recognition (ICMC-ASR) challenge, which considers the ASR task with multi-speaker overlap and Mandarin accent dynamics in the in-car setting.

Automatic Speech Recognition · Pseudo Label · +5

Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services

no code implementations · 25 Apr 2024 · Jiachen Liu, Zhiyu Wu, Jae-Won Chung, Fan Lai, Myungjin Lee, Mosharaf Chowdhury

The advent of large language models (LLMs) has transformed text-based services, enabling capabilities ranging from real-time translation to AI-driven chatbots.

FedTrans: Efficient Federated Learning via Multi-Model Transformation

no code implementations · 21 Apr 2024 · Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai

Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices.

Federated Learning

UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion

no code implementations · 24 Jan 2024 · Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao

This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, which demonstrates a unified ability for both text-driven and subject-driven image generation.

Conditional Image Generation · Denoising · +6

Venn: Resource Management Across Federated Learning Jobs

no code implementations · 13 Dec 2023 · Jiachen Liu, Fan Lai, Ding Ding, Yiwen Zhang, Mosharaf Chowdhury

Scheduling edge resources among multiple FL jobs is different from GPU scheduling for cloud ML because of the ephemeral nature and planetary scale of participating devices as well as the overlapping resource requirements of diverse FL jobs.

Federated Learning · Management · +1

Efficient Large Language Models: A Survey

3 code implementations · 6 Dec 2023 · Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang

We hope our survey can serve as a valuable resource to help researchers and practitioners gain a systematic understanding of efficient LLMs research and inspire them to contribute to this important and exciting field.

Natural Language Understanding · Survey · +1

3D-Aware Talking-Head Video Motion Transfer

no code implementations · 5 Nov 2023 · Haomiao Ni, Jiachen Liu, Yuan Xue, Sharon X. Huang

In this paper, we propose a novel 3D-aware talking-head video motion transfer network, Head3D, which fully exploits the subject appearance information by generating a visually-interpretable 3D canonical head from the 2D subject frames with a recurrent network.

Novel View Synthesis

NeRF-Enhanced Outpainting for Faithful Field-of-View Extrapolation

no code implementations · 23 Sep 2023 · Rui Yu, Jiachen Liu, Zihan Zhou, Sharon X. Huang

In various applications, such as robotic navigation and remote visual assistance, expanding the field of view (FOV) of the camera proves beneficial for enhancing environmental perception.

Image Outpainting

WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning

1 code implementation · 20 Dec 2022 · Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Sujian Li, Yajuan Lv

As a result, they perform poorly on real generated text and are heavily biased by their single-source upstream tasks.

Natural Language Inference · Question Answering · +2

FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness

no code implementations · 1 Nov 2022 · Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Ziqiang Cao, Sujian Li, Hua Wu

We first measure a model's factual robustness by its success rate to defend against adversarial attacks when generating factual information.

Abstractive Text Summarization

Auxo: Efficient Federated Learning via Scalable Client Clustering

no code implementations · 29 Oct 2022 · Jiachen Liu, Fan Lai, Yinwei Dai, Aditya Akella, Harsha Madhyastha, Mosharaf Chowdhury

In this paper, we explore an additional layer of complexity to mitigate such heterogeneity by grouping clients with statistically similar data distributions (cohorts).

Clustering · Federated Learning

Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation

no code implementations · 22 Oct 2022 · Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu

Though model robustness has been extensively studied in language understanding, the robustness of Seq2Seq generation remains understudied.

Informativeness · Text Generation

End-to-end Graph-constrained Vectorized Floorplan Generation with Panoptic Refinement

no code implementations · 27 Jul 2022 · Jiachen Liu, Yuan Xue, Jose Duarte, Krishnendra Shekhawat, Zihan Zhou, Xiaolei Huang

In the first stage, we encode the room connectivity graph input by users with a graph convolutional network (GCN), then apply an autoregressive transformer network to generate an initial floorplan sequence.

PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation

no code implementations · ACL 2022 · Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang

Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.

Contrastive Learning · Decoder · +2

UNIMO-2: End-to-End Unified Vision-Language Grounded Learning

1 code implementation · Findings (ACL) 2022 · Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang

In particular, we propose to conduct grounded learning on both images and texts via a shared grounded space, which helps bridge unaligned images and texts and aligns the visual and textual semantic spaces across different types of corpora.

Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods

no code implementations · 10 Mar 2022 · Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, Hua Wu

In this survey, we provide a systematic overview of the research progress on the faithfulness problem of NLG, including problem analysis, evaluation metrics and optimization methods.

Abstractive Text Summarization · Data-to-Text Generation · +2

Controllable Dialogue Generation with Disentangled Multi-grained Style Specification and Attribute Consistency Reward

no code implementations · 14 Sep 2021 · Zhe Hu, Zhiwei Cao, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Jinsong Su, Hua Wu

Controllable text generation is an appealing but challenging task, which allows users to specify particular attributes of the generated outputs.

Attribute · Decoder · +3

BASS: Boosting Abstractive Summarization with Unified Semantic Graph

no code implementations · ACL 2021 · Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang

Abstractive summarization for long-document or multi-document remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.

Abstractive Text Summarization · Decoder · +3

FedScale: Benchmarking Model and System Performance of Federated Learning at Scale

3 code implementations · 24 May 2021 · Fan Lai, Yinwei Dai, Sanjay S. Singapuram, Jiachen Liu, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury

We present FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research.

Benchmarking · Federated Learning · +6

Leveraging Graph to Improve Abstractive Multi-Document Summarization

2 code implementations · ACL 2020 · Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, Junping Du

Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries.

Document Summarization · Multi-Document Summarization

Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer

1 code implementation · ACL 2020 · Chulun Zhou, Liang-Yu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, Hua Wu

Unsupervised style transfer aims to change the style of an input sentence while preserving its original content without using parallel training data.

Decoder · Denoising · +2

An Empirical Study of Propagation-based Methods for Video Object Segmentation

no code implementations · 30 Jul 2019 · Hengkai Guo, Wenji Wang, Guanjun Guo, Huaxia Li, Jiachen Liu, Qian He, Xuefeng Xiao

While propagation-based approaches have achieved state-of-the-art performance for video object segmentation, the literature lacks a fair comparison of different methods using the same settings.

Object · Semantic Segmentation · +2

Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension

no code implementations · ACL 2018 · Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, Tian Wu

While sophisticated neural-based techniques have been developed in reading comprehension, most approaches model the answer in an independent manner, ignoring its relations with other answer candidates.

Answer Selection · Reading Comprehension · +1