Search Results for author: Jiachen Liu

Found 27 papers, 9 papers with code

Joint Training of Candidate Extraction and Answer Selection for Reading Comprehension

no code implementations ACL 2018 Zhen Wang, Jiachen Liu, Xinyan Xiao, Yajuan Lyu, Tian Wu

While sophisticated neural-based techniques have been developed in reading comprehension, most approaches model the answer in an independent manner, ignoring its relations with other answer candidates.

Answer Selection, Reading Comprehension

An Empirical Study of Propagation-based Methods for Video Object Segmentation

no code implementations 30 Jul 2019 Hengkai Guo, Wenji Wang, Guanjun Guo, Huaxia Li, Jiachen Liu, Qian He, Xuefeng Xiao

While propagation-based approaches have achieved state-of-the-art performance for video object segmentation, the literature lacks a fair comparison of different methods using the same settings.

Object, Semantic Segmentation, +2

Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer

1 code implementation ACL 2020 Chulun Zhou, Liang-Yu Chen, Jiachen Liu, Xinyan Xiao, Jinsong Su, Sheng Guo, Hua Wu

Unsupervised style transfer aims to change the style of an input sentence while preserving its original content without using parallel training data.

Denoising, Sentence, +1

Leveraging Graph to Improve Abstractive Multi-Document Summarization

2 code implementations ACL 2020 Wei Li, Xinyan Xiao, Jiachen Liu, Hua Wu, Haifeng Wang, Junping Du

Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries.

Document Summarization, Multi-Document Summarization

FedScale: Benchmarking Model and System Performance of Federated Learning at Scale

3 code implementations 24 May 2021 Fan Lai, Yinwei Dai, Sanjay S. Singapuram, Jiachen Liu, Xiangfeng Zhu, Harsha V. Madhyastha, Mosharaf Chowdhury

We present FedScale, a federated learning (FL) benchmarking suite with realistic datasets and a scalable runtime to enable reproducible FL research.

Benchmarking, Federated Learning, +6

BASS: Boosting Abstractive Summarization with Unified Semantic Graph

no code implementations ACL 2021 Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Ziqiang Cao, Sujian Li, Hua Wu, Haifeng Wang

Abstractive summarization for long documents or multiple documents remains challenging for the Seq2Seq architecture, as Seq2Seq is not good at analyzing long-distance relations in text.

Abstractive Text Summarization, Document Summarization, +2

Controllable Dialogue Generation with Disentangled Multi-grained Style Specification and Attribute Consistency Reward

no code implementations 14 Sep 2021 Zhe Hu, Zhiwei Cao, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Jinsong Su, Hua Wu

Controllable text generation is an appealing but challenging task, which allows users to specify particular attributes of the generated outputs.

Attribute, Dialogue Generation, +1

SgSum: Transforming Multi-document Summarization into Sub-graph Selection

1 code implementation 25 Oct 2021 (EMNLP 2021) Moye Chen, Wei Li, Jiachen Liu, Xinyan Xiao, Hua Wu, Haifeng Wang

Compared with traditional methods, our method has two main advantages: (1) the relations between sentences are captured by modeling both the graph structure of the whole document set and the candidate sub-graphs; (2) it directly outputs an integrated summary in the form of a sub-graph, which is more informative and coherent.

Document Summarization, Multi-Document Summarization, +1

Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods

no code implementations 10 Mar 2022 Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, Hua Wu

In this survey, we provide a systematic overview of the research progress on the faithfulness problem of NLG, including problem analysis, evaluation metrics and optimization methods.

Abstractive Text Summarization, Data-to-Text Generation, +2

UNIMO-2: End-to-End Unified Vision-Language Grounded Learning

1 code implementation Findings (ACL) 2022 Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang

In particular, we propose to conduct grounded learning on both images and texts via a shared grounded space, which helps bridge unaligned images and texts, and align the visual and textual semantic spaces on different types of corpora.
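
One way to picture the "shared grounded space" idea above is the following minimal sketch. It is illustrative only, not UNIMO-2's actual architecture or training objective: image and text features from separate encoders are projected into one space and aligned with a symmetric contrastive loss; all names and shapes here are assumptions.

```python
# Illustrative sketch only (NOT UNIMO-2's architecture or training objective):
# project image and text features into one shared space and pull matched
# pairs together with a symmetric InfoNCE-style contrastive loss.
import torch
import torch.nn.functional as F

def shared_space_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    # img_feats, txt_feats: (batch, dim) outputs of separate encoders (assumed)
    img = F.normalize(img_feats, dim=-1)
    txt = F.normalize(txt_feats, dim=-1)
    logits = img @ txt.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(img.size(0))         # matched pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```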

PLANET: Dynamic Content Planning in Autoregressive Transformers for Long-form Text Generation

no code implementations ACL 2022 Zhe Hu, Hou Pong Chan, Jiachen Liu, Xinyan Xiao, Hua Wu, Lifu Huang

Despite recent progress of pre-trained language models on generating fluent text, existing methods still suffer from incoherence problems in long-form text generation tasks that require proper content control and planning to form a coherent high-level logical flow.

Contrastive Learning, Sentence, +1

PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo

no code implementations CVPR 2022 Jiachen Liu, Pan Ji, Nitin Bansal, Changjiang Cai, Qingan Yan, Xiaolei Huang, Yi Xu

The semantic plane detection branch is based on a single-view plane detection framework but with differences.

3D Reconstruction

End-to-end Graph-constrained Vectorized Floorplan Generation with Panoptic Refinement

no code implementations 27 Jul 2022 Jiachen Liu, Yuan Xue, Jose Duarte, Krishnendra Shekhawat, Zihan Zhou, Xiaolei Huang

In the first stage, we encode the room connectivity graph input by users with a graph convolutional network (GCN), then apply an autoregressive transformer network to generate an initial floorplan sequence.
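
The two-stage pipeline described above (a GCN over the room-connectivity graph, then an autoregressive Transformer producing a floorplan token sequence) can be sketched roughly as below. This is an illustrative approximation, not the paper's implementation; the feature size, vocabulary, and module names are assumptions.

```python
# Illustrative sketch only (NOT the paper's implementation): a room-connectivity
# graph is encoded with a small GCN, and its node embeddings condition an
# autoregressive Transformer decoder that emits a floorplan token sequence.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (num_rooms, dim); adj: (num_rooms, num_rooms), assumed to include self-loops
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(adj @ x / deg))    # mean-aggregate neighbours

class FloorplanGenerator(nn.Module):
    def __init__(self, vocab_size, room_feat_dim=16, dim=128, num_layers=4):
        super().__init__()
        self.room_embed = nn.Linear(room_feat_dim, dim)
        self.gcn = nn.ModuleList([SimpleGCNLayer(dim) for _ in range(2)])
        self.token_embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, room_feats, adj, tokens):
        # Stage 1: encode the user-provided room connectivity graph.
        h = self.room_embed(room_feats)
        for gcn_layer in self.gcn:
            h = gcn_layer(h, adj)
        memory = h.unsqueeze(0)                          # (1, num_rooms, dim)
        # Stage 2: autoregressively decode the floorplan token sequence.
        tgt = self.token_embed(tokens).unsqueeze(0)      # (1, seq_len, dim)
        seq_len = tokens.size(0)
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)                             # next-token logits
```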

Precisely the Point: Adversarial Augmentations for Faithful and Informative Text Generation

no code implementations 22 Oct 2022 Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu

Though model robustness has been extensively studied in language understanding, the robustness of Seq2Seq generation remains understudied.

Informativeness, Text Generation

Auxo: Efficient Federated Learning via Scalable Client Clustering

no code implementations 29 Oct 2022 Jiachen Liu, Fan Lai, Yinwei Dai, Aditya Akella, Harsha Madhyastha, Mosharaf Chowdhury

In this paper, we explore an additional layer of complexity to mitigate such heterogeneity by grouping clients with statistically similar data distributions (cohorts).

Clustering, Federated Learning
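
The cohort idea in the Auxo abstract above can be illustrated with a deliberately simple sketch (this is not Auxo's actual algorithm): cluster clients by the similarity of their label histograms, one crude proxy for "statistically similar data distributions". The client data, class count, and number of cohorts are made-up example values.

```python
# Illustrative sketch only (NOT Auxo's actual algorithm): group FL clients into
# cohorts by clustering their normalized label histograms, a crude proxy for
# "statistically similar data distributions". All example values are made up.
import numpy as np
from sklearn.cluster import KMeans

def label_histogram(labels, num_classes):
    """Normalized class-frequency vector for one client's local labels."""
    hist = np.bincount(labels, minlength=num_classes).astype(float)
    return hist / max(hist.sum(), 1.0)

def group_into_cohorts(client_labels, num_classes, num_cohorts=4, seed=0):
    """Assign each client a cohort id based on label-distribution similarity."""
    feats = np.stack([label_histogram(y, num_classes) for y in client_labels])
    km = KMeans(n_clusters=num_cohorts, n_init=10, random_state=seed)
    return km.fit_predict(feats)

# Example: 100 synthetic clients, 10 classes, skewed label mix per client.
rng = np.random.default_rng(0)
clients = [rng.choice(10, size=200, p=rng.dirichlet(np.ones(10) * 0.3))
           for _ in range(100)]
print(np.bincount(group_into_cohorts(clients, num_classes=10)))  # clients per cohort
```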

FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness

no code implementations 1 Nov 2022 Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Ziqiang Cao, Sujian Li, Hua Wu

We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information.

Abstractive Text Summarization
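
Read literally, the robustness measure described in the FRSUM snippet above is a defense success rate. A minimal sketch of how such a rate could be computed is below; `model`, `attack_fn`, and `is_consistent` are assumed, hypothetical components, and this is not FRSUM's exact evaluation protocol.

```python
# Minimal sketch of a defense success-rate metric in the spirit described above
# (NOT FRSUM's exact evaluation protocol). `model`, `attack_fn`, and
# `is_consistent` are assumed, hypothetical components.
def factual_robustness(model, sources, attack_fn, is_consistent):
    defended = 0
    for src in sources:
        summary = model.generate(attack_fn(src))   # summarize the attacked input
        defended += int(is_consistent(src, summary))
    return defended / max(len(sources), 1)         # fraction of attacks defended
```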

WeCheck: Strong Factual Consistency Checker via Weakly Supervised Learning

1 code implementation 20 Dec 2022 Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu, Sujian Li, Yajuan Lv

As a result, they perform poorly on the real generated text and are biased heavily by their single-source upstream tasks.

Natural Language Inference, Question Answering, +2

NeRF-Enhanced Outpainting for Faithful Field-of-View Extrapolation

no code implementations 23 Sep 2023 Rui Yu, Jiachen Liu, Zihan Zhou, Sharon X. Huang

In various applications, such as robotic navigation and remote visual assistance, expanding the field of view (FOV) of the camera proves beneficial for enhancing environmental perception.

Image Outpainting

3D-Aware Talking-Head Video Motion Transfer

no code implementations 5 Nov 2023 Haomiao Ni, Jiachen Liu, Yuan Xue, Sharon X. Huang

In this paper, we propose a novel 3D-aware talking-head video motion transfer network, Head3D, which fully exploits the subject appearance information by generating a visually-interpretable 3D canonical head from the 2D subject frames with a recurrent network.

Novel View Synthesis

Efficient Large Language Models: A Survey

3 code implementations 6 Dec 2023 Zhongwei Wan, Xin Wang, Che Liu, Samiul Alam, Yu Zheng, Jiachen Liu, Zhongnan Qu, Shen Yan, Yi Zhu, Quanlu Zhang, Mosharaf Chowdhury, Mi Zhang

Large Language Models (LLMs) have demonstrated remarkable capabilities in important tasks such as natural language understanding, language generation, and complex reasoning and have the potential to make a substantial impact on our society.

Natural Language Understanding, Text Generation

Venn: Resource Management Across Federated Learning Jobs

no code implementations 13 Dec 2023 Jiachen Liu, Fan Lai, Ding Ding, Yiwen Zhang, Mosharaf Chowdhury

Scheduling edge resources among multiple FL jobs is different from GPU scheduling for cloud ML because of the ephemeral nature and planetary scale of participating devices as well as the overlapping resource requirements of diverse FL jobs.

Federated Learning, Management, +1

UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion

no code implementations 24 Jan 2024 Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao

This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, which demonstrates a unified ability for both text-driven and subject-driven image generation.

Conditional Image Generation, Denoising, +5
