Search Results for author: Jianfeng Liu

Found 14 papers, 7 papers with code

Subobject-level Image Tokenization

1 code implementation • 22 Feb 2024 • Delong Chen, Samuel Cahyawijaya, Jianfeng Liu, Baoyuan Wang, Pascale Fung

Transformer-based vision models typically tokenize images into fixed-size square patches as input units, an approach that lacks adaptability to image content and overlooks the inherent pixel-grouping structure.

Attribute • Language Modelling • +1
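
For context, a minimal sketch (not taken from the paper above) of the fixed-size square-patch tokenization that its abstract contrasts with subobject-level tokens; the 224x224 input and 16x16 patch size are illustrative assumptions.

    # Standard ViT-style patchification: an illustrative sketch, not the paper's method.
    import numpy as np

    def patchify(image: np.ndarray, patch_size: int = 16) -> np.ndarray:
        """Split an (H, W, C) image into non-overlapping patch_size x patch_size tokens."""
        h, w, c = image.shape
        h, w = h - h % patch_size, w - w % patch_size            # drop ragged borders
        grid = image[:h, :w].reshape(h // patch_size, patch_size,
                                     w // patch_size, patch_size, c)
        patches = grid.transpose(0, 2, 1, 3, 4)                  # (rows, cols, p, p, c)
        return patches.reshape(-1, patch_size * patch_size * c)  # one flat vector per token

    tokens = patchify(np.zeros((224, 224, 3)), patch_size=16)    # shape (196, 768)

Every token covers the same square area regardless of content, which is the rigidity the paper's subobject-level tokens aim to remove.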

$Se^2$: Sequential Example Selection for In-Context Learning

no code implementations • 21 Feb 2024 • Haoyu Liu, Jianfeng Liu, Shaohan Huang, Yuefeng Zhan, Hao Sun, Weiwei Deng, Furu Wei, Qi Zhang

The remarkable capability of large language models (LLMs) for in-context learning (ICL) needs to be activated by demonstration examples.

In-Context Learning
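
To make concrete what "activated by demonstration examples" means, a minimal k-shot prompt-construction sketch; the template and toy examples are illustrative assumptions, not the selection strategy $Se^2$ proposes.

    # Illustrative k-shot prompt assembly for in-context learning (not Se^2 itself).
    def build_icl_prompt(demonstrations, query: str) -> str:
        """Concatenate (input, output) demonstration pairs ahead of the test query."""
        shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in demonstrations)
        return f"{shots}\n\nInput: {query}\nOutput:"

    prompt = build_icl_prompt([("2+2", "4"), ("3+5", "8")], "7+6")

Which demonstrations go into the prompt, and in what order, is the selection problem the paper addresses.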

Visual Instruction Tuning with Polite Flamingo

2 code implementations • 3 Jul 2023 • Delong Chen, Jianfeng Liu, Wenliang Dai, Baoyuan Wang

This side effect negatively impacts the model's ability to format responses appropriately -- for instance, its "politeness" -- due to the overly succinct and unformatted nature of raw annotations, resulting in reduced human preference.

UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation

1 code implementation • 15 Mar 2023 • Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu, Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang

Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization.

Hallucination • Prompt Engineering • +1

Exploring Contextual Relationships for Cervical Abnormal Cell Detection

1 code implementation • 11 Jul 2022 • Yixiong Liang, Shuo Feng, Qing Liu, Hulin Kuang, Jianfeng Liu, Liyan Liao, Yun Du, Jianxin Wang

To mimic these behaviors, we propose to explore contextual relationships to boost the performance of cervical abnormal cell detection.

Cell Detection

Meet Changes with Constancy: Learning Invariance in Multi-Source Translation

no code implementations • COLING 2020 • Jianfeng Liu, Ling Luo, Xiang Ao, Yan Song, Haoran Xu, Jian Ye

Multi-source neural machine translation aims to translate from parallel sources of information (e.g. languages, images, etc.)

Machine Translation • NMT • +1

GoChat: Goal-oriented Chatbots with Hierarchical Reinforcement Learning

no code implementations • 24 May 2020 • Jianfeng Liu, Feiyang Pan, Ling Luo

A chatbot that converses like a human should be goal-oriented (i.e., be purposeful in conversation), which goes beyond language generation.

Chatbot • Hierarchical Reinforcement Learning • +4

Applying Cyclical Learning Rate to Neural Machine Translation

no code implementations • 6 Apr 2020 • Choon Meng Lee, Jianfeng Liu, Wei Peng

In training deep learning networks, the optimizer and its learning rate are often used without much thought or with minimal tuning, even though they are crucial for fast convergence to a good-quality minimum of the loss function that also generalizes well on the test dataset.

Machine Translation • Translation
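
A minimal sketch of the triangular cyclical learning-rate policy (Smith, 2017), the kind of schedule the paper above applies to NMT training; the base_lr, max_lr, and step_size values are illustrative assumptions, not the paper's settings.

    import math

    def triangular_clr(iteration: int, base_lr: float = 1e-4,
                       max_lr: float = 1e-3, step_size: int = 2000) -> float:
        """Learning rate that ramps linearly between base_lr and max_lr every 2*step_size steps."""
        cycle = math.floor(1 + iteration / (2 * step_size))
        x = abs(iteration / step_size - 2 * cycle + 1)
        return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

    # Rises for step_size iterations, then falls back: 1e-4 at 0, 1e-3 at 2000, 1e-4 at 4000.
    print(triangular_clr(0), triangular_clr(2000), triangular_clr(4000))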

Huawei's NMT Systems for the WMT 2019 Biomedical Translation Task

no code implementations • WS 2019 • Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu

This paper describes Huawei's neural machine translation systems for the WMT 2019 biomedical translation shared task.

Domain Adaptation • Machine Translation • +3

Efficient Misalignment-Robust Multi-Focus Microscopical Images Fusion

1 code implementation • 21 Dec 2018 • Yixiong Liang, Yuan Mao, Zhihong Tang, Meng Yan, Yuqian Zhao, Jianfeng Liu

Our method provides a flexible and efficient way to integrate complementary and redundant information from multiple unregistered, multi-focus ultra-HD images into a fused image that contains a better description than any of the individual input images.

4k • Multi-Focus Microscopical Images Fusion

Scale-Invariant Structure Saliency Selection for Fast Image Fusion

1 code implementation • 30 Oct 2018 • Yixiong Liang, Yuan Mao, Jiazhi Xia, Yao Xiang, Jianfeng Liu

Specifically, we propose a scale-invariant structure saliency selection scheme based on the difference-of-Gaussian (DoG) pyramid of images to build the weights or activity map.
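
In the spirit of that description, a minimal sketch of a DoG-pyramid activity map used as fusion weights; the sigma values, the max-over-scales rule, and the normalized weighting are illustrative assumptions, not the paper's exact scheme.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_activity_map(gray: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)) -> np.ndarray:
        """Per-pixel saliency: max |G(sigma_i) - G(sigma_{i+1})| over pyramid levels."""
        blurred = [gaussian_filter(gray.astype(np.float64), s) for s in sigmas]
        dog = np.stack([np.abs(blurred[i] - blurred[i + 1]) for i in range(len(sigmas) - 1)])
        return dog.max(axis=0)

    def fuse(images) -> np.ndarray:
        """Weight each grayscale source image by its activity map and normalize."""
        weights = np.stack([dog_activity_map(img) for img in images]) + 1e-12
        weights /= weights.sum(axis=0, keepdims=True)
        return (weights * np.stack(images)).sum(axis=0)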
