Search Results for author: Zhihan Zhang

Found 24 papers, 12 papers with code

SPAN: Spatial Pyramid Attention Network for Image Manipulation Localization

1 code implementation ECCV 2020 Xuefeng Hu, Zhihan Zhang, Zhenye Jiang, Syomantak Chaudhuri, Zhenheng Yang, Ram Nevatia

Techniques for manipulating images are advancing rapidly; while these are helpful for many useful tasks, they also pose a threat to society with their ability to create believable misinformation.

Image Manipulation, Image Manipulation Detection +3

Large Language Models Can Self-Correct with Minimal Effort

no code implementations 23 May 2024 Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan Tan, Chao Shen, Meng Jiang

The condition can be an entity in an open-domain question or a numeric value in a math question, which requires minimal effort (via prompting) to identify.

Arithmetic Reasoning, Math +1

Quantum-Classical Separations in Shallow-Circuit-Based Learning with and without Noises

no code implementations 1 May 2024 Zhihan Zhang, Weiyuan Gong, Weikang Li, Dong-Ling Deng

In addition, for quantum devices with constant noise strength, we prove that no super-polynomial classical-quantum separation exists for any classification task defined by shallow Clifford circuits, independent of the structures of the circuits that specify the learning models.

Describe-then-Reason: Improving Multimodal Mathematical Reasoning through Visual Comprehension Training

no code implementations 22 Apr 2024 Mengzhao Jia, Zhihan Zhang, Wenhao Yu, Fangkai Jiao, Meng Jiang

Open-source multimodal large language models (MLLMs) excel in various tasks involving textual and visual inputs but still struggle with complex multimodal mathematical reasoning, lagging behind proprietary models like GPT-4V(ision) and Gemini-Pro.

Math, Mathematical Reasoning

PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning

1 code implementation 15 Nov 2023 Zhihan Zhang, Dong-Ho Lee, Yuwei Fang, Wenhao Yu, Mengzhao Jia, Meng Jiang, Francesco Barbieri

Instruction tuning has remarkably advanced large language models (LLMs) in understanding and responding to diverse human instructions.

Instruction Following

Auto-Instruct: Automatic Instruction Generation and Ranking for Black-Box Language Models

no code implementations 19 Oct 2023 Zhihan Zhang, Shuohang Wang, Wenhao Yu, Yichong Xu, Dan Iter, Qingkai Zeng, Yang Liu, Chenguang Zhu, Meng Jiang

Large language models (LLMs) can perform a wide range of tasks by following natural language instructions, without the necessity of task-specific fine-tuning.

Exploring and Characterizing Large Language Models For Embedded System Development and Debugging

no code implementations 7 Jul 2023 Zachary Englhardt, Richard Li, Dilini Nissanka, Zhihan Zhang, Girish Narayanswamy, Joseph Breda, Xin Liu, Shwetak Patel, Vikram Iyer

We leverage this finding to study how human programmers interact with these tools, and develop a human-AI-based software engineering workflow for building embedded systems.

Improving Language Models via Plug-and-Play Retrieval Feedback

no code implementations 23 May 2023 Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal

ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner.


Pre-training Language Models for Comparative Reasoning

no code implementations 23 May 2023 Mengxia Yu, Zhihan Zhang, Wenhao Yu, Meng Jiang

Comparative reasoning is a process of comparing objects, concepts, or entities to draw conclusions, which constitutes a fundamental cognitive ability.

Question Answering, Question Generation +1

Exploring Contrast Consistency of Open-Domain Question Answering Systems on Minimally Edited Questions

1 code implementation 23 May 2023 Zhihan Zhang, Wenhao Yu, Zheng Ning, Mingxuan Ju, Meng Jiang

Contrast consistency, the ability of a model to make consistently correct predictions in the presence of perturbations, is an essential aspect in NLP.

Data Augmentation, Language Modelling +4

Large Language Models are Built-in Autoregressive Search Engines

1 code implementation 16 May 2023 Noah Ziems, Wenhao Yu, Zhihan Zhang, Meng Jiang

To overcome this limitation, recent autoregressive search engines replace the dual-encoder architecture by directly generating identifiers for relevant documents in the candidate pool.

Open-Domain Question Answering, Retrieval

StarCoder: may the source be with you!

4 code implementations 9 May 2023 Raymond Li, Loubna Ben allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de Vries

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention.

8k, Code Generation

Retrieval Augmentation for Commonsense Reasoning: A Unified Approach

1 code implementation 23 Oct 2022 Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang

However, applying such methods to commonsense reasoning tasks faces two unique challenges, i.e., the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever.


A Unified Encoder-Decoder Framework with Entity Memory

1 code implementation 7 Oct 2022 Zhihan Zhang, Wenhao Yu, Chenguang Zhu, Meng Jiang

The entity knowledge is stored in the memory as latent representations, and the memory is pre-trained on Wikipedia along with encoder-decoder parameters.

Decoder, Question Answering +2

On the Relationship Between Counterfactual Explainer and Recommender

no code implementations 9 Jul 2022 Gang Liu, Zhihan Zhang, Zheng Ning, Meng Jiang

To enable explainability, recent techniques such as ACCENT and FIA are looking for counterfactual explanations that are specific historical actions of a user, the removal of which leads to a change to the recommendation result.

Collaborative Filtering, counterfactual +2

A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods

no code implementations 7 Apr 2022 Zhihan Zhang, Wenhao Yu, Mengxia Yu, Zhichun Guo, Meng Jiang

Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP) because it improves the performance of related tasks by exploiting their commonalities and differences.

Multi-Task Learning

Knowledge-Aware Procedural Text Understanding with Multi-Stage Training

no code implementations 28 Sep 2020 Zhihan Zhang, Xiubo Geng, Tao Qin, Yunfang Wu, Daxin Jiang

In this work, we focus on the task of procedural text understanding, which aims to comprehend such documents and track entities' states and locations during a process.

Procedural Text Understanding

SPAN: Spatial Pyramid Attention Network for Image Manipulation Localization

no code implementations 1 Sep 2020 Xuefeng Hu, Zhihan Zhang, Zhenye Jiang, Syomantak Chaudhuri, Zhenheng Yang, Ram Nevatia

We present a novel framework, Spatial Pyramid Attention Network (SPAN) for detection and localization of multiple types of image manipulations.


DCA: Diversified Co-Attention towards Informative Live Video Commenting

no code implementations 7 Nov 2019 Zhihan Zhang, Zhiyi Yin, Shuhuai Ren, Xinhang Li, Shicheng Li

In this paper, we aim to collect diversified information from video and text for informative comment generation.

Comment Generation, Metric Learning

Cross-Modal Commentator: Automatic Machine Commenting Based on Cross-Modal Information

1 code implementation ACL 2019 Pengcheng Yang, Zhihan Zhang, Fuli Luo, Lei Li, Chengyang Huang, Xu Sun

Automatic commenting of online articles can provide additional opinions and facts to the reader, which improves user experience and engagement on social media platforms.

Comment Generation
