Search Results for author: Ji Ma

Found 34 papers, 8 papers with code

Multi-stage Training with Improved Negative Contrast for Neural Passage Retrieval

no code implementations EMNLP 2021 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In the context of neural passage retrieval, we study three promising techniques: synthetic data generation, negative sampling, and fusion.

Passage Retrieval · Retrieval +1
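Negative sampling, one of the three techniques named above, is often realized in dual-encoder retrievers as in-batch negative contrast. The sketch below is a generic illustration of that idea, not the paper's exact training recipe; the temperature and embedding sizes are assumptions.

```python
# Minimal sketch of in-batch negative contrast for a dual-encoder retriever.
# Generic illustration only: temperature and embedding sizes are assumptions,
# not this paper's training recipe.
import torch
import torch.nn.functional as F

def in_batch_negative_loss(query_emb, passage_emb, temperature=0.05):
    """query_emb, passage_emb: [batch, dim], aligned so row i is a positive pair.
    Every other passage in the batch serves as a negative for query i."""
    query_emb = F.normalize(query_emb, dim=-1)
    passage_emb = F.normalize(passage_emb, dim=-1)
    # Similarity of every query against every passage in the batch: [batch, batch].
    scores = query_emb @ passage_emb.T / temperature
    # The positive for query i sits on the diagonal.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Random embeddings standing in for encoder outputs.
q, p = torch.randn(8, 128), torch.randn(8, 128)
print(in_batch_negative_loss(q, p))
```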

Memory-Reduced Meta-Learning with Guaranteed Convergence

no code implementations16 Dec 2024 Honglin Yang, Ji Ma, Xiao Yu

Optimization-based meta-learning is gaining traction because of its ability to adapt quickly to a new task using only small amounts of data.

Meta-Learning
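For context on the optimization-based family this paper belongs to, here is a minimal MAML-style inner/outer loop sketch. It illustrates only the generic adaptation structure; the paper's memory-reduced algorithm and its convergence guarantees are not reproduced here.

```python
# Minimal MAML-style sketch of optimization-based meta-learning.
# Generic illustration of the inner/outer loop; not this paper's
# memory-reduced variant.
import torch

def inner_adapt(params, support_x, support_y, lr=0.01):
    """One gradient step on a task's support set, keeping the graph
    so the outer loop can differentiate through the adaptation."""
    pred = support_x @ params
    loss = ((pred - support_y) ** 2).mean()
    (grad,) = torch.autograd.grad(loss, params, create_graph=True)
    return params - lr * grad

meta_params = torch.randn(5, 1, requires_grad=True)
opt = torch.optim.SGD([meta_params], lr=0.001)

for _ in range(10):                    # meta-training iterations
    opt.zero_grad()
    for _ in range(4):                 # a batch of synthetic tasks
        sx, sy = torch.randn(8, 5), torch.randn(8, 1)   # support set
        qx, qy = torch.randn(8, 5), torch.randn(8, 1)   # query set
        adapted = inner_adapt(meta_params, sx, sy)
        query_loss = ((qx @ adapted - qy) ** 2).mean()
        query_loss.backward()          # gradients flow through the inner step
    opt.step()
```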

Pruning All-Rounder: Rethinking and Improving Inference Efficiency for Large Vision Language Models

no code implementations9 Dec 2024 Wei Suo, Ji Ma, Mengyang Sun, Lin Yuanbo Wu, Peng Wang, Yanning Zhang

Although Large Vision-Language Models (LVLMs) have achieved impressive results, their high computational cost poses a significant barrier to wider application.

Self-Supervised Learning

Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games

no code implementations28 Oct 2024 Ji Ma

As Large Language Model (LLM)-based agents increasingly undertake real-world tasks and engage with human society, how well do we understand their behaviors?

Decision Making · Language Modeling +2

Content-decoupled Contrastive Learning-based Implicit Degradation Modeling for Blind Image Super-Resolution

no code implementations10 Aug 2024 Jiang Yuan, Ji Ma, Bo wang, Weiming Hu

Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community due to its excellent generalization to complex degradation scenarios and its wide range of applications.

Blind Super-Resolution · Contrastive Learning +1

AHMF: Adaptive Hybrid-Memory-Fusion Model for Driver Attention Prediction

no code implementations24 Jul 2024 Dongyang Xu, Qingfan Wang, Ji Ma, Xiangyun Zeng, Lei Chen

Accurate driver attention prediction can serve as a critical reference for intelligent vehicles in understanding traffic scenes and making informed driving decisions.

Domain Adaptation · Driver Attention Monitoring +1

C3L: Content Correlated Vision-Language Instruction Tuning Data Generation via Contrastive Learning

no code implementations21 May 2024 Ji Ma, Wei Suo, Peng Wang, Yanning Zhang

Vision-Language Instruction Tuning (VLIT) is a critical training phase for Large Vision-Language Models (LVLMs).

Contrastive Learning

DOZE: A Dataset for Open-Vocabulary Zero-Shot Object Navigation in Dynamic Environments

no code implementations29 Feb 2024 Ji Ma, Hongming Dai, Yao Mu, Pengying Wu, Hao Wang, Xiaowei Chi, Yang Fei, Shanghang Zhang, Chang Liu

Zero-Shot Object Navigation (ZSON) requires agents to autonomously locate and approach unseen objects in unfamiliar environments and has emerged as a particularly challenging task within the domain of Embodied AI.

Attribute · Collision Avoidance +3

VoroNav: Voronoi-based Zero-shot Object Navigation with Large Language Model

no code implementations5 Jan 2024 Pengying Wu, Yao Mu, Bingxian Wu, Yi Hou, Ji Ma, Shanghang Zhang, Chang Liu

In the realm of household robotics, the Zero-Shot Object Navigation (ZSON) task empowers agents to adeptly traverse unfamiliar environments and locate objects from novel categories without prior explicit training.

Language Modeling · Language Modelling +1

OpenMSD: Towards Multilingual Scientific Documents Similarity Measurement

1 code implementation19 Sep 2023 Yang Gao, Ji Ma, Ivan Korotkov, Keith Hall, Dana Alon, Don Metzler

We propose the first multilingual scientific documents dataset, Open-access Multilingual Scientific Documents (OpenMSD), which has 74M papers in 103 languages and 778M citation pairs.

HYRR: Hybrid Infused Reranking for Passage Retrieval

no code implementations20 Dec 2022 Jing Lu, Keith Hall, Ji Ma, Jianmo Ni

We present Hybrid Infused Reranking for Passage Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models.

Passage Retrieval · Retrieval
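A hedged sketch of the hybrid idea the abstract describes: pool candidates from BM25 and a neural retriever by interpolating their normalized scores. The min-max normalization and interpolation weight here are illustrative assumptions, not HYRR's actual construction.

```python
# Hedged sketch of hybrid BM25 + neural candidate pooling for reranker training.
# The min-max normalization and interpolation weight are assumptions; HYRR's
# actual hybrid construction may differ.
def minmax(scores):
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_scores(bm25, dense, alpha=0.5):
    """bm25, dense: {doc_id: score}. Returns interpolated scores over the union."""
    bm25, dense = minmax(bm25), minmax(dense)
    docs = set(bm25) | set(dense)
    return {d: alpha * bm25.get(d, 0.0) + (1 - alpha) * dense.get(d, 0.0)
            for d in docs}

bm25 = {"d1": 12.3, "d2": 9.1, "d3": 4.0}
dense = {"d2": 0.82, "d3": 0.75, "d4": 0.60}
ranked = sorted(hybrid_scores(bm25, dense).items(), key=lambda kv: -kv[1])
print(ranked)
```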

QAmeleon: Multilingual QA with Only 5 Examples

1 code implementation15 Nov 2022 Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, Mirella Lapata

The availability of large, high-quality datasets has been one of the main drivers of recent progress in question answering (QA).

Few-Shot Learning · Question Answering

RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses

no code implementations12 Oct 2022 Honglei Zhuang, Zhen Qin, Rolf Jagerman, Kai Hui, Ji Ma, Jing Lu, Jianmo Ni, Xuanhui Wang, Michael Bendersky

Recently, substantial progress has been made in text ranking based on pretrained language models such as BERT.

Decoder
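One ranking-loss family such work explores is listwise softmax cross-entropy over per-document scores. The sketch below is a generic illustration; the scores are random stand-ins for the model's per-(query, document) outputs, and the paper's exact loss configuration may differ.

```python
# Hedged sketch of a listwise softmax cross-entropy ranking loss, one of the
# ranking-loss families explored in this line of work. Scores are random
# stand-ins for per-(query, document) model outputs.
import torch
import torch.nn.functional as F

def softmax_listwise_loss(scores, relevance):
    """scores: [n_docs] model outputs for one query's candidate list.
    relevance: [n_docs] graded labels; the loss pushes probability mass
    toward relevant documents."""
    log_probs = F.log_softmax(scores, dim=-1)
    target = relevance / relevance.sum()
    return -(target * log_probs).sum()

scores = torch.randn(10)                      # one query, ten candidates
relevance = torch.tensor([1.0] + [0.0] * 9)   # single relevant document
print(softmax_listwise_loss(scores, relevance))
```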

Promptagator: Few-shot Dense Retrieval From 8 Examples

no code implementations23 Sep 2022 Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, Ming-Wei Chang

To amplify the power of a few examples, we propose Prompt-based Query Generation for Retriever (Promptagator), which leverages large language models (LLMs) as few-shot query generators and creates task-specific retrievers from the generated data.

Information Retrieval · Natural Questions +1
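A hedged sketch of few-shot query generation in this style: show the LLM a handful of (passage, query) pairs, then ask it to write a query for a new passage. The prompt wording and the `llm_generate` stub are assumptions; the paper builds task-specific prompts and uses a large LLM.

```python
# Hedged sketch of few-shot query generation in the Promptagator style.
# The prompt wording and llm_generate() stub are assumptions; the paper's
# prompts are task-specific.
FEW_SHOT = [
    ("The mitochondrion produces most of the cell's ATP.",
     "what organelle produces ATP"),
    ("The Treaty of Versailles was signed in 1919.",
     "when was the treaty of versailles signed"),
]

def build_prompt(passage):
    lines = []
    for doc, query in FEW_SHOT:
        lines.append(f"Passage: {doc}\nQuery: {query}\n")
    lines.append(f"Passage: {passage}\nQuery:")
    return "\n".join(lines)

def llm_generate(prompt):        # stand-in for a real LLM call
    return "synthetic query for the new passage"

new_passage = "Photosynthesis converts light energy into chemical energy."
synthetic_query = llm_generate(build_prompt(new_passage))
# (passage, synthetic_query) pairs then train a task-specific retriever.
print(synthetic_query)
```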

Large Dual Encoders Are Generalizable Retrievers

2 code implementations15 Dec 2021 Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y. Zhao, Yi Luan, Keith B. Hall, Ming-Wei Chang, Yinfei Yang

With multi-stage training, surprisingly, scaling up the model size brings significant improvement on a variety of retrieval tasks, especially for out-of-domain generalization.

Domain Generalization · Retrieval +1

Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models

2 code implementations Findings (ACL) 2022 Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B. Hall, Daniel Cer, Yinfei Yang

To support our investigation, we establish a new sentence representation transfer benchmark, SentGLUE, which extends the SentEval toolkit to nine tasks from the GLUE benchmark.

Contrastive Learning · Decoder +4

Neural Passage Retrieval with Improved Negative Contrast

no code implementations23 Oct 2020 Jing Lu, Gustavo Hernandez Abrego, Ji Ma, Jianmo Ni, Yinfei Yang

In this paper we explore the effects of negative sampling in dual encoder models used to retrieve passages for automatic question answering.

Open-Domain Question Answering · Passage Retrieval +3
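Beyond in-batch negatives (sketched earlier in these results), a common negative-sampling variant in this line of work is mining hard negatives from a first-stage retriever. The sketch below is a generic illustration; the paper compares several sampling schemes.

```python
# Hedged sketch of hard-negative mining for dual-encoder training: take
# highly ranked but non-gold passages from a first-stage retriever (e.g. BM25)
# as negatives. Generic illustration only.
def mine_hard_negatives(ranked_passages, gold_ids, k=2):
    """ranked_passages: [(passage_id, score)] from a first-stage retriever,
    best first. Returns the top-k passages that are not labeled positive."""
    negatives = []
    for pid, _ in ranked_passages:
        if pid not in gold_ids:
            negatives.append(pid)
        if len(negatives) == k:
            break
    return negatives

ranked = [("p7", 11.2), ("p3", 10.8), ("p9", 9.5), ("p1", 7.0)]
print(mine_hard_negatives(ranked, gold_ids={"p3"}))  # ['p7', 'p9']
```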

QURIOUS: Question Generation Pretraining for Text Generation

no code implementations23 Apr 2020 Shashi Narayan, Gonçalo Simoes, Ji Ma, Hannah Craighead, Ryan Mcdonald

Recent trends in natural language processing have shifted focus towards pretraining and fine-tuning approaches for text generation.

Abstractive Text Summarization · Language Modeling +4

The Practical Challenges of Active Learning: Lessons Learned from Live Experimentation

no code implementations28 Jun 2019 Jean-François Kagy, Tolga Kayadelen, Ji Ma, Afshin Rostamizadeh, Jana Strnadova

We tested, in a live setting, the use of active learning to select text sentences for human annotation when training a Thai segmentation machine learning model.

Active Learning
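As background for the selection strategy under test, here is a sketch of uncertainty sampling, one standard active-learning criterion: route the sentences the current model is least sure about to human annotators. This is a generic illustration, not the paper's exact setup.

```python
# Hedged sketch of uncertainty sampling: label the sentences the current
# model is least confident about. Generic illustration only.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(unlabeled, model_probs, budget=2):
    """unlabeled: list of sentences; model_probs: per-sentence class
    probabilities from the current model. Returns the highest-entropy items."""
    scored = sorted(zip(unlabeled, model_probs),
                    key=lambda sp: -entropy(sp[1]))
    return [sentence for sentence, _ in scored[:budget]]

sentences = ["sent A", "sent B", "sent C"]
probs = [[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]]
print(select_for_annotation(sentences, probs))  # ['sent B', 'sent C']
```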

State-of-the-art Chinese Word Segmentation with Bi-LSTMs

1 code implementation EMNLP 2018 Ji Ma, Kuzman Ganchev, David Weiss

A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation.

Chinese Word Segmentation
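The standard formulation behind such models casts segmentation as character-level tagging with a bidirectional LSTM. The sketch below is a generic illustration; the tag set, dimensions, and vocabulary size are assumptions rather than the paper's configuration.

```python
# Hedged sketch of Chinese word segmentation as character tagging with a
# Bi-LSTM. Tag set, dimensions, and vocabulary size are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=128, n_tags=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_tags)  # e.g. B/M/E/S tags

    def forward(self, char_ids):                   # [batch, seq_len]
        states, _ = self.lstm(self.emb(char_ids))
        return self.proj(states)                   # [batch, seq_len, n_tags]

model = BiLSTMSegmenter()
chars = torch.randint(0, 5000, (2, 10))            # two 10-character sentences
print(model(chars).shape)                          # torch.Size([2, 10, 4])
```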

Natural Language Processing with Small Feed-Forward Networks

1 code implementation EMNLP 2017 Jan A. Botha, Emily Pitler, Ji Ma, Anton Bakalov, Alex Salcianu, David Weiss, Ryan Mcdonald, Slav Petrov

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models.
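In the spirit of that claim, a compact feed-forward tagger over hashed features might look like the sketch below. Feature hashing, window size, and all dimensions are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of a small feed-forward tagger over hashed features, in the
# spirit of the abstract's claim. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class TinyFFTagger(nn.Module):
    def __init__(self, n_buckets=1 << 14, emb_dim=16, window=3, n_tags=10):
        super().__init__()
        self.emb = nn.Embedding(n_buckets, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(window * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, n_tags),
        )

    def forward(self, hashed_window):   # [batch, window] hashed feature ids
        flat = self.emb(hashed_window).flatten(1)
        return self.mlp(flat)

def hash_feature(s, n_buckets=1 << 14):
    return hash(s) % n_buckets          # stand-in for a stable feature hash

model = TinyFFTagger()
ids = torch.tensor([[hash_feature(w) for w in ("the", "cat", "sat")]])
print(model(ids).shape)                 # torch.Size([1, 10])
```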
