Search Results for author: Michinari Momma

Found 6 papers, 1 paper with code

Improving Relevance Quality in Product Search using High-Precision Query-Product Semantic Similarity

no code implementations ECNLP (ACL) 2022 Alireza Bagheri Garakani, Fan Yang, Wen-Yu Hua, Yetian Chen, Michinari Momma, Jingyuan Deng, Yan Gao, Yi Sun

Ensuring relevance quality in product search is a critical task, as it impacts the customer’s ability to find intended products in the short term, as well as the general perception of, and trust in, the e-commerce system in the long term.

Re-Ranking · Semantic Similarity +1

Federated Multi-Objective Learning

no code implementations NeurIPS 2023 Haibo Yang, Zhuqing Liu, Jia Liu, Chaosheng Dong, Michinari Momma

In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications.

Federated Learning · Multi-Task Learning

AdaSelection: Accelerating Deep Learning Training through Data Subsampling

no code implementations • 19 Jun 2023 Minghe Zhang, Chaosheng Dong, Jinmiao Fu, Tianchen Zhou, Jia Liang, Jia Liu, Bo Liu, Michinari Momma, Bryan Wang, Yan Gao, Yi Sun

In this paper, we introduce AdaSelection, an adaptive sub-sampling method that identifies the most informative sub-samples within each minibatch, speeding up the training of large-scale deep learning models without sacrificing model performance.
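The core idea of informative sub-sampling can be illustrated with a minimal sketch. The function name and the use of per-sample loss as the "informativeness" score are hypothetical stand-ins; AdaSelection's actual selection criterion may differ.

```python
import numpy as np

def select_informative(losses, keep_frac=0.5):
    """Keep the highest-loss fraction of a minibatch, as an illustrative
    proxy for 'most informative'; the backward pass would then run only
    on the selected indices."""
    k = max(1, int(len(losses) * keep_frac))
    # indices of the k largest per-sample losses, hardest first
    return np.argsort(losses)[-k:][::-1]

losses = np.array([0.1, 2.3, 0.05, 1.7, 0.9, 0.2])
idx = select_informative(losses, keep_frac=0.5)
print(idx.tolist())  # [1, 3, 4] — the three hardest samples
```

Training on roughly half of each minibatch this way halves the backward-pass cost per step; the open question such a method must answer is how to keep the selection unbiased enough that final accuracy does not degrade.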

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data

no code implementations • 13 Dec 2022 Minghong Fang, Jia Liu, Michinari Momma, Yi Sun

In this paper, we propose a new approach called fair recommendation with optimized antidote data (FairRoad), which aims to improve the fairness performance of recommender systems through the construction of a small and carefully crafted antidote dataset.

Fairness · Recommendation Systems
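A toy sketch of what antidote data is meant to accomplish: appending a few synthetic rows that pull a disadvantaged group's statistics toward the rest of the population. The gap metric and the way the antidote rows are chosen here are purely illustrative; FairRoad optimizes the antidote data rather than setting it by hand.

```python
import numpy as np

def fairness_gap(scores_a, scores_b):
    """Toy fairness metric: gap between two user groups' mean scores
    (the paper targets recommender-specific fairness measures)."""
    return abs(scores_a.mean() - scores_b.mean())

prot = np.array([2.0, 3.0, 2.5, 3.5])   # disadvantaged group, mean 2.75
rest = np.array([4.0, 4.5, 5.0, 4.5])   # remaining users, mean 4.5
before = fairness_gap(prot, rest)

# Hypothetical antidote rows: synthetic users whose scores pull the
# disadvantaged group's mean toward the rest of the population.
antidote = np.full(2, rest.mean())
after = fairness_gap(np.concatenate([prot, antidote]), rest)
print(before, after)  # gap shrinks from 1.75 to about 1.17
```

The point of the "small and carefully crafted" framing is that only a handful of such rows are injected, so the system's overall recommendation quality is left essentially untouched.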

Multi-Label Learning to Rank through Multi-Objective Optimization

no code implementations • 7 Jul 2022 Debabrata Mahapatra, Chaosheng Dong, Yetian Chen, Deqiang Meng, Michinari Momma

Moreover, it formulates multiple goals that may be conflicting yet are important to optimize simultaneously, e.g., in product search, a ranking model can be trained on both product quality and purchase likelihood to increase revenue.

Information Retrieval · Learning-To-Rank +2
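The simplest way to combine conflicting ranking objectives is a weighted sum, sketched below. This is only a common baseline for multi-objective training, not the paper's method, and the loss values and names are made up for illustration.

```python
def scalarize(loss_quality, loss_purchase, w):
    """Weighted-sum scalarization of two ranking losses: w trades off
    product quality against purchase likelihood. Inputs are plain
    numbers standing in for batch-averaged losses."""
    return w * loss_quality + (1 - w) * loss_purchase

# Sweeping w traces a trade-off curve between the two goals.
for w in (0.0, 0.5, 1.0):
    print(w, scalarize(0.8, 0.3, w))
```

Multi-objective optimization methods aim to do better than a fixed weighting, e.g., by finding points on the Pareto front where neither objective can improve without hurting the other.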

Multi-objective Ranking via Constrained Optimization

1 code implementation • 13 Feb 2020 Michinari Momma, Alireza Bagheri Garakani, Nanxun Ma, Yi Sun

In this paper, we introduce an Augmented Lagrangian based method to incorporate the multiple objectives (MO) in a search ranking algorithm.
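A 1-D toy of the augmented Lagrangian idea: minimize a primary loss while holding a secondary objective under a bound, alternating an inner minimization with a multiplier update. The quadratic losses and all names below are illustrative stand-ins, not the paper's ranking objectives.

```python
def augmented_lagrangian(b=1.0, rho=10.0, outer=50, inner=200, lr=0.01):
    """Toy augmented-Lagrangian loop: minimize the primary loss
    f0(w) = (w - 3)**2 subject to a secondary objective f1(w) = w <= b.
    Gradient descent handles the inner problem; the multiplier lam is
    raised whenever the constraint is violated."""
    w, lam = 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):  # minimize the augmented Lagrangian over w
            grad = 2 * (w - 3) + max(0.0, lam + rho * (w - b))
            w -= lr * grad
        lam = max(0.0, lam + rho * (w - b))  # dual (multiplier) update
    return w, lam

w, lam = augmented_lagrangian()
print(round(w, 2), round(lam, 1))  # converges to the constrained optimum: 1.0 4.0
```

The appeal for search ranking is that secondary objectives enter as constraints with interpretable bounds, rather than as weights in a scalarized loss that must be tuned by trial and error.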
