Search Results for author: Yu Yu

Found 10 papers, 2 papers with code

Measuring Robustness for NLP

no code implementations COLING 2022 Yu Yu, Abdul Rafae Khan, Jia Xu

The quality of Natural Language Processing (NLP) models is typically measured by the accuracy or error rate of a predefined test set.

Machine Translation Sentiment Analysis
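A minimal sketch of the standard test-set evaluation the abstract refers to: accuracy is the fraction of correct predictions and error rate is its complement. The labels below are made up for illustration.

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference labels."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical sentiment predictions against gold labels.
preds = ["positive", "negative", "positive", "positive"]
golds = ["positive", "negative", "negative", "positive"]

acc = accuracy(preds, golds)
print(f"accuracy = {acc:.2f}, error rate = {1 - acc:.2f}")  # accuracy = 0.75, error rate = 0.25
```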

Can Data Diversity Enhance Learning Generalization?

no code implementations COLING 2022 Yu Yu, Shahram Khadivi, Jia Xu

This paper introduces our Diversity Advanced Actor-Critic reinforcement learning (A2C) framework (DAAC) to improve the generalization and accuracy of Natural Language Processing (NLP) models.

Domain Adaptation Language Modelling +7
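A minimal sketch of a single advantage actor-critic (A2C) update, the base algorithm the DAAC framework builds on. The tiny policy and value networks and the batch of (state, action, return) tuples are illustrative placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4
actor = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
critic = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

states = torch.randn(16, state_dim)           # batch of observed states
actions = torch.randint(0, n_actions, (16,))  # actions that were taken
returns = torch.randn(16)                     # observed (discounted) returns

dist = torch.distributions.Categorical(logits=actor(states))
values = critic(states).squeeze(-1)
advantages = returns - values.detach()        # advantage = return - value baseline

actor_loss = -(dist.log_prob(actions) * advantages).mean()
critic_loss = (returns - values).pow(2).mean()
loss = actor_loss + 0.5 * critic_loss - 0.01 * dist.entropy().mean()

opt.zero_grad()
loss.backward()
opt.step()
```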

Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition

no code implementations 19 Jan 2024 Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, Jari Kolehmainen, Roger Ren, Denis Filimonov, Prashanth G. Shivakumar, Ankur Gandhe, Ariya Rastrow, Jia Xu, Ivan Bulyko, Andreas Stolcke

The use of low-rank adaptation (LoRA) with frozen pretrained language models (PLMs) has become increasingly popular as a mainstream, resource-efficient modeling approach for memory-constrained hardware.

Language Modelling speech-recognition +1
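A minimal sketch of the low-rank adaptation (LoRA) idea the abstract refers to: a frozen weight matrix W is adapted as W + (alpha / r) * B A, where only the small rank-r factors A and B are trained. Shapes and hyperparameters here are illustrative, not those used in the paper.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: starts as identity adapter
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768)
out = layer(torch.randn(2, 10, 768))  # only lora_A and lora_B receive gradients
print(out.shape)                      # torch.Size([2, 10, 768])
```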

Type-Aware Decomposed Framework for Few-Shot Named Entity Recognition

2 code implementations 13 Feb 2023 Yongqi Li, Yu Yu, Tieyun Qian

Despite the recent success achieved by several two-stage prototypical networks in the few-shot named entity recognition (NER) task, the overdetected false spans at the span detection stage and the inaccurate and unstable prototypes at the type classification stage remain challenging problems.

Contrastive Learning Few-shot NER +3
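A minimal sketch of the prototypical classification step the abstract refers to (the second stage of a two-stage few-shot NER pipeline): each entity type gets a prototype, the mean embedding of its support spans, and a query span is assigned to the nearest prototype. The embeddings below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
support = {                        # entity type -> embeddings of a few labelled support spans
    "PER": rng.normal(size=(5, 64)),
    "ORG": rng.normal(size=(5, 64)),
    "LOC": rng.normal(size=(5, 64)),
}
prototypes = {t: e.mean(axis=0) for t, e in support.items()}

query_span = rng.normal(size=64)   # embedding of a candidate span from the detection stage
distances = {t: np.linalg.norm(query_span - p) for t, p in prototypes.items()}
predicted_type = min(distances, key=distances.get)
print(predicted_type, distances)
```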

Using AntiPatterns to avoid MLOps Mistakes

no code implementations 30 Jun 2021 Nikhil Muralidhar, Sathappan Muthiah, Patrick Butler, Manish Jain, Yu Yu, Katy Burne, Weipeng Li, David Jones, Prakash Arunachalam, Hays 'Skip' McCormick, Naren Ramakrishnan

We describe lessons learned from developing and deploying machine learning models at scale across the enterprise in a range of financial analytics applications.

The Weak Lensing Peak Statistics in the Mocks by the inverse-Gaussianization Method

1 code implementation 29 Jan 2020 Zhao Chen, Yu Yu, Xiangkun Liu, Zuhui Fan

We apply the inverse-Gaussianization method proposed in arXiv:1607.05007 to rapidly produce weak lensing convergence maps and investigate the peak statistics, including the peak height counts and peak steepness counts, in these mocks.

Cosmology and Nongalactic Astrophysics
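A minimal sketch of peak-height counting on a convergence map: a pixel is a peak if it is the maximum of its local neighbourhood, and peaks are binned by height, often expressed in units of the map rms. The Gaussian random field below is only a stand-in for the inverse-Gaussianization mocks used in the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(1)
kappa = rng.normal(size=(256, 256))               # placeholder convergence map

is_peak = kappa == maximum_filter(kappa, size=3)  # local maxima within a 3x3 window
peak_heights = kappa[is_peak]

nu = peak_heights / kappa.std()                   # peak height in units of the map rms
counts, edges = np.histogram(nu, bins=np.arange(-1.0, 5.0, 0.5))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:+.1f} .. {hi:+.1f}: {n} peaks")
```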

Unsupervised Representation Learning for Gaze Estimation

no code implementations CVPR 2020 Yu Yu, Jean-Marc Odobez

Although automatic gaze estimation is very important to a large variety of application areas, it is difficult to train accurate and robust gaze models, in large part due to the difficulty of collecting large and diverse data (annotating 3D gaze is expensive and existing datasets use different setups).

Gaze Estimation gaze redirection +2

Improving Few-Shot User-Specific Gaze Adaptation via Gaze Redirection Synthesis

no code implementations CVPR 2019 Yu Yu, Gang Liu, Jean-Marc Odobez

In this work, we address the problem of person-specific gaze model adaptation from only a few reference training samples.

Domain Adaptation Gaze Estimation +1

A Differential Approach for Gaze Estimation

no code implementations 20 Apr 2019 Gang Liu, Yu Yu, Kenneth A. Funes Mora, Jean-Marc Odobez

Non-invasive gaze estimation methods usually regress gaze directions directly from a single face or eye image.

Gaze Estimation
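A minimal sketch of the direct-regression baseline the abstract describes: a small CNN maps a single grey-scale eye image to a (yaw, pitch) gaze angle. The architecture and input size are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

gaze_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 9 * 15, 64), nn.ReLU(),
    nn.Linear(64, 2),                      # gaze direction as (yaw, pitch) in radians
)

eye_patch = torch.randn(1, 1, 36, 60)      # one 36x60 eye crop
print(gaze_net(eye_patch))                 # predicted (yaw, pitch)
```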
