Search Results for author: Boyang Yu

Found 9 papers, 3 papers with code

4D Facial Expression Diffusion Model

1 code implementation • 29 Mar 2023 • Kaifeng Zou, Sylvain Faisan, Boyang Yu, Sébastien Valette, Hyewon Seo

In this paper, we introduce a generative framework for producing 3D facial expression sequences (i.e., 4D faces) that can be conditioned on different inputs to animate an arbitrary 3D face mesh.

Denoising • Facial Expression Generation

Implicit Regularization Effects of Unbiased Random Label Noises with SGD

no code implementations • 1 Jan 2021 • Haoyi Xiong, Xuhong LI, Boyang Yu, Dejing Dou, Dongrui Wu, Zhanxing Zhu

Random label noise (or observational noise) is widespread in practical machine learning settings.

Gravitational perturbations from NHEK to Kerr

no code implementations • 16 Feb 2021 • Alejandra Castro, Victor Godet, Joan Simón, Wei Song, Boyang Yu

Our aim is to characterise those perturbations that are responsible for the deviations away from extremality, and to contrast them with the linearized perturbations treated in the Newman-Penrose formalism.

High Energy Physics - Theory General Relativity and Quantum Cosmology

Deep Probability Estimation

no code implementations • 21 Nov 2021 • Sheng Liu, Aakash Kaku, Weicheng Zhu, Matan Leibovich, Sreyas Mohan, Boyang Yu, Haoxiang Huang, Laure Zanna, Narges Razavian, Jonathan Niles-Weed, Carlos Fernandez-Granda

Reliable probability estimation is of crucial importance in many real-world applications where there is inherent (aleatoric) uncertainty.
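As a generic illustration of what "reliable probability estimation" means (this is a standard calibration metric, not the paper's own method), the expected calibration error (ECE) compares predicted probabilities against empirical outcome frequencies within confidence bins; the toy predictor below is an assumption for demonstration only:

```python
import numpy as np

# Expected calibration error: bin predictions by confidence, then compare
# each bin's mean predicted probability with its observed positive rate.
def ece(probs, labels, n_bins=10):
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            total += mask.mean() * gap  # weight by fraction of samples in bin
    return total

# Hypothetical perfectly calibrated predictor: it says 0.8, and the event
# indeed occurs in 8 of 10 cases, so the calibration gap is zero.
probs = np.array([0.8] * 10)
labels = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
print(round(ece(probs, labels), 3))  # 0.0
```

A large ECE indicates the model's confidence scores cannot be read as probabilities, which is exactly the failure mode that matters under inherent (aleatoric) uncertainty.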

Autonomous Vehicles • Binary Classification +2

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

no code implementations • 1 Apr 2023 • Haoyi Xiong, Xuhong LI, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou

While previous studies primarily focus on the effects of label noise on learning performance, our work investigates the implicit regularization effects of label noise under the mini-batch sampling settings of stochastic gradient descent (SGD), with the assumption that the label noise is unbiased.
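The setting described above can be sketched with a toy experiment (this is a minimal illustration of unbiased label noise under mini-batch SGD, not the paper's model; the regression task, noise scale, and hyperparameters are all assumptions):

```python
import numpy as np

# Toy setup: linear regression trained by mini-batch SGD, where each batch's
# labels receive zero-mean ("unbiased") Gaussian noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
w_true = rng.normal(size=5)
y = X @ w_true

w = np.zeros(5)
lr, batch, sigma = 0.1, 32, 0.5
for step in range(500):
    idx = rng.choice(len(X), size=batch, replace=False)
    noisy_y = y[idx] + rng.normal(scale=sigma, size=batch)  # E[noise] = 0
    grad = 2.0 * X[idx].T @ (X[idx] @ w - noisy_y) / batch
    w -= lr * grad

# Because the noise has zero mean, the SGD iterates still hover near w_true;
# the noise only injects extra gradient variance, which acts like an implicit
# regularizer rather than biasing the solution.
print(np.linalg.norm(w - w_true) < 0.5)
```

The key point the sketch demonstrates is that unbiased label noise perturbs each stochastic gradient without shifting its expectation, so its effect shows up as added SGD noise rather than as a biased optimum.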

Improved selective background Monte Carlo simulation at Belle II with graph attention networks and weighted events

no code implementations • 12 Jul 2023 • Boyang Yu, Nikolai Hartmann, Luca Schinnerl, Thomas Kuhr

When measuring rare processes at Belle II, a very high luminosity is required, which in turn means that a large number of simulated events is needed to determine signal efficiencies and background contributions.

Graph Attention

Benchmarking Large Language Model Volatility

no code implementations • 26 Nov 2023 • Boyang Yu

The impact of non-deterministic outputs from Large Language Models (LLMs) is not well examined for financial text understanding tasks.

Benchmarking • Decision Making +5
