Search Results for author: Li Meng

Found 12 papers, 3 papers with code

A Manifold Representation of the Key in Vision Transformers

no code implementations • 1 Feb 2024 • Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

The query, key, and value are often intertwined and generated within the attention blocks via a single, shared linear transformation.

Instance Segmentation, object-detection +2
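
A minimal sketch of the shared-projection pattern the abstract refers to, in which one linear layer emits query, key, and value together (generic transformer practice; the class name is my own, not from the paper):

```python
import torch
import torch.nn as nn

class SharedQKV(nn.Module):
    """Standard attention input projection: a single linear layer
    produces Q, K, and V, so the three are entangled through one
    shared weight matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, tokens, dim) -> three (batch, tokens, dim) tensors
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        return q, k, v
```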

State Representation Learning Using an Unbalanced Atlas

no code implementations • 17 May 2023 • Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

The manifold hypothesis posits that high-dimensional data often lies on a lower-dimensional manifold and that utilizing this manifold as the target space yields more efficient representations.

Dimensionality Reduction, Representation Learning +1
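
A toy illustration of that hypothesis (my example, not from the paper): scikit-learn's swiss roll embeds a 2-D surface in 3-D space, so every 3-D observation is fully described by two intrinsic coordinates.

```python
from sklearn.datasets import make_swiss_roll

# 1,000 points observed in 3-D that actually lie on a 2-D manifold:
# each point is determined by (t, height) despite its 3 ambient coordinates.
X, t = make_swiss_roll(n_samples=1000, noise=0.05)
print(X.shape)    # (1000, 3) -- ambient dimension
print(t.shape)    # (1000,)   -- intrinsic position along the roll
height = X[:, 1]  # the second intrinsic coordinate
```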

Unsupervised Representation Learning in Partially Observable Atari Games

1 code implementation • 13 Mar 2023 • Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

Contrastive methods have performed better than generative models in previous state representation learning research.

Atari Games, Representation Learning
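
For context, a generic InfoNCE objective of the kind such contrastive methods optimize; this is a sketch under my own naming, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE: pull each anchor toward its own positive and away from
    every other sample in the batch (in-batch negatives)."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature   # (N, N) cosine similarities
    targets = torch.arange(a.size(0))  # matching pairs sit on the diagonal
    return F.cross_entropy(logits, targets)
```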

Deep Reinforcement Learning with Swin Transformers

1 code implementation • 30 Jun 2022 • Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

Transformers are neural network models that utilize multiple layers of self-attention heads and have exhibited enormous potential in natural language processing tasks.

Atari Games, reinforcement-learning +1
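
Since the paper builds on Swin, a minimal sketch of the windowed self-attention idea follows (simplified: single head, no window shifting, no relative position bias; all names are mine):

```python
import math
import torch

def windowed_self_attention(x, w_qkv, window):
    """Swin-style local attention: partition tokens into fixed-size
    windows and run scaled dot-product attention inside each one.
    x: (num_tokens, dim); w_qkv: (dim, 3 * dim);
    assumes num_tokens % window == 0."""
    q, k, v = (x @ w_qkv).chunk(3, dim=-1)
    q, k, v = (t.reshape(-1, window, t.size(-1)) for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    out = torch.softmax(scores, dim=-1) @ v  # (windows, window, dim)
    return out.reshape(-1, out.size(-1))     # back to (num_tokens, dim)
```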

Improving the Diversity of Bootstrapped DQN by Replacing Priors With Noise

no code implementations • 2 Mar 2022 • Li Meng, Morten Goodwin, Anis Yazidi, Paal Engelstad

In this article, we further explore the possibility of replacing priors with noise, sampling the noise from a Gaussian distribution to introduce more diversity into this algorithm.

Atari Games, Q-Learning
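
An illustrative reading of that idea, not the authors' code: Bootstrapped DQN with randomized priors adds a frozen prior network's output to each ensemble head, and the sketch below swaps that prior for additive Gaussian noise (the noise placement and scale here are my assumptions):

```python
import torch
import torch.nn as nn

class NoisyBootstrappedQ(nn.Module):
    """K independent Q-heads; Gaussian noise stands in for the fixed
    prior network to diversify the ensemble (illustrative sketch)."""
    def __init__(self, obs_dim, n_actions, n_heads=10, sigma=0.1):
        super().__init__()
        self.sigma = sigma
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, n_actions))
            for _ in range(n_heads))

    def forward(self, obs):
        qs = torch.stack([h(obs) for h in self.heads])   # (K, batch, actions)
        if self.training:
            qs = qs + self.sigma * torch.randn_like(qs)  # assumed placement
        return qs
```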

Nana-HDR: A Non-attentive Non-autoregressive Hybrid Model for TTS

no code implementations • 28 Sep 2021 • Shilun Lin, Wenchao Su, Li Meng, Fenglong Xie, Xinhui Li, Li Lu

Thirdly, a duration predictor, rather than an attention model, connects the above hybrid encoder and decoder.
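
A minimal sketch of the duration-based mechanism such non-attentive models typically use in place of attention, in the style of a FastSpeech-like length regulator (a generic pattern, not necessarily Nana-HDR's exact design):

```python
import torch

def length_regulate(encoder_out, durations):
    """Expand each encoder frame by its predicted integer duration so the
    decoder can run non-autoregressively, replacing attention alignment.
    encoder_out: (phonemes, dim); durations: (phonemes,) frame counts."""
    return torch.repeat_interleave(encoder_out, durations, dim=0)

# e.g. 3 phoneme encodings expanded to 1 + 3 + 2 = 6 decoder frames
frames = length_regulate(torch.randn(3, 8), torch.tensor([1, 3, 2]))
print(frames.shape)  # torch.Size([6, 8])
```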

Expert Q-learning: Deep Reinforcement Learning with Coarse State Values from Offline Expert Examples

no code implementations • 28 Jun 2021 • Li Meng, Anis Yazidi, Morten Goodwin, Paal Engelstad

Using the board game Othello, we compare our algorithm with a baseline Q-learning algorithm that combines Double Q-learning and Dueling Q-learning.

Imitation Learning, Q-Learning +2
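
For reference, the Double Q-learning half of that baseline in sketch form; `q_online` and `q_target` are assumed network handles, and the dueling decomposition is omitted:

```python
import torch

def double_dqn_target(q_online, q_target, next_obs, reward, done, gamma=0.99):
    """Double Q-learning target: the online net picks the next action,
    the target net evaluates it -- the decoupling that curbs
    overestimation."""
    with torch.no_grad():
        best = q_online(next_obs).argmax(dim=1, keepdim=True)
        next_q = q_target(next_obs).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```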

Triple M: A Practical Text-to-speech Synthesis System With Multi-guidance Attention And Multi-band Multi-time LPCNet

no code implementations • 30 Jan 2021 • Shilun Lin, Fenglong Xie, Li Meng, Xinhui Li, Li Lu

In this work, a robust and efficient text-to-speech (TTS) synthesis system named Triple M is proposed for large-scale online application.

Sentence, Speech Synthesis +1

Face Recognition: From Traditional to Deep Learning Methods

2 code implementations • 31 Oct 2018 • Daniel Sáez Trigueros, Li Meng, Margaret Hartnett

Starting in the seventies, face recognition has become one of the most researched topics in computer vision and biometrics.

BIG-bench Machine Learning, Face Recognition

Enhancing Convolutional Neural Networks for Face Recognition with Occlusion Maps and Batch Triplet Loss

no code implementations • 25 Jul 2017 • Daniel Sáez Trigueros, Li Meng, Margaret Hartnett

Despite the recent success of convolutional neural networks for computer vision applications, unconstrained face recognition remains a challenge.

Face Recognition
