Search Results for author: Wenzhe Shi

Found 18 papers, 5 papers with code

Is the deconvolution layer the same as a convolutional layer?

6 code implementations • 22 Sep 2016 • Wenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszár, Andrew Aitken, Christian Ledig, Zehan Wang

In this note, we want to focus on aspects related to two questions most people asked us at CVPR about the network we presented.
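The note's central observation is that a strided deconvolution can be implemented as an ordinary convolution in low-resolution space followed by a periodic shuffling of output channels into a higher-resolution grid (the sub-pixel convolution of the authors' CVPR network). A minimal numpy sketch of that shuffling step — the function name and shapes here are illustrative, not taken from the note:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) array into (C, H*r, W*r).

    Each group of r^2 consecutive channels is interleaved into an
    r x r spatial neighbourhood of the upscaled output.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    # split channels into (C, r, r), then interleave with the spatial axes:
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

x = np.arange(4 * 2 * 2).reshape(4, 2, 2)  # 4 channels = 1 * r^2 with r=2
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4)
```

Because the shuffle is just an index permutation, the convolution that precedes it does all the learning in the cheap low-resolution space.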

Lossy Image Compression with Compressive Autoencoders

4 code implementations • 1 Mar 2017 • Lucas Theis, Wenzhe Shi, Andrew Cunningham, Ferenc Huszár

We propose a new approach to the problem of optimizing autoencoders for lossy image compression.

Image Compression

Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize

3 code implementations • 10 Jul 2017 • Andrew Aitken, Christian Ledig, Lucas Theis, Jose Caballero, Zehan Wang, Wenzhe Shi

Compared to sub-pixel convolution initialized with schemes designed for standard convolution kernels, it is free from checkerboard artifacts immediately after initialization.
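The initialization scheme proposed here (commonly referred to as ICNR) fills the r^2 channel groups of a sub-pixel convolution kernel with identical copies of one base kernel, so that immediately after initialization the layer behaves like nearest-neighbour resize followed by convolution. A hedged numpy sketch — the function name and shape conventions are mine:

```python
import numpy as np

def icnr_init(out_channels, in_channels, k, r, rng=None):
    """Initialise a sub-pixel conv kernel of shape
    (out_channels * r^2, in_channels, k, k) so that every group of r^2
    consecutive output channels shares one base kernel.  After the pixel
    shuffle, identical sub-kernels produce a nearest-neighbour-upsampled
    output, which is free of checkerboard artifacts at initialization."""
    rng = np.random.default_rng() if rng is None else rng
    base = rng.standard_normal((out_channels, in_channels, k, k))
    # repeat each base filter r^2 times along the output-channel axis
    return np.repeat(base, r * r, axis=0)

w = icnr_init(8, 16, 3, 2)
print(w.shape)  # (32, 16, 3, 3)
```

The copies diverge during training, so expressiveness is not reduced; only the starting point changes.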

Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks

no code implementations • 16 Nov 2017 • Joost van Amersfoort, Wenzhe Shi, Alejandro Acosta, Francisco Massa, Johannes Totz, Zehan Wang, Jose Caballero

To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses.

Generative Adversarial Network
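A composite objective of this general shape — two content terms plus a small adversarial term — can be sketched as below. The weights, the stand-in feature extractor, and the function name are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def perceptual_loss(pred, target, disc_score, feature_fn=None,
                    w_pix=1.0, w_feat=1.0, w_adv=1e-3):
    """Toy composite loss for a synthesised frame: a pixel-space MSE, a
    feature-space MSE (feature_fn standing in for, e.g., a pretrained
    network), and an adversarial term rewarding frames the discriminator
    scores as real (disc_score in (0, 1])."""
    feature_fn = feature_fn or (lambda z: z)  # identity as a placeholder
    l_pix = np.mean((pred - target) ** 2)
    l_feat = np.mean((feature_fn(pred) - feature_fn(target)) ** 2)
    l_adv = -np.log(disc_score + 1e-8)  # generator-side adversarial loss
    return w_pix * l_pix + w_feat * l_feat + w_adv * l_adv
```

Keeping the adversarial weight small is a common choice: the content terms anchor the frame to the ground truth while the adversarial term sharpens texture.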

Fast Face-swap Using Convolutional Neural Networks

no code implementations • ICCV 2017 • Iryna Korshunova, Wenzhe Shi, Joni Dambre, Lucas Theis

We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting.

Face Swapping • Style Transfer

Amortised MAP Inference for Image Super-resolution

no code implementations • 14 Oct 2016 • Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár

We show that, using this architecture, the amortised MAP inference problem reduces to minimising the cross-entropy between two distributions, similar to training generative models.

Denoising • Image Super-Resolution • +1

Patch-based Evaluation of Image Segmentation

no code implementations • CVPR 2014 • Christian Ledig, Wenzhe Shi, Wenjia Bai, Daniel Rueckert

The ideal similarity measure should be unbiased to segmentations of different volume and complexity, and be able to quantify and visualise segmentation bias.

Hippocampus • Image Segmentation • +2

Addressing Delayed Feedback for Continuous Training with Neural Networks in CTR prediction

no code implementations • 15 Jul 2019 • Sofia Ira Ktena, Alykhan Tejani, Lucas Theis, Pranay Kumar Myana, Deepak Dilipkumar, Ferenc Huszár, Steven Yoo, Wenzhe Shi

The focus of this paper is to identify the best combination of loss functions and models that enable large-scale learning from a continuous stream of data in the presence of delayed labels.

Click-Through Rate Prediction

Smile, Be Happy :) Emoji Embedding for Visual Sentiment Analysis

no code implementations • 14 Jul 2019 • Ziad Al-Halah, Andrew Aitken, Wenzhe Shi, Jose Caballero

Additionally, we introduce a novel emoji representation based on their visual emotional response which supports a deeper understanding of the emoji modality and their usage on social media.

Sentiment Analysis • Transfer Learning

Deep Bayesian Bandits: Exploring in Online Personalized Recommendations

no code implementations • 3 Aug 2020 • Dalin Guo, Sofia Ira Ktena, Ferenc Huszár, Pranay Kumar Myana, Wenzhe Shi, Alykhan Tejani

Recommender systems trained in a continuous learning fashion are plagued by the feedback loop problem, also known as algorithmic bias.

Recommendation Systems
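Thompson sampling is the classic bandit remedy for this feedback loop: drawing reward estimates from a posterior keeps uncertain items in circulation instead of always exploiting the current point estimate. A toy Beta-Bernoulli sketch (the paper's deep, neural-posterior setting is far richer; everything below is illustrative):

```python
import numpy as np

def thompson_select(successes, failures, rng):
    """Sample one CTR estimate per item from its Beta posterior and pick
    the argmax; posterior uncertainty is what drives exploration."""
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

rng = np.random.default_rng(0)
true_ctr = np.array([0.02, 0.05, 0.10])  # hypothetical item CTRs
s = np.zeros(3)  # observed clicks per item
f = np.zeros(3)  # observed non-clicks per item
for _ in range(5000):
    arm = thompson_select(s, f, rng)
    click = rng.random() < true_ctr[arm]
    s[arm] += click
    f[arm] += 1 - click
print(int(np.argmax(s + f)))  # index of the most-shown item
```

Early on all three items get impressions; as evidence accumulates, the posteriors concentrate and traffic shifts toward the genuinely best item rather than whichever one the initial model happened to favour.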

On Gradient Boosted Decision Trees and Neural Rankers: A Case-Study on Short-Video Recommendations at ShareChat

no code implementations • 4 Dec 2023 • Olivier Jeunen, Hitesh Sagtani, Himanshu Doi, Rasul Karimov, Neeti Pokharna, Danish Kalim, Aleksei Ustimenko, Christopher Green, Wenzhe Shi, Rishabh Mehrotra

We highlight that (1) neural networks' ability to handle large training datasets and user and item embeddings allows for more accurate models than GBDTs in this setting, and (2) because GBDTs are less reliant on specialised hardware, they can provide an equally accurate model at a lower cost.
