Search Results for author: Luke Melas-Kyriazi

Found 24 papers, 14 papers with code

Intrinsic Gradient Compression for Scalable and Efficient Federated Learning

no code implementations • FL4NLP (ACL) 2022 • Luke Melas-Kyriazi, Franklyn Wang

Federated learning is a rapidly growing area of research, holding the promise of privacy-preserving distributed training on edge devices.

Federated Learning Learning Theory +1

GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering

1 code implementation • 15 Feb 2024 • Abdullah Hamdi, Luke Melas-Kyriazi, Guocheng Qian, Jinjie Mai, Ruoshi Liu, Carl Vondrick, Bernard Ghanem, Andrea Vedaldi

With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks while requiring less than half the memory storage of Gaussian Splatting and increasing the rendering speed by up to 39%.

3D Reconstruction Novel View Synthesis
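The generalized exponential kernel behind GES replaces the Gaussian's fixed squared exponent with a learnable shape parameter, so each primitive can be peakier or flatter than a Gaussian. A minimal sketch of that kernel in 1D (function and parameter names here are illustrative, not the paper's code; beta = 2 recovers the Gaussian shape up to a scale reparameterization):

```python
import numpy as np

def generalized_exponential(x, alpha=1.0, beta=2.0):
    """Generalized exponential kernel exp(-(|x| / alpha) ** beta).

    beta = 2 matches a Gaussian profile; beta < 2 gives heavier
    tails, beta > 2 a flatter, more box-like falloff -- letting a
    single primitive cover sharp edges a Gaussian would blur.
    """
    return np.exp(-(np.abs(x) / alpha) ** beta)

x = np.linspace(-3, 3, 7)
gaussian_like = generalized_exponential(x, alpha=1.0, beta=2.0)
boxier = generalized_exponential(x, alpha=1.0, beta=8.0)
```

The shape parameter is what buys the memory savings: fewer, better-fitting primitives are needed to cover the same scene.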

IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation

no code implementations • 13 Feb 2024 • Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht, Natalia Neverova, Andrea Vedaldi, Oran Gafni, Filippos Kokkinos

A mitigation is to fine-tune the 2D generator to be multi-view aware, which can help distillation or can be combined with reconstruction networks to output 3D objects directly.

3D Generation 3D Reconstruction +1

Fixed Point Diffusion Models

1 code implementation • 16 Jan 2024 • Xingjian Bai, Luke Melas-Kyriazi

We introduce the Fixed Point Diffusion Model (FPDM), a novel approach to image generation that integrates the concept of fixed point solving into the framework of diffusion-based generative modeling.

Denoising Image Generation
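Fixed point solving means iterating a map until its output stops changing; FPDM embeds a learned layer of this kind inside the diffusion process. A generic illustration with a scalar contraction (a stand-in for the learned map, not the paper's model):

```python
import math

def fixed_point_solve(f, x0, tol=1e-6, max_iters=100):
    """Naive fixed-point iteration: repeat x <- f(x) until the
    update is smaller than tol, i.e. until f(x) is approximately x."""
    x = x0
    for _ in range(max_iters):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# f(x) = cos(x) is a contraction near its unique fixed point
# (the Dottie number, ~0.739), so iteration converges.
root = fixed_point_solve(math.cos, 1.0)
```

In the diffusion setting the payoff is that the solver's iteration count can be traded off against quality at inference time, rather than being fixed by the network depth.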

A Benchmark for Learning to Translate a New Language from One Grammar Book

no code implementations • 28 Sep 2023 • Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke Melas-Kyriazi

In this paper, we introduce MTOB (Machine Translation from One Book), a benchmark for learning to translate between English and Kalamang -- a language with fewer than 200 speakers and therefore virtually no presence on the web -- using several hundred pages of field linguistics reference materials.

In-Context Learning Machine Translation +1

Augmenting medical image classifiers with synthetic data from latent diffusion models

no code implementations • 23 Aug 2023 • Luke W. Sagers, James A. Diao, Luke Melas-Kyriazi, Matthew Groh, Pranav Rajpurkar, Adewole S. Adamson, Veronica Rotemberg, Roxana Daneshjou, Arjun K. Manrai

While hundreds of artificial intelligence (AI) algorithms are now approved or cleared by the US Food and Drug Administration (FDA), many studies have shown inconsistent generalization or latent bias, particularly for underrepresented populations.

Attribute Image Generation

$PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction

2 code implementations • 21 Feb 2023 • Luke Melas-Kyriazi, Christian Rupprecht, Andrea Vedaldi

Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.

3D Reconstruction Denoising

RealFusion: 360° Reconstruction of Any Object from a Single Image

3 code implementations • 21 Feb 2023 • Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, Andrea Vedaldi

We consider the problem of reconstructing a full 360° photographic model of an object from a single image of it.

3D Reconstruction Object

Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding

1 code implementation • 14 Nov 2022 • Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky

In open-ended natural-language generation, existing text decoding methods typically struggle to produce text which is both diverse and high-quality.

Style Transfer Text Generation
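Minimum Bayes risk decoding picks the consensus candidate: instead of the single highest-probability output, it selects the sample with the highest average utility against all other samples. A toy sketch with set-overlap as the utility (the paper's actual utility functions, sampling setup, and models will differ):

```python
def mbr_select(candidates, utility):
    """Minimum Bayes risk selection over sampled candidates.

    Each candidate is scored by its summed utility (similarity)
    against every other candidate; the consensus candidate wins.
    """
    def score(c):
        return sum(utility(c, other) for other in candidates if other is not c)
    return max(candidates, key=score)

def token_overlap(a, b):
    """Jaccard overlap on word sets -- a crude stand-in utility."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

samples = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a dog ran in the park",
]
best = mbr_select(samples, token_overlap)  # the outlier loses
```

The "wisdom of the crowd" framing follows directly: the one off-topic sample scores poorly against the cluster of similar samples, so a candidate agreeing with the majority is returned.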

The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications

1 code implementation • NeurIPS 2023 • Mirac Suzgun, Luke Melas-Kyriazi, Suproteem K. Sarkar, Scott Duke Kominers, Stuart M. Shieber

Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in semi-structured data from patents and patent applications.

Binary Classification Language Modelling +1

Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models

1 code implementation • 23 May 2022 • Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky

We propose a method for arbitrary textual style transfer (TST) -- the task of transforming a text into any given style -- using general-purpose pre-trained language models.

Style Transfer

Intrinsic Gradient Compression for Federated Learning

no code implementations • 5 Dec 2021 • Luke Melas-Kyriazi, Franklyn Wang

Federated learning is a rapidly growing area of research which enables a large number of clients to jointly train a machine learning model on privately held data.

BIG-bench Machine Learning Federated Learning +1
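The abstracts above do not spell out the compression scheme, but one natural reading of "intrinsic" gradient compression is projecting each client's gradient onto a shared low-dimensional subspace, so that only a few coefficients cross the network instead of the full parameter-sized vector. A hypothetical sketch under that assumption (all names and the random-projection choice are illustrative, not the paper's method):

```python
import numpy as np

def make_projection(dim, intrinsic_dim, seed=0):
    """Shared random projection matrix. Server and clients regenerate
    it from the same seed, so only `intrinsic_dim` coefficients need
    to travel over the wire per round."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((dim, intrinsic_dim)) / np.sqrt(intrinsic_dim)

def compress(grad, A):
    """Project a d-dimensional gradient down to k coefficients."""
    return A.T @ grad

def decompress(coeffs, A):
    """Lift k coefficients back to an approximate d-dim gradient."""
    return A @ coeffs

d, k = 1000, 50                      # 20x fewer numbers communicated
A = make_projection(d, k)
g = np.random.default_rng(1).standard_normal(d)
g_hat = decompress(compress(g, A), A)
```

The reconstruction is lossy by construction (k < d); the bet is that useful gradient directions concentrate in a low-dimensional subspace, so the approximation error costs little accuracy.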

Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet

2 code implementations • 6 May 2021 • Luke Melas-Kyriazi

These results indicate that aspects of vision transformers other than attention, such as the patch embedding, may be more responsible for their strong performance than previously thought.

Image Classification

The Mathematical Foundations of Manifold Learning

no code implementations • 30 Oct 2020 • Luke Melas-Kyriazi

Manifold learning is a popular and quickly-growing subfield of machine learning based on the assumption that one's observed data lie on a low-dimensional manifold embedded in a higher-dimensional space.

BIG-bench Machine Learning Dimensionality Reduction

Show, Edit and Tell: A Framework for Editing Image Captions

1 code implementation • CVPR 2020 • Fawaz Sammani, Luke Melas-Kyriazi

Specifically, our caption-editing model consists of two sub-modules: (1) EditNet, a language module with an adaptive copy mechanism (Copy-LSTM) and a Selective Copy Memory Attention mechanism (SCMA), and (2) DCNet, an LSTM-based denoising auto-encoder.

Denoising Image Captioning +1

Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings

no code implementations • WS 2019 • Luke Melas-Kyriazi, George Han, Celine Liang

Recent research points to knowledge distillation as a potential solution, showing that when training data for a given task is abundant, it is possible to distill a large (teacher) LM into a small task-specific (student) network with minimal loss of performance.

General Classification Knowledge Distillation +4

Encoder-Agnostic Adaptation for Conditional Language Generation

1 code implementation • 19 Aug 2019 • Zachary M. Ziegler, Luke Melas-Kyriazi, Sebastian Gehrmann, Alexander M. Rush

Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks.

Conditional Text Generation Language Modelling +2
