Search Results for author: Iordanis Fostiropoulos

Found 12 papers, 8 papers with code

Multimodal Phased Transformer for Sentiment Analysis

1 code implementation EMNLP 2021 Junyan Cheng, Iordanis Fostiropoulos, Barry Boehm, Mohammad Soleymani

We evaluate our model on three sentiment analysis datasets and achieve performance comparable or superior to existing methods, with a 90% reduction in the number of parameters.

Sentiment Analysis

Stellar: Systematic Evaluation of Human-Centric Personalized Text-to-Image Methods

no code implementations 11 Dec 2023 Panos Achlioptas, Alexandros Benetatos, Iordanis Fostiropoulos, Dimitris Skourtis

In this work, we systematically study the problem of personalized text-to-image generation, where the output image is expected to portray information about specific human subjects.

Text-to-Image Generation

Batch Model Consolidation: A Multi-Task Model Consolidation Framework

1 code implementation CVPR 2023 Iordanis Fostiropoulos, Jiaye Zhu, Laurent Itti

During the consolidation phase, we combine the knowledge learned by 'batches' of expert models using a batched consolidation loss on memory data that aggregates all buffers.

Continual Learning · Image Classification

Lightweight Learner for Shared Knowledge Lifelong Learning

1 code implementation 24 May 2023 Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti

We propose a new Shared Knowledge Lifelong Learning (SKILL) challenge, which deploys a decentralized population of LL agents that each sequentially learn different tasks, with all agents operating independently and in parallel.

Image Classification

Reproducibility Requires Consolidated Artifacts

no code implementations 21 May 2023 Iordanis Fostiropoulos, Bowman Brown, Laurent Itti

Machine learning is facing a 'reproducibility crisis' where a significant number of works report failures when attempting to reproduce previously published results.

Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network

no code implementations 26 Nov 2022 Iordanis Fostiropoulos, Laurent Itti

Inspired by the recent success of prototypical and contrastive learning frameworks for both improving robustness and learning nuance-invariant representations, we propose a training framework, Supervised Contrastive Prototype Learning (SCPL).

Classification · Contrastive Learning

Implicit Feature Decoupling with Depthwise Quantization

1 code implementation CVPR 2022 Iordanis Fostiropoulos, Barry Boehm

We use Depthwise Quantization (DQ) in the context of a Hierarchical Auto-Encoder and train end-to-end on an image feature representation.

Quantization

Graph Conditioned Sparse-Attention for Improved Source Code Understanding

1 code implementation 1 Dec 2021 Junyan Cheng, Iordanis Fostiropoulos, Barry Boehm

Fusing a graph representation such as an Abstract Syntax Tree (AST) with a source code sequence makes current approaches computationally intractable for large input sequence lengths.

Code Summarization · Variable Misuse

Learning Hyperbolic Representations of Topological Features

1 code implementation ICLR 2021 Panagiotis Kyriakis, Iordanis Fostiropoulos, Paul Bogdan

Learning task-specific representations of persistence diagrams is an important problem in topological data analysis and machine learning.

Image Classification · Topological Data Analysis

GN-Transformer: Fusing AST and Source Code information in Graph Networks

no code implementations1 Jan 2021 Junyan Cheng, Iordanis Fostiropoulos, Barry Boehm

Unlike natural language, source code understanding is governed by grammar relations between tokens regardless of their identifier names.

Code Summarization · Source Code Summarization

Depthwise Discrete Representation Learning

1 code implementation 11 Apr 2020 Iordanis Fostiropoulos

Recent advancements in learning discrete representations, as opposed to continuous ones, have led to state-of-the-art results in tasks involving language, audio, and vision.

Quantization · Representation Learning
