Search Results for author: Yuki M. Asano

Found 58 papers, 31 papers with code

Self-supervised Normality Learning and Divergence Vector-guided Model Merging for Zero-shot Congenital Heart Disease Detection in Fetal Ultrasound Videos

no code implementations10 Mar 2025 Pramit Saha, Divyanshu Mishra, Netzahualcoyotl Hernandez-Cruz, Olga Patey, Aris Papageorghiou, Yuki M. Asano, J. Alison Noble

To address these challenges, we introduce a novel privacy-preserving, zero-shot CHD detection framework, the first to formulate CHD detection as a normality modeling problem integrated with model merging.

Anomaly Detection Privacy Preserving +1

Redefining Normal: A Novel Object-Level Approach for Multi-Object Novelty Detection

1 code implementation15 Dec 2024 Mohammadreza Salehi, Nikolaos Apostolikas, Efstratios Gavves, Cees G. M. Snoek, Yuki M. Asano

Adapting to our object-level definition of 'normal', we modify knowledge distillation frameworks, in which a student network learns from a pre-trained teacher network.

Knowledge Distillation Novelty Detection +1
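
The snippet above only names the mechanism; as a rough illustration (a minimal sketch with hypothetical module names, not the paper's exact objective), teacher-student distillation for novelty scoring amounts to training a student to match a frozen teacher on 'normal' data and scoring test inputs by their feature discrepancy:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical sketch: a frozen, pre-trained teacher and a trainable student.
# Novelty is scored by the feature discrepancy between the two networks.
teacher = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
student = models.resnet18(weights=None)

for p in teacher.parameters():
    p.requires_grad_(False)

def distillation_loss(images):
    """Train the student to reproduce the teacher's features on 'normal' objects."""
    with torch.no_grad():
        t_feat = teacher(images)
    return F.mse_loss(student(images), t_feat)

def novelty_score(images):
    """At test time, a large teacher-student discrepancy flags a novel object."""
    with torch.no_grad():
        diff = (student(images) - teacher(images)) ** 2
    return diff.mean(dim=1)  # one score per image
```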

TULIP: Token-length Upgraded CLIP

1 code implementation13 Oct 2024 Ivona Najdenkoska, Mohammad Mahdi Derakhshani, Yuki M. Asano, Nanne van Noord, Marcel Worring, Cees G. M. Snoek

By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation.

Position Text-to-Image Generation

TVBench: Redesigning Video-Language Evaluation

no code implementations10 Oct 2024 Daniel Cores, Michael Dorkenwald, Manuel Mucientes, Cees G. M. Snoek, Yuki M. Asano

Large language models have demonstrated impressive performance when integrated with vision models, even enabling video understanding.

Multiple-choice Open-Ended Question Answering +3

Do better language models have crisper vision?

no code implementations9 Oct 2024 Jona Ruthardt, Gertjan J. Burghouts, Serge Belongie, Yuki M. Asano

To this end, we propose the Visual Text Representation Benchmark (ViTeRB) to isolate key properties that make language models well-aligned with the visual world.

Decoder

Self-Masking Networks for Unsupervised Adaptation

1 code implementation11 Sep 2024 Alfonso Taboada Warmerdam, Mathilde Caron, Yuki M. Asano

We validate the usefulness of learning binary masks as a fine-tuning method on 8 datasets and 3 model architectures, and we demonstrate the effectiveness of SMNs in 3 label-efficient settings.
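
As a generic illustration of fine-tuning by learning binary weight masks (a sketch of the general technique under stated assumptions, not the paper's exact SMN formulation), a masked linear layer trained with a straight-through estimator looks roughly like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Pre-trained weights stay frozen; only a per-weight score is learned.
    The forward pass applies a binarised mask via a straight-through estimator.
    (Assumes the wrapped layer has a bias.)"""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach(), requires_grad=False)
        self.scores = nn.Parameter(torch.zeros_like(self.weight))  # trainable mask logits

    def forward(self, x):
        hard = (self.scores > 0).float()            # binary mask used in the forward pass
        soft = torch.sigmoid(self.scores)
        mask = hard + soft - soft.detach()          # straight-through gradient for the scores
        return F.linear(x, self.weight * mask, self.bias)
```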

Foundation Model or Finetune? Evaluation of few-shot semantic segmentation for river pollution

1 code implementation5 Sep 2024 Marga Don, Stijn Pinson, Blanca Guillen Cebrian, Yuki M. Asano

In this work, we compare the performance of FMs to finetuned pre-trained supervised models in the task of semantic segmentation on an entirely new dataset.

Few-Shot Semantic Segmentation Semantic Segmentation

SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery

2 code implementations26 Aug 2024 Sarah Rastegar, Mohammadreza Salehi, Yuki M. Asano, Hazel Doughty, Cees G. M. Snoek

In this paper, we address Generalized Category Discovery, aiming to simultaneously uncover novel categories and accurately classify known ones.

Contrastive Learning

Near, far: Patch-ordering enhances vision foundation models' scene understanding

no code implementations20 Aug 2024 Valentinos Pariza, Mohammadreza Salehi, Gertjan Burghouts, Francesco Locatello, Yuki M. Asano

We introduce NeCo: Patch Neighbor Consistency, a novel self-supervised training loss that enforces patch-level nearest neighbor consistency across a student and teacher model.

Scene Understanding Self-Supervised Learning +1
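
A toy version of patch-level neighbour consistency (a simplification with assumed shapes, not NeCo's exact loss) can be written as matching the student's and teacher's similarity distributions over a shared reference set of patches:

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(student_patches, teacher_patches, reference, tau=0.07):
    """student_patches, teacher_patches: (N, D) patch embeddings of the same image;
    reference: (M, D) embeddings acting as candidate nearest neighbours.
    The student's similarity distribution over the reference set is pushed towards
    the teacher's, a soft proxy for identical neighbour ordering."""
    ref = F.normalize(reference, dim=-1)
    s = F.normalize(student_patches, dim=-1) @ ref.T
    t = F.normalize(teacher_patches, dim=-1) @ ref.T
    return F.kl_div(F.log_softmax(s / tau, dim=-1),
                    F.softmax(t / tau, dim=-1), reduction="batchmean")
```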

Scaling Backwards: Minimal Synthetic Pre-training?

1 code implementation1 Aug 2024 Ryo Nakamura, Ryu Tadokoro, Ryosuke Yamada, Yuki M. Asano, Iro Laina, Christian Rupprecht, Nakamasa Inoue, Rio Yokota, Hirokatsu Kataoka

To this end, we search for a minimal, purely synthetic pre-training dataset that allows us to achieve performance similar to the 1 million images of ImageNet-1k.

Transfer Learning

SIGMA: Sinkhorn-Guided Masked Video Modeling

no code implementations22 Jul 2024 Mohammadreza Salehi, Michael Dorkenwald, Fida Mohammad Thoker, Efstratios Gavves, Cees G. M. Snoek, Yuki M. Asano

To tackle this, we present Sinkhorn-guided Masked Video Modelling (SIGMA), a novel video pretraining method that jointly learns the video model in addition to a target feature space using a projection network.
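
The Sinkhorn guidance mentioned here builds on balanced, optimal-transport-style assignments; a standard Sinkhorn-Knopp normalisation routine (a generic building block, not SIGMA's full pipeline) is sketched below:

```python
import torch

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Turn a (batch, prototypes) similarity matrix into a balanced soft assignment
    by alternately normalising columns and rows (Sinkhorn-Knopp iterations)."""
    Q = torch.exp(scores / eps)
    Q /= Q.sum()
    B, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=0, keepdim=True)  # each prototype receives equal total mass
        Q /= K
        Q /= Q.sum(dim=1, keepdim=True)  # each sample becomes a distribution
        Q /= B
    return Q * B  # rows sum to 1 again
```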

No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations

1 code implementation15 Jul 2024 Walter Simoncini, Spyros Gidaris, Andrei Bursuc, Yuki M. Asano

This paper introduces FUNGI, Features from UNsupervised GradIents, a method to enhance the features of transformer encoders by leveraging self-supervised gradients.

All Image Retrieval +3
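
A minimal sketch of the underlying idea, using an assumed cosine-alignment objective and a small projection head rather than FUNGI's exact losses, computes a per-sample gradient and appends it to the frozen embedding:

```python
import torch
import torch.nn.functional as F

def gradient_feature(encoder, head, view_a, view_b):
    """For one sample, take the gradient of a simple two-view alignment loss with
    respect to a small projection head, and append it to the encoder embedding."""
    emb_a = encoder(view_a)                            # (1, D) features of the frozen encoder
    emb_b = encoder(view_b)
    za = F.normalize(head(emb_a), dim=-1)
    zb = F.normalize(head(emb_b), dim=-1)
    loss = -(za * zb).sum()                            # cosine-alignment objective
    grad = torch.autograd.grad(loss, head.weight)[0]   # per-sample gradient "fingerprint"
    return torch.cat([emb_a.detach().flatten(), grad.flatten()])
```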

Federated Learning with a Single Shared Image

1 code implementation18 Jun 2024 Sunny Soni, Aaqib Saeed, Yuki M. Asano

To this end, in this paper, we introduce a new method that improves on this knowledge distillation approach by relying on only a single shared image between clients and server.

Federated Learning Knowledge Distillation
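
In simplified form (a hypothetical sketch, not the paper's exact protocol), the server could distil the clients' averaged soft predictions on augmented crops of the single shared image into its own model:

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.1, 1.0)),
    transforms.RandomHorizontalFlip(),
])

def distill_on_shared_image(server_model, client_models, shared_image, steps=100, lr=1e-3):
    """Clients never share data; the server distils their averaged soft predictions
    on crops of one public image into its own model."""
    opt = torch.optim.Adam(server_model.parameters(), lr=lr)
    for _ in range(steps):
        crop = augment(shared_image).unsqueeze(0)  # one synthetic "datapoint"
        with torch.no_grad():
            teacher_logits = torch.stack([m(crop) for m in client_models]).mean(0)
        loss = F.kl_div(F.log_softmax(server_model(crop), dim=-1),
                        F.softmax(teacher_logits, dim=-1), reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return server_model
```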

Privacy-Aware Visual Language Models

no code implementations27 May 2024 Laurens Samson, Nimrod Barazani, Sennay Ghebreab, Yuki M. Asano

This paper aims to advance our understanding of how Visual Language Models (VLMs) handle privacy-sensitive information, a crucial concern as these technologies become integral to everyday life.

Visual Question Answering (VQA)

Bitune: Bidirectional Instruction-Tuning

no code implementations23 May 2024 Dawid J. Kopiczko, Tijmen Blankevoort, Yuki M. Asano

We introduce Bitune, a method that improves instruction-tuning of pretrained decoder-only large language models, leading to consistent gains on downstream tasks.

Decoder

Self-supervised visual learning in the low-data regime: a comparative evaluation

no code implementations26 Apr 2024 Sotirios Konstantakos, Jorgen Cani, Ioannis Mademlis, Despina Ioanna Chalkiadaki, Yuki M. Asano, Efstratios Gavves, Georgios Th. Papadopoulos

Self-Supervised Learning (SSL) is a valuable and robust training methodology for contemporary Deep Neural Networks (DNNs), enabling unsupervised pretraining on a 'pretext task' that does not require ground-truth labels/annotations.

Representation Learning Self-Supervised Learning +1

The Common Stability Mechanism behind most Self-Supervised Learning Approaches

1 code implementation22 Feb 2024 Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars

The last couple of years have witnessed tremendous progress in self-supervised learning (SSL); this success can be attributed to the introduction of useful inductive biases into the learning process that help learn meaningful visual representations while avoiding collapse.

Self-Supervised Learning

PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs

no code implementations CVPR 2024 Michael Dorkenwald, Nimrod Barazani, Cees G. M. Snoek, Yuki M. Asano

Vision-Language Models (VLMs), such as Flamingo and GPT-4V, have shown immense potential by integrating large language models with vision systems.

Object-Centric Diffusion for Efficient Video Editing

no code implementations11 Jan 2024 Kumara Kahatapitiya, Adil Karjauv, Davide Abati, Fatih Porikli, Yuki M. Asano, Amirhossein Habibian

Both techniques are readily applicable to a given video editing model without retraining, and can drastically reduce its memory and computational cost.

Knowledge Distillation Object +5

The LLM Surgeon

1 code implementation28 Dec 2023 Tycho F. A. van der Ouderaa, Markus Nagel, Mart van Baalen, Yuki M. Asano, Tijmen Blankevoort

Experimentally, our method can prune rows and columns from a range of OPT models and Llamav2-7B by 20%-30%, with a negligible loss in performance, and achieve state-of-the-art results in unstructured and semi-structured pruning of large language models.
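
For intuition only, a generic structured-pruning helper that drops whole weight columns by an L2-norm importance score (deliberately simpler than the curvature-based criterion used in The LLM Surgeon) might look like:

```python
import torch
import torch.nn as nn

def prune_columns_by_norm(linear: nn.Linear, keep_ratio: float = 0.8) -> nn.Linear:
    """Drop whole input columns of a linear layer, ranked by L2 norm.
    Note: the layer producing this layer's inputs must be pruned consistently."""
    importance = linear.weight.norm(dim=0)                     # one score per input column
    k = max(1, int(keep_ratio * linear.in_features))
    keep = torch.topk(importance, k).indices.sort().values
    pruned = nn.Linear(k, linear.out_features, bias=linear.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(linear.weight[:, keep])
        if linear.bias is not None:
            pruned.bias.copy_(linear.bias)
    return pruned
```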

Protect Your Score: Contact Tracing With Differential Privacy Guarantees

no code implementations18 Dec 2023 Rob Romijnders, Christos Louizos, Yuki M. Asano, Max Welling

The pandemic in 2020 and 2021 had enormous economic and societal consequences, and studies show that contact tracing algorithms can be key in the early containment of the virus.

VaLID: Variable-Length Input Diffusion for Novel View Synthesis

no code implementations14 Dec 2023 Shijie Li, Farhad G. Zanjani, Haitam Ben Yahia, Yuki M. Asano, Juergen Gall, Amirhossein Habibian

This is because the source-view images and corresponding poses are processed separately and injected into the model at different stages.

Image Generation Novel View Synthesis +1

Guided Diffusion from Self-Supervised Diffusion Features

no code implementations14 Dec 2023 Vincent Tao Hu, Yunlu Chen, Mathilde Caron, Yuki M. Asano, Cees G. M. Snoek, Bjorn Ommer

However, recent studies have revealed that the feature representation derived from the diffusion model itself is also discriminative for numerous downstream tasks, which prompts us to propose a framework to extract guidance from, and specifically for, diffusion models.

Self-Supervised Learning

VeRA: Vector-based Random Matrix Adaptation

no code implementations17 Oct 2023 Dawid J. Kopiczko, Tijmen Blankevoort, Yuki M. Asano

Low-rank adaptation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but it still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models.

Image Classification Instruction Following
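
The vector-based idea can be sketched as follows: the low-rank matrices are frozen random projections (sharable across layers) and only two small scaling vectors are trained per layer. Shapes and initialisation here are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class VeRALinear(nn.Module):
    """Low-rank matrices A and B are frozen random projections (sharable across
    layers); only two small scaling vectors are trained per layer."""

    def __init__(self, linear: nn.Linear, A: torch.Tensor, B: torch.Tensor):
        super().__init__()
        self.base = linear                            # frozen pre-trained layer
        for p in self.base.parameters():
            p.requires_grad_(False)
        self.register_buffer("A", A)                  # (r, in_features), frozen
        self.register_buffer("B", B)                  # (out_features, r), frozen
        self.d = nn.Parameter(torch.ones(A.size(0)))  # trainable (r,) scaling vector
        self.b = nn.Parameter(torch.zeros(B.size(0))) # trainable (out,) scaling vector

    def forward(self, x):
        delta = (x @ self.A.T) * self.d               # (..., r)
        delta = (delta @ self.B.T) * self.b           # (..., out_features)
        return self.base(x) + delta                   # b starts at zero: no change at init
```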

Self-Supervised Open-Ended Classification with Small Visual Language Models

no code implementations30 Sep 2023 Mohammad Mahdi Derakhshani, Ivona Najdenkoska, Cees G. M. Snoek, Marcel Worring, Yuki M. Asano

We present Self-Context Adaptation (SeCAt), a self-supervised approach that unlocks few-shot abilities for open-ended classification with small visual language models.

Few-Shot Learning Image Captioning

Efficient Neural PDE-Solvers using Quantization Aware Training

no code implementations14 Aug 2023 Winfried van den Dool, Tijmen Blankevoort, Max Welling, Yuki M. Asano

In the past years, the application of neural networks as an alternative to classical numerical methods to solve Partial Differential Equations has emerged as a potential paradigm shift in this century-old mathematical field.

Quantization

Learning to Count without Annotations

1 code implementation CVPR 2024 Lukas Knobel, Tengda Han, Yuki M. Asano

While recent supervised methods for reference-based object counting continue to improve the performance on benchmark datasets, they have to rely on small datasets due to the cost associated with manually annotating dozens of objects in images.

Object Counting

BISCUIT: Causal Representation Learning from Binary Interactions

1 code implementation16 Jun 2023 Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves

Identifying the causal variables of an environment and how to intervene on them is of core value in applications such as robotics and embodied AI.

Causal Discovery Causal Identification +1

Self-Ordering Point Clouds

no code implementations ICCV 2023 Pengwan Yang, Cees G. M. Snoek, Yuki M. Asano

In this paper we address the task of finding representative subsets of points in a 3D point cloud by means of a point-wise ordering.

Towards Label-Efficient Incremental Learning: A Survey

1 code implementation1 Feb 2023 Mert Kilickaya, Joost Van de Weijer, Yuki M. Asano

The current dominant paradigm when building a machine learning model is to iterate over a dataset repeatedly until convergence.

Incremental Learning Self-Supervised Learning +1

VTC: Improving Video-Text Retrieval with User Comments

1 code implementation19 Oct 2022 Laura Hanu, James Thewlis, Yuki M. Asano, Christian Rupprecht

In this paper, we a) introduce a new dataset of videos, titles and comments; b) present an attention-based mechanism that allows the model to learn from sometimes irrelevant data such as comments; c) show that by using comments, our method is able to learn better, more contextualised representations for image, video and audio.

Representation Learning Text Retrieval +2

Prompt Generation Networks for Input-Space Adaptation of Frozen Vision Transformers

1 code implementation12 Oct 2022 Jochem Loedeman, Maarten C. Stol, Tengda Han, Yuki M. Asano

With the introduction of the transformer architecture in computer vision, increasing model scale has been demonstrated as a clear path to achieving performance and robustness gains.

Prompt Learning Transfer Learning

Self-Guided Diffusion Models

1 code implementation CVPR 2023 Vincent Tao Hu, David W Zhang, Yuki M. Asano, Gertjan J. Burghouts, Cees G. M. Snoek

Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process.

Image Generation

Causal Representation Learning for Instantaneous and Temporal Effects in Interactive Systems

1 code implementation13 Jun 2022 Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves

To address this issue, we propose iCITRIS, a causal representation learning method that allows for instantaneous effects in intervened temporal sequences when intervention targets can be observed, e.g., as actions of an agent.

Causal Discovery Representation Learning +1

Self-Supervised Learning of Object Parts for Semantic Segmentation

1 code implementation CVPR 2022 Adrian Ziegler, Yuki M. Asano

However, learning dense representations is challenging, as in the unsupervised context it is not clear how to guide the model to learn representations that correspond to various potential object categories.

Ranked #5 on Unsupervised Semantic Segmentation on PASCAL VOC 2012 val (using extra training data)

Community Detection Image Segmentation +6

Less than Few: Self-Shot Video Instance Segmentation

no code implementations19 Apr 2022 Pengwan Yang, Yuki M. Asano, Pascal Mettes, Cees G. M. Snoek

The goal of this paper is to bypass the need for labelled examples in few-shot video understanding at run time.

Few-Shot Learning Instance Segmentation +5

CITRIS: Causal Identifiability from Temporal Intervened Sequences

2 code implementations7 Feb 2022 Phillip Lippe, Sara Magliacane, Sindy Löwe, Yuki M. Asano, Taco Cohen, Efstratios Gavves

Understanding the latent causal factors of a dynamical system from visual observations is considered a crucial step towards agents reasoning in complex environments.

Representation Learning Temporal Sequences

The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image

1 code implementation1 Dec 2021 Yuki M. Asano, Aaqib Saeed

What can neural networks learn about the visual world when provided with only a single image as input?

Knowledge Distillation
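
The training signal in this setting comes from heavy augmentation of the one source image; a condensed sketch (hypothetical augmentation parameters and a standard KD loss, simplified relative to the paper) is:

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Heavy augmentation turns a single source image into an endless stream of crops.
patchify = transforms.Compose([
    transforms.RandomResizedCrop(32, scale=(0.01, 1.0)),  # very aggressive cropping
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

def single_image_kd_step(student, teacher, image, optimizer, batch_size=64, T=4.0):
    """One distillation step whose inputs are only crops of a single image: the
    student matches the teacher's softened class probabilities on those crops."""
    batch = torch.stack([patchify(image) for _ in range(batch_size)])
    with torch.no_grad():
        t_logits = teacher(batch)
    s_logits = student(batch)
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1), reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```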

PASS: An ImageNet replacement for self-supervised pretraining without humans

1 code implementation NeurIPS Workshop ImageNet_PPF 2021 Yuki M. Asano, Christian Rupprecht, Andrew Zisserman, Andrea Vedaldi

On the other hand, state-of-the-art pretraining is nowadays obtained with unsupervised methods, meaning that labelled datasets such as ImageNet may not be necessary, or perhaps not even optimal, for model pretraining.

Benchmarking Ethics +2

Space-Time Crop & Attend: Improving Cross-modal Video Representation Learning

1 code implementation ICCV 2021 Mandela Patrick, Yuki M. Asano, Bernie Huang, Ishan Misra, Florian Metze, Joao Henriques, Andrea Vedaldi

First, for space, we show that spatial augmentations such as cropping also work well for videos, but that previous implementations, due to high processing and memory costs, could not apply them at a scale sufficient for them to work well.

Representation Learning Self-Supervised Learning

Privacy-preserving Object Detection

no code implementations11 Mar 2021 Peiyang He, Charlie Griffin, Krzysztof Kacprzyk, Artjom Joosen, Michael Collyer, Aleksandar Shtedritski, Yuki M. Asano

Privacy considerations and bias in datasets are quickly becoming high-priority issues that the computer vision community needs to face.

Object Object Detection +2

Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models

1 code implementation NeurIPS 2021 Hannah Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, Frederic A. Dreyer, Aleksandar Shtedritski, Yuki M. Asano

Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations.

Language Modelling Sentence +1
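
The template-based collection described here can be reproduced in spirit with standard tooling; the sketch below uses Hugging Face's text-generation pipeline with illustrative templates (not necessarily the paper's exact prompts):

```python
from transformers import pipeline

# Collect completions for occupation-probing templates with GPT-2.
generator = pipeline("text-generation", model="gpt2")

templates = [
    "The woman worked as a",
    "The man worked as a",
    "The Black woman worked as a",
    "The Asian man worked as a",
]

completions = {}
for prompt in templates:
    outputs = generator(prompt, max_new_tokens=10, num_return_sequences=5,
                        do_sample=True, pad_token_id=50256)
    completions[prompt] = [o["generated_text"][len(prompt):].strip() for o in outputs]

for prompt, comps in completions.items():
    print(prompt, "->", comps[:2])
```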
