Search Results for author: James J. Clark

Found 30 papers, 9 papers with code

Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models

1 code implementation • 11 Jul 2024 • Mohammadreza Tayaranian, Seyyed Hasan Mozafari, Brett H. Meyer, James J. Clark, Warren J. Gross

Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a $0.1\%$ increase in the evaluation performance of the model.

Natural Language Understanding • Navigate
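
A minimal sketch of this kind of fine-tuning dataset pruning, assuming a loss-based scoring rule: rank the examples under the model and keep only a subset for fine-tuning. The scoring choice, `keep_fraction`, and all names are illustrative assumptions, not the paper's procedure.

```python
# Hypothetical sketch: score each fine-tuning example and keep a subset.
import torch

def prune_dataset(model, loss_fn, examples, labels, keep_fraction=0.5):
    """Return indices of the examples kept for fine-tuning."""
    model.eval()
    with torch.no_grad():
        losses = torch.stack([
            loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            for x, y in zip(examples, labels)
        ])
    # Keep the examples the model finds hardest; other scoring rules
    # (lowest loss, margin, gradient norm) are equally plausible stand-ins.
    k = int(keep_fraction * len(examples))
    return torch.topk(losses, k).indices
```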

Design Editing for Offline Model-based Optimization

no code implementations • 22 May 2024 • Ye Yuan, Youyuan Zhang, Can Chen, Haolun Wu, Zixuan Li, Jianmo Li, James J. Clark, Xue Liu

Offline model-based optimization (MBO) aims to maximize a black-box objective function using only an offline dataset of designs and scores.

Denoising
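
To make the setting concrete, here is a hedged sketch of the naive offline MBO baseline the abstract presupposes: fit a surrogate to the offline (design, score) pairs, then gradient-ascend designs against the frozen surrogate. All shapes and hyperparameters are toy assumptions, not the paper's method.

```python
# Toy sketch of the basic offline MBO pipeline.
import torch
import torch.nn as nn

designs = torch.randn(256, 16)                     # offline designs (toy data)
scores = -designs.pow(2).sum(dim=1, keepdim=True)  # toy black-box scores

surrogate = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):                               # fit surrogate to offline data
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(designs), scores).backward()
    opt.step()

for p in surrogate.parameters():                   # freeze the surrogate
    p.requires_grad_(False)

# Gradient-ascend the best offline designs on the frozen surrogate. Naive
# ascent like this exploits surrogate errors off-distribution, which is the
# failure mode that design-editing style methods aim to mitigate.
x = designs[scores.squeeze().topk(8).indices].clone().requires_grad_(True)
ascent = torch.optim.Adam([x], lr=0.05)
for _ in range(100):
    ascent.zero_grad()
    (-surrogate(x).sum()).backward()
    ascent.step()
```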

FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing

no code implementations • 10 Mar 2024 • Youyuan Zhang, Xuan Ju, James J. Clark

By leveraging the self-consistency property of CMs, we eliminate the need for time-consuming inversion or additional condition extraction, reducing editing time.

Image Generation • Text-to-Video Editing • +3

Faster Inference of Integer SWIN Transformer by Removing the GELU Activation

no code implementations • 2 Feb 2024 • Mohammadreza Tayaranian, Seyyed Hasan Mozafari, James J. Clark, Brett Meyer, Warren Gross

In this work, we improve upon the inference latency of state-of-the-art methods by removing the floating-point operations associated with the GELU activation in the Swin Transformer.

Image Classification • Knowledge Distillation • +1
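
A hedged sketch of the activation swap the abstract points to: replace each GELU in the network with an integer-friendly ReLU, then (per the usual recipe) briefly fine-tune or distill to recover accuracy. The recursive helper below is illustrative, not the authors' code.

```python
# Swap every nn.GELU for nn.ReLU so the activation needs no float math.
import torch.nn as nn

def replace_gelu_with_relu(module: nn.Module) -> None:
    """Recursively replace nn.GELU submodules with nn.ReLU, in place."""
    for name, child in module.named_children():
        if isinstance(child, nn.GELU):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            replace_gelu_with_relu(child)
```

After the swap, an integer-only inference path no longer needs a floating-point GELU approximation or lookup table.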

Robustness to distribution shifts of compressed networks for edge devices

no code implementations • 22 Jan 2024 • Lulan Shen, Ali Edalati, Brett Meyer, Warren Gross, James J. Clark

It is important to investigate the robustness of compressed networks under two types of data distribution shift: domain shifts and adversarial perturbations.

Knowledge Distillation • Quantization
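
Of the two shift types, the adversarial one is easy to sketch. Below is a standard FGSM robustness check, a common evaluation protocol rather than the paper's exact setup.

```python
# Measure accuracy of a (compressed) network under an FGSM perturbation.
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, images, labels, epsilon=4 / 255):
    images = images.clone().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        return (model(adv).argmax(dim=1) == labels).float().mean().item()
```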

BD-KD: Balancing the Divergences for Online Knowledge Distillation

no code implementations • 25 Dec 2022 • Ibtihel Amara, Nazanin Sepahvand, Brett H. Meyer, Warren J. Gross, James J. Clark

We show that adaptively balancing between the reverse and forward divergences shifts the focus of the training strategy to the compact student network without limiting the teacher network's learning process.

Knowledge Distillation • Model Compression • +1
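
A hedged sketch of what balancing forward and reverse divergences can look like for student/teacher logits; the fixed weight `alpha` below is a placeholder for the adaptive balancing the paper describes.

```python
import torch.nn.functional as F

def balanced_kd_loss(student_logits, teacher_logits, T=4.0, alpha=0.5):
    # Forward KL(teacher || student): student must cover the teacher.
    forward_kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                          F.softmax(teacher_logits / T, dim=1),
                          reduction="batchmean")
    # Reverse KL(student || teacher): student may stay mode-seeking.
    reverse_kl = F.kl_div(F.log_softmax(teacher_logits / T, dim=1),
                          F.softmax(student_logits / T, dim=1),
                          reduction="batchmean")
    return (T * T) * (alpha * forward_kl + (1 - alpha) * reverse_kl)
```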

KronA: Parameter Efficient Tuning with Kronecker Adapter

no code implementations • 20 Dec 2022 • Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh

We apply the proposed methods for fine-tuning T5 on the GLUE benchmark to show that incorporating the Kronecker-based modules can outperform state-of-the-art PET methods.

Language Modelling
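
A minimal sketch of a Kronecker adapter in the KronA spirit: the frozen linear layer is augmented by an update that is a Kronecker product of two small trainable factors. Factor shapes, initialization, and the scale are assumptions.

```python
import torch
import torch.nn as nn

class KroneckerAdapterLinear(nn.Module):
    """Frozen nn.Linear plus a trainable Kronecker-product update."""

    def __init__(self, base: nn.Linear, a_rows=16, a_cols=16, scale=1.0):
        super().__init__()
        out_f, in_f = base.weight.shape
        assert out_f % a_rows == 0 and in_f % a_cols == 0
        self.base = base
        for p in self.base.parameters():     # only the factors are trained
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(a_rows, a_cols))  # zero init => no-op start
        self.B = nn.Parameter(0.02 * torch.randn(out_f // a_rows, in_f // a_cols))
        self.scale = scale

    def forward(self, x):
        delta = torch.kron(self.A, self.B)   # full (out_f, in_f) update
        return self.base(x) + self.scale * nn.functional.linear(x, delta)
```

As with LoRA-style adapters, the zero-initialized factor makes the module start as an exact no-op while training only a small number of parameters.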

Predicting Visual Attention and Distraction During Visual Search Using Convolutional Neural Networks

1 code implementation • 27 Oct 2022 • Manoosh Samiei, James J. Clark

Our second approach is object-based and predicts the distractor and target objects during visual search.

Target Features Affect Visual Search, A Study of Eye Fixations

1 code implementation • 28 Sep 2022 • Manoosh Samiei, James J. Clark

Visual search refers to the task of finding a target object among a set of distracting objects in a visual display.

Object

CES-KD: Curriculum-based Expert Selection for Guided Knowledge Distillation

no code implementations • 15 Sep 2022 • Ibtihel Amara, Maryam Ziaeefard, Brett H. Meyer, Warren Gross, James J. Clark

Knowledge distillation (KD) is an effective tool for compressing deep classification models for edge devices.

Knowledge Distillation
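
For reference, the vanilla distillation loss that curriculum-based variants such as CES-KD build on; the expert-selection curriculum itself is not shown here.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```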

Efficient Fine-Tuning of Compressed Language Models with Learners

no code implementations • 3 Aug 2022 • Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross

We introduce Learner modules and priming, novel methods for fine-tuning that exploit the overparameterization of pre-trained language models to gain benefits in convergence speed and resource utilization.

CoLA • Navigate

Clustered Saliency Prediction

no code implementations • 5 Jul 2022 • Rezvan Sherkati, James J. Clark

We present a new method for image salience prediction, Clustered Saliency Prediction.

Clustering • Image-to-Image Translation • +1

Efficient Fine-Tuning of BERT Models on the Edge

no code implementations • 3 May 2022 • Danilo Vucetic, Mohammadreza Tayaranian, Maryam Ziaeefard, James J. Clark, Brett H. Meyer, Warren J. Gross

FAR reduces fine-tuning time on the DistilBERT model and CoLA dataset by 30%, and time spent on memory operations by 47%.

CoLA

Consistency driven Sequential Transformers Attention Model for Partially Observable Scenes

1 code implementation • CVPR 2022 • Samrudhdhi B. Rangrej, Chetan L. Srinidhi, James J. Clark

Most hard attention models initially observe a complete scene to locate and sense informative glimpses, and predict the class label of a scene based on the glimpses.

Hard Attention

Standard Deviation-Based Quantization for Deep Neural Networks

no code implementations • 24 Feb 2022 • Amir Ardakani, Arash Ardakani, Brett Meyer, James J. Clark, Warren J. Gross

Quantization of deep neural networks is a promising approach that reduces the inference cost, making it feasible to run deep networks on resource-restricted devices.

Quantization
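
A hedged sketch of standard-deviation-based uniform quantization: derive the clipping range from the weight distribution's standard deviation instead of its min/max. The multiplier `k` is an assumed hyperparameter, not a value from the paper.

```python
import torch

def std_quantize(w: torch.Tensor, bits: int = 4, k: float = 3.0):
    limit = k * w.std()                        # clip range from the std. dev.
    scale = limit / (2 ** (bits - 1) - 1)      # step size for signed integers
    q = (w.clamp(-limit, limit) / scale).round()
    return q * scale                           # dequantized ("fake-quant") weights
```

Basing the range on the standard deviation keeps the quantization grid insensitive to rare outlier weights, unlike min/max calibration.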

A Probabilistic Hard Attention Model For Sequentially Observed Scenes

1 code implementation • 15 Nov 2021 • Samrudhdhi B. Rangrej, James J. Clark

A visual hard attention model actively selects and observes a sequence of subregions in an image to make a prediction.

Hard Attention
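
A toy version of the sequential loop such hard attention models share: crop a glimpse, fold it into a recurrent state, repeat, then classify. The random location policy below stands in for the model's learned (here, probabilistic) policy; all sizes are assumptions.

```python
import torch
import torch.nn as nn

def run_episode(image, encoder, rnn, classifier, n_glimpses=4, size=16):
    """image: (C, H, W); rnn: nn.GRUCell; returns class logits."""
    _, H, W = image.shape
    h = torch.zeros(1, rnn.hidden_size)
    for _ in range(n_glimpses):
        top = torch.randint(0, H - size + 1, (1,)).item()   # stand-in policy
        left = torch.randint(0, W - size + 1, (1,)).item()
        glimpse = image[:, top:top + size, left:left + size].reshape(1, -1)
        h = rnn(encoder(glimpse), h)            # fuse glimpse into the state
    return classifier(h)                        # predict from partial views
```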

Kronecker Decomposition for GPT Compression

no code implementations • ACL 2022 • Ali Edalati, Marzieh Tahaei, Ahmad Rashid, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh

GPT is an auto-regressive Transformer-based pre-trained language model which has attracted a lot of attention in the natural language processing (NLP) domain due to its state-of-the-art performance in several downstream tasks.

Knowledge Distillation • Language Modelling • +1
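
The compression step itself can be sketched with the classic nearest-Kronecker-product construction (Van Loan rearrangement plus a rank-1 SVD); block sizes are assumptions, and the paper's factorization details may differ.

```python
import torch

def nearest_kron(W: torch.Tensor, m1: int, n1: int):
    """Best A (m1 x n1) and B such that W ~ torch.kron(A, B)."""
    m2, n2 = W.shape[0] // m1, W.shape[1] // n1
    # Rearrange W so each row is one vectorized (m2 x n2) block of W.
    R = (W.reshape(m1, m2, n1, n2)
          .permute(0, 2, 1, 3)
          .reshape(m1 * n1, m2 * n2))
    U, S, Vh = torch.linalg.svd(R, full_matrices=False)
    A = (S[0].sqrt() * U[:, 0]).reshape(m1, n1)
    B = (S[0].sqrt() * Vh[0]).reshape(m2, n2)
    return A, B
```

Storing A and B costs m1*n1 + m2*n2 parameters instead of m1*m2*n1*n2 for the dense weight.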

Visual Attention in Imaginative Agents

no code implementations • 1 Apr 2021 • Samrudhdhi B. Rangrej, James J. Clark

The next fixation is planned using uncertainty in the content of the imagined scenes.
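
The fixation rule reduces to something like the following sketch, assuming the agent's imagined scene completions are stacked into one tensor; the generator producing them is not shown.

```python
import torch

def next_fixation(imagined: torch.Tensor):
    """imagined: (n_samples, H, W) stack of imagined completions."""
    uncertainty = imagined.var(dim=0)                # per-pixel disagreement
    idx = uncertainty.flatten().argmax().item()
    return divmod(idx, uncertainty.shape[1])         # (row, col) to fixate next
```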

HAD-Net: A Hierarchical Adversarial Knowledge Distillation Network for Improved Enhanced Tumour Segmentation Without Post-Contrast Images

1 code implementation • 30 Mar 2021 • Saverio Vadacchino, Raghav Mehta, Nazanin Mohammadi Sepahvand, Brennan Nichyporuk, James J. Clark, Tal Arbel

The proposed network is trained and tested on the BraTS 2019 brain tumour segmentation challenge dataset, where it achieves performance improvements in the ranges of 16% - 26% over (a) recent modality-agnostic segmentation methods (U-HeMIS, U-HVED), (b) KD-Net adapted to this problem, (c) the pre-trained student network and (d) a non-hierarchical version of the network (AD-Net), in terms of Dice scores for enhancing tumour (ET).

Knowledge Distillation • Segmentation • +1

Achieving Explainability in a Visual Hard Attention Model through Content Prediction

no code implementations • 1 Jan 2021 • Samrudhdhi Bharatkumar Rangrej, James J. Clark

Unlike deep convolutional networks, hard attention models make it explainable which regions of the image contributed to the prediction.

Hard Attention • Image Classification

Grow-Push-Prune: aligning deep discriminants for effective structural network compression

no code implementations • 29 Sep 2020 • Qing Tian, Tal Arbel, James J. Clark

We also show that our grown Inception nets (without hard-coded dimension alignment) clearly outperform residual nets of similar complexities.

Instance Segmentation based Semantic Matting for Compositing Applications

1 code implementation • 10 Apr 2019 • Guanqing Hu, James J. Clark

In order to achieve automatic compositing in natural scenes, we propose a fully automated method that integrates instance segmentation and image matting processes to generate high-quality semantic mattes that can be used for image editing tasks.

Instance Segmentation • Semantic Image Matting • +1
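
The usual bridge between instance segmentation and matting is trimap generation, sketched below under assumed parameters: erode the instance mask for certain foreground, dilate it for certain background, and mark the band between as unknown for the matting stage.

```python
import cv2
import numpy as np

def mask_to_trimap(mask: np.ndarray, band: int = 10) -> np.ndarray:
    """mask: uint8 binary {0, 1}; returns 0=bg, 128=unknown, 255=fg."""
    kernel = np.ones((band, band), np.uint8)
    sure_fg = cv2.erode(mask, kernel)       # shrink: certainly foreground
    maybe_fg = cv2.dilate(mask, kernel)     # grow: foreground plus a margin
    trimap = np.full(mask.shape, 128, np.uint8)
    trimap[maybe_fg == 0] = 0               # certainly background
    trimap[sure_fg == 1] = 255
    return trimap
```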

Going From Image to Video Saliency: Augmenting Image Salience With Dynamic Attentional Push

no code implementations • CVPR 2018 • Siavash Gorji, James J. Clark

We evaluate our model by comparing the performance of several augmented static saliency models with the state of the art in spatiotemporal saliency on the three largest dynamic eye-tracking datasets: HOLLYWOOD2, UCF-Sport, and DIEM.

Task dependent Deep LDA pruning of neural networks

1 code implementation • 21 Mar 2018 • Qing Tian, Tal Arbel, James J. Clark

Moreover, we examine our approach's potential in network architecture search for specific tasks and analyze the influence of our pruning on model robustness to noises and adversarial attacks.

WAYLA - Generating Images from Eye Movements

no code implementations • 21 Nov 2017 • Bingqing Yu, James J. Clark

The WAYLA approach is based on the Conditional Generative Adversarial Network (Conditional GAN) image-to-image translation technique of Isola et al. We consider two specific applications: the first, reconstructing newspaper images from gaze heat maps, and the second, detailed reconstruction of images containing only text.

Generative Adversarial Network • Image Reconstruction • +2

Personalization of Saliency Estimation

no code implementations • 21 Nov 2017 • Bingqing Yu, James J. Clark

The discriminator also has the observer label as an input, which contributes to the personalization ability of our approach.

Saliency Prediction

Attentional Push: A Deep Convolutional Network for Augmenting Image Salience With Shared Attention Modeling in Social Scenes

no code implementations • CVPR 2017 • Siavash Gorji, James J. Clark

The Attentional Push CNN is then fine-tuned along with the augmented saliency CNN to minimize the Euclidean distance between the augmented saliency and ground truth fixations using an eye-tracking dataset, annotated with the head and the gaze location of the scene actors.

Transfer Learning

Efficient Gender Classification Using a Deep LDA-Pruned Net

1 code implementation • 20 Apr 2017 • Qing Tian, Tal Arbel, James J. Clark

Many real-time tasks, such as human-computer interaction, require fast and efficient facial gender classification.

Classification • Gender Classification • +1

Attentional Push: Augmenting Salience with Shared Attention Modeling

no code implementations • 1 Sep 2016 • Siavash Gorji, James J. Clark

We present a novel visual attention tracking technique based on Shared Attention modeling.
