Search Results for author: Robert P. Dick

Found 14 papers, 10 papers with code

Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks

no code implementations · 21 Nov 2023 · Samyak Jain, Robert Kirk, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Edward Grefenstette, Tim Rocktäschel, David Scott Krueger

Fine-tuning large pre-trained models has become the de facto strategy for developing both task-specific and general-purpose machine learning systems, including developing models that are safe to deploy.

Network Pruning
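
For readers unfamiliar with the setup the abstract refers to, a minimal fine-tuning sketch follows: a pre-trained backbone is adapted to a downstream task, here by freezing it and training only a new head. The model, data, and hyperparameters are toy stand-ins, not the paper's procedurally defined tasks.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pre-trained backbone: in practice this would be loaded
# from a checkpoint rather than randomly initialized.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 2)  # new task-specific head

# Freeze the backbone; fine-tune only the head (one common strategy).
for p in backbone.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.randn(256, 32), torch.randint(0, 2, (256,))  # toy task data

for step in range(100):
    logits = head(backbone(x))
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```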

Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks

no code implementations · 21 Nov 2023 · Rahul Ramesh, Ekdeep Singh Lubana, Mikail Khona, Robert P. Dick, Hidenori Tanaka

Transformers trained on huge text corpora exhibit a remarkable set of capabilities, e.g., performing basic arithmetic.
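
A tiny generator for a synthetic arithmetic task of the kind such studies train autoregressive transformers on might look as follows; the string format and the `make_example` helper are illustrative assumptions, not the paper's actual tasks.

```python
import random

def make_example(rng: random.Random) -> str:
    # Emit one training string: a two-operand arithmetic problem and its answer.
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    op = rng.choice(["+", "-"])
    result = a + b if op == "+" else a - b
    return f"{a}{op}{b}={result}"

rng = random.Random(0)
examples = [make_example(rng) for _ in range(5)]
print(examples)  # strings an autoregressive model would be trained to complete
```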

In-Context Learning Dynamics with Random Binary Sequences

1 code implementation · 26 Oct 2023 · Eric J. Bigelow, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, Tomer D. Ullman

Large language models (LLMs) trained on huge corpora of text datasets demonstrate intriguing capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for.

In-Context Learning
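
A sketch of the kind of probe this line of work uses: feed an LLM a random binary sequence and inspect its next-token prediction. The prompt format and the `query_model` call below are hypothetical placeholders, not the paper's protocol.

```python
import random

def make_prompt(p: float, n: int, rng: random.Random) -> str:
    # Sample an i.i.d. Bernoulli(p) bit string and wrap it in a prompt.
    bits = "".join("1" if rng.random() < p else "0" for _ in range(n))
    return f"Continue the sequence: {bits}"

rng = random.Random(0)
prompt = make_prompt(p=0.7, n=50, rng=rng)
print(prompt)
# next_bit_prob = query_model(prompt)  # hypothetical LLM call
# One would compare next_bit_prob against the generating probability p = 0.7.
```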

Compositional Abilities Emerge Multiplicatively: Exploring Diffusion Models on a Synthetic Task

1 code implementation · NeurIPS 2023 · Maya Okawa, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka

Motivated by this, we perform a controlled study for understanding compositional generalization in conditional diffusion models in a synthetic setting, varying different attributes of the training data and measuring the model's ability to generate samples out-of-distribution.
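
One common way to operationalize such a controlled study is to hold out an attribute combination from training and test generation on it. The attribute names below are made up for illustration; the paper's synthetic setting differs in its details.

```python
from itertools import product

colors = ["red", "blue", "green"]
shapes = ["circle", "square", "triangle"]
held_out = ("green", "triangle")  # combination never seen during training

all_combos = list(product(colors, shapes))
train_conditions = [c for c in all_combos if c != held_out]
test_conditions = [held_out]

# The model is trained on conditions covering each attribute value, but must
# compose them at test time to generate the held-out pair.
print(len(train_conditions), "train combos; held out:", test_conditions)
```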

Mechanistic Mode Connectivity

1 code implementation · 15 Nov 2022 · Ekdeep Singh Lubana, Eric J. Bigelow, Robert P. Dick, David Krueger, Hidenori Tanaka

We study neural network loss landscapes through the lens of mode connectivity, the observation that minimizers of neural networks retrieved via training on a dataset are connected via simple paths of low loss.
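
As a concrete illustration of the basic measurement, the sketch below evaluates loss along the straight line between two minimizers in parameter space. The paper studies richer notions of connectivity; the toy model, data, and untrained endpoints here are stand-ins.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)

def loss_on_linear_path(model_a, model_b, x, y, steps=11):
    # Interpolate parameter-wise between the two endpoints, recording the loss.
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    probe = MLP()
    losses = []
    for i in range(steps):
        t = i / (steps - 1)
        probe.load_state_dict({k: (1 - t) * sd_a[k] + t * sd_b[k] for k in sd_a})
        with torch.no_grad():
            losses.append(nn.functional.cross_entropy(probe(x), y).item())
    return losses

# In practice model_a and model_b would be two trained minimizers.
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
print(loss_on_linear_path(MLP(), MLP(), x, y))
```

A pronounced barrier (peak) in the returned curve indicates the two minimizers are not connected by this simple linear path.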

Do Smart Glasses Dream of Sentimental Visions? Deep Emotionship Analysis for Eyewear Devices

1 code implementation · 24 Jan 2022 · Yingying Zhao, Yuhu Chang, Yutian Lu, Yujiang Wang, Mingzhi Dong, Qin Lv, Robert P. Dick, Fan Yang, Tun Lu, Ning Gu, Li Shang

Experimental studies with 20 participants demonstrate that, thanks to its emotionship awareness, EMOShip not only achieves superior emotion recognition accuracy over existing methods (80.2% vs. 69.4%) but also provides a valuable understanding of the causes of emotions.

Emotion Recognition

MemX: An Attention-Aware Smart Eyewear System for Personalized Moment Auto-capture

no code implementations · 3 May 2021 · Yuhu Chang, Yingying Zhao, Mingzhi Dong, Yujiang Wang, Yutian Lu, Qin Lv, Robert P. Dick, Tun Lu, Ning Gu, Li Shang

MemX captures human visual attention on the fly, analyzes the salient visual content, and records moments of personal interest in the form of compact video snippets.
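
A high-level sketch of the capture pipeline the abstract describes; every function here (gaze tracking, saliency analysis, snippet recording) is a hypothetical placeholder, not the MemX implementation.

```python
def capture_loop(camera, gaze_tracker, is_salient, record_snippet):
    # Hypothetical attention-aware capture loop.
    for frame in camera:
        gaze = gaze_tracker(frame)       # estimate visual attention on the fly
        if is_salient(frame, gaze):      # analyze the attended visual content
            record_snippet(frame)        # keep a compact video snippet
```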

A Reinforcement-Learning-Based Energy-Efficient Framework for Multi-Task Video Analytics Pipeline

no code implementations · 9 Apr 2021 · Yingying Zhao, Mingzhi Dong, Yujiang Wang, Da Feng, Qin Lv, Robert P. Dick, Dongsheng Li, Tun Lu, Ning Gu, Li Shang

By monitoring how varying the input resolution affects the quality of high-dimensional video analytics features, and hence the accuracy of the analytics results, the proposed end-to-end optimization framework learns the best non-myopic policy for dynamically controlling the resolution of input video streams to globally optimize energy efficiency.

Instance Segmentation · Optical Flow Estimation · +4
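
A toy sketch of the underlying control problem: an agent picks the input resolution and is rewarded for analytics quality minus energy cost. The paper learns a non-myopic policy; this bandit-style loop, with made-up quality and energy models, only illustrates the trade-off.

```python
import random

resolutions = [240, 480, 720, 1080]
q = {r: 0.0 for r in resolutions}       # value estimate per resolution
counts = {r: 0 for r in resolutions}
rng = random.Random(0)

def reward(res: int) -> float:
    quality = res / 1080                # stand-in for feature quality
    energy = (res / 1080) ** 2          # stand-in for energy cost
    return quality - 0.5 * energy

for step in range(1000):
    if rng.random() < 0.1:              # epsilon-greedy exploration
        r = rng.choice(resolutions)
    else:
        r = max(q, key=q.get)
    counts[r] += 1
    q[r] += (reward(r) - q[r]) / counts[r]   # incremental mean update

print(max(q, key=q.get))                # resolution with best estimated value
```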

HVAQ: A High-Resolution Vision-Based Air Quality Dataset

1 code implementation · 18 Feb 2021 · Zuohui Chen, Tony Zhang, Zhuangzhi Chen, Yun Xiang, Qi Xuan, Robert P. Dick

The main contribution of this paper is, to the best of our knowledge, the first publicly available air quality dataset with high temporal and spatial resolution, containing simultaneous point sensor measurements and corresponding images.


How do Quadratic Regularizers Prevent Catastrophic Forgetting: The Role of Interpolation

2 code implementations · 4 Feb 2021 · Ekdeep Singh Lubana, Puja Trivedi, Danai Koutra, Robert P. Dick

Catastrophic forgetting undermines the effectiveness of deep neural networks (DNNs) in scenarios such as continual learning and lifelong learning.

Continual Learning
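
The quadratic regularizers the title refers to follow the EWC-style pattern sketched below: penalize movement away from the previous task's parameters, weighted by a per-parameter importance estimate. The uniform importance weights here are placeholders for a proper estimate such as the Fisher information.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
prev_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}

def quadratic_penalty(model, prev_params, importance, lam=1.0):
    # lam * sum_i F_i * (theta_i - theta*_i)^2 over all parameters.
    return lam * sum((importance[n] * (p - prev_params[n]) ** 2).sum()
                     for n, p in model.named_parameters())

# total_loss = task_loss + quadratic_penalty(model, prev_params, importance)
print(quadratic_penalty(model, prev_params, importance))
```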

A Gradient Flow Framework For Analyzing Network Pruning

1 code implementation · ICLR 2021 · Ekdeep Singh Lubana, Robert P. Dick

We use this framework to determine the relationship between pruning measures and the evolution of model parameters, establishing several results about pruning models early in training: (i) magnitude-based pruning removes parameters that contribute least to reduction in loss, resulting in models that converge faster than magnitude-agnostic methods; (ii) loss-preservation-based pruning preserves first-order model evolution dynamics and is therefore appropriate for pruning minimally trained models; and (iii) gradient-norm-based pruning affects second-order model evolution dynamics, such that increasing gradient norm via pruning can produce poorly performing models.

Network Pruning
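
A sketch of two of the pruning importance measures the analysis compares: magnitude (|theta|) and loss preservation (|theta * grad|, the first-order change in loss from removing a parameter). Model and data are toy stand-ins, and the median cut-off is an arbitrary illustrative choice.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # populates p.grad for the scores below

for name, p in model.named_parameters():
    magnitude_score = p.detach().abs()
    loss_preservation_score = (p.detach() * p.grad).abs()
    # Prune the lowest-scoring entries under the chosen measure, e.g.:
    k = max(magnitude_score.numel() // 2, 1)
    threshold = magnitude_score.flatten().kthvalue(k).values  # median cut-off
    mask = magnitude_score > threshold   # keep-mask for the surviving weights
```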

OrthoReg: Robust Network Pruning Using Orthonormality Regularization

1 code implementation · 10 Sep 2020 · Ekdeep Singh Lubana, Puja Trivedi, Conrad Hougen, Robert P. Dick, Alfred O. Hero

To address this issue, we propose OrthoReg, a principled regularization strategy that enforces orthonormality on a network's filters to reduce inter-filter correlation, thereby allowing reliable, efficient determination of group importance estimates, improved trainability of pruned networks, and efficient, simultaneous pruning of large groups of filters.

Network Pruning
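
An orthonormality penalty in the spirit of the abstract can be sketched as below: push the Gram matrix of a layer's flattened filters toward the identity, reducing inter-filter correlation. The layer sizes and regularization strength are illustrative, not OrthoReg's settings.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)

def orthonormality_penalty(layer: nn.Conv2d, lam: float = 1e-4) -> torch.Tensor:
    w = layer.weight.flatten(start_dim=1)   # (num_filters, fan_in)
    gram = w @ w.t()                        # inter-filter correlations
    eye = torch.eye(gram.size(0))
    return lam * ((gram - eye) ** 2).sum()  # squared Frobenius distance to I

# total_loss = task_loss + orthonormality_penalty(conv)
print(orthonormality_penalty(conv).item())
```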
