Search Results for author: Yoshitomo Matsubara

Found 18 papers, 15 papers with code

Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems

2 code implementations · 1 Oct 2019 · Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato, Sameer Singh

Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce capture-to-output delay.

Edge-computing · Image Classification · +2

Split Computing for Complex Object Detectors: Challenges and Preliminary Results

2 code implementations · 27 Jul 2020 · Yoshitomo Matsubara, Marco Levorato

Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.

Edge-computing · Image Classification · +1

Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks

2 code implementations · 31 Jul 2020 · Yoshitomo Matsubara, Marco Levorato

However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading.

Edge-computing · object-detection · +1

Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems

2 code implementations · 20 Nov 2020 · Yoshitomo Matsubara, Davide Callegaro, Sabur Baidya, Marco Levorato, Sameer Singh

In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.

Edge-computing · Image Classification · +2

torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation

1 code implementation · 25 Nov 2020 · Yoshitomo Matsubara

While knowledge distillation (transfer) has been attracting attention from the research community, recent developments in the field have heightened the need for reproducible studies and highly generalized frameworks to lower the barriers to such high-quality, reproducible deep learning research.

Image Classification · Instance Segmentation · +3
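The core idea behind knowledge distillation, which torchdistill generalizes into configurable training recipes, can be sketched as follows. This is an illustrative toy, not torchdistill's actual code: the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss.

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probability distribution over classes."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between softened teacher and student outputs.

    Zero when the student exactly matches the teacher; positive otherwise.
    """
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Matching logits give (near) zero loss; mismatched logits give a positive one.
print(distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(distillation_loss([1.0, 5.0], [5.0, 1.0]) > 0.0)
```

A higher temperature flattens both distributions, exposing the teacher's relative preferences among wrong classes, which is the signal distillation transfers.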

Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges

19 code implementations · 8 Mar 2021 · Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia

Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others.

Autonomous Vehicles · Image Classification · +4

Supervised Compression for Resource-Constrained Edge Computing Systems

2 code implementations · 21 Aug 2021 · Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.

Data Compression · Edge-computing · +2

Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems

1 code implementation15 Jan 2022 Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti

CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike traditional distillation technique, each of them is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members.

Efficient Neural Network Question Answering +1
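The shared-encoder / multi-ranking-head structure described above can be sketched in a toy form. Everything here is illustrative (the encoding feature and the scoring rules are stand-ins, not the CERBERUS implementation): the input pair is encoded once, then several differently parameterized heads — the role each distilled teacher would play — score it, and their outputs are averaged.

```python
def encode(question, answer):
    """Stand-in for the shared transformer stack: one word-overlap feature."""
    return len(set(question.split()) & set(answer.split()))

# In CERBERUS each head is distilled from a different teacher model;
# here three differently parameterized linear scorers play that role.
HEADS = [
    lambda f: 0.50 * f,
    lambda f: 0.40 * f + 0.10,
    lambda f: 0.60 * f - 0.05,
]

def ensemble_score(question, answer):
    f = encode(question, answer)                        # encode the pair once
    return sum(head(f) for head in HEADS) / len(HEADS)  # average head scores

q = "who wrote the iliad"
good = "the iliad was written by homer"
bad = "bananas are rich in potassium"
print(ensemble_score(q, good) > ensemble_score(q, bad))  # True
```

The efficiency gain comes from sharing the expensive encoder: adding a head costs almost nothing compared with running a second full transformer.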

SC2 Benchmark: Supervised Compression for Split Computing

1 code implementation · 16 Mar 2022 · Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

With the increasing demand for deep learning models on mobile devices, splitting neural network computation between the device and a more powerful edge server has become an attractive solution.

Data Compression · Edge-computing · +2
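The device/server split described in the entry above can be sketched minimally. This is a hypothetical toy, not the paper's code: a "head" on the mobile device compresses the input into a small bottleneck representation, only that bottleneck crosses the network, and a "tail" on the edge server finishes the inference.

```python
def head(pixels):
    """Device-side: compress the input into a bottleneck.

    Block-averaging over groups of 4 values stands in for early DNN layers.
    """
    return [sum(pixels[i:i + 4]) / 4 for i in range(0, len(pixels), 4)]

def tail(bottleneck):
    """Server-side: finish the inference from the bottleneck.

    A stand-in classifier that thresholds the mean activation.
    """
    mean = sum(bottleneck) / len(bottleneck)
    return "object" if mean > 0.5 else "background"

pixels = [0.9, 0.8, 0.7, 0.9] * 4          # 16 values captured on-device
bottleneck = head(pixels)                   # only 4 values cross the network
print(len(pixels), len(bottleneck), tail(bottleneck))  # 16 4 object
```

The design trade-off is where to cut: an earlier split leaves the device cheap but sends more data, while a later split shrinks the transfer at the cost of more on-device compute — which is why supervised compression of the bottleneck matters.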

Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery

1 code implementation · 21 Jun 2022 · Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Yoshitaka Ushiku

For each of the 120 SRSD datasets, we carefully review the properties of the formula and its variables to design reasonably realistic sampling ranges of values, so that our new SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.

regression · Symbolic Regression · +1

SRSD: Rethinking Datasets of Symbolic Regression for Scientific Discovery

1 code implementation · NeurIPS 2022 AI for Science: Progress and Promises · Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Yoshitaka Ushiku

Symbolic Regression (SR) is the task of recovering mathematical expressions from given data, and it has been attracting attention from the research community for its potential for scientific discovery.

regression · Symbolic Regression · +1

Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages

no code implementations · 25 May 2023 · Shivanshu Gupta, Yoshitomo Matsubara, Ankit Chadha, Alessandro Moschitti

While impressive performance has been achieved on the task of Answer Sentence Selection (AS2) for English, the same does not hold for languages that lack large labeled datasets.

Knowledge Distillation · Machine Translation · +2

SplitBeam: Effective and Efficient Beamforming in Wi-Fi Networks Through Split Computing

no code implementations · 12 Oct 2023 · Niloofar Bahadori, Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia

However, the size of the matrix grows with the number of antennas and subcarriers, resulting in an increasing amount of airtime overhead and computational load at the station.

torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP

1 code implementation · 26 Oct 2023 · Yoshitomo Matsubara

Reproducibility in scientific work has become increasingly important in research communities such as machine learning, natural language processing, and computer vision, due to the rapid development of these domains supported by recent advances in deep learning.

Image Classification · Knowledge Distillation · +4

A Transformer Model for Symbolic Regression towards Scientific Discovery

1 code implementation · 7 Dec 2023 · Florian Lalande, Yoshitomo Matsubara, Naoya Chiba, Tatsunori Taniai, Ryo Igarashi, Yoshitaka Ushiku

Once trained, we apply our best model to the SRSD datasets (Symbolic Regression for Scientific Discovery datasets) which yields state-of-the-art results using the normalized tree-based edit distance, at no extra computational cost.

regression · Symbolic Regression
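The entry above evaluates predicted expressions with a normalized tree-based edit distance. A simplified stand-in for that metric (an illustrative approximation, not the benchmark's exact implementation): serialize each expression tree in prefix order, take the Levenshtein distance between the token sequences, and normalize by the longer sequence length, so 0 means identical expressions and 1 means maximally different.

```python
def levenshtein(a, b):
    """Edit distance between two token sequences (iterative DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[-1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution / match
        prev = curr
    return prev[-1]

def normalized_distance(pred_tokens, true_tokens):
    """0.0 for identical sequences, up to 1.0 for completely different ones."""
    return levenshtein(pred_tokens, true_tokens) / max(len(pred_tokens), len(true_tokens))

# e.g. predicted F = m*a vs. true F = m*a + c, both in prefix notation:
print(normalized_distance(["*", "m", "a"], ["+", "*", "m", "a", "c"]))  # 0.4
```

Normalizing makes scores comparable across formulas of very different sizes, which is what allows a single benchmark number over the 120 SRSD datasets.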

Citations Beyond Self Citations: Identifying Authors, Affiliations, and Nationalities in Scientific Papers

1 code implementation · WOSP 2020 · Yoshitomo Matsubara, Sameer Singh

Our models are accurate; we identify at least one of authors, affiliations, and nationalities of held-out papers with 40.3%, 47.9%, and 86.0% accuracy respectively, from the top-10 guesses of our models.

COVIDLies: Detecting COVID-19 Misinformation on Social Media

no code implementations · EMNLP (NLP-COVID19) 2020 · Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh

The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter.

Misconceptions · Misinformation · +2
