Search Results for author: Yoshitomo Matsubara

Found 13 papers, 10 papers with code

COVIDLies: Detecting COVID-19 Misinformation on Social Media

no code implementations EMNLP (NLP-COVID19) 2020 Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh

The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter.

Misconceptions Misinformation +2

Citations Beyond Self Citations: Identifying Authors, Affiliations, and Nationalities in Scientific Papers

1 code implementation WOSP 2020 Yoshitomo Matsubara, Sameer Singh

Our models are accurate: from the top-10 guesses, we identify at least one of the authors, affiliations, and nationalities of held-out papers with 40.3%, 47.9%, and 86.0% accuracy, respectively.

Rethinking Symbolic Regression Datasets and Benchmarks for Scientific Discovery

1 code implementation21 Jun 2022 Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Tatsunori Taniai, Yoshitaka Ushiku

Focusing on a set of formulas used in existing datasets based on the Feynman Lectures on Physics, we recreate 120 datasets to discuss the performance of symbolic regression for scientific discovery (SRSD).

regression Symbolic Regression

SC2: Supervised Compression for Split Computing

1 code implementation16 Mar 2022 Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

Split computing distributes the execution of a neural network (e.g., for a classification task) between a mobile device and a more powerful edge server.

Data Compression Edge-computing +2
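The split-computing idea described above can be illustrated with a minimal sketch: a small "head" runs on the device and emits a compressed bottleneck tensor, which is sent to the server-side "tail" for the final prediction. The weights and dimensions below are made up for illustration; this is not the SC2 model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny model split at a bottleneck layer (dimensions are illustrative).
W_head = rng.standard_normal((32, 8))   # head: compresses a 32-dim input to an 8-dim bottleneck
W_tail = rng.standard_normal((8, 10))   # tail: maps the bottleneck to 10 class scores

def device_head(x):
    # Runs on the mobile device; only this small bottleneck crosses the network.
    return np.maximum(x @ W_head, 0.0)  # ReLU bottleneck

def server_tail(z):
    # Runs on the edge server, completing the classification.
    return z @ W_tail

x = rng.standard_normal(32)
z = device_head(x)        # transmitted payload: 8 floats instead of 32
scores = server_tail(z)
```

The point of the split is that the transmitted tensor `z` is smaller than the raw input, trading a little on-device compute for reduced communication.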

Ensemble Transformer for Efficient and Accurate Ranking Tasks: an Application to Question Answering Systems

no code implementations15 Jan 2022 Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti

CERBERUS consists of two components: a stack of transformer layers used to encode inputs, and a set of ranking heads; unlike traditional distillation techniques, each head is trained by distilling a different large transformer architecture, preserving the diversity of the ensemble members.

Efficient Neural Network Question Answering
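The shared-encoder-plus-heads layout described in the abstract can be sketched as follows. Everything here (dimensions, `tanh` encoder, mean aggregation) is an assumption for illustration, not the CERBERUS architecture: one encoder pass feeds several ranking heads, and the ensemble score averages them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative shared encoder and one scoring head per (hypothetical) distilled teacher.
W_enc = rng.standard_normal((16, 8))
heads = [rng.standard_normal(8) for _ in range(3)]  # each head mimics a different teacher

def ensemble_score(x):
    h = np.tanh(x @ W_enc)                  # encode the input once
    return np.mean([h @ w for w in heads])  # average the per-head ranking scores

# Rank five candidate answers by descending ensemble score.
candidates = rng.standard_normal((5, 16))
ranking = np.argsort([-ensemble_score(c) for c in candidates])
```

Because the encoder runs once per input, adding heads costs far less than running several full transformer rankers.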

Supervised Compression for Resource-Constrained Edge Computing Systems

2 code implementations21 Aug 2021 Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt

There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.

Data Compression Edge-computing +2

Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges

no code implementations8 Mar 2021 Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia

Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others.

Autonomous Vehicles Image Classification +2
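The early-exiting strategy surveyed above can be sketched in a few lines: after each network block, a lightweight classifier head checks whether the prediction is already confident enough to stop. The blocks, heads, and the 0.9 threshold below are stand-ins, not a model from the survey.

```python
import numpy as np

def classify_with_early_exit(x, blocks, exit_heads, threshold=0.9):
    # Run DNN blocks in order; after each block, a lightweight head predicts class
    # probabilities, and we stop as soon as the top probability clears the threshold.
    for i, (block, head) in enumerate(zip(blocks, exit_heads)):
        x = block(x)
        probs = head(x)
        if probs.max() >= threshold:
            return probs.argmax(), i       # exited early after block i
    return probs.argmax(), len(blocks) - 1  # fell through to the final block

def make_head(conf):
    # Toy head that always predicts class 3 with a fixed confidence.
    def head(x):
        p = np.full(10, (1.0 - conf) / 9.0)
        p[3] = conf
        return p
    return head

blocks = [lambda x: x] * 3  # identity blocks, purely for illustration
heads = [make_head(c) for c in (0.5, 0.95, 0.99)]
label, exit_idx = classify_with_early_exit(np.zeros(4), blocks, heads)
# exits after the second block (index 1), since 0.95 >= 0.9
```

Easy inputs exit early and save compute; hard inputs pay for the full network, which is the latency/accuracy trade-off the survey discusses.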

torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation

1 code implementation25 Nov 2020 Yoshitomo Matsubara

While knowledge distillation (transfer) has been attracting attention from the research community, recent developments in the field have heightened the need for reproducible studies and highly generalized frameworks to lower the barriers to such high-quality, reproducible deep learning research.

Image Classification Instance Segmentation +3
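At the core of the knowledge-distillation experiments a framework like torchdistill configures is the soft-target loss of Hinton et al.: the student matches the teacher's temperature-softened output distribution. The numpy sketch below shows that objective in isolation; it is a minimal illustration, not torchdistill's API.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-softened softmax, computed stably.
    z = logits / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soft-target cross-entropy between teacher and student distributions;
    # the T*T factor keeps gradient magnitudes comparable across temperatures.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum() * T * T
```

By Gibbs' inequality the loss is minimized when the student's softened distribution matches the teacher's, which is exactly the behavior distillation frameworks exploit.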

Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems

2 code implementations20 Nov 2020 Yoshitomo Matsubara, Davide Callegaro, Sabur Baidya, Marco Levorato, Sameer Singh

In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.

Edge-computing Image Classification +2

Neural Compression and Filtering for Edge-assisted Real-time Object Detection in Challenged Networks

3 code implementations31 Jul 2020 Yoshitomo Matsubara, Marco Levorato

However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading.

Edge-computing object-detection +1

Split Computing for Complex Object Detectors: Challenges and Preliminary Results

2 code implementations27 Jul 2020 Yoshitomo Matsubara, Marco Levorato

Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.

Edge-computing Image Classification

Distilled Split Deep Neural Networks for Edge-Assisted Real-Time Systems

2 code implementations1 Oct 2019 Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato, Sameer Singh

Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce capture-to-output delay.

Edge-computing Image Classification +2
