no code implementations • EMNLP (NLP-COVID19) 2020 • Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh
The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter.
1 code implementation • WOSP 2020 • Yoshitomo Matsubara, Sameer Singh
Our models are accurate; we identify at least one of the authors, affiliations, and nationalities of held-out papers with 40.3%, 47.9%, and 86.0% accuracy, respectively, from the top-10 guesses of our models.
1 code implementation • 21 Jun 2022 • Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Tatsunori Taniai, Yoshitaka Ushiku
Focusing on a set of formulas used in existing datasets based on the Feynman Lectures on Physics, we recreate 120 datasets to discuss the performance of symbolic regression for scientific discovery (SRSD).
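The SRSD idea above can be illustrated with a minimal sketch: sample input variables, then generate targets from a known ground-truth physics formula (here Newton's law of gravitation, one of the Feynman equations). The variable ranges below are illustrative, not the ones used in the actual datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

G = 6.674e-11  # gravitational constant
m1 = rng.uniform(1.0, 10.0, size=1000)
m2 = rng.uniform(1.0, 10.0, size=1000)
r = rng.uniform(0.1, 5.0, size=1000)

# Target values generated by the ground-truth formula F = G * m1 * m2 / r^2;
# a symbolic-regression method is then asked to rediscover this expression.
F = G * m1 * m2 / r**2

# The resulting dataset is the table of sampled inputs and computed targets.
dataset = np.column_stack([m1, m2, r, F])
print(dataset.shape)  # (1000, 4)
```

A benchmark built this way can score a candidate expression by how closely its predictions match the generated targets.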
1 code implementation • 16 Mar 2022 • Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt
Split computing distributes the execution of a neural network (e.g., for a classification task) between a mobile device and a more powerful edge server.
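The split described above can be sketched in a few lines: treat the network as a sequence of layers, run the first few on the device, send the intermediate activation over the network, and finish on the server. All names, shapes, and the split point below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy network: four fully connected layers with ReLU activations.
layers = [rng.standard_normal((16, 16)) * 0.1 for _ in range(4)]

def forward(x, weights):
    for W in weights:
        x = relu(x @ W)
    return x

def device_part(x, split=2):
    # Runs on the mobile device; its output is what gets transmitted.
    return forward(x, layers[:split])

def server_part(h, split=2):
    # Runs on the edge server, completing the inference.
    return forward(h, layers[split:])

x = rng.standard_normal(16)
# Splitting the computation does not change the final result.
assert np.allclose(server_part(device_part(x)), forward(x, layers))
```

The engineering question split computing studies is where to place the split so that the transmitted tensor is small and the device-side workload stays light.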
no code implementations • 15 Jan 2022 • Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti
CERBERUS consists of two components: a stack of transformer layers used to encode inputs, and a set of ranking heads; unlike traditional distillation techniques, each head is trained by distilling from a different large transformer architecture in a way that preserves the diversity of the ensemble members.
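The idea of one shared encoder feeding several heads, each distilled from a different teacher, can be sketched as follows. Everything here is a hypothetical stand-in (a tanh "encoder", scalar-output "teachers", a plain MSE distillation loss), meant only to show the loss structure, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    # Stand-in for the shared stack of transformer layers.
    return np.tanh(x @ W_enc)

W_enc = rng.standard_normal((8, 8)) * 0.1
heads = [rng.standard_normal(8) * 0.1 for _ in range(3)]      # ranking heads

# Each "teacher" is a different (toy) scoring model the heads must mimic.
teachers = [(lambda x, s=s: float(x.sum()) * s) for s in (0.5, 1.0, 2.0)]

x = rng.standard_normal(8)
h = encoder(x, W_enc)

# One distillation loss per (head, teacher) pair: each head chases its own
# teacher, which is what keeps the ensemble members diverse.
losses = [float((h @ W - t(x)) ** 2) for W, t in zip(heads, teachers)]
total_loss = sum(losses)
print(len(losses))  # 3
```

Training would backpropagate `total_loss` through the shared encoder and all heads jointly.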
2 code implementations • 7 Jan 2022 • Yoshitomo Matsubara, Davide Callegaro, Sameer Singh, Marco Levorato, Francesco Restuccia
We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.)
2 code implementations • 21 Aug 2021 • Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt
There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.
no code implementations • 8 Mar 2021 • Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others.
1 code implementation • 25 Nov 2020 • Yoshitomo Matsubara
While knowledge distillation (transfer) has been attracting attention from the research community, recent developments in the field have heightened the need for reproducible studies and highly generalized frameworks that lower the barriers to high-quality, reproducible deep learning research.
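A framework like this centers on objectives such as the standard knowledge-distillation loss (Hinton et al.): the KL divergence between temperature-softened teacher and student distributions. The sketch below is a generic illustration of that loss, not code from the framework itself.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T, then take KL(teacher || student),
    # scaled by T^2 as in the original formulation.
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

student = np.array([1.0, 0.5, -0.2])
teacher = np.array([2.0, 0.1, -1.0])
loss = distillation_loss(student, teacher)
assert loss >= 0.0  # KL divergence is non-negative
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels; a reproducible framework makes such recipes declarative and swappable.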
Ranked #94 on Instance Segmentation on COCO test-dev
2 code implementations • 20 Nov 2020 • Yoshitomo Matsubara, Davide Callegaro, Sabur Baidya, Marco Levorato, Sameer Singh
In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.
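The in-network compression described above amounts to injecting a narrow "bottleneck" into an early layer: the device transmits only the small bottleneck output, and the server decodes it and runs the rest of the model. The sketch below uses illustrative dimensions and a toy encoder/decoder pair, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Early-layer activation to be compressed (256-d) and an injected bottleneck
# that maps it down to 16 dimensions before transmission.
W_enc = rng.standard_normal((256, 16)) * 0.05   # device-side encoder
W_dec = rng.standard_normal((16, 256)) * 0.05   # server-side decoder

features = rng.standard_normal(256)             # early-layer activation
compressed = np.tanh(features @ W_enc)          # what the device transmits
restored = np.tanh(compressed @ W_dec)          # decoded on the edge server

# The transmitted tensor is 16x smaller than the original activation.
print(features.size // compressed.size)  # 16
```

Training then reshapes the early layers so that this narrow representation still carries enough information for the downstream classification task.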
3 code implementations • 31 Jul 2020 • Yoshitomo Matsubara, Marco Levorato
However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading.
2 code implementations • 27 Jul 2020 • Yoshitomo Matsubara, Marco Levorato
Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.
2 code implementations • 1 Oct 2019 • Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato, Sameer Singh
Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce the capture-to-output delay.