1 code implementation • WOSP 2020 • Yoshitomo Matsubara, Sameer Singh
Our models are accurate; we identify at least one of authors, affiliations, and nationalities of held-out papers with 40.3%, 47.9%, and 86.0% accuracy, respectively, from the top-10 guesses of our models.
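The figures above are top-10 accuracies. As a minimal sketch (not the paper's code; the data below is purely illustrative), top-k accuracy counts an example as correct when the true label appears among the model's k highest-scored guesses:

```python
def top_k_accuracy(scores, labels, k=10):
    """Fraction of examples whose true label is among the k highest-scored classes.

    scores: list of lists, scores[i][c] is the model's score for class c on example i.
    labels: list of true class indices.
    """
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k highest-scoring classes for this example.
        top_k = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        if label in top_k:
            hits += 1
    return hits / len(labels)

# Toy example: 3 examples, 4 classes, k=2 (illustrative numbers only).
scores = [[0.1, 0.7, 0.2, 0.0],
          [0.5, 0.1, 0.3, 0.1],
          [0.2, 0.2, 0.1, 0.5]]
labels = [2, 0, 1]
print(top_k_accuracy(scores, labels, k=2))
```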
no code implementations • EMNLP (NLP-COVID19) 2020 • Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, Sameer Singh
The ongoing pandemic has heightened the need for developing tools to flag COVID-19-related misinformation on the internet, specifically on social media such as Twitter.
1 code implementation • 7 Dec 2023 • Florian Lalande, Yoshitomo Matsubara, Naoya Chiba, Tatsunori Taniai, Ryo Igarashi, Yoshitaka Ushiku
Once trained, we apply our best model to the SRSD datasets (Symbolic Regression for Scientific Discovery datasets) which yields state-of-the-art results using the normalized tree-based edit distance, at no extra computational cost.
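The metric named above is a normalized tree-based edit distance between predicted and ground-truth expression trees. As a hedged simplification (the paper operates on trees; this sketch uses token sequences and Levenshtein distance only to illustrate the normalization idea), the distance is divided by the larger expression size so that 0 means identical and 1 means maximally different:

```python
def levenshtein(a, b):
    """Classic edit distance between two token sequences (insert/delete/substitute, cost 1)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def normalized_edit_distance(a, b):
    """Edit distance normalized by the larger sequence length, in [0, 1]."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

# Two prefix-notation expressions as token lists (hypothetical example).
pred = ["mul", "m", "pow", "v", "2"]                 # m * v^2
true = ["mul", "0.5", "mul", "m", "pow", "v", "2"]   # 0.5 * m * v^2
print(normalized_edit_distance(pred, true))
```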
1 code implementation • 26 Oct 2023 • Yoshitomo Matsubara
Reproducibility in scientific work has been becoming increasingly important in research communities such as machine learning, natural language processing, and computer vision communities due to the rapid development of the research domains supported by recent advances in deep learning.
no code implementations • 12 Oct 2023 • Niloofar Bahadori, Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia
However, the size of the matrix grows with the number of antennas and subcarriers, resulting in an increasing amount of airtime overhead and computational load at the station.
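The growth described above can be sketched with a back-of-envelope calculation. Assuming an uncompressed channel matrix report where each subcarrier contributes one complex matrix per antenna pair (all numbers below are illustrative, not taken from any standard):

```python
def csi_feedback_bits(n_tx, n_rx, n_subcarriers, bits_per_entry=16):
    """Rough size of an uncompressed channel matrix report.

    Each subcarrier contributes an n_tx x n_rx complex matrix; each complex
    entry is assumed to take bits_per_entry bits (e.g. two 8-bit components).
    Illustrative assumption, not a standard's exact encoding.
    """
    return n_tx * n_rx * n_subcarriers * bits_per_entry

# Overhead grows linearly in both the antenna count and the subcarrier count.
small = csi_feedback_bits(n_tx=2, n_rx=2, n_subcarriers=64)
large = csi_feedback_bits(n_tx=8, n_rx=2, n_subcarriers=256)
print(small, large)  # the larger configuration is 16x the smaller
```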
no code implementations • 25 May 2023 • Shivanshu Gupta, Yoshitomo Matsubara, Ankit Chadha, Alessandro Moschitti
While impressive performance has been achieved on the task of Answer Sentence Selection (AS2) for English, the same does not hold for languages that lack large labeled datasets.
1 code implementation • NeurIPS 2022 AI for Science: Progress and Promises 2022 • Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Yoshitaka Ushiku
Symbolic Regression (SR) is the task of recovering mathematical expressions from given data, and it has been attracting attention from the research community for its potential for scientific discovery.
1 code implementation • 21 Jun 2022 • Yoshitomo Matsubara, Naoya Chiba, Ryo Igarashi, Yoshitaka Ushiku
For each of the 120 SRSD datasets, we carefully review the properties of the formula and its variables to design reasonably realistic sampling ranges of values, so that our new SRSD datasets can be used to evaluate the potential of SRSD, e.g., whether or not an SR method can (re)discover physical laws from such datasets.
1 code implementation • 16 Mar 2022 • Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt
With the increasing demand for deep learning models on mobile devices, splitting neural network computation between the device and a more powerful edge server has become an attractive solution.
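Splitting computation as described above means running the early layers on the device, transmitting the intermediate representation, and running the remaining layers on the server. A minimal pure-Python sketch of the idea, with hypothetical toy layers standing in for a real network:

```python
# Minimal sketch of split computing: a model as a pipeline of layer functions,
# cut at a chosen split point. All layer names and operations are hypothetical.

def layer_a(x):  # e.g. an early feature extractor
    return [v * 2 for v in x]

def layer_b(x):  # e.g. a mid-network block
    return [v + 1 for v in x]

def layer_c(x):  # e.g. the final head producing a single output
    return sum(x)

MODEL = [layer_a, layer_b, layer_c]

def run_split(x, split_point):
    """Run layers [0, split_point) on the device, the rest on the edge server."""
    # --- on the mobile device ---
    intermediate = x
    for layer in MODEL[:split_point]:
        intermediate = layer(intermediate)
    # `intermediate` is what would be transmitted over the wireless link.
    # --- on the edge server ---
    out = intermediate
    for layer in MODEL[split_point:]:
        out = layer(out)
    return out

# Any split point yields the same output as running the full model in one place.
print(run_split([1, 2, 3], split_point=0), run_split([1, 2, 3], split_point=2))
```

Choosing the split point trades device compute against the size of the transmitted intermediate tensor, which is exactly the design space this line of work explores.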
1 code implementation • 15 Jan 2022 • Yoshitomo Matsubara, Luca Soldaini, Eric Lind, Alessandro Moschitti
CERBERUS consists of two components: a stack of transformer layers that is used to encode inputs, and a set of ranking heads; unlike in traditional distillation techniques, each head is trained by distilling a different large transformer architecture in a way that preserves the diversity of the ensemble members.
2 code implementations • 7 Jan 2022 • Yoshitomo Matsubara, Davide Callegaro, Sameer Singh, Marco Levorato, Francesco Restuccia
We show that BottleFit decreases power consumption and latency by up to 49% and 89%, respectively, with respect to (w.r.t.)
2 code implementations • 21 Aug 2021 • Yoshitomo Matsubara, Ruihan Yang, Marco Levorato, Stephan Mandt
There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.
19 code implementations • 8 Mar 2021 • Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep neural networks (DNNs) to execute complex inference tasks such as image classification and speech recognition, among others.
1 code implementation • 25 Nov 2020 • Yoshitomo Matsubara
While knowledge distillation (transfer) has been attracting attention from the research community, recent developments in the field have heightened the need for reproducible studies and highly generalized frameworks that lower barriers to high-quality, reproducible deep learning research.
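At the core of knowledge distillation is training a student against a teacher's temperature-softened outputs. A minimal sketch of the classic soft-target distillation loss (an illustration of the general technique, not this framework's actual code; the logits below are made up):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Soft-target distillation loss: T^2 * KL(teacher || student) at temperature T."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# The loss is zero when the student matches the teacher exactly,
# and positive otherwise.
print(kd_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
print(kd_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0]) > 0)
```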
Ranked #98 on Instance Segmentation on COCO test-dev
2 code implementations • 20 Nov 2020 • Yoshitomo Matsubara, Davide Callegaro, Sabur Baidya, Marco Levorato, Sameer Singh
In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.
2 code implementations • 31 Jul 2020 • Yoshitomo Matsubara, Marco Levorato
However, poor conditions of the wireless channel connecting the mobile devices to the edge servers may degrade the overall capture-to-output delay achieved by edge offloading.
2 code implementations • 27 Jul 2020 • Yoshitomo Matsubara, Marco Levorato
Following the trends of mobile and edge computing for DNN models, an intermediate option, split computing, has been attracting attention from the research community.
2 code implementations • 1 Oct 2019 • Yoshitomo Matsubara, Sabur Baidya, Davide Callegaro, Marco Levorato, Sameer Singh
Offloading the execution of complex Deep Neural Network (DNN) models to compute-capable devices at the network edge, that is, edge servers, can significantly reduce capture-to-output delay.