1 code implementation • 4 Oct 2024 • Emil Vatai, Aleksandr Drozd, Ivan R. Ivanov, Yinghao Ren, Mohamed Wahib
Frameworks and DSLs that auto-generate code have traditionally relied on the human experts who develop them to put rigorous methods in place for assuring the legality of the applied code transformations.
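As a loose illustration of the problem this entry addresses (not this paper's method), the sketch below checks a candidate loop transformation by randomized equivalence testing, which can falsify a transformation but never prove it legal; rigorous assurance requires, e.g., dependence analysis. All function names here are hypothetical.

```python
import numpy as np

def original(a, b):
    # Reference loop nest: row-major traversal.
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = a[i, j] + b[i, j]
    return out

def transformed(a, b):
    # Candidate transformation: loop interchange.
    out = np.zeros_like(a)
    for j in range(a.shape[1]):
        for i in range(a.shape[0]):
            out[i, j] = a[i, j] + b[i, j]
    return out

# Randomized testing: necessary but not sufficient evidence of legality.
rng = np.random.default_rng(0)
for _ in range(100):
    a, b = rng.random((8, 8)), rng.random((8, 8))
    assert np.allclose(original(a, b), transformed(a, b))
```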
no code implementations • 22 Jul 2024 • Yu Xue, Chenchen Zhu, Mengchu Zhou, Mohamed Wahib, Moncef Gabbouj
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
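As a toy illustration of the NAS idea (not this paper's algorithm), the sketch below runs random search over a tiny hypothetical search space with a stand-in scoring function; a real NAS system would train candidates or use a learned or zero-cost proxy.

```python
import random

# Toy search space: (depth, width, kernel_size) choices -- illustrative only.
SPACE = {"depth": [2, 4, 8], "width": [32, 64, 128], "kernel": [3, 5, 7]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def proxy_score(arch):
    # Hypothetical stand-in for an accuracy estimate; here it just
    # prefers cheaper architectures.
    return -(arch["depth"] * arch["width"] * arch["kernel"] ** 2)

best = max((sample() for _ in range(50)), key=proxy_score)
print("best architecture found:", best)
```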
no code implementations • 15 Apr 2024 • Enzhi Zhang, Isaac Lyngaas, Peng Chen, Xiao Wang, Jun Igarashi, Yuankai Huo, Mohamed Wahib, Masaharu Munetomo
For high-resolution images, e.g., microscopic pathology images, the quadratic compute and memory cost of attention prohibits the use of attention-based models at the smaller patch sizes that are favorable for segmentation.
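A back-of-the-envelope calculation makes the quadratic blowup concrete; the numbers below are illustrative, not taken from the paper.

```python
# Cost of one full fp32 self-attention matrix on a 4096x4096 image:
# halving the patch size quadruples the token count and multiplies the
# attention-matrix memory by 16.
def attention_matrix_gib(image_px, patch_px, bytes_per_elem=4):
    tokens = (image_px // patch_px) ** 2      # patches per side, squared
    return tokens, tokens ** 2 * bytes_per_elem / 2 ** 30

for patch in (32, 16, 8):
    tokens, gib = attention_matrix_gib(4096, patch)
    print(f"patch {patch:2d}px -> {tokens:6d} tokens, {gib:6.1f} GiB")
```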
no code implementations • 1 Apr 2024 • Cheng Chen, Shoki Ohta, Takayuki Nishio, Mohamed Wahib
This study explores the feasibility of adapting CSI-guided imaging across varied environments.
no code implementations • 4 Nov 2023 • Xiao Wang, Isaac Lyngaas, Aristeidis Tsaris, Peng Chen, Sajal Dash, Mayanka Chandra Shekar, Tao Luo, Hong-Jun Yoon, Mohamed Wahib, John Gounley
This paper presents a novel and efficient distributed training method, the Long Short-Sequence Transformer (LSS Transformer), for training transformers with long sequences.
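For context, the snippet below is a generic sequence-parallelism sketch (not the exact LSS Transformer algorithm): each worker attends its own query chunk against the gathered keys and values, and the concatenated chunks match single-device attention exactly.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
seq, dim, workers = 16, 8, 4
q, k, v = (rng.random((seq, dim)) for _ in range(3))

# Single-device reference attention.
full = softmax(q @ k.T / np.sqrt(dim)) @ v

# Each "worker" holds one query chunk; k and v stand in for all-gathered
# keys/values. Concatenating the per-worker outputs recovers the reference.
chunks = [softmax(qc @ k.T / np.sqrt(dim)) @ v for qc in np.split(q, workers)]
assert np.allclose(full, np.concatenate(chunks))
```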
no code implementations • 9 May 2023 • Cheng Chen, Shoki Ohta, Takayuki Nishio, Mehdi Bennis, Jihong Park, Mohamed Wahib
This work introduces CSI-Inpainter, a pioneering approach for occlusion removal using Channel State Information (CSI) time sequences, propelling wireless signal processing into the realm of visual scene recovery.
no code implementations • 6 Jan 2023 • Satoshi Matsuoka, Jens Domke, Mohamed Wahib, Aleksandr Drozd, Torsten Hoefler
While some laws are ending, new directions are emerging, such as algorithmic scaling and novel architecture research.
no code implementations • 12 May 2022 • Xiao Wang, Aristeidis Tsaris, Debangshu Mukherjee, Mohamed Wahib, Peng Chen, Mark Oxley, Olga Ovchinnikova, Jacob Hinkle
In this paper, we propose a novel image gradient decomposition method that significantly reduces the memory footprint for ptychographic reconstruction by tessellating image gradients and diffraction measurements into tiles.
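As a loose illustration of the general idea (not the paper's exact decomposition): for a pixel-wise loss, the gradient decomposes over disjoint tiles, so tiles can be processed one at a time with only one tile's intermediate data resident in memory.

```python
import numpy as np

def tiled_gradient(image, target, tile=64):
    # Gradient of a pixel-wise least-squares loss, computed tile by tile.
    grad = np.empty_like(image)
    for i in range(0, image.shape[0], tile):
        for j in range(0, image.shape[1], tile):
            s = np.s_[i:i + tile, j:j + tile]
            grad[s] = 2.0 * (image[s] - target[s])   # d/dx of (x - t)^2
    return grad

rng = np.random.default_rng(0)
img, tgt = rng.random((256, 256)), rng.random((256, 256))
# Tiled accumulation matches the full-image gradient exactly.
assert np.allclose(tiled_gradient(img, tgt), 2.0 * (img - tgt))
```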
no code implementations • 21 Oct 2021 • Steven Farrell, Murali Emani, Jacob Balma, Lukas Drescher, Aleksandr Drozd, Andreas Fink, Geoffrey Fox, David Kanter, Thorsten Kurth, Peter Mattson, Dawei Mu, Amit Ruhela, Kento Sato, Koichi Shirahata, Tsuguchika Tabaru, Aristeidis Tsaris, Jan Balewski, Ben Cumming, Takumi Danjo, Jens Domke, Takaaki Fukai, Naoto Fukumoto, Tatsuya Fukushi, Balazs Gerofi, Takumi Honda, Toshiyuki Imamura, Akihiko Kasagi, Kentaro Kawakami, Shuhei Kudo, Akiyoshi Kuroda, Maxime Martinasso, Satoshi Matsuoka, Henrique Mendonça, Kazuki Minami, Prabhat Ram, Takashi Sawada, Mallikarjun Shankar, Tom St. John, Akihiro Tabuchi, Venkatram Vishwanath, Mohamed Wahib, Masafumi Yamazaki, Junqi Yin
Scientific communities are increasingly adopting machine learning and deep learning models in their applications to accelerate scientific insights.
no code implementations • 19 Apr 2021 • Albert Njoroge Kahira, Truong Thao Nguyen, Leonardo Bautista Gomez, Ryousei Takano, Rosa M Badia, Mohamed Wahib
Deep Neural Network (DNN) frameworks use distributed training to achieve faster time to convergence and to alleviate memory capacity limitations when training large models and/or using high-dimensional inputs.
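For context, here is a minimal simulation of synchronous data parallelism (names and setup are illustrative): each worker computes a gradient on its batch shard, and averaging the shard gradients reproduces the single-worker gradient.

```python
import numpy as np

def grad_on_shard(w, x, y):
    # Mean-squared-error gradient for a linear model on one shard.
    return 2.0 * x.T @ (x @ w - y) / len(x)

rng = np.random.default_rng(0)
w = rng.random(4)
x, y = rng.random((32, 4)), rng.random(32)

# Four "workers", each with a quarter of the batch; the all-reduce step
# is modeled by averaging the per-shard gradients.
shards = zip(np.split(x, 4), np.split(y, 4))
avg_grad = np.mean([grad_on_shard(w, xs, ys) for xs, ys in shards], axis=0)
assert np.allclose(avg_grad, grad_on_shard(w, x, y))
```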
no code implementations • 15 Oct 2020 • Martin Schlueter, Mehdi Neshat, Mohamed Wahib, Masaharu Munetomo, Markus Wagner
This contribution introduces the GTOPX space mission benchmark collection, an extension of the GTOP database published by the European Space Agency (ESA).
no code implementations • 26 Aug 2020 • Mohamed Wahib, Haoyu Zhang, Truong Thao Nguyen, Aleksandr Drozd, Jens Domke, Lingqi Zhang, Ryousei Takano, Satoshi Matsuoka
An alternative solution is to use out-of-core methods instead of, or in addition to, data parallelism.
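A minimal out-of-core sketch under stated assumptions (illustrative, not any specific framework's implementation): forward activations are offloaded to disk layer by layer, so only one layer's activation stays resident in device memory at a time.

```python
import numpy as np, tempfile, os

def forward_out_of_core(x, weights, tmpdir):
    paths = []
    for i, w in enumerate(weights):
        x = np.maximum(x @ w, 0.0)                 # one layer: matmul + ReLU
        path = os.path.join(tmpdir, f"act_{i}.npy")
        np.save(path, x)                           # offload activation to disk
        paths.append(path)
    return x, paths                                # paths reloadable for backward

rng = np.random.default_rng(0)
ws = [rng.random((16, 16)) for _ in range(3)]
with tempfile.TemporaryDirectory() as d:
    out, saved = forward_out_of_core(rng.random((4, 16)), ws, d)
    print("output shape:", out.shape, "| offloaded activations:", len(saved))
```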