no code implementations • 21 Oct 2024 • Lele Zheng, Yang Cao, Renhe Jiang, Kenjiro Taura, Yulong Shen, Sheng Li, Masatoshi Yoshikawa
To understand privacy risks in spatiotemporal federated learning, we first propose Spatiotemporal Gradient Inversion Attack (ST-GIA), a gradient attack algorithm tailored to spatiotemporal data that successfully reconstructs the original location from gradients.
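To make the gradient-inversion idea concrete (this is a generic illustration, not the paper's ST-GIA algorithm), a batch-of-one gradient on a linear model leaks the input analytically: with loss L = (w·x + b − y)², the ratio dL/dw ÷ dL/db equals x itself.

```python
import numpy as np

# Client side: gradient of a squared loss on a single (location, label) pair
# for a linear model w.x + b.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
b = 0.5
x_true = np.array([35.01, 135.76, 12.0])  # e.g. lat, lon, hour-of-day
y = 1.0

residual = w @ x_true + b - y
grad_w = 2.0 * residual * x_true  # dL/dw = 2(w.x + b - y) * x
grad_b = 2.0 * residual           # dL/db = 2(w.x + b - y)

# Attacker side: for a batch of one, grad_w / grad_b recovers the input exactly.
x_recovered = grad_w / grad_b
print(x_recovered)
```

Larger batches and deeper models require iterative optimization instead of this closed form, which is where tailored attacks such as ST-GIA come in.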
1 code implementation • 13 May 2024 • Shun Takagi, Li Xiong, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
Human mobility data offers valuable insights for many applications such as urban planning and pandemic response, but its use also raises privacy concerns.
1 code implementation • 23 Aug 2023 • Fumiyuki Kato, Li Xiong, Shun Takagi, Yang Cao, Masatoshi Yoshikawa
In this study, we present Uldp-FL, a novel FL framework designed to guarantee user-level DP in cross-silo FL where a single user's data may belong to multiple silos.
no code implementations • 20 Dec 2022 • Hisaichi Shibata, Shouhei Hanaoka, Yang Cao, Masatoshi Yoshikawa, Tomomi Takenaga, Yukihiro Nomura, Naoto Hayashi, Osamu Abe
To release and use medical images, we need an algorithm that can simultaneously protect privacy and preserve pathologies in medical images.
no code implementations • 27 Apr 2022 • Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa, Yi Cai
Time is an important aspect of documents and is used in a range of NLP and IR tasks.
no code implementations • 8 Apr 2022 • Seng Pei Liew, Tsubasa Takahashi, Shun Takagi, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
However, introducing a centralized entity to the originally local privacy model forfeits one of local differential privacy's main appeals: the absence of any centralized entity.
1 code implementation • 15 Feb 2022 • Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients, which are commonly used in FL to enhance communication efficiency and model accuracy.
no code implementations • 8 Sep 2021 • Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa
In the last few years, open-domain question answering (ODQA) has advanced rapidly due to the development of deep learning techniques and the availability of large-scale QA datasets.
no code implementations • ACL 2021 • Yi Yu, Adam Jatowt, Antoine Doucet, Kazunari Sugiyama, Masatoshi Yoshikawa
In this paper, we address a novel task, Multiple TimeLine Summarization (MTLS), which extends the flexibility and versatility of Timeline Summarization (TLS).
no code implementations • 13 Jun 2021 • Yaowei Han, Yang Cao, Masatoshi Yoshikawa
Federated Learning (FL) is emerging as a promising paradigm of privacy-preserving machine learning, which trains an algorithm across multiple clients without exchanging their data samples.
1 code implementation • 8 Jun 2021 • Shuyuan Zheng, Yang Cao, Masatoshi Yoshikawa, Huizhong Li, Qiang Yan
FL-Market decouples ML from the need to centrally gather training data on the broker's side by using federated learning, an emerging privacy-preserving ML paradigm in which data owners collaboratively train an ML model by uploading local gradients (which are aggregated into a global gradient for model updating).
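The aggregation step described above can be sketched in a few lines (a minimal FedSGD-style round, not FL-Market's pricing or auction mechanism): each data owner uploads only a local gradient, and the broker averages them and applies one model update.

```python
import numpy as np

# Local gradients uploaded by three data owners; raw data never leaves them.
local_grads = [
    np.array([0.2, -0.1]),
    np.array([0.4, 0.3]),
    np.array([0.0, 0.1]),
]

# Broker side: average into a global gradient, then update the model.
global_grad = np.mean(local_grads, axis=0)

model = np.zeros(2)
lr = 0.1
model = model - lr * global_grad
print(global_grad, model)
```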
no code implementations • 24 Dec 2020 • Patrick Ocheja, Yang Cao, Shiyao Ding, Masatoshi Yoshikawa
How to contain the spread of the COVID-19 virus is a major concern for most countries.
Computers and Society Cryptography and Security 68P27 H.3.4
1 code implementation • 17 Sep 2020 • Ruixuan Liu, Yang Cao, Hong Chen, Ruoyang Guo, Masatoshi Yoshikawa
In this work, by leveraging the privacy amplification effect in the recently proposed shuffle model of differential privacy, we achieve the best of both worlds, i.e., accuracy in the curator model and strong privacy without relying on any trusted party.
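The shuffle model's setup can be sketched as follows (a minimal illustration assuming binary reports and randomized response; the amplification analysis itself is not shown): each user randomizes locally, and an intermediate shuffler strips identities and ordering before reports reach the curator.

```python
import math
import random

def randomized_response(bit, eps):
    # Local DP: report the true bit with probability e^eps / (e^eps + 1),
    # otherwise flip it.
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p else 1 - bit

random.seed(0)
true_bits = [1, 0, 1, 1, 0]
reports = [randomized_response(b, eps=1.0) for b in true_bits]
random.shuffle(reports)  # the shuffler breaks the link between user and report
print(reports)
```

Shuffling the locally randomized reports is what amplifies the per-user privacy guarantee from the curator's perspective.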
2 code implementations • 22 Jun 2020 • Shun Takagi, Tsubasa Takahashi, Yang Cao, Masatoshi Yoshikawa
The state-of-the-art approach for this problem is to build a generative model under differential privacy, which offers a rigorous privacy guarantee.
3 code implementations • 4 May 2020 • Yang Cao, Yonghui Xiao, Shun Takagi, Li Xiong, Masatoshi Yoshikawa, Yilin Shen, Jinfei Liu, Hongxia Jin, Xiaofeng Xu
Third, we design a private location trace release framework that pipelines the detection of location exposure, policy graph repair, and private trajectory release with customizable and rigorous location privacy.
Cryptography and Security Computers and Society
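A basic building block such frameworks rest on can be sketched briefly (this is the standard planar Laplace mechanism from geo-indistinguishability, not the paper's policy-graph machinery): a location is perturbed with a uniformly random angle and a Gamma(2, 1/eps)-distributed radius.

```python
import math
import random

def planar_laplace(x, y, eps, rng=random):
    # Planar Laplace mechanism: the noise radius has density
    # proportional to eps^2 * r * exp(-eps * r), i.e. Gamma(shape=2, scale=1/eps).
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.gammavariate(2.0, 1.0 / eps)
    return x + r * math.cos(theta), y + r * math.sin(theta)

random.seed(7)
noisy_x, noisy_y = planar_laplace(3.5, 1.2, eps=0.5)
print(noisy_x, noisy_y)
```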
3 code implementations • 1 May 2020 • Yang Cao, Shun Takagi, Yonghui Xiao, Li Xiong, Masatoshi Yoshikawa
Our system has three primary functions for epidemic surveillance: location monitoring, epidemic analysis, and contact tracing.
Databases Cryptography and Security
no code implementations • LREC 2020 • Sora Lim, Adam Jatowt, Michael Färber, Masatoshi Yoshikawa
In this paper, we propose a novel news bias dataset which facilitates the development and evaluation of approaches for detecting subtle bias in news articles and for understanding the characteristics of biased sentences.
no code implementations • 24 Mar 2020 • Ruixuan Liu, Yang Cao, Masatoshi Yoshikawa, Hong Chen
To prevent privacy leakages from gradients that are calculated on users' sensitive data, local differential privacy (LDP) has been considered as a privacy guarantee in federated SGD recently.
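A per-client step in this setting might look like the sketch below (illustrative only: the clipping-plus-Laplace scheme and the noise scale are assumptions for exposition, not the paper's mechanism): each client bounds its gradient's sensitivity by clipping, then adds noise before upload.

```python
import numpy as np

def ldp_perturb(grad, clip=1.0, eps=1.0, rng=None):
    # Clip the gradient to norm <= clip to bound sensitivity, then add
    # Laplace noise; the scale 2*clip/eps is illustrative, not a calibrated
    # guarantee for vector-valued outputs.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.laplace(scale=2.0 * clip / eps, size=grad.shape)

g = np.array([3.0, 4.0])  # norm 5, so it gets scaled down to norm 1
noisy = ldp_perturb(g, rng=np.random.default_rng(0))
print(noisy)
```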
3 code implementations • 23 Apr 2018 • Bei Liu, Jianlong Fu, Makoto P. Kato, Masatoshi Yoshikawa
Extensive experiments are conducted with 8K images, among which 1.5K images are randomly picked for evaluation.
2 code implementations • 29 Nov 2017 • Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong
Our analysis reveals that the event-level privacy loss of a DP mechanism may increase over time.
Databases
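A simpler, related phenomenon can be shown in two lines (plain sequential composition, not the paper's temporal-correlation analysis): releasing an eps-DP output at every timestamp makes the worst-case cumulative privacy loss grow linearly.

```python
# Sequential composition: T releases of an eps-DP mechanism cost at most
# T * eps in the worst case.
per_release_eps = 0.1
T = 10
cumulative = [per_release_eps * (t + 1) for t in range(T)]
print(cumulative[-1])  # worst-case loss after 10 releases
```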
2 code implementations • 24 Oct 2016 • Yang Cao, Masatoshi Yoshikawa, Yonghui Xiao, Li Xiong
Our analysis reveals that the privacy leakage of a DP mechanism may accumulate and increase over time.
Databases Cryptography and Security