no code implementations • 14 Apr 2025 • Shokichi Takakura, Seng Pei Liew, Satoshi Hasegawa
Most existing methods to accelerate DP-FL require either 1) additional hyperparameters or 2) additional computational cost for clients; neither is desirable, since 1) hyperparameter tuning is computationally expensive and a data-dependent choice of hyperparameters raises the risk of privacy leakage, and 2) clients are often resource-constrained.
no code implementations • 6 Jul 2022 • Ryuichi Ito, Seng Pei Liew, Tsubasa Takahashi, Yuya Sasaki, Makoto Onizuka
Applying Differentially Private Stochastic Gradient Descent (DPSGD) to training modern, large-scale neural networks such as transformer-based models is a challenging task, as the magnitude of noise added to the gradients at each iteration scales with model dimension, hindering the learning capability significantly.
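For intuition, here is a minimal, illustrative sketch of one DP-SGD step (not the authors' implementation): per-example gradients are clipped to an L2 bound and isotropic Gaussian noise is added to every coordinate, so the noise's expected L2 norm grows roughly as the square root of the model dimension. Parameter names such as `clip_norm` and `noise_multiplier` are illustrative choices.

```python
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    """One illustrative DP-SGD step on a batch of per-example gradients of shape [batch, d].

    Each example's gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are averaged, and isotropic Gaussian noise is added. Since noise
    is added to every coordinate, its magnitude scales with the dimension d.
    """
    batch, d = per_example_grads.shape
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch, size=d)
    return params - lr * (mean_grad + noise)
```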
1 code implementation • 20 Jun 2022 • Seng Pei Liew, Tsubasa Takahashi
We study the Gaussian mechanism in the shuffle model of differential privacy (DP).
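For background, a minimal sketch of the basic (central-model) Gaussian mechanism that is analyzed in the shuffle setting: Gaussian noise calibrated to the query's L2 sensitivity is added to the true aggregate. In the shuffle model, each client would instead add a share of the noise locally and a trusted shuffler would permute the reports; the code below shows only the central mechanism, with `sensitivity` and `sigma` as assumed, illustrative parameters.

```python
import numpy as np

def gaussian_mechanism(values, sensitivity, sigma):
    """Release the sum of client values with Gaussian noise (illustrative).

    `sensitivity` is the L2 sensitivity of the sum (how much one client can
    change it); `sigma` is the noise multiplier. In the shuffle model, the same
    aggregate noise would typically be contributed in small shares by the
    clients themselves before the shuffler anonymizes their reports.
    """
    true_sum = np.sum(values, axis=0)
    noise = np.random.normal(0.0, sigma * sensitivity, size=np.shape(true_sum))
    return true_sum + noise

# Example: 1000 clients each holding a scalar in [0, 1]
reports = np.random.rand(1000)
print(gaussian_mechanism(reports, sensitivity=1.0, sigma=2.0))
```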
1 code implementation • 7 Jun 2022 • Seng Pei Liew, Satoshi Hasegawa, Tsubasa Takahashi
We study a protocol for distributed computation called shuffled check-in, which achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
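A conceptual sketch of the shuffled check-in idea as described in the abstract: each client independently decides ("checks in") whether to participate in a round, participating clients send locally randomized reports, and a trusted shuffler forwards them in random order so the server cannot link reports to clients. The check-in probability and the Gaussian local randomizer below are assumptions for illustration, not the paper's exact protocol.

```python
import random
import numpy as np

def shuffled_check_in_round(client_values, check_in_prob=0.1, local_sigma=1.0):
    """One round of a shuffled check-in style protocol (illustrative sketch).

    Each client checks in independently with probability `check_in_prob`;
    checked-in clients add local Gaussian noise to their value, and the
    trusted shuffler randomly permutes the reports before the server sums them.
    """
    reports = []
    for v in client_values:
        if random.random() < check_in_prob:   # client self-samples participation
            reports.append(v + np.random.normal(0.0, local_sigma))
    random.shuffle(reports)                   # trusted shuffler breaks linkability
    return sum(reports)                       # server-side aggregation

# Example: 10,000 clients each holding a value in [0, 1]
clients = np.random.rand(10_000)
print(shuffled_check_in_round(clients))
```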
no code implementations • 8 Apr 2022 • Seng Pei Liew, Tsubasa Takahashi, Shun Takagi, Fumiyuki Kato, Yang Cao, Masatoshi Yoshikawa
However, introducing a centralized entity into the originally local privacy model sacrifices one of the appeals of local differential privacy, namely not having to rely on any centralized entity.
no code implementations • 2 Oct 2021 • Takuma Amada, Seng Pei Liew, Kazuya Kakizaki, Toshinori Araki
We assess the vulnerability of deep face recognition systems to images that falsify/spoof multiple identities simultaneously.
1 code implementation • ICLR 2022 • Seng Pei Liew, Tsubasa Takahashi, Michihiko Ueno
We propose a new framework of synthesizing data using deep generative models in a differentially private manner.
1 code implementation • 16 May 2021 • Anna Stakia, Tommaso Dorigo, Giovanni Banelli, Daniela Bortoletto, Alessandro Casa, Pablo de Castro, Christophe Delaere, Julien Donini, Livio Finos, Michele Gallinaro, Andrea Giammanco, Alexander Held, Fabricio Jiménez Morales, Grzegorz Kotkowski, Seng Pei Liew, Fabio Maltoni, Giovanna Menardi, Ioanna Papavergou, Alessia Saggio, Bruno Scarpa, Giles C. Strong, Cecilia Tosciri, João Varela, Pietro Vischia, Andreas Weiler
Between the years 2015 and 2019, members of the Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, as well as developing entirely new ones.
no code implementations • 27 Oct 2020 • Seng Pei Liew, Tsubasa Takahashi
We investigate whether one can leak or infer such private information without interacting directly with the teacher model.
1 code implementation • 3 Jul 2015 • Daniel Abercrombie, Nural Akchurin, Ece Akilli, Juan Alcaraz Maestre, Brandon Allen, Barbara Alvarez Gonzalez, Jeremy Andrea, Alexandre Arbey, Georges Azuelos, Patrizia Azzi, Mihailo Backović, Yang Bai, Swagato Banerjee, James Beacham, Alexander Belyaev, Antonio Boveia, Amelia Jean Brennan, Oliver Buchmueller, Matthew R. Buckley, Giorgio Busoni, Michael Buttignol, Giacomo Cacciapaglia, Regina Caputo, Linda Carpenter, Nuno Filipe Castro, Guillelmo Gomez Ceballos, Yangyang Cheng, John Paul Chou, Arely Cortes Gonzalez, Chris Cowden, Francesco D'Eramo, Annapaola De Cosa, Michele De Gruttola, Albert De Roeck, Andrea De Simone, Aldo Deandrea, Zeynep Demiragli, Anthony DiFranzo, Caterina Doglioni, Tristan du Pree, Robin Erbacher, Johannes Erdmann, Cora Fischer, Henning Flaecher, Patrick J. Fox, Benjamin Fuks, Marie-Helene Genest, Bhawna Gomber, Andreas Goudelis, Johanna Gramling, John Gunion, Kristian Hahn, Ulrich Haisch, Roni Harnik, Philip C. Harris, Kerstin Hoepfner, Siew Yan Hoh, Dylan George Hsu, Shih-Chieh Hsu, Yutaro Iiyama, Valerio Ippolito, Thomas Jacques, Xiangyang Ju, Felix Kahlhoefer, Alexis Kalogeropoulos, Laser Seymour Kaplan, Lashkar Kashif, Valentin V. Khoze, Raman Khurana, Khristian Kotov, Dmytro Kovalskyi, Suchita Kulkarni, Shuichi Kunori, Viktor Kutzner, Hyun Min Lee, Sung-Won Lee, Seng Pei Liew, Tongyan Lin, Steven Lowette, Romain Madar, Sarah Malik, Fabio Maltoni, Mario Martinez Perez, Olivier Mattelaer, Kentarou Mawatari, Christopher McCabe, Théo Megy, Enrico Morgante, Stephen Mrenna, Siddharth M. Narayanan, Andy Nelson, Sérgio F. Novaes, Klaas Ole Padeken, Priscilla Pani, Michele Papucci, Manfred Paulini, Christoph Paus, Jacopo Pazzini, Björn Penning, Michael E. Peskin, Deborah Pinna, Massimiliano Procura, Shamona F. Qazi, Davide Racco, Emanuele Re, Antonio Riotto, Thomas G. Rizzo, Rainer Roehrig, David Salek, Arturo Sanchez Pineda, Subir Sarkar, Alexander Schmidt, Steven Randolph Schramm, William Shepherd, Gurpreet Singh, Livia Soffi, Norraphat Srimanobhas, Kevin Sung, Tim M. P. Tait, Timothee Theveneaux-Pelzer, Marc Thomas, Mia Tosi, Daniele Trocino, Sonaina Undleeb, Alessandro Vichi, Fuquan Wang, Lian-Tao Wang, Ren-Jie Wang, Nikola Whallon, Steven Worm, Mengqing Wu, Sau Lan Wu, Hongtao Yang, Yong Yang, Shin-Shan Yu, Bryan Zaldivar, Marco Zanetti, Zhiqing Zhang, Alberto Zucchetta
This document is the final report of the ATLAS-CMS Dark Matter Forum, a forum organized by the ATLAS and CMS collaborations with the participation of experts on theories of Dark Matter, to select a minimal basis set of dark matter simplified models that should support the design of the early LHC Run-2 searches.
High Energy Physics - Experiment • High Energy Physics - Phenomenology