no code implementations • 31 Jan 2025 • Kai Yi, Peter Richtárik
Popular post-training pruning methods such as Wanda and RIA are known for their simple yet effective designs, which have shown exceptional empirical performance.
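For context, Wanda scores each weight by its magnitude times the norm of the matching input activation, then prunes the lowest-scoring weights within each output row. A minimal NumPy sketch of that scoring rule (illustrative only, not the authors' code; `wanda_prune` is a hypothetical name):

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Sketch of a Wanda-style pruning score (not the authors' code).

    W: (out_features, in_features) weight matrix.
    X: (n_samples, in_features) calibration activations.
    Scores each weight by |W_ij| * ||X_j||_2 and zeroes the
    lowest-scoring weights within each output row.
    """
    score = np.abs(W) * np.linalg.norm(X, axis=0)  # broadcast over rows
    k = int(W.shape[1] * sparsity)                 # weights to drop per row
    idx = np.argsort(score, axis=1)[:, :k]         # lowest scores per row
    W_pruned = W.copy()
    np.put_along_axis(W_pruned, idx, 0.0, axis=1)
    return W_pruned
```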
no code implementations • 10 Dec 2024 • Hao Chen, Kai Yi, Lin Liu, Yu Guang Wang
To enhance the scalability of score matching, we have developed a new parent-finding subroutine for leaf nodes in DAGs, significantly accelerating the most time-consuming part of the process: the pruning step.
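The paper's subroutine itself is not reproduced here; as a rough sketch of the setting, leaf nodes can be read off a candidate DAG's adjacency matrix, and a leaf's candidate parent set pruned by a simple regression threshold (the `tol` rule below is a hypothetical stand-in for the actual pruning criterion):

```python
import numpy as np

def leaves(adj):
    """Nodes with no outgoing edges, given adj[i, j] = 1 iff edge i -> j."""
    return np.where(adj.sum(axis=1) == 0)[0]

def prune_parents(X, adj, leaf, tol=1e-2):
    """Illustrative pruning: keep only candidate parents whose least-squares
    coefficient for predicting the leaf exceeds tol (hypothetical rule)."""
    cand = np.where(adj[:, leaf] > 0)[0]
    if cand.size == 0:
        return cand
    coef, *_ = np.linalg.lstsq(X[:, cand], X[:, leaf], rcond=None)
    return cand[np.abs(coef) > tol]
```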
no code implementations • 6 Dec 2024 • Lei Fan, Dongdong Fan, Zhiguang Hu, Yiwen Ding, Donglin Di, Kai Yi, Maurice Pagnucco, Yang Song
We present MANTA, a visual-text anomaly detection dataset for tiny objects.
1 code implementation • 4 Nov 2024 • Xiaozhu Yu, Kai Yi, Yu Guang Wang, Yiqing Shen
kcatDiffuser is a graph diffusion model guided by a regressor, enabling the prediction of amino acid mutations at multiple random positions simultaneously.
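kcatDiffuser's exact guidance rule is not given here; the generic pattern of regressor-guided diffusion sampling, sketched below under that assumption, nudges each denoising step along the regressor's gradient toward a target property value (e.g., a desired kcat):

```python
import torch

def regressor_guided_step(denoiser, regressor, x_t, t, target, scale=1.0):
    """Generic regressor-guidance sketch (not kcatDiffuser's exact rule).

    Shifts the denoised estimate along the regressor's gradient so
    samples drift toward a target property value.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t)                          # denoised estimate
    loss = ((regressor(x0_hat) - target) ** 2).sum()   # distance to target
    grad, = torch.autograd.grad(loss, x_t)
    return x0_hat.detach() - scale * grad
```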
no code implementations • 3 Jun 2024 • Kai Yi, Timur Kharisov, Igor Sokolov, Peter Richtárik
Virtually all federated learning (FL) methods, including FedAvg, operate in the following manner: i) an orchestrating server sends the current model parameters to a cohort of clients selected via a certain rule, ii) these clients then independently perform a local training procedure (e.g., via SGD or Adam) using their own training data, and iii) the resulting models are shipped back to the server for aggregation.
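The three steps map directly onto code; a minimal sketch of one such round (the `clients[c](params, rng)` gradient interface is a hypothetical stand-in for a client's local data and optimizer):

```python
import numpy as np

def fedavg_round(server_params, clients, local_steps=5, lr=0.1, cohort=8, rng=None):
    """One FedAvg round (minimal sketch): broadcast, local SGD, average.

    Each client is a callable grad(params, rng) returning a stochastic
    gradient on its own data (hypothetical interface).
    """
    rng = rng or np.random.default_rng(0)
    selected = rng.choice(len(clients), size=cohort, replace=False)  # i) cohort
    updates = []
    for c in selected:
        params = server_params.copy()
        for _ in range(local_steps):                                 # ii) local training
            params -= lr * clients[c](params, rng)
        updates.append(params)
    return np.mean(updates, axis=0)                                  # iii) aggregation
```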
no code implementations • 31 May 2024 • Georg Meinhardt, Kai Yi, Laurent Condat, Peter Richtárik
In Federated Learning (FL), both client resource constraints and communication costs pose major problems for training large models.
1 code implementation • 23 May 2024 • Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh, Peter Richtárik
In this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs.
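For reference, the straight-through estimator (STE) being questioned here rounds weights in the forward pass while letting gradients bypass the rounding; a minimal PyTorch sketch:

```python
import torch

def quantize_ste(w, scale):
    """Straight-through estimator (STE) for quantization, minimal sketch.

    The forward pass uses the rounded (quantized) weights; the backward
    pass treats rounding as the identity, so gradients flow to w unchanged.
    """
    w_q = torch.round(w / scale) * scale   # quantized forward value
    return w + (w_q - w).detach()          # identity gradient w.r.t. w
```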
no code implementations • 15 Apr 2024 • Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu
Interest in federated learning has surged in recent research due to its unique ability to train a global model from privacy-sensitive data that remains local to each client.
no code implementations • 20 Mar 2024 • Yizhu Wen, Kai Yi, Jing Ke, Yiqing Shen
Specifically, DiffImpute is trained on complete tabular datasets, ensuring that it can produce credible imputations for missing entries without undermining the authenticity of the existing data.
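A common way to achieve this in diffusion-based imputation (a generic sketch, not necessarily DiffImpute's exact procedure) is to denoise the full table at each reverse step and then clamp the observed entries back to their true values:

```python
import torch

def impute_step(denoiser, x_t, t, x_obs, mask):
    """One reverse-diffusion imputation step (generic sketch).

    Denoise everywhere, then clamp observed entries so the authentic
    data is never altered. mask is 1 where values are observed.
    """
    x_prev = denoiser(x_t, t)                  # model's denoised estimate
    return mask * x_obs + (1 - mask) * x_prev  # keep observed, fill missing
```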
no code implementations • 14 Mar 2024 • Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik
Federated Learning (FL) has garnered increasing attention for its unique ability to let heterogeneous clients process their private data locally and interact only with a central server, while preserving privacy.
1 code implementation • ICCV 2023 • Wenxuan Zhang, Paul Janson, Kai Yi, Ivan Skorokhodov, Mohamed Elhoseiny
The GRW loss augments the training by continually encouraging the model to generate realistic and characterized samples to represent the unseen space.
1 code implementation • NeurIPS 2023 • Kai Yi, Bingxin Zhou, Yiqing Shen, Pietro Liò, Yu Guang Wang
In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for a given protein backbone.
1 code implementation • 22 May 2023 • Kai Yi, Laurent Condat, Peter Richtárik
Federated Learning is an evolving machine learning paradigm in which multiple clients perform computations based on their individual private data, interspersed with communication with a remote server.
no code implementations • 13 Apr 2023 • Bingxin Zhou, Outongyi Lv, Kai Yi, Xinye Xiong, Pan Tan, Liang Hong, Yu Guang Wang
Directed evolution, a widely used engineering strategy, faces obstacles in finding desired mutants within the massive space of candidate modifications.
1 code implementation • 9 Jul 2022 • Grigory Malinovsky, Kai Yi, Peter Richtárik
We study distributed optimization methods based on the local training (LT) paradigm: achieving communication efficiency by performing richer local gradient-based training on the clients before parameter averaging.
no code implementations • 17 Jun 2022 • Kai Yi, Jialin Chen, Yu Guang Wang, Bingxin Zhou, Pietro Liò, Yanan Fan, Jan Hamann
This paper develops a rotation-invariant needlet convolution for rotation group SO(3) to distill multiscale information of spherical signals.
1 code implementation • 11 Jun 2022 • Yuelin Wang, Kai Yi, Xinliang Liu, Yu Guang Wang, Shi Jin
Neural message passing is the basic feature-extraction unit for graph-structured data: each layer aggregates neighboring node features to propagate information through the network.
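A minimal sketch of one such layer (GCN-style mean aggregation, illustrative rather than the paper's exact update):

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One round of neighborhood aggregation (GCN-style sketch).

    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d') weights.
    Each node averages its neighbors' features (plus its own via self-loops),
    then applies a shared linear map and a nonlinearity.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (A_hat / deg) @ H @ W)  # mean-aggregate, ReLU
```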
1 code implementation • 9 May 2022 • Laurent Condat, Kai Yi, Peter Richtárik
Our general approach works with a new, larger class of compressors, which has two parameters, the bias and the variance, and includes unbiased and biased compressors as particular cases.
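Two classical members of such a class, sketched below: top-k keeps the largest-magnitude coordinates (biased, low variance), while rand-k with rescaling is unbiased:

```python
import numpy as np

def top_k(x, k):
    """Biased compressor: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rand_k(x, k, rng):
    """Unbiased compressor: keep k random coordinates, scaled by d/k
    so that the expectation equals x."""
    out = np.zeros_like(x)
    idx = rng.choice(x.size, size=k, replace=False)
    out[idx] = x[idx] * (x.size / k)
    return out
```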
1 code implementation • 2 Mar 2022 • Kai Yi, Xiaoqian Shen, Yunhao Gou, Mohamed Elhoseiny
The main question we address in this paper is how to scale up visual recognition of unseen classes, also known as zero-shot learning, to tens of thousands of categories as in the ImageNet-21K benchmark.
no code implementations • 24 Dec 2021 • Kai Yi, Paul Janson, Wenxuan Zhang, Mohamed Elhoseiny
Accordingly, we propose a Domain-Invariant Network (DIN) to learn factorized features for shifting domains and improved textual representation for unseen classes.
no code implementations • 27 Jun 2021 • Kai Yi, Jianye Pang, Yungeng Zhang, Xiangrui Zeng, Min Xu
Cryo-electron tomography (Cryo-ET) is a 3D imaging technique that enables the systematic study of the shape, abundance, and distribution of macromolecular structures in single cells at near-atomic resolution.
1 code implementation • 20 Apr 2021 • Divyansh Jha, Kai Yi, Ivan Skorokhodov, Mohamed Elhoseiny
By generating representations of unseen classes based on their semantic descriptions, e.g., attributes or text, generative ZSL attempts to differentiate unseen from seen categories.
1 code implementation • CVPR 2022 • Jun Chen, Han Guo, Kai Yi, Boyang Li, Mohamed Elhoseiny
To the best of our knowledge, this is the first work that improves the data efficiency of image captioning by utilizing a language model (LM) pretrained on unimodal data.
2 code implementations • 1 Jan 2021 • Mohamed Elhoseiny, Kai Yi, Mohamed Elfeki
To improve the discriminative power of ZSL, we model the visual learning process of unseen categories with inspiration from the psychology of human creativity for producing novel art.
no code implementations • 12 Aug 2020 • Jianye Pang, Kai Yi, Wanguang Yin, Min Xu
In this technical report, we analyze Legendre decomposition for non-negative tensors in theory and application.
2 code implementations • 31 Jan 2020 • Nicole Hallett, Kai Yi, Josef Dick, Christopher Hodge, Gerard Sutton, Yu Guang Wang, Jingjing You
Currently, there is no cure for keratoconus (KC) other than corneal transplantation at advanced stages, or corneal cross-linking, which can only halt KC progression.
1 code implementation • 31 Jan 2020 • Kai Yi, Yi Guo, Yanan Fan, Jan Hamann, Yu Guang Wang
The noise in the CMB map has a significant impact on the precision of cosmological parameter estimation.
no code implementations • 13 Mar 2018 • Kai Yi, Zhiqiang Jian, Shitao Chen, Nanning Zheng
At present, the performance of deep neural networks in general object detection is comparable to, or even surpasses, that of humans.