no code implementations • 14 May 2023 • Ke Ma, Yiliang Sang, Yang Ming, Jin Lian, Chang Tian, Zhaocheng Wang
In contrast to its Release 16 counterpart, the Type-II codebook in Release 17 (R17) exploits the angular-delay-domain partial reciprocity between uplink and downlink channels, selecting a subset of angular-delay-domain ports for measuring and feeding back the downlink CSI; on such inputs, the performance of conventional deep learning methods is limited because the sparse structures they rely on are largely absent.
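The port-selection idea above can be illustrated with a minimal sketch: keep only the strongest angular-delay-domain ports and feed back CSI on those. The coefficient values and the number of selected ports here are illustrative, not taken from the R17 specification.

```python
# Toy port selection: retain the k ports with the largest magnitude
# coefficients in the angular-delay domain (illustrative values only).

def select_ports(coefficients, k):
    """Return the sorted indices of the k ports with largest magnitude."""
    strongest = sorted(range(len(coefficients)),
                       key=lambda i: -abs(coefficients[i]))[:k]
    return sorted(strongest)

coeffs = [0.1, -2.3, 0.05, 1.8, -0.2, 0.9]
print(select_ports(coeffs, 3))  # → [1, 3, 5]
```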
no code implementations • 11 May 2023 • Ke Ma, Hongkai Chen, Shan Lin
Artificial pancreas (AP) systems have been developed as a solution for type 1 diabetic patients to mimic the behavior of the pancreas and regulate blood glucose levels.
no code implementations • 1 Apr 2023 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
This initialization is generated by using high-quality adversarial perturbations from the historical training process.
no code implementations • 21 Mar 2023 • Jingwei Zhang, Saarthak Kapse, Ke Ma, Prateek Prasanna, Joel Saltz, Maria Vakalopoulou, Dimitris Samaras
Compared to conventional full fine-tuning approaches, we fine-tune less than 1.3% of the parameters, yet achieve a relative improvement of 1.29%-13.61% in accuracy and 3.22%-27.18% in AUROC, and reduce GPU memory consumption by 38%-45% while training 21%-27% faster.
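Parameter-efficient fine-tuning of the kind cited above freezes most of the network and updates only a small subset of weights. A minimal sketch, assuming hypothetical layer names and sizes (not the paper's architecture), shows how the trainable fraction is computed:

```python
# Hypothetical sketch: count what fraction of parameters would actually
# be updated when only adapter and head layers are left trainable.

def trainable_fraction(param_counts, trainable_prefixes):
    """Fraction of parameters updated during fine-tuning."""
    total = sum(param_counts.values())
    trainable = sum(n for name, n in param_counts.items()
                    if name.startswith(trainable_prefixes))
    return trainable / total

# illustrative layer sizes, not from the paper
model_params = {
    "backbone.block1": 2_000_000,
    "backbone.block2": 2_000_000,
    "adapter.down":    25_000,
    "adapter.up":      25_000,
    "head.classifier": 10_000,
}

frac = trainable_fraction(model_params, ("adapter.", "head."))
print(f"trainable fraction: {frac:.4f}")
```

With these toy sizes the trainable share is under 2%, in the same regime as the sub-1.3% figure reported above.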
no code implementations • 26 Dec 2022 • Peng Xiao, Xiaodong Hu, Ke Ma, Gengyuan Wang, Ziqing Feng, Yuancong Huang, Jin Yuan
The lack of efficient segmentation methods and fully-labeled datasets limits the comprehensive assessment of optical coherence tomography angiography (OCTA) microstructures such as the retinal vessel network (RVN) and the foveal avascular zone (FAZ), which are of great value in evaluating ophthalmic and systemic diseases.
1 code implementation • 23 Dec 2022 • Jingwei Zhang, Saarthak Kapse, Ke Ma, Prateek Prasanna, Maria Vakalopoulou, Joel Saltz, Dimitris Samaras
Our method outperforms previous dense matching methods by up to 7.2% in average precision for detection and 5.6% in average precision for instance segmentation tasks.
1 code implementation • 13 Sep 2022 • Ke Ma, Qianqian Xu, Jinshan Zeng, Guorong Li, Xiaochun Cao, Qingming Huang
From a dynamical-systems perspective, the attack behavior with a target ranking list is a fixed point of the composition of the adversary and the victim.
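The fixed-point view above can be made concrete with a toy sketch: the adversary's perturbation and the victim's re-ranking are composed repeatedly until the ranking stops changing. Both maps here are illustrative stand-ins, not the paper's attack or ranker.

```python
# Toy fixed-point iteration: adversary nudges scores, victim re-ranks,
# until the targeted item sits at the top and the composition is stable.

def victim_rank(scores):
    """Victim: rank items by (possibly poisoned) scores, highest first."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def adversary(scores, target):
    """Adversary: nudge the targeted item's score upward."""
    out = list(scores)
    out[target] += 1.0
    return out

scores = [3.0, 2.0, 1.0]
ranking = victim_rank(scores)
while ranking[0] != 2:              # target: push item 2 to the top
    scores = adversary(scores, 2)
    ranking = victim_rank(scores)
print(ranking)  # → [2, 0, 1]
```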
1 code implementation • 18 Jul 2022 • Xiaojun Jia, Yong Zhang, Xingxing Wei, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method that avoids overfitting and improves the quality of the adversarial examples (AEs) throughout training.
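A minimal sketch of the prior-guided initialization idea, on a single scalar input: instead of starting FGSM from zero or random noise, the perturbation starts from one saved in an earlier epoch, then takes a sign-of-gradient step and is clipped back to the epsilon ball. The toy values are illustrative, not the paper's implementation.

```python
# Prior-guided FGSM step (toy, scalar): start from a prior perturbation,
# step in the gradient's sign direction, clip to the epsilon ball.

def sign(x):
    return (x > 0) - (x < 0)

def fgsm_step(grad, prior, epsilon):
    """One FGSM update initialized from a prior perturbation."""
    delta = prior + epsilon * sign(grad)
    return max(-epsilon, min(epsilon, delta))

# gradient from the current loss, prior perturbation from the last epoch
delta = fgsm_step(grad=1.7, prior=0.03, epsilon=0.1)
print(delta)  # → 0.1 (0.03 + 0.1 exceeds the ball, so it is clipped)
```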
1 code implementation • 17 Jul 2022 • Jingwei Zhang, Xin Zhang, Ke Ma, Rajarsi Gupta, Joel Saltz, Maria Vakalopoulou, Dimitris Samaras
Histopathology whole slide images (WSIs) play a very important role in clinical studies and serve as the gold standard for many cancer diagnoses.
1 code implementation • CVPR 2022 • Xiaojun Jia, Yong Zhang, Baoyuan Wu, Ke Ma, Jue Wang, Xiaochun Cao
In this paper, we propose a novel framework for adversarial training by introducing the concept of "learnable attack strategy", dubbed LAS-AT, which learns to automatically produce attack strategies to improve the model robustness.
no code implementations • 17 Feb 2022 • Souradeep Chakraborty, Ke Ma, Rajarsi Gupta, Beatrice Knudsen, Gregory J. Zelinsky, Joel H. Saltz, Dimitris Samaras
To quantify the relationship between a pathologist's attention and evidence for cancer in the WSI, we obtained tumor annotations from a genitourinary specialist.
1 code implementation • 18 Nov 2021 • Xiang Bai, Hanchen Wang, Liya Ma, Yongchao Xu, Jiefeng Gan, Ziwei Fan, Fan Yang, Ke Ma, Jiehua Yang, Song Bai, Chang Shu, Xinyu Zou, Renhao Huang, Changzheng Zhang, Xiaowu Liu, Dandan Tu, Chuou Xu, Wenqing Zhang, Xi Wang, Anguo Chen, Yu Zeng, Dehua Yang, Ming-Wei Wang, Nagaraj Holalkere, Neil J. Halin, Ihab R. Kamel, Jia Wu, Xuehua Peng, Xiang Wang, Jianbo Shao, Pattanasak Mongkolwat, Jianjun Zhang, Weiyang Liu, Michael Roberts, Zhongzhao Teng, Lucian Beer, Lorena Escudero Sanchez, Evis Sala, Daniel Rubin, Adrian Weller, Joan Lasenby, Chuangsheng Zheng, Jianming Wang, Zhen Li, Carola-Bibiane Schönlieb, Tian Xia
Artificial intelligence (AI) offers a promising means of streamlining COVID-19 diagnosis.
no code implementations • 29 Sep 2021 • Sagnik Das, Ke Ma, Zhixin Shu, Dimitris Samaras
We also demonstrate the usefulness of our system by applying it to document texture editing.
no code implementations • 29 Sep 2021 • Huidong Liu, Ke Ma, Lei Zhou, Dimitris Samaras
If the MRE is smaller than 1, then every target point is guaranteed to have an area in the source distribution that is mapped to it.
1 code implementation • 5 Jul 2021 • Ke Ma, Qianqian Xu, Jinshan Zeng, Xiaochun Cao, Qingming Huang
In this paper, to the best of our knowledge, we initiate the first systematic investigation of data poisoning attacks on pairwise ranking algorithms. These attacks can be formalized as dynamic and static games between the ranker and the attacker, and modeled as certain kinds of integer programming problems.
no code implementations • CVPR 2021 • Jingwei Zhang, Ke Ma, John Van Arnam, Rajarsi Gupta, Joel Saltz, Maria Vakalopoulou, Dimitris Samaras
To tackle these problems, we propose a novel spatial and magnification based attention sampling strategy.
no code implementations • 22 Feb 2021 • David Puljiz, BoWen Zhou, Ke Ma, Björn Hein
In this paper we propose an intention recognition system that is based purely on a portable head-mounted display.
Intent Detection • Robotics • Human-Computer Interaction
no code implementations • 28 Jan 2021 • Sicong Liu, Bin Guo, Ke Ma, Zhiwen Yu, Junzhao Du
Many deep learning (e.g., DNN) powered mobile and wearable applications today continuously and unobtrusively sense the ambient surroundings to enhance all aspects of human lives.
1 code implementation • 8 Jan 2021 • Ke Ma, Dongxuan He, Hancun Sun, Zhaocheng Wang, Sheng Chen
To tackle this problem, the second scheme adopts a long short-term memory (LSTM) network that tracks the movement of users and calibrates the beam direction according to the received signals of prior beam training, enhancing robustness to noise.
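A pure-Python sketch of a single LSTM cell of the kind such a tracker builds on: it consumes a sequence of received-signal features and maintains a hidden state that could feed a beam-direction predictor. The scalar weights are tiny illustrative constants, not trained values from the paper.

```python
# Minimal scalar LSTM cell: gates computed from input x and hidden state h,
# cell state c carried across the sequence of received signals.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h, c, w):
    """One LSTM step on scalar input/state; w holds the gate weights."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

weights = {k: 0.5 for k in
           ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}
h = c = 0.0
for signal in [0.2, 0.4, 0.6]:  # received signals from prior beam training
    h, c = lstm_cell(signal, h, c, weights)
print(round(h, 3))  # final hidden state summarizing the movement history
```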
no code implementations • 1 Jan 2021 • Jinshan Zeng, Yixuan Zha, Ke Ma, Yuan YAO
In this paper, we fill this gap by exploiting a new semi-stochastic variant of the original SVRG with Option I, adapted to semidefinite optimization.
1 code implementation • 29 Nov 2020 • Sagnik Das, Hassan Ahmed Sial, Ke Ma, Ramon Baldrich, Maria Vanrell, Dimitris Samaras
However, document shadow or shading removal results still suffer because: (a) prior methods rely on the uniformity of local color statistics, which limits their application to real-world scenarios with complex document shapes and textures; and (b) synthetic or hybrid datasets with non-realistic, simulated lighting conditions are used to train the models.
Intrinsic Image Decomposition • Optical Character Recognition (OCR)
no code implementations • 30 Oct 2020 • Ke Ma, Dongxuan He, Hancun Sun, Zhaocheng Wang
The huge overhead of beam training poses a significant challenge to mmWave communications.
2 code implementations • 27 Aug 2020 • Ke Ma, Bo Zhao, Leonid Sigal
Moreover, the images generated by our model have higher resolution, object classification accuracy, and consistency than those of the previous state-of-the-art.
no code implementations • 1 Dec 2019 • Ke Ma, Jinshan Zeng, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO
Learning representations from relative similarity comparisons, often called ordinal embedding, has gained increasing attention in recent years.
1 code implementation • 5 Dec 2018 • Ke Ma, Qianqian Xu, Xiaochun Cao
Existing ordinal embedding methods usually follow a two-stage routine: outlier detection is first employed to pick out the inconsistent comparisons; then an embedding is learned from the clean data.
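The two-stage routine described above can be sketched minimally: first filter out comparisons inconsistent with a rough distance estimate (outlier detection), then fit the embedding on the remaining clean set. The 1-D "embedding" and the triplet data are toy illustrations, not the paper's method.

```python
# Stage 1 of the two-stage routine: drop triplet comparisons that
# contradict a rough initial embedding; the clean set would then be
# passed to the embedding learner (stage 2, not shown).

def violated(triplet, embedding):
    """A triplet (i, j, k) asserts that item i is closer to j than to k."""
    i, j, k = triplet
    return abs(embedding[i] - embedding[j]) >= abs(embedding[i] - embedding[k])

# rough initial 1-D embedding of four items
embedding = [0.0, 1.0, 2.0, 5.0]

comparisons = [(0, 1, 3), (1, 2, 3), (3, 2, 0), (0, 3, 1)]  # last one is an outlier
clean = [t for t in comparisons if not violated(t, embedding)]
print(clean)  # → [(0, 1, 3), (1, 2, 3), (3, 2, 0)]
```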
1 code implementation • 5 Dec 2018 • Ke Ma, Qianqian Xu, Zhiyong Yang, Xiaochun Cao
To address the issue of insufficient training samples, we propose a margin distribution learning paradigm for ordinal embedding, entitled Distributional Margin based Ordinal Embedding (DMOE).
1 code implementation • CVPR 2018 • Ke Ma, Zhixin Shu, Xue Bai, Jue Wang, Dimitris Samaras
The network is trained on this dataset with various data augmentations to improve its generalization ability.
Ranked #4 on SSIM on DocUNet (using extra training data)
1 code implementation • 17 Nov 2017 • Ke Ma, Jinshan Zeng, Jiechao Xiong, Qianqian Xu, Xiaochun Cao, Wei Liu, Yuan YAO
Learning representations from relative similarity comparisons, often called ordinal embedding, has gained increasing attention in recent years.