no code implementations • ACL 2022 • Yi Zhou, Masahiro Kaneko, Danushka Bollegala
Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word.
no code implementations • LT4HALA (LREC) 2022 • Yutong Shen, Jiahuan Li, ShuJian Huang, Yi Zhou, Xiaopeng Xie, Qinxin Zhao
Although SikuRoberta significantly boosts performance on WSG and POS tasks on ancient Chinese texts, the lack of labeled data still limits the performance of the model.
no code implementations • EMNLP 2021 • Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang
Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.
no code implementations • 30 Nov 2023 • Jiaxin Mei, Tao Zhou, Kaiwen Huang, Yizhe Zhang, Yi Zhou, Ye Wu, Huazhu Fu
This paper provides a comprehensive review of polyp segmentation algorithms.
no code implementations • 16 Nov 2023 • Wei zhang, Dai Li, Chen Liang, Fang Zhou, Zhongke Zhang, Xuewei Wang, Ru Li, Yi Zhou, Yaning Huang, Dong Liang, Kai Wang, Zhangyuan Wang, Zhengxing Chen, Min Li, Fenggang Wu, Minghai Chen, Huayu Li, Yunnan Wu, Zhan Shu, Mindi Yuan, Sri Reddy
To address these challenges, we present Scaling User Modeling (SUM), a framework widely deployed in Meta's ads ranking system, designed to facilitate efficient and scalable sharing of online user representation across hundreds of ads models.
1 code implementation • 9 Nov 2023 • HaoYi Wu, Wenyang Hui, Yezeng Chen, Weiqi Wu, Kewei Tu, Yi Zhou
Since the dataset only involves a narrow range of knowledge, it is easy to separately analyse the knowledge a model possesses and the reasoning ability it has.
no code implementations • 30 Oct 2023 • Swanand Ravindra Kadhe, Heiko Ludwig, Nathalie Baracaldo, Alan King, Yi Zhou, Keith Houck, Ambrish Rawat, Mark Purcell, Naoise Holohan, Mikio Takeuchi, Ryo Kawahara, Nir Drucker, Hayim Shaul, Eyal Kushnir, Omri Soceanu
The effective detection of evidence of financial anomalies requires collaboration among multiple entities who own a diverse set of data, such as a payment network system (PNS) and its partner banks.
no code implementations • 19 Oct 2023 • Yi Zhou, Jose Camacho-Collados, Danushka Bollegala
Various types of social biases have been reported with pretrained Masked Language Models (MLMs) in prior work.
1 code implementation • 16 Oct 2023 • Xiaohang Tang, Yi Zhou, Taichi Aida, Procheta Sen, Danushka Bollegala
Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpus.
no code implementations • 12 Oct 2023 • Rui Yang, Li Fang, Yi Zhou
To mitigate hallucination in LLM-generated text, in this paper we introduce a constraint-based prompt that utilizes the entity and its textual description as contextual constraints to enhance data quality.
no code implementations • 9 Oct 2023 • Junkang Yang, Hongqing Liu, Lu Gan, Yi Zhou
Speech super-resolution (SSR) aims to predict a high resolution (HR) speech signal from its low resolution (LR) counterpart.
no code implementations • 9 Oct 2023 • Yuxiang Lai, Xinghong Liu, Tao Zhou, Yi Zhou
To address these issues, we propose a novel Memory-Assisted Sub-Prototype Mining (MemSPM) method that can learn the differences between samples belonging to the same category and mine sub-classes when there exists significant concept shift between them.
no code implementations • 27 Sep 2023 • Yi Zhou
This document explains the core concepts of the uniform B-spline and its matrix representation.
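As a concrete illustration of that matrix representation, the standard cubic (degree-3) uniform B-spline segment can be written as the product of a monomial basis, a constant basis matrix, and four consecutive control points (this is the textbook form; the document's own notation may differ):

$$
\mathbf{p}(u) \;=\; \frac{1}{6}
\begin{bmatrix} 1 & u & u^2 & u^3 \end{bmatrix}
\begin{bmatrix}
1 & 4 & 1 & 0\\
-3 & 0 & 3 & 0\\
3 & -6 & 3 & 0\\
-1 & 3 & -3 & 1
\end{bmatrix}
\begin{bmatrix} \mathbf{P}_{i-1}\\ \mathbf{P}_{i}\\ \mathbf{P}_{i+1}\\ \mathbf{P}_{i+2} \end{bmatrix},
\qquad u \in [0, 1].
$$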
no code implementations • 19 Sep 2023 • Tao Zhou, Yizhe Zhang, Geng Chen, Yi Zhou, Ye Wu, Deng-Ping Fan
In addition, a Scale-aware Convolution Module (SCM) is proposed to learn scale-aware features using dilated convolutions with different dilation rates, in order to effectively deal with scale variation.
no code implementations • 22 Aug 2023 • Omid Taheri, Yi Zhou, Dimitrios Tzionas, Yang Zhou, Duygu Ceylan, Soren Pirk, Michael J. Black
In contrast, we introduce GRIP, a learning-based method that takes, as input, the 3D motion of the body and the object, and synthesizes realistic motion for both hands before, during, and after object interaction.
no code implementations • 21 Aug 2023 • Xinghong Liu, Yi Zhou, Tao Zhou, Chun-Mei Feng, Ling Shao
Essentially, we present a novel paradigm based on the vision-language model to learn SF-UniDA and greatly reduce the labeling costs on the source domain.
no code implementations • 18 Aug 2023 • Pengbo Hu, Ji Qi, Xingyu Li, Hong Li, Xinqi Wang, Bing Quan, Ruiyu Wang, Yi Zhou
Our approach achieves strong performance while significantly reducing the number of inference steps.
1 code implementation • ICCV 2023 • Hong Li, Xingyu Li, Pengbo Hu, Yinuo Lei, Chunxiao Li, Yi Zhou
In addition, we find that the jointly trained model typically has a preferred modality on which the competition is weaker than other modalities.
1 code implementation • 23 Jun 2023 • Zhengren Wang, Yi Zhou, Chunyu Luo, Mingyu Xiao, Jin-Kao Hao
We define a novel parameter of the input instance, $g_k(G)$, the gap between the degeneracy bound and the size of the maximum $k$-plex in the given graph, and present an exact algorithm parameterized by this $g_k(G)$, which has a worst-case running time polynomial in the size of the input graph and exponential in $g_k(G)$.
no code implementations • 15 Jun 2023 • Liang Wan, Hongqing Liu, Yi Zhou, Jie Ji
By combining the DPRNN module with Convolution Recurrent Network (CRN), the DPCRN obtained a promising performance in speech separation with a limited model size.
1 code implementation • 30 May 2023 • Haochen Luo, Yi Zhou, Danushka Bollegala
Our proposed method can combine source sense embeddings that cover different sets of word senses.
1 code implementation • 24 May 2023 • Asahi Ushio, Yi Zhou, Jose Camacho-Collados
Multilingual language models (LMs) have become a powerful tool in NLP, especially for non-English languages.
1 code implementation • 17 May 2023 • Saeth Wannasuphoprasit, Yi Zhou, Danushka Bollegala
This similarity underestimation problem is particularly severe for highly frequent words.
no code implementations • 8 May 2023 • Xuehao Zhou, Mingyang Zhang, Yi Zhou, Zhizheng Wu, Haizhou Li
Both objective and subjective evaluation results show that the accented TTS front-end fine-tuned with a small accented phonetic lexicon (5k words) effectively handles the phonetic variation of accents, while the accented TTS acoustic model fine-tuned with a limited amount of accented speech data (approximately 3 minutes) effectively improves the prosodic rendering including pitch and duration.
1 code implementation • 7 May 2023 • Wencong Wu, Shijie Liu, Yi Zhou, Yungang Zhang, Yu Xiang
The proposed DRANet includes two different parallel branches, which can capture complementary features to enhance the learning ability of the model.
Ranked #1 on Image Denoising on SIDD (Average PSNR metric)
no code implementations • 3 May 2023 • Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, Stacy Patterson
As part of the training, the parties wish to remove unimportant features in the system to improve generalization, efficiency, and explainability.
no code implementations • 15 Apr 2023 • Tao Zhou, Yizhe Zhang, Yi Zhou, Ye Wu, Chen Gong
Recently, Meta AI Research released the general-purpose Segment Anything Model (SAM), which has demonstrated promising performance on several segmentation tasks.
no code implementations • 12 Apr 2023 • Zexi Li, Qunwei Li, Yi Zhou, Wenliang Zhong, Guannan Zhang, Chao Wu
Federated learning (FL) is a popular way of edge computing that doesn't compromise users' privacy.
no code implementations • 8 Apr 2023 • Meng Wang, Tian Lin, Lianyu Wang, Aidi Lin, Ke Zou, Xinxing Xu, Yi Zhou, Yuanyuan Peng, Qingquan Meng, Yiming Qian, Guoyao Deng, Zhiqun Wu, Junhong Chen, Jianhong Lin, Mingzhi Zhang, Weifang Zhu, Changqing Zhang, Daoqiang Zhang, Rick Siow Mong Goh, Yong liu, Chi Pui Pang, Xinjian Chen, Haoyu Chen, Huazhu Fu
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world implementations for the recognition and classification of retinal anomalies.
1 code implementation • 13 Mar 2023 • Shuangping Jin, Bingbing Yu, Minhao Jing, Yi Zhou, Jiajun Liang, Renhe Ji
To handle this, we propose a new RGB-NIR fusion algorithm called Dark Vision Net (DVN) with two technical novelties: Deep Structure and Deep Inconsistency Prior (DIP).
no code implementations • CVPR 2023 • Yasamin Jafarian, Tuanfeng Y. Wang, Duygu Ceylan, Jimei Yang, Nathan Carr, Yi Zhou, Hyun Soo Park
To edit human videos in a physically plausible way, a texture map must take into account not only the garment transformation induced by the body movements and clothes fitting, but also its 3D fine-grained surface geometry.
no code implementations • 10 Mar 2023 • Xinghong Liu, Yi Zhou, Tao Zhou, Jie Qin, Shengcai Liao
Open-set domain adaptation aims to not only recognize target samples belonging to common classes shared by source and target domains but also perceive unknown class samples.
1 code implementation • 3 Feb 2023 • Zaixiang Zheng, Yifan Deng, Dongyu Xue, Yi Zhou, Fei Ye, Quanquan Gu
This paper demonstrates that language models are strong structure-based protein designers.
no code implementations • 20 Dec 2022 • Yi Zhou, Zhizheng Wu, Mingyang Zhang, Xiaohai Tian, Haizhou Li
Specifically, a text-to-speech (TTS) system is first pretrained with target-accented speech data.
1 code implementation • 8 Dec 2022 • Xiaohan Zhang, Xingyu Li, Waqas Sultani, Yi Zhou, Safwan Wshah
We attribute this deficiency to the models' inability to extract the spatial configuration of visual feature layouts and to their overfitting on low-level details from the training set.
2 code implementations • 2 Dec 2022 • Yonghao Li, Tao Zhou, Kelei He, Yi Zhou, Dinggang Shen
To take advantage of both paired and unpaired data, in this paper, we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
1 code implementation • 2 Dec 2022 • Tao Zhou, Yi Zhou, Chen Gong, Jian Yang, Yu Zhang
In this paper, we propose a novel Feature Aggregation and Propagation Network (FAP-Net) for camouflaged object detection.
1 code implementation • 14 Nov 2022 • Elias Stengel-Eskin, Jimena Guallar-Blasco, Yi Zhou, Benjamin Van Durme
Natural language is ambiguous.
no code implementations • 26 Oct 2022 • Yi Zhou, Danushka Bollegala
We show that the $\ell_2$ norm of a static sense embedding encodes information related to the frequency of that sense in the training corpus used to learn the sense embeddings.
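To make the claim concrete, the kind of check it suggests is straightforward to script; the sketch below, with hypothetical placeholder embeddings and counts, correlates sense-embedding norms with training-corpus sense frequencies:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical inputs: a (num_senses, dim) matrix of static sense embeddings
# and the training-corpus frequency of each sense, aligned row by row.
sense_embeddings = np.random.randn(1000, 300)        # placeholder values
sense_frequencies = np.random.zipf(2.0, size=1000)   # placeholder values

norms = np.linalg.norm(sense_embeddings, axis=1)
rho, p = spearmanr(norms, sense_frequencies)
print(f"Spearman correlation between l2 norm and sense frequency: {rho:.3f} (p={p:.1e})")
```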
1 code implementation • 7 Oct 2022 • Jiangtao Feng, Yi Zhou, Jun Zhang, Xian Qian, Liwei Wu, Zhexi Zhang, Yanming Liu, Mingxuan Wang, Lei LI, Hao Zhou
PARAGEN is a PyTorch-based NLP toolkit for further development on parallel generation.
no code implementations • 6 Sep 2022 • Yue Wang, Yi Zhou, Shaofeng Zou
Our techniques in this paper provide a general approach for finite-sample analysis of non-convex two timescale value-based reinforcement learning algorithms.
no code implementations • 3 Sep 2022 • Katelinh Jones, Yuya Jeremy Ong, Yi Zhou, Nathalie Baracaldo
Federated Learning (FL) is a paradigm for jointly training machine learning algorithms in a decentralized manner, which allows parties to communicate with an aggregator to create and train a model without exposing the underlying raw data distributions of the local parties involved in the training process.
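For readers unfamiliar with the aggregator role described above, a minimal sketch of the generic FedAvg-style aggregation step (a common illustration of FL, not the specific system used in this work) looks like this:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Data-size-weighted average of client model parameters (generic FedAvg rule).
    client_params: list of parameter lists, one per client; client_sizes: samples per client."""
    total = float(sum(client_sizes))
    aggregated = [np.zeros_like(p) for p in client_params[0]]
    for params, n in zip(client_params, client_sizes):
        for acc, p in zip(aggregated, params):
            acc += (n / total) * p
    return aggregated

# Toy usage: three clients, each holding a small two-tensor model.
clients = [[np.random.randn(4, 2), np.random.randn(2)] for _ in range(3)]
global_params = fedavg(clients, client_sizes=[100, 50, 25])
```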
1 code implementation • 23 Aug 2022 • Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou, Ming-Hsuan Yang
An alternative approach is to estimate dense vertices of a predefined template body in the image space.
1 code implementation • 23 Aug 2022 • Xiaohang Tang, Yi Zhou, Danushka Bollegala
We then generate prompts by filling manually compiled templates using the extracted pivot and anchor terms.
no code implementations • 28 Jul 2022 • Qingyang Tan, Yi Zhou, Tuanfeng Wang, Duygu Ceylan, Xin Sun, Dinesh Manocha
Despite recent success, deep learning-based methods for predicting 3D garment deformation under body motion suffer from interpenetration problems between the garment and the body.
no code implementations • 4 Jun 2022 • Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou
We present an implicit neural representation to learn the spatio-temporal space of kinematic motions.
no code implementations • 18 May 2022 • Jiahao Zhu, Huajun Zhou, Zixuan Chen, Yi Zhou, Xiaohua Xie
3D deep models that consume point clouds have achieved impressive results in computer vision applications.
1 code implementation • 1 May 2022 • Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang
The mechanism is well described by the Drift-Diffusion Model (DDM).
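For context, the DDM referenced here describes binary decisions as noisy evidence accumulation toward a bound; a minimal simulation with illustrative parameter values is:

```python
import numpy as np

def simulate_ddm(drift=0.3, noise=1.0, bound=1.0, dt=0.001, max_t=5.0, rng=None):
    """One drift-diffusion trial: evidence drifts with Gaussian noise until a bound is hit."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t   # (choice, reaction time)

choices, rts = zip(*(simulate_ddm() for _ in range(1000)))
print(f"upper-bound choices: {np.mean(choices):.2f}, mean RT: {np.mean(rts):.2f}s")
```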
1 code implementation • 30 Apr 2022 • Pengbo Hu, Xingyu Li, Yi Zhou
Our experiments suggest that for some tasks where different modalities are complementary, the multi-modal models still tend to use the dominant modality alone and ignore the cooperation across modalities.
no code implementations • 31 Mar 2022 • Shaocong Ma, Ziyi Chen, Yi Zhou, Kaiyi Ji, Yingbin Liang
Moreover, we show that, on highly dependent data, online SGD with mini-batch sampling can further substantially improve the sample complexity over online SGD with periodic data-subsampling.
no code implementations • 30 Mar 2022 • Ziyi Chen, Bhavya Kailkhura, Yi Zhou
In this work, we study a proximal gradient-type algorithm that adopts the approximate implicit differentiation (AID) scheme for nonconvex bi-level optimization with possibly nonconvex and nonsmooth regularizers.
1 code implementation • CVPR 2022 • Lei Huang, Yi Zhou, Tian Wang, Jie Luo, Xianglong Liu
We define the estimation shift magnitude of BN to quantitatively measure the difference between its estimated population statistics and expected ones.
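As a rough illustration of what such an estimation shift captures (the paper's exact definition may differ), one can compare BN's running statistics against the statistics of the activations actually seen at inference time:

```python
import numpy as np

def estimation_shift(running_mean, running_var, activations):
    """Gap between BN's estimated population statistics and the statistics
    of a batch of activations (illustrative sketch, not the paper's exact metric)."""
    batch_mean = activations.mean(axis=0)
    batch_var = activations.var(axis=0)
    return (np.linalg.norm(running_mean - batch_mean),
            np.linalg.norm(running_var - batch_var))

x_test = 1.5 * np.random.randn(256, 64) + 0.3   # activations under a distribution shift
print(estimation_shift(np.zeros(64), np.ones(64), x_test))
```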
1 code implementation • 14 Mar 2022 • Yi Zhou, Masahiro Kaneko, Danushka Bollegala
Sense embedding learning methods learn different embeddings for the different senses of an ambiguous word.
no code implementations • 6 Mar 2022 • Yiwei Qiu, Buxiang Zhou, Tianlei Zang, Yi Zhou, Ruomei Qi, Jin Lin
The operational flexibility of industrial power-to-hydrogen (P2H) plants enables the admission of volatile renewable power and provides auxiliary regulatory services for the power grid.
1 code implementation • 17 Feb 2022 • Zhengren Wang, Yi Zhou, Mingyu Xiao, Bakhadyr Khoussainov
Our first contribution is algorithm ListPlex that lists all maximal $k$-plexes in $O^*(\gamma^D)$ time for each constant $k$, where $\gamma$ is a value related to $k$ but strictly smaller than 2, and $D$ is the degeneracy of the graph, which is far less than the vertex number $n$ in real-world graphs.
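For readers unfamiliar with the terminology, a $k$-plex relaxes a clique by letting every vertex of the set be non-adjacent to at most $k-1$ other vertices in the set; a small check of that definition (using networkx only to build a toy graph) is:

```python
import networkx as nx

def is_k_plex(G, S, k):
    """S is a k-plex if every vertex in S has at least |S| - k neighbours inside S
    (so a 1-plex is exactly a clique)."""
    S = set(S)
    return all(len(S & set(G[v])) >= len(S) - k for v in S)

G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
print(is_k_plex(G, {0, 1, 2, 3}, k=2))   # True: vertices 0 and 3 each miss one neighbour
```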
no code implementations • 16 Feb 2022 • Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst Samulowitz, Heiko Ludwig
We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO).
no code implementations • 8 Feb 2022 • Quang Minh Nguyen, Hoang H. Nguyen, Yi Zhou, Lam M. Nguyen
To this end, we further present a novel approach of OT retrieval from UOT, which is based on GEM-UOT with a fine-tuned $\tau$ and a post-process projection step.
no code implementations • 30 Jan 2022 • Yi Zhou, Liangcai Zhou, Di Shi, Xiaoying Zhao
With the widespread deployment of renewables, electric power grids are experiencing increasing dynamics and uncertainties that threaten their secure operation.
no code implementations • 22 Dec 2021 • Ziyi Chen, Shaocong Ma, Yi Zhou
Alternating gradient-descent-ascent (AltGDA) is an optimization algorithm that has been widely used for model training in various machine learning applications, which aims to solve a nonconvex minimax optimization problem.
no code implementations • CVPR 2022 • Yi Zhou, HUI ZHANG, Hana Lee, Shuyang Sun, Pingjun Li, Yangguang Zhu, ByungIn Yoo, Xiaojuan Qi, Jae-Joon Han
We encode all panoptic entities in a video, including both foreground instances and background semantics, with a unified representation called panoptic slots.
no code implementations • 15 Dec 2021 • Yi Zhou, Parikshit Ram, Theodoros Salonidis, Nathalie Baracaldo, Horst Samulowitz, Heiko Ludwig
We address the relatively unexplored problem of hyper-parameter optimization (HPO) for federated learning (FL-HPO).
no code implementations • 21 Nov 2021 • Kaiyuan Liu, Xingyu Li, Yurui Lai, Ge Zhang, Hang Su, Jiachen Wang, Chunxu Guo, Jisong Guan, Yi Zhou
Despite its great success, deep learning severely suffers from a lack of robustness; that is, deep neural networks are highly vulnerable to adversarial attacks, even the simplest ones.
no code implementations • 5 Nov 2021 • Xiuyuan Lu, Yi Zhou, Shaojie Shen
In this paper, we present a cascaded two-level multi-model fitting method for identifying independently moving objects (i.e., the motion segmentation problem) with a monocular event camera.
no code implementations • 14 Oct 2021 • Ziyi Chen, Zhengyang Hu, Qunwei Li, Zhe Wang, Yi Zhou
However, GDA has been proved to converge to stationary points for nonconvex minimax optimization, which are suboptimal compared with local minimax points.
no code implementations • PACLIC 2021 • Yi Zhou, Danushka Bollegala
Contextualised word embeddings generated from Neural Language Models (NLMs), such as BERT, represent a word with a vector that considers the semantics of the target word as well as its context.
no code implementations • 29 Sep 2021 • Cheng Chen, Jiaying Zhou, Jie Ding, Yi Zhou, Bhavya Kailkhura
We develop an assisted learning framework for assisting organization-level learners to improve their learning performance with limited and imbalanced data.
no code implementations • 29 Sep 2021 • Shaocong Ma, Ziyi Chen, Yi Zhou, Kaiyi Ji, Yingbin Liang
Specifically, with a $\phi$-mixing model that captures both exponential and polynomial decay of the data dependence over time, we show that SGD with periodic data-subsampling achieves an improved sample complexity over the standard SGD in the full spectrum of the $\phi$-mixing data dependence.
no code implementations • 29 Sep 2021 • Ziyi Chen, Qunwei Li, Yi Zhou
Our result shows that Cubic-GDA achieves an orderwise faster convergence rate than the standard GDA for a wide spectrum of gradient dominant geometry.
no code implementations • ICLR 2022 • Ziyi Chen, Shaocong Ma, Yi Zhou
Two-player zero-sum Markov game is a fundamental problem in reinforcement learning and game theory.
no code implementations • WMT (EMNLP) 2021 • Lihua Qian, Yi Zhou, Zaixiang Zheng, Yaoming Zhu, Zehui Lin, Jiangtao Feng, Shanbo Cheng, Lei LI, Mingxuan Wang, Hao Zhou
This paper describes Volctrans' submission to the WMT21 news translation shared task for German->English translation.
no code implementations • 20 Sep 2021 • Cheng Chen, Jiaying Zhou, Jie Ding, Yi Zhou
We develop an assisted learning framework for assisting organization-level learners to improve their learning performance with limited and imbalanced data.
no code implementations • 8 Sep 2021 • Ziyi Chen, Yi Zhou, Rongrong Chen, Shaofeng Zou
Actor-critic (AC) algorithms have been widely adopted in decentralized multi-agent systems to learn the optimal joint control policy.
3 code implementations • ICCV 2021 • Tao Zhou, Deng-Ping Fan, Geng Chen, Yi Zhou, Huazhu Fu
To effectively fuse cross-modal features in the shared learning network, we propose a cross-enhanced integration module (CIM) and then propagate the fused feature to the next layer for integrating cross-level information.
no code implementations • ICCV 2021 • Mohamed Hassan, Duygu Ceylan, Ruben Villegas, Jun Saito, Jimei Yang, Yi Zhou, Michael Black
A long-standing goal in computer vision is to capture, model, and realistically synthesize human behavior.
1 code implementation • ACL 2021 • Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang
Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.
no code implementations • 26 Jul 2021 • Kamala Varma, Yi Zhou, Nathalie Baracaldo, Ali Anwar
This global model can be corrupted when Byzantine workers send malicious gradients, which necessitates robust methods for aggregating gradients that mitigate the adverse effects of Byzantine inputs.
no code implementations • CVPR 2021 • Huiyuan Yang, Lijun Yin, Yi Zhou, Jiuxiang Gu
The learned AU semantic embeddings are then used as guidance for the generation of attention maps through a cross-modality attention network.
1 code implementation • 17 Jun 2021 • Yi Zhou, Heikki Huttunen, Tapio Elomaa
Age estimation is a challenging task in computer vision.
1 code implementation • ACL 2020 • Feng Hou, Ruili Wang, Jun He, Yi Zhou
We propose a simple yet effective method, FGS2EE, to inject fine-grained semantic information into entity embeddings to reduce the distinctiveness and facilitate the learning of contextual commonality.
no code implementations • NeurIPS 2021 • Yue Wang, Shaofeng Zou, Yi Zhou
Temporal-difference learning with gradient correction (TDC) is a two time-scale algorithm for policy evaluation in reinforcement learning.
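For reference, a standard statement of the two time-scale TDC updates with linear function approximation (following the usual presentation in the literature; the notation here is ours, not necessarily the paper's) is

$$\delta_t = r_t + \gamma\,\theta_t^\top \phi_{t+1} - \theta_t^\top \phi_t,$$
$$\theta_{t+1} = \theta_t + \alpha_t\big(\delta_t\,\phi_t - \gamma\,\phi_{t+1}\,(\phi_t^\top w_t)\big),$$
$$w_{t+1} = w_t + \beta_t\big(\delta_t - \phi_t^\top w_t\big)\phi_t,$$

where the step sizes $\alpha_t$ and $\beta_t$ decay at different rates, giving the two time scales.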
no code implementations • ICLR 2021 • Shaocong Ma, Ziyi Chen, Yi Zhou, Shaofeng Zou
Greedy-GQ is a value-based reinforcement learning (RL) algorithm for optimal control.
no code implementations • 30 Mar 2021 • Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou
Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks.
no code implementations • 24 Mar 2021 • Ziyi Chen, Yi Zhou, Rongrong Chen
Under Markovian sampling and linear function approximation, we prove that the finite-time sample complexity of both algorithms for achieving an $\epsilon$-accurate solution is of the order of $\mathcal{O}(\epsilon^{-1}\ln \epsilon^{-1})$, matching the near-optimal sample complexity of centralized TD(0) and TDC.
no code implementations • 5 Mar 2021 • Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, James Joshi, Heiko Ludwig
We empirically demonstrate the applicability for multiple types of ML models and show a reduction of 10%-70% in training time and 80%-90% in data transfer with respect to the state-of-the-art approaches.
no code implementations • CVPR 2021 • Mianlun Zheng, Yi Zhou, Duygu Ceylan, Jernej Barbič
Being a local method, our network is independent of the mesh topology and generalizes to arbitrarily shaped 3D character meshes at test time.
no code implementations • 26 Feb 2021 • Yi Zhou, Lei Huang, Tianfei Zhou, Ling Shao
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
no code implementations • ICLR 2021 • Ziyi Chen, Yi Zhou, Tengyu Xu, Yingbin Liang
By leveraging this Lyapunov function and the KŁ geometry that parameterizes the local geometries of general nonconvex functions, we formally establish the variable convergence of proximal-GDA to a critical point $x^*$, i.e., $x_t\to x^*$, $y_t\to y^*(x^*)$.
no code implementations • 1 Feb 2021 • Syed Zawad, Ahsan Ali, Pin-Yu Chen, Ali Anwar, Yi Zhou, Nathalie Baracaldo, Yuan Tian, Feng Yan
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
no code implementations • 1 Feb 2021 • Ning-An Lai, Yi Zhou
Global existence for the small-data Cauchy problem of semilinear wave equations with scaling-invariant damping in 3-D is established in this work, assuming that the data are radial and that the constant in front of the damping belongs to $[1.5, 2)$.
no code implementations • ICCV 2021 • Yi Zhou, Lei Huang, Tao Zhou, Ling Shao
A category-invariant cross-domain transfer (CCT) method is proposed to address this single-to-multiple extension.
no code implementations • ICCV 2021 • Yi Zhou, Lei Huang, Tao Zhou, Huazhu Fu, Ling Shao
Second, the progressive report decoder consists of a sentence decoder and a word decoder, where we propose image-sentence matching and description accuracy losses to constrain the visual-textual semantic consistency.
no code implementations • 17 Dec 2020 • Zhenyu Guo, Mingyu Xiao, Yi Zhou, Dongxiang Zhang, Kian-Lee Tan
The graph edge partition problem, which is to split the edge set into multiple balanced parts to minimize the total number of copied vertices, has been widely studied from the view of optimization and algorithms.
1 code implementation • 16 Dec 2020 • Yi Zhou, Guillermo Gallego, Xiuyuan Lu, SiQi Liu, Shaojie Shen
We develop a method to identify independently moving objects acquired with an event-based camera, i.e., to solve the event-based motion segmentation problem.
no code implementations • 15 Dec 2020 • Yi Zhou, Hongdong Li, Laurent Kneip
The present paper reviews the classical problem of free-form curve registration and applies it to an efficient RGBD visual odometry system called Canny-VO, which efficiently tracks all Canny edge features extracted from the images.
no code implementations • 11 Dec 2020 • Yuya Jeremy Ong, Yi Zhou, Nathalie Baracaldo, Heiko Ludwig
This approach makes the use of gradient boosted trees practical in enterprise federated learning.
1 code implementation • 9 Dec 2020 • Xueyi Li, Tianfei Zhou, Jianwu Li, Yi Zhou, Zhaoxiang Zhang
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths, which can be used for training more accurate segmentation models.
Ranked #29 on Weakly-Supervised Semantic Segmentation on COCO 2014 val (using extra training data)
no code implementations • 4 Dec 2020 • Annie Abay, Yi Zhou, Nathalie Baracaldo, Shashank Rajamoni, Ebube Chuba, Heiko Ludwig
As methods to create discrimination-aware models develop, they focus on centralized ML, leaving federated learning (FL) unexplored.
no code implementations • NeurIPS 2020 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer
Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap, and we characterize this gain in closed form.
1 code implementation • 25 Nov 2020 • Yi Zhou, Zhenhao Chen
Memes are used for spreading ideas through social networks.
no code implementations • 17 Nov 2020 • Qiwei Yuan, Weizhe Hua, Yi Zhou, Cunxi Yu
The minibatch stochastic gradient descent method (SGD) is widely applied in deep learning due to its efficiency and scalability that enable training deep networks with a large volume of data.
no code implementations • 17 Nov 2020 • Liping Yuan, Xiaoqing Zheng, Yi Zhou, Cho-Jui Hsieh, Kai-Wei Chang
Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models.
no code implementations • 13 Nov 2020 • Cheng Chen, Junjie Yang, Yi Zhou
Specifically, we find that the optimization trajectories of successful DNN trainings consistently obey a certain regularity principle that regularizes the model update direction to be aligned with the trajectory direction.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Lu Liu, Yi Zhou, Jianhan Xu, Xiaoqing Zheng, Kai-Wei Chang, Xuanjing Huang
The words in each sentence of a source language corpus are rearranged to meet the word order in a target language under the guidance of a part-of-speech based language model (LM).
no code implementations • NeurIPS 2020 • Shaocong Ma, Yi Zhou, Shaofeng Zou
In the Markovian setting, our algorithm achieves the state-of-the-art sample complexity $O(\epsilon^{-1} \log {\epsilon}^{-1})$ that is near-optimal.
no code implementations • 14 Oct 2020 • Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos
As a result, our regret bounds are much tighter compared to existing regret bounds for ZO with conventional one-point feedback, which suggests that ZO with residual feedback can better track the optimizer of online optimization problems.
no code implementations • 3 Oct 2020 • Jiahui Gao, Yi Zhou, Philip L. H. Yu, Shafiq Joty, Jiuxiang Gu
In this work, we present a novel unpaired cross-lingual method to generate image captions without relying on any caption corpus in the source or the target language.
1 code implementation • CVPR 2021 • Lei Huang, Yi Zhou, Li Liu, Fan Zhu, Ling Shao
Results show that GW consistently improves the performance of different architectures, with absolute gains of $1.02\%\sim 1.49\%$ in top-1 accuracy on ImageNet and $1.82\%\sim 3.21\%$ in bounding box AP on COCO.
no code implementations • 27 Sep 2020 • Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs), and have successfully been used in various applications.
no code implementations • 22 Sep 2020 • Cheng Chen, Ziyi Chen, Yi Zhou, Bhavya Kailkhura
We develop FedCluster--a novel federated learning framework with improved optimization efficiency, and investigate its theoretical convergence properties.
no code implementations • 12 Sep 2020 • Yi Zhou, Shuyang Sun, Chao Zhang, Yikang Li, Wanli Ouyang
By assigning each relationship a single label, current approaches formulate the relationship detection as a classification problem.
1 code implementation • 9 Sep 2020 • Yi Zhou, Yin Cui, Xiaoke Xu, Jidong Suo, Xiaoming Liu
It is challenging for a surface radar to detect small floating objects in sea clutter.
no code implementations • 22 Aug 2020 • Yi Zhou, Boyang Wang, Lei Huang, Shanshan Cui, Ling Shao
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency.
no code implementations • 18 Aug 2020 • Jiaman Li, Yihang Yin, Hang Chu, Yi Zhou, Tingwu Wang, Sanja Fidler, Hao Li
We also introduce new evaluation metrics for the quality of synthesized dance motions, and demonstrate that our system can outperform state-of-the-art methods.
no code implementations • 10 Aug 2020 • Guanqun Cao, Yi Zhou, Danushka Bollegala, Shan Luo
Recently, tactile sensing has attracted great interest in robotics, especially for facilitating exploration of unstructured environments and effective manipulation.
2 code implementations • 30 Jul 2020 • Yi Zhou, Guillermo Gallego, Shaojie Shen
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
1 code implementation • 22 Jul 2020 • Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar, Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu Sinn, Mark Purcell, Ambrish Rawat, Tran Minh, Naoise Holohan, Supriyo Chakraborty, Shalisha Whitherspoon, Dean Steuer, Laura Wynter, Hifaz Hassan, Sean Laguna, Mikhail Yurochkin, Mayank Agarwal, Ebube Chuba, Annie Abay
Federated Learning (FL) is an approach to conduct machine learning without centralizing training data in a single place, for reasons of privacy, confidentiality or data volume.
no code implementations • ICML 2020 • Shaocong Ma, Yi Zhou
Specifically, minimizer incoherence measures the discrepancy between the global minimizers of a sample loss and those of the total loss and affects the convergence error of SGD with random reshuffle.
no code implementations • ACL 2020 • Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, Xuanjing Huang
Despite achieving prominent performance on many important tasks, it has been reported that neural networks are vulnerable to adversarial examples.
1 code implementation • 20 Jun 2020 • Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang
Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples.
no code implementations • 18 Jun 2020 • Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos
When optimizing a deterministic Lipschitz function, we show that the query complexity of ZO with the proposed one-point residual feedback matches that of ZO with the existing two-point schemes.
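To give a sense of what residual feedback means here, one commonly stated form of such a one-point residual estimator reuses the previous query instead of issuing a second one per iteration (our reconstruction, up to a dimension-dependent scaling constant; the paper's exact definition may differ):

$$\hat{g}_t \;=\; \frac{f(x_t + \delta u_t) - f(x_{t-1} + \delta u_{t-1})}{\delta}\, u_t,$$

where $u_t$ is a random perturbation direction and $\delta > 0$ a smoothing radius, so each iteration requires only a single new function evaluation.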
1 code implementation • NeurIPS 2020 • Yi Zhou, Chenglei Wu, Zimo Li, Chen Cao, Yuting Ye, Jason Saragih, Hao Li, Yaser Sheikh
Learning latent representations of registered meshes is useful for many 3D tasks.
no code implementations • 31 May 2020 • Pin Tang, Pinli Yang, Yuang Shi, Yi Zhou, Feng Lin, Yan Wang
Named entity recognition (NER) plays an essential role in natural language processing systems.
no code implementations • 18 May 2020 • Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li
We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints.
no code implementations • 15 May 2020 • Ziyi Chen, Yi Zhou
This paper complements the existing literature by developing various momentum schemes with SPIDER-based variance reduction for non-convex composition optimization.
3 code implementations • 22 Apr 2020 • Deng-Ping Fan, Tao Zhou, Ge-Peng Ji, Yi Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, Ling Shao
Coronavirus Disease 2019 (COVID-19) spread globally in early 2020, causing the world to face an existential health crisis.
1 code implementation • CVPR 2020 • Lei Huang, Lei Zhao, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and training Generative Adversarial Networks (GANs).
1 code implementation • 17 Mar 2020 • Yiren Li, Zheng Huang, Junchi Yan, Yi Zhou, Fan Ye, Xianhui Liu
Tabular data is a crucial form of information expression, which can organize data in a standard structure for easy information retrieval and comparison.
1 code implementation • 9 Mar 2020 • Tianfei Zhou, Shunzhou Wang, Yi Zhou, Yazhou Yao, Jianwu Li, Ling Shao
In this paper, we present a novel Motion-Attentive Transition Network (MATNet) for zero-shot video object segmentation, which provides a new way of leveraging motion information to reinforce spatio-temporal object representation.
Ranked #8 on Unsupervised Video Object Segmentation on FBMS test
no code implementations • 26 Feb 2020 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
Our APG-restart is designed to 1) allow for adopting flexible parameter restart schemes that cover many existing ones; 2) have a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) have guaranteed convergence to a critical point and have various types of asymptotic convergence rates depending on the parameterization of local geometry in nonconvex and nonsmooth optimization.
no code implementations • 25 Jan 2020 • Zheng Chai, Ahsan Ali, Syed Zawad, Stacey Truex, Ali Anwar, Nathalie Baracaldo, Yi Zhou, Heiko Ludwig, Feng Yan, Yue Cheng
To this end, we propose TiFL, a Tier-based Federated Learning System, which divides clients into tiers based on their training performance and selects clients from the same tier in each training round to mitigate the straggler problem caused by heterogeneity in resource and data quantity.
no code implementations • ICLR 2020 • Tengyu Xu, Zhe Wang, Yi Zhou, Yingbin Liang
Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD.
1 code implementation • 17 Dec 2019 • Yi Zhou, Xiaoqing Zheng, Xuanjing Huang
Inspired by a concept of content-addressable retrieval from cognitive science, we propose a novel fragment-based model augmented with a lexicon-based memory for Chinese NER, in which both the character-level and word-level features are combined to generate better feature representations for possible name candidates.
no code implementations • 12 Dec 2019 • Runhua Xu, Nathalie Baracaldo, Yi Zhou, Ali Anwar, Heiko Ludwig
Participants in a federated learning process cooperatively train a model by exchanging model parameters instead of the actual training data, which they might want to keep private.
no code implementations • 10 Dec 2019 • Yi Zhou, Boyang Wang, Xiaodong He, Shanshan Cui, Ling Shao
In this paper, we propose a diabetic retinopathy generative adversarial network (DR-GAN) to synthesize high-resolution fundus images which can be manipulated with arbitrary grading and lesion information.
no code implementations • NeurIPS 2019 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
no code implementations • WS 2019 • Ying Xiong, Yedan Shen, Yuanhang Huang, Shuai Chen, Buzhou Tang, Xiaolong Wang, Qingcai Chen, Jun Yan, Yi Zhou
The Biological Text Mining Unit at BSC and CNIO organized the first shared task on chemical & drug mention recognition from Spanish medical texts called PharmaCoNER (Pharmacological Substances, Compounds and proteins and Named Entity Recognition track) in 2019, which includes two tracks: one for NER offset and entity classification (track 1) and the other one for concept indexing (track 2).
no code implementations • 27 Oct 2019 • Kaiyi Ji, Zhe Wang, Yi Zhou, Yingbin Liang
Two types of zeroth-order stochastic algorithms have recently been designed for nonconvex optimization respectively based on the first-order techniques SVRG and SARAH/SPIDER.
no code implementations • 22 Oct 2019 • Haodi Zhang, Zihang Gao, Yi Zhou, Hao Zhang, Kaishun Wu, Fangzhen Lin
Deep reinforcement learning has been successfully used in many dynamic decision making domains, especially those with very large state spaces.
no code implementations • ICML 2020 • Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei zhang, Yingbin Liang
In this paper, we propose a novel scheme, which eliminates backtracking line search but still exploits the information along optimization path by adapting the batch size via history stochastic gradients.
1 code implementation • 15 Oct 2019 • Cat P. Le, Yi Zhou, Jie Ding, Vahid Tarokh
Classical supervised classification tasks search for a nonlinear mapping that maps each encoded feature directly to a probability mass over the labels.
no code implementations • 29 Sep 2019 • Jayanth Regatti, Gaurav Tendolkar, Yi Zhou, Abhishek Gupta, Yingbin Liang
The performance of fully synchronized distributed systems has faced a bottleneck due to the big data trend, under which asynchronous distributed systems are gaining popularity due to their powerful scalability.
no code implementations • 25 Sep 2019 • Cheng Chen, Junjie Yang, Yi Zhou
In particular, we observe that the trainings that apply the training techniques achieve accelerated convergence and obey the principle with a large $\gamma$, which is consistent with the $\mathcal{O}(1/\gamma K)$ convergence rate result under the optimization principle.
no code implementations • 19 Sep 2019 • Toyotaro Suzumura, Yi Zhou, Natahalie Baracaldo, Guangnan Ye, Keith Houck, Ryo Kawahara, Ali Anwar, Lucia Larise Stavarache, Yuji Watanabe, Pablo Loyola, Daniel Klyashtorny, Heiko Ludwig, Kumar Bhaskaran
Advances in technology used in this domain, including machine learning based approaches, can improve upon the effectiveness of financial institutions' existing processes. However, a key challenge that most financial institutions continue to face is that they address financial crimes in isolation, without any insight from other firms.
no code implementations • SEMEVAL 2019 • Yifan Liu, Keyu Ding, Yi Zhou
AiFu won first place in the SemEval-2019 Task 10 "Math Question Answering" competition.
no code implementations • CVPR 2019 • Yi Zhou, Xiaodong He, Lei Huang, Li Liu, Fan Zhu, Shanshan Cui, Ling Shao
Then, based on initially predicted lesion maps for large quantities of image-level annotated data, a lesion attentive disease grading model is designed to improve the severity classification accuracy.
no code implementations • NeurIPS 2019 • Guanghui Lan, Zhize Li, Yi Zhou
Moreover, Varag is the first accelerated randomized incremental gradient method that benefits from the strong convexity of the data-fidelity term to achieve the optimal linear convergence.
5 code implementations • CVPR 2019 • Lei Huang, Yi Zhou, Fan Zhu, Li Liu, Ling Shao
With the support of SND, we provide natural explanations for several phenomena from the perspective of optimization, e.g., why group-wise whitening of DBN generally outperforms full-whitening and why the accuracy of BN degenerates with reduced batch sizes.
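As background for the group-wise whitening mentioned above, a minimal ZCA-whitening step for one group of activations (a sketch of the operation, not the paper's full DBN layer) is:

```python
import numpy as np

def zca_whiten_group(x, eps=1e-5):
    """ZCA-whiten one group of activations: centre them, then apply the inverse
    square root of their covariance. Group-wise whitening repeats this per group
    of channels instead of whitening all channels jointly (illustrative sketch)."""
    xc = x - x.mean(axis=0, keepdims=True)                   # (batch, channels)
    cov = xc.T @ xc / xc.shape[0] + eps * np.eye(xc.shape[1])
    eigval, eigvec = np.linalg.eigh(cov)
    whitening = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T  # cov^{-1/2}
    return xc @ whitening

x = np.random.randn(128, 16) @ np.random.randn(16, 16)      # correlated toy features
x_white = zca_whiten_group(x)
print(np.allclose(np.cov(x_white, rowvar=False), np.eye(16), atol=0.1))
```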
no code implementations • 28 Mar 2019 • Tian Wang, Zichen Miao, Yuxin Chen, Yi Zhou, Guangcun Shan, Hichem Snoussi
Detecting anomalies in crowded scenes has remained a challenging problem for quite a long time.
no code implementations • 7 Feb 2019 • Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh
In this paper, we develop novel momentum schemes with flexible coefficient settings to accelerate SPIDER for nonconvex and nonsmooth composite optimization, and show that the resulting algorithms achieve the near-optimal gradient oracle complexity for achieving a generalized first-order stationary condition.
1 code implementation • 21 Jan 2019 • Haofan Wang, Zhenghua Chen, Yi Zhou
In this paper, to perform the estimation without facial landmarks, we combine coarse and fine regression outputs in a deep network.
Ranked #3 on Head Pose Estimation on AFLW
no code implementations • ICLR 2019 • Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, Vahid Tarokh
Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks.
5 code implementations • CVPR 2019 • Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, Hao Li
Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn.
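A widely used continuous alternative associated with this line of work maps a 6D vector to a rotation matrix via Gram-Schmidt orthogonalization; a minimal sketch of that mapping is:

```python
import numpy as np

def rotation_from_6d(x):
    """Map a 6D vector to a 3x3 rotation matrix: normalize the first 3D half,
    orthogonalize the second half against it, and complete the frame with a
    cross product (the usual continuous 6D parameterization, as a sketch)."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)   # columns form a right-handed orthonormal frame

R = rotation_from_6d(np.random.randn(6))
assert np.allclose(R.T @ R, np.eye(3), atol=1e-6) and np.isclose(np.linalg.det(R), 1.0)
```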
no code implementations • 7 Dec 2018 • Stacey Truex, Nathalie Baracaldo, Ali Anwar, Thomas Steinke, Heiko Ludwig, Rui Zhang, Yi Zhou
Federated learning facilitates the collaborative training of models without the sharing of raw data.
no code implementations • 22 Nov 2018 • Qunwei Li, Bhavya Kailkhura, Rushil Anirudh, Yi Zhou, Yingbin Liang, Pramod Varshney
Despite the growing interest in generative adversarial networks (GANs), training GANs remains a challenging problem, both from a theoretical and a practical standpoint.
1 code implementation • 25 Oct 2018 • Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh
SARAH and SPIDER are two recently developed stochastic variance-reduced algorithms, and SPIDER has been shown to achieve a near-optimal first-order oracle complexity in smooth nonconvex optimization.
no code implementations • 9 Oct 2018 • Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan
However, such a successful acceleration technique has not yet been proposed for second-order algorithms in nonconvex optimization. In this paper, we apply the momentum scheme to cubic regularized (CR) Newton's method and explore the potential for acceleration.
no code implementations • ICLR 2019 • Wei Dai, Yi Zhou, Nanqing Dong, Hao Zhang, Eric P. Xing
Many distributed machine learning (ML) systems adopt the non-synchronous execution in order to alleviate the network communication bottleneck, resulting in stale parameters that do not reflect the latest updates.
3 code implementations • 1 Oct 2018 • Yi Zhou, Yue Bai, Shuvra S. Bhattacharyya, Heikki Huttunen
In this work we propose a framework for improving the performance of any deep neural network that may suffer from vanishing gradients.
no code implementations • 24 Sep 2018 • Guanghui Lan, Yi Zhou
In this work, we introduce an asynchronous decentralized accelerated stochastic gradient descent type of method for decentralized stochastic optimization, considering that communication and synchronization are the major bottlenecks.
no code implementations • 28 Aug 2018 • Shi Yin, Yi Zhou, Chenguang Li, Shangfei Wang, Jianmin Ji, Xiaoping Chen, Ruili Wang
We propose KDSL, a new word sense disambiguation (WSD) framework that utilizes knowledge to automatically generate sense-labeled data for supervised learning.
no code implementations • NeurIPS 2018 • Yi Zhou, Zhe Wang, Yingbin Liang
Cubic-regularized Newton's method (CR) is a popular algorithm that guarantees to produce a second-order stationary solution for solving nonconvex optimization problems.
no code implementations • 22 Aug 2018 • Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan
This note considers the inexact cubic-regularized Newton's method (CR), which has been shown by Cartis et al. (2011) to achieve the same order-level convergence rate to a second-order stationary point as the exact CR of Nesterov and Polyak (2006).
no code implementations • COLING 2018 • Xiaomin Chu, Feng Jiang, Yi Zhou, Guodong Zhou, Qiaoming Zhu
Discourse parsing is a challenging task and plays a critical role in discourse analysis.
2 code implementations • ECCV 2018 • Yi Zhou, Guillermo Gallego, Henri Rebecq, Laurent Kneip, Hongdong Li, Davide Scaramuzza
Event cameras are bio-inspired sensors that offer several advantages, such as low latency, high-speed and high dynamic range, to tackle challenging scenarios in computer vision.
1 code implementation • ICLR 2019 • Tengyu Xu, Yi Zhou, Kaiyi Ji, Yingbin Liang
We study the implicit bias of gradient descent methods in solving a binary classification problem over a linearly separable dataset.
no code implementations • CVPR 2018 • Yi Zhou, Ling Shao
Vehicle re-identification (re-ID) has huge potential to contribute to intelligent video surveillance.
no code implementations • 20 Feb 2018 • Zhe Wang, Yi Zhou, Yingbin Liang, Guanghui Lan
Cubic regularization (CR) is an optimization method with emerging popularity due to its capability to escape saddle points and converge to second-order stationary solutions for nonconvex optimization.
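For reference, the classical cubic-regularized Newton step that these works build on solves, at each iterate $x_k$ (standard form, not necessarily the exact variant analyzed in the paper),

$$x_{k+1} \;=\; x_k + \arg\min_{s}\;\Big\{\, \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top \nabla^2 f(x_k)\, s + \tfrac{M}{6}\,\|s\|^3 \Big\},$$

where $M > 0$ upper-bounds the Lipschitz constant of the Hessian; the cubic term is what lets the method escape saddle points and reach second-order stationary solutions.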
no code implementations • 19 Feb 2018 • Yi Zhou, Yingbin Liang, Huishuai Zhang
With strongly convex regularizers, we further establish the generalization error bounds for nonconvex loss functions under proximal SGD with high-probability guarantee, i.e., exponential concentration in probability.
no code implementations • ICLR 2018 • Yi Zhou, Yingbin Liang
In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks.
no code implementations • 15 Nov 2017 • Guanghui Lan, Yi Zhou
Furthermore, we demonstrate that for stochastic finite-sum optimization problems, RGEM maintains the optimal ${\cal O}(1/\epsilon)$ complexity (up to a certain logarithmic factor) in terms of the number of stochastic gradient computations, but attains an ${\cal O}(\log(1/\epsilon))$ complexity in terms of communication rounds (each round involves only one agent).
no code implementations • 30 Oct 2017 • Yi Zhou, Yingbin Liang
We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum.
no code implementations • 18 Oct 2017 • Yi Zhou, Yingbin Liang
The past decade has witnessed a successful application of deep learning to solving many challenging problems in machine learning and artificial intelligence.
no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li
By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.
no code implementations • ICML 2017 • Pengtao Xie, Yuntian Deng, Yi Zhou, Abhimanu Kumar, Yao-Liang Yu, James Zou, Eric P. Xing
The large model capacity of latent space models (LSMs) enables them to achieve great performance on various applications, but meanwhile renders LSMs to be prone to overfitting.
1 code implementation • ICLR 2018 • Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, Zeng Huang, Hao Li
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).
no code implementations • 20 May 2017 • Yi Zhou, Jin-Kao Hao
The Maximum Balanced Biclique Problem is a well-known graph model with relevant applications in diverse domains.
no code implementations • ICML 2017 • Qunwei Li, Yi Zhou, Yingbin Liang, Pramod K. Varshney
Then, by exploiting the Kurdyka-Łojasiewicz (KŁ) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc.
no code implementations • 26 Apr 2017 • Yi Zhou
In this extended abstract, we propose Structured Production Systems (SPS), which extend traditional production systems with well-formed syntactic structures.
no code implementations • ICML 2017 • Guanghui Lan, Sebastian Pokutta, Yi Zhou, Daniel Zink
In this work we introduce a conditional accelerated lazy stochastic gradient descent algorithm with optimal number of calls to a stochastic first-order oracle and convergence rate $O\left(\frac{1}{\varepsilon^2}\right)$ improving over the projection-free, Online Frank-Wolfe based stochastic gradient descent of Hazan and Kale [2012] with convergence rate $O\left(\frac{1}{\varepsilon^4}\right)$.
no code implementations • 6 Feb 2017 • Yi Zhou, Laurent Kneip, Hongdong Li
This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras.
no code implementations • 14 Jan 2017 • Guanghui Lan, Soomin Lee, Yi Zhou
Our major contribution is to present a new class of decentralized primal-dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions.