1 code implementation • 6 Apr 2022 • Di Wang, Jing Zhang, Bo Du, Gui-Song Xia, DaCheng Tao
To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset to date, MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNNs) and vision transformers such as Swin and ViTAE, which have shown promising performance on computer vision tasks.
no code implementations • 26 Feb 2022 • Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang
Compared with high-precision data, one-bit (binary) data are preferable in many applications because of their efficiency in signal storage, processing, and transmission, and their enhancement of privacy.
no code implementations • 10 Jan 2022 • Di Wang, Jinhui Xu
First, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment.
no code implementations • 29 Dec 2021 • Yizhang Wang, Di Wang, You Zhou, Xiaofeng Zhang, Chai Quek
Furthermore, we divide all data points into different levels according to their local density and propose a unified clustering framework by combining the advantages of both DPC and DBSCAN.
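As a rough illustration of the density-level idea, the sketch below assigns points to levels using a simple cutoff-kernel local density, a common choice in density peaks clustering; the paper's exact density estimator and level thresholds may differ, and `d_c` and the number of levels are placeholder parameters.

```python
import numpy as np

def density_levels(X, d_c, n_levels=3):
    """Assign each point to a density level using a cutoff-kernel
    local density, as in density peaks clustering (DPC)."""
    # Pairwise Euclidean distances.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    # Local density: number of neighbors within the cutoff d_c.
    rho = (dist < d_c).sum(axis=1) - 1  # exclude self
    # Split points into equal-frequency density levels.
    edges = np.quantile(rho, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(rho, edges)  # 0 = sparsest, n_levels-1 = densest

X = np.random.rand(200, 2)
levels = density_levels(X, d_c=0.1)
```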
no code implementations • 28 Nov 2021 • Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, ChenChen Liu, Xiang Chen
With both scaling trends, new problems and challenges emerge in DL inference serving systems, which are gradually trending towards Large-scale Deep learning Serving systems (LDS).
no code implementations • 28 Nov 2021 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Federated learning learns from scattered data by fusing collaborative models from local nodes.
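Model fusion of this kind is commonly implemented as a FedAvg-style weighted parameter average; the sketch below illustrates that generic aggregation step, not the specific fusion scheme proposed in this paper.

```python
import numpy as np

def fed_avg(local_weights, local_sizes):
    """Fuse local models by averaging parameters, weighted by
    each node's sample count (FedAvg-style aggregation)."""
    total = sum(local_sizes)
    fused = {}
    for name in local_weights[0]:
        fused[name] = sum(
            w[name] * (n / total)
            for w, n in zip(local_weights, local_sizes)
        )
    return fused

# Two local nodes with a single-layer model.
w1 = {"W": np.ones((4, 2)), "b": np.zeros(2)}
w2 = {"W": 3 * np.ones((4, 2)), "b": np.ones(2)}
print(fed_avg([w1, w2], [100, 300])["W"][0, 0])  # 2.5
```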
no code implementations • 15 Oct 2021 • Muhammad F. A. Chaudhary, Sarah E. Gerard, Di Wang, Gary E. Christensen, Christopher B. Cooper, Joyce D. Schroeder, Eric A. Hoffman, Joseph M. Reinhardt
Once trained, the framework can be used as a registration-free method for predicting local tissue expansion.
no code implementations • 14 Oct 2021 • Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon
In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs.
no code implementations • ACL 2021 • Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang
In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.
1 code implementation • ACL 2021 • Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, Di Wang
In this paper, we propose a Pre-trained masked Language model with Misspelled knowledgE (PLOME) for CSC, which jointly learns how to understand language and correct spelling errors.
no code implementations • 31 Jul 2021 • Jinyan Su, Lijie Hu, Di Wang
Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output achieves an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^\frac{\theta}{\theta-1})$ for $(\epsilon, \delta)$-DP when $\theta\geq 2$, where $n$ is the sample size and $d$ is the dimension of the space.
no code implementations • 23 Jul 2021 • Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang
To better understand the challenges arising from irregular data distribution, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space.
no code implementations • 26 Jun 2021 • Di Wang, Bo Du, Liangpei Zhang
Finally, by combining the extracted spatial and spectral graph contexts, we obtain the SSGRN, which achieves highly accurate classification.
1 code implementation • Findings (ACL) 2021 • Huanqin Wu, Wei Liu, Lei LI, Dan Nie, Tao Chen, Feng Zhang, Di Wang
The Keyphrase Prediction (KP) task aims to predict several keyphrases that summarize the main idea of a given document.
no code implementations • 4 Jun 2021 • Youming Tao, Yulian Wu, Peng Zhao, Di Wang
Finally, we establish the lower bound to show that the instance-dependent regret of our improved algorithm is optimal.
no code implementations • 30 May 2021 • Li Chen, Richard Peng, Di Wang
Diffusion is a fundamental graph procedure and has been a basic building block in a wide range of theoretical and empirical applications such as graph partitioning and semi-supervised learning on graphs.
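As one concrete instance of such a diffusion, the sketch below runs a lazy random walk from a seed vertex, a standard primitive in graph partitioning and semi-supervised learning; the paper studies diffusions more generally, so this is only an illustrative baseline.

```python
import numpy as np

def lazy_walk_diffusion(A, seed, steps=10):
    """Diffuse mass from a seed vertex with a lazy random walk:
    p_{t+1} = (1/2) p_t + (1/2) p_t D^{-1} A."""
    d = A.sum(axis=1)
    W = A / d[:, None]          # row-stochastic transition matrix
    p = np.zeros(len(A))
    p[seed] = 1.0
    for _ in range(steps):
        p = 0.5 * p + 0.5 * (p @ W)
    return p

# Path graph on 4 vertices.
A = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
print(lazy_walk_diffusion(A, seed=0).round(3))
```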
no code implementations • 13 Apr 2021 • Yang Li, Di Wang, José M. F. Moura
This task is challenging as models need not only to capture spatial dependency and temporal dependency within the data, but also to leverage useful auxiliary information for accurate predictions.
no code implementations • 20 Mar 2021 • Jihua Zhu, Di Wang, Jiaxi Mu, Huimin Lu, Zhiqiang Tian, Zhongyu Li
Under the NDT framework, this paper proposes a novel multi-view registration method, named 3D multi-view registration based on the normal distributions transform (3DMNDT), which integrates K-means clustering and a Lie algebra solver to achieve multi-view registration.
no code implementations • 14 Jan 2021 • Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang
In the special case of the minimum cost flow problem on $n$-vertex $m$-edge graphs with integer polynomially-bounded costs and capacities we obtain a randomized method which solves the problem in $\tilde{O}(m+n^{1.5})$ time.
Data Structures and Algorithms • Optimization and Control
no code implementations • 15 Dec 2020 • Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, Chunyan Miao
Experimental results on two large-scale datasets support our hypothesis and show that our model can produce more accurate and commonsense-aware emotional responses and achieve better human ratings than state-of-the-art models that only specialize in one aspect.
no code implementations • 22 Nov 2020 • Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen
This paper gives an overview of our ongoing work on the design space exploration of efficient deep neural networks (DNNs).
no code implementations • 11 Nov 2020 • Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu
In our second attempt, we show that for any $1$-Lipschitz generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$.
1 code implementation • 7 Nov 2020 • Muhammad Hassan, Yan Wang, Di Wang, Daixi Li, Yanchun Liang, You Zhou, Dong Xu
We collected 100,000 shoeprints of subjects ranging from 7 to 80 years old and used the data to develop a deep learning end-to-end model ShoeNet to analyze age-related patterns and predict age.
no code implementations • 22 Oct 2020 • Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu
To address this issue, we propose in this paper the first DP version of the (Gradient) EM algorithm with statistical guarantees.
no code implementations • ICML 2020 • Di Wang, Hanshen Xiao, Srini Devadas, Jinhui Xu
For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data.
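For intuition, here is a minimal sketch of the generic sample-and-aggregate pattern: partition the data, run a non-private estimator on each block, and release a privatized aggregate. The clipping and Laplace calibration below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def sample_and_aggregate(data, estimator, k, epsilon, clip=1.0):
    """Split the data into k disjoint blocks, run the (non-private)
    estimator on each, and release a noisy average of the clipped
    block estimates. Clipping to [-clip, clip] bounds the sensitivity
    of the average by 2*clip/k, which calibrates the Laplace noise."""
    blocks = np.array_split(data, k)
    z = np.clip([estimator(b) for b in blocks], -clip, clip)
    sensitivity = 2 * clip / k
    return z.mean() + np.random.laplace(scale=sensitivity / epsilon)

data = np.random.normal(0.3, 1.0, size=5000)
print(sample_and_aggregate(data, np.mean, k=50, epsilon=1.0))
```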
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu
To the best of our knowledge, this is the first work that studies and provides theoretical guarantees for the stochastic linear combination of non-linear regressions model.
no code implementations • 19 Oct 2020 • Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu
In this paper, we study the problem of estimating latent variable models with arbitrarily corrupted samples in high dimensional space (i.e., $d\gg n$) where the underlying parameter is assumed to be sparse.
no code implementations • 16 Oct 2020 • Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar
In this paper, we focus on the AdWords problem, which is a classical online budgeted matching problem of both theoretical and practical significance.
1 code implementation • 8 Sep 2020 • Nanyu Li, Yujuan Si, Di Wang, Tong Liu, Jinrun Yu
In the VQ method, a set of dictionaries corresponding to segments of ECG beats is trained, and VQ codes are used to represent each heartbeat.
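A minimal sketch of this VQ step, assuming k-means codebooks trained per beat segment with SciPy's `kmeans`/`vq`; the segment count and codebook size are placeholder parameters, not values from the paper.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq, whiten

def train_vq_codes(beats, n_segments=4, codebook_size=32):
    """Split each heartbeat into segments, train one k-means
    codebook per segment, and represent each beat by its VQ codes."""
    segments = np.array_split(beats, n_segments, axis=1)
    codes = []
    for seg in segments:
        seg = whiten(seg)                      # normalize features
        codebook, _ = kmeans(seg, codebook_size)
        idx, _ = vq(seg, codebook)             # nearest-code index
        codes.append(idx)
    return np.stack(codes, axis=1)             # (n_beats, n_segments)

beats = np.random.randn(500, 200)              # 500 beats, 200 samples each
print(train_vq_codes(beats).shape)             # (500, 4)
```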
no code implementations • 15 Aug 2020 • Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen
Specifically, we design a feature-oriented regulation method ({$\Psi$-Net}) to ensure explicit feature information allocation in different neural network structures.
no code implementations • 14 Aug 2020 • Fuxun Yu, ChenChen Liu, Di Wang, Yanzhi Wang, Xiang Chen
Based on the neural network attention mechanism, we propose a comprehensive dynamic optimization framework including (1) testing-phase channel and column feature map pruning, as well as (2) training-phase optimization by targeted dropout.
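The sketch below illustrates testing-phase channel pruning with a simple attention-style importance score (mean absolute activation per channel); the framework's actual scoring and training-phase targeted dropout are more involved.

```python
import numpy as np

def prune_channels(feature_maps, keep_ratio=0.5):
    """Zero out the least important channels of a conv feature map,
    scoring each channel by its mean absolute activation
    (a simple attention-style importance measure)."""
    # feature_maps: (batch, channels, H, W)
    importance = np.abs(feature_maps).mean(axis=(0, 2, 3))
    k = int(len(importance) * keep_ratio)
    keep = np.argsort(importance)[-k:]          # top-k channels
    mask = np.zeros_like(importance)
    mask[keep] = 1.0
    return feature_maps * mask[None, :, None, None]

fm = np.random.randn(8, 64, 14, 14)
pruned = prune_channels(fm, keep_ratio=0.25)
print((np.abs(pruned).sum(axis=(0, 2, 3)) > 0).sum())  # 16 active channels
```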
no code implementations • 24 Jun 2020 • Di Wang, David M Kahn, Jan Hoffmann
The effectiveness of the technique is evaluated by analyzing the sample complexity of discrete distributions and with a novel average-case estimation for deterministic programs that combines expected cost analysis with statistical methods.
Programming Languages
2 code implementations • 20 May 2020 • Kimon Fountoulakis, Di Wang, Shenghao Yang
Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, node ranking and feature inference.
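A classic primitive behind many local clustering methods is the approximate personalized PageRank push procedure of Andersen, Chung, and Lang; the sketch below shows that baseline for context and is not the method proposed in this paper.

```python
from collections import defaultdict

def approx_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank from a seed node via the
    push procedure: repeatedly move residual mass r into the
    estimate p while spreading (1 - alpha) of it to neighbors."""
    p, r = defaultdict(float), defaultdict(float)
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(graph[u])
        if r[u] < eps * deg:
            continue  # stale queue entry
        p[u] += alpha * r[u]
        share = (1 - alpha) * r[u] / (2 * deg)
        r[u] = (1 - alpha) * r[u] / 2
        if r[u] >= eps * deg:
            queue.append(u)
        for v in graph[u]:
            # Enqueue v only when its residual first crosses the threshold.
            if r[v] < eps * len(graph[v]) <= r[v] + share:
                queue.append(v)
            r[v] += share
    return dict(p)

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(approx_ppr(graph, seed=0))
```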
no code implementations • 15 May 2020 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).
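For reference, the standard $(\epsilon, \delta)$-DP calibration of the Gaussian mechanism is $\sigma = \sqrt{2\ln(1.25/\delta)}\,\Delta_2/\epsilon$; the sketch below applies it directly. The magnitudes compared in the paper depend on its specific robustness criteria, so this is only the textbook baseline.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta):
    """Release value + N(0, sigma^2) with the classic calibration
    sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon, which gives
    (epsilon, delta)-DP for epsilon in (0, 1)."""
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + np.random.normal(0.0, sigma, size=np.shape(value))

print(gaussian_mechanism(np.array([0.7]), l2_sensitivity=1.0,
                         epsilon=0.5, delta=1e-5))
```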
no code implementations • 27 Mar 2020 • Shao-Bo Lin, Di Wang, Ding-Xuan Zhou
This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.
no code implementations • 25 Nov 2019 • Saman Fahandezh-Saadi, Di Wang, Masayoshi Tomizuka
This paper presents a robust probabilistic point registration method for estimating the rigid transformation (i.e., rotation matrix and translation vector) between two point cloud datasets.
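The closed-form core of many rigid registration methods is the Kabsch/Procrustes solution for $R$ and $t$ given point correspondences; the unweighted sketch below shows that building block, not the paper's full robust probabilistic estimator.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t minimizing
    ||R P + t - Q|| over corresponding points (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

P = np.random.randn(50, 3)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```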
no code implementations • 17 Nov 2019 • Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen
In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content distribution gap that is shown to be important for object detectors.
no code implementations • NeurIPS 2019 • Yunus Esencayi, Marco Gaboardi, Shi Li, Di Wang
On the negative side, we show that the approximation ratio of any $\epsilon$-DP algorithm is lower bounded by $\Omega(\frac{1}{\sqrt{\epsilon}})$, even for instances on HST metrics with uniform facility cost, under the super-set output setting.
no code implementations • 1 Oct 2019 • Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu
Then with high probability, the sample complexity of the public and private data, for the algorithm to achieve an $\alpha$ estimation error (in $\ell_\infty$-norm), is $O(p^2\alpha^{-2})$ and $O(p^2\alpha^{-2}\epsilon^{-2})$, respectively, if $\alpha$ is not too small (i.e., $\alpha\geq \Omega(\frac{1}{\sqrt{p}})$), where $p$ is the dimensionality of the data.
no code implementations • 30 Sep 2019 • Wei Zhan, Liting Sun, Di Wang, Haojie Shi, Aubrey Clausse, Maximilian Naumann, Julius Kummerle, Hendrik Konigshof, Christoph Stiller, Arnaud de La Fortelle, Masayoshi Tomizuka
3) The driving behavior is highly interactive and complex with adversarial and cooperative motions of various traffic participants.
no code implementations • NeurIPS 2019 • Digvijay Boob, Saurabh Sawlani, Di Wang
As a special case of our result, we report a $1+\epsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\epsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
no code implementations • 25 Sep 2019 • Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu
We answer the above two questions by first demonstrating that Gaussian mechanism and Exponential mechanism are the (near) optimal options to certify the $\ell_2$ and $\ell_\infty$-normed robustness.
no code implementations • 25 Sep 2019 • Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar
To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework.
1 code implementation • IJCNLP 2019 • Peixiang Zhong, Di Wang, Chunyan Miao
Messages in human conversations inherently convey emotions.
Ranked #8 on Emotion Recognition in Conversation on EC
no code implementations • 23 Sep 2019 • Yaping Zheng, Shiyi Chen, Xinni Zhang, Xiaofeng Zhang, Xiaofei Yang, Di Wang
Community detection has long been an important yet challenging task in analyzing complex networks, with a focus on detecting topological structures of graph data.
no code implementations • 10 Sep 2019 • Haidong Rong, Yangzihao Wang, Feihu Zhou, Junjie Zhai, Haiyang Wu, Rui Lan, Fan Li, Han Zhang, Yuekui Yang, Zhenyu Guo, Di Wang
We present Distributed Equivalent Substitution (DES) training, a novel distributed training framework for large-scale recommender systems with dynamic sparse features.
no code implementations • 6 Sep 2019 • Di Wang, Feiqing Huang, Jingyu Zhao, Guodong Li, Guangjian Tian
Autoregressive networks can achieve promising performance in many sequence modeling tasks with short-range dependence.
2 code implementations • 19 Jul 2019 • Shusen Liu, Di Wang, Dan Maljovec, Rushil Anirudh, Jayaraman J. Thiagarajan, Sam Ade Jacobs, Brian C. Van Essen, David Hysom, Jae-Seung Yeom, Jim Gaffney, Luc Peterson, Peter B. Robinson, Harsh Bhatia, Valerio Pascucci, Brian K. Spears, Peer-Timo Bremer
With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization.
1 code implementation • 18 Jul 2019 • Peixiang Zhong, Di Wang, Chunyan Miao
Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.
Ranked #1 on Emotion Recognition on SEED-IV
1 code implementation • 1 Jul 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
In this work, we reduce the NAS search cost to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints.
no code implementations • 5 Jun 2019 • Di Wang, Qi Wu, Wen Zhang
This paper takes a deep learning approach to understanding consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases.
no code implementations • 10 May 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the latency constraint of a mobile device?
5 code implementations • 5 Apr 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device?
Ranked #532 on Image Classification on ImageNet
1 code implementation • NAACL 2019 • Chunting Zhou, Xuezhe Ma, Di Wang, Graham Neubig
Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages.
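The canonical linear-transformation baseline referenced here is the orthogonal Procrustes solution over a seed dictionary, sketched below; the dictionary size and dimensionality are toy assumptions, and the paper itself moves beyond this baseline.

```python
import numpy as np

def procrustes_map(X, Y):
    """Find the orthogonal W minimizing ||X W - Y||_F, mapping
    source embeddings X onto target embeddings Y for a seed
    dictionary (closed form via SVD of X^T Y)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy seed dictionary: 100 word pairs, 50-dim embeddings.
X = np.random.randn(100, 50)
W_true, _ = np.linalg.qr(np.random.randn(50, 50))  # random orthogonal map
Y = X @ W_true
W = procrustes_map(X, Y)
print(np.allclose(W, W_true))
```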
no code implementations • 18 Jan 2019 • Di Wang, Jinhui Xu
In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimensions.
1 code implementation • 21 Dec 2018 • Thatchaphol Saranurak, Di Wang
Our result achieves both nearly linear running time and a strong expander guarantee for the clusters.
Data Structures and Algorithms
no code implementations • 17 Dec 2018 • Di Wang, Adam Smith, Jinhui Xu
For the case of \emph{generalized linear losses} (such as hinge and logistic losses), we give an LDP algorithm whose sample complexity is only linear in the dimensionality $p$ and quasipolynomial in other terms (the privacy parameters $\epsilon$ and $\delta$, and the desired excess risk $\alpha$).
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy.
1 code implementation • 17 Nov 2018 • Peixiang Zhong, Di Wang, Chunyan Miao
Affect conveys important implicit information in human communication.
3 code implementations • ACL 2019 • Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wangrong Zhu, Devendra Singh Sachan, Eric P. Xing
The versatile toolkit also fosters technique sharing across different text generation tasks.
no code implementations • WS 2018 • Zhiting Hu, Zichao Yang, Tiancheng Zhao, Haoran Shi, Junxian He, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Lianhui Qin, Devendra Singh Chaplot, Bowen Tan, Xingjiang Yu, Eric Xing
The features make Texar particularly suitable for technique sharing and generalization across different text generation applications.
no code implementations • WS 2018 • Yuanhang Ren, Ye Du, Di Wang
Given a paragraph of an article and a corresponding query, instead of directly feeding the whole paragraph to the single BiDAF system, a sentence that most likely contains the answer to the query is first selected, which is done via a deep neural network based on TreeLSTM (Tai et al., 2015).
no code implementations • 29 Jun 2018 • Fandong Meng, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, Di Wang
Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations.
no code implementations • NeurIPS 2017 • Di Wang, Minwei Ye, Jinhui Xu
In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings.
no code implementations • NeurIPS 2018 • Di Wang, Marco Gaboardi, Jinhui Xu
In the case of constant or low dimensionality ($p\ll n$), we first show that if the ERM loss function is $(\infty, T)$-smooth, then we can avoid a dependence of the sample complexity, to achieve error $\alpha$, on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in [smith 2017 interaction].
1 code implementation • 9 Feb 2018 • Di Wang, Jinhui Xu
In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization.
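A minimal sketch-and-solve illustration for the unconstrained case: compress the regression with a Gaussian sketch and solve the smaller problem. The paper's faster methods handle constraints and use more refined sketches, so treat this as background only.

```python
import numpy as np

def sketched_lstsq(A, b, sketch_rows):
    """Solve min_x ||Ax - b|| approximately by compressing the
    problem with a Gaussian sketch S and solving the smaller
    least-squares problem min_x ||S A x - S b||."""
    m, _ = A.shape
    S = np.random.randn(sketch_rows, m) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

A = np.random.randn(10000, 20)
x_true = np.random.randn(20)
b = A @ x_true + 0.01 * np.random.randn(10000)
x = sketched_lstsq(A, b, sketch_rows=400)
print(np.linalg.norm(x - x_true))  # small approximation error
```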
1 code implementation • EMNLP 2017 • Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg
We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation.
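One simple decode-time control in this spirit is to add a bias to the logits of style- or topic-related tokens before sampling; the sketch below is a generic illustration with hypothetical token ids, not the paper's exact method.

```python
import numpy as np

def biased_sample(logits, boost_ids, bias=2.0, rng=None):
    """Sample the next token after adding a fixed bias to the
    logits of tokens associated with the desired style/topic."""
    rng = rng or np.random.default_rng()
    logits = logits.copy()
    logits[boost_ids] += bias
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

vocab_logits = np.random.randn(1000)
topic_tokens = [17, 42, 256]       # hypothetical topic word ids
print(biased_sample(vocab_logits, topic_tokens))
```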
no code implementations • ICML 2017 • Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao
As an application, we use our CRD Process to develop an improved local algorithm for graph clustering.
no code implementations • 19 Jun 2017 • Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao
Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier.
no code implementations • LREC 2014 • Nancy Ide, James Pustejovsky, Christopher Cieri, Eric Nyberg, Di Wang, Keith Suderman, Marc Verhagen, Jonathan Wright
The Language Application (LAPPS) Grid project is establishing a framework that enables language service discovery, composition, and reuse and promotes sustainability, manageability, usability, and interoperability of natural language processing (NLP) components.
no code implementations • NeurIPS 2013 • Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma
In this context, the state-of-the-art algorithms "RASL" and "TILT" can be viewed as two special cases of our work, and yet each only performs part of the function of our method.