Search Results for author: Di Wang

Found 75 papers, 18 papers with code

p-Norm Flow Diffusion for Local Graph Clustering

1 code implementation ICML 2020 Kimon Fountoulakis, Di Wang, Shenghao Yang

Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, node ranking and feature inference.

Community Detection Graph Clustering

An Empirical Study of Remote Sensing Pretraining

1 code implementation 6 Apr 2022 Di Wang, Jing Zhang, Bo Du, Gui-Song Xia, DaCheng Tao

To this end, we train different networks from scratch with the help of the largest RS scene recognition dataset up to now -- MillionAID, to obtain a series of RS pretrained backbones, including both convolutional neural networks (CNN) and vision transformers such as Swin and ViTAE, which have shown promising performance on computer vision tasks.

Change Detection Object Detection +2

High Dimensional Statistical Estimation under One-bit Quantization

no code implementations 26 Feb 2022 Junren Chen, Cheng-Long Wang, Michael K. Ng, Di Wang

Compared with data with high precision, one-bit (binary) data are preferable in many applications because of the efficiency in signal storage, processing, transmission, and enhancement of privacy.

Low-Rank Matrix Completion Quantization

Differentially Private $\ell_1$-norm Linear Regression with Heavy-tailed Data

no code implementations 10 Jan 2022 Di Wang, Jinhui Xu

Firstly, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment.

VDPC: Variational Density Peak Clustering Algorithm

no code implementations 29 Dec 2021 Yizhang Wang, Di Wang, You Zhou, Xiaofeng Zhang, Chai Quek

Furthermore, we divide all data points into different levels according to their local density and propose a unified clustering framework by combining the advantages of both DPC and DBSCAN.
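For readers unfamiliar with the density-level splitting described above, the following minimal Python sketch shows a generic DPC-style local-density computation and a quantile-based level assignment. The function names and the splitting rule are illustrative assumptions, not the authors' VDPC algorithm.

```python
import numpy as np

def local_density(X, d_c):
    """DPC-style local density: number of neighbours within cutoff distance d_c."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    return (dists < d_c).sum(axis=1) - 1          # subtract the self-count

def density_levels(rho, n_levels=3):
    """Split points into density levels by quantile (illustrative rule only)."""
    edges = np.quantile(rho, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(rho, edges)                # level index 0 .. n_levels-1

X = np.random.rand(200, 2)                        # toy 2-D points
levels = density_levels(local_density(X, d_c=0.1))
```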

A Survey of Large-Scale Deep Learning Serving System Optimization: Challenges and Opportunities

no code implementations 28 Nov 2021 Fuxun Yu, Di Wang, Longfei Shangguan, Minjia Zhang, Xulong Tang, ChenChen Liu, Xiang Chen

With both scaling trends, new problems and challenges emerge in DL inference serving systems, which are gradually trending towards large-scale deep learning serving systems (LDS).

Fed2: Feature-Aligned Federated Learning

no code implementations 28 Nov 2021 Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen

Federated learning learns from scattered data by fusing collaborative models from local nodes.

Federated Learning
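As context for the model-fusion step mentioned in the Fed2 entry above, here is a minimal FedAvg-style sketch that fuses collaborative models by dataset-size-weighted averaging. Fed2's feature-aligned fusion is more structured than this, so treat the code as a generic illustration only.

```python
import numpy as np

def fedavg(local_weights, local_sizes):
    """Weight each client's parameters by its local dataset size and average."""
    total = sum(local_sizes)
    fused = {}
    for name in local_weights[0]:
        fused[name] = sum(w[name] * (n / total)
                          for w, n in zip(local_weights, local_sizes))
    return fused

# Two toy clients, each contributing a single-layer model
clients = [{"w": np.ones((2, 2)) * 1.0}, {"w": np.ones((2, 2)) * 3.0}]
global_model = fedavg(clients, local_sizes=[100, 300])   # fused "w" == 2.5
```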

Carousel Memory: Rethinking the Design of Episodic Memory for Continual Learning

no code implementations 14 Oct 2021 Soobee Lee, Minindu Weerakoon, Jonghyun Choi, Minjia Zhang, Di Wang, Myeongjae Jeon

In particular, in mobile and IoT devices, real-time data can be stored not just in high-speed RAMs but in internal storage devices as well, which offer significantly larger capacity than the RAMs.

Continual Learning

Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification

no code implementations ACL 2021 Xuepeng Wang, Li Zhao, Bing Liu, Tao Chen, Feng Zhang, Di Wang

In this paper, we propose a novel concept-based label embedding method that can explicitly represent the concept and model the sharing mechanism among classes for the hierarchical text classification.

Text Classification

PLOME: Pre-training with Misspelled Knowledge for Chinese Spelling Correction

1 code implementation ACL 2021 Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, Di Wang

In this paper, we propose a Pre-trained masked Language model with Misspelled knowledgE (PLOME) for CSC, which jointly learns how to understand language and correct spelling errors.

Language Modelling Spelling Correction

Faster Rates of Private Stochastic Convex Optimization

no code implementations 31 Jul 2021 Jinyan Su, Lijie Hu, Di Wang

Specifically, we first show that under some mild assumptions on the loss functions, there is an algorithm whose output could achieve an upper bound of $\tilde{O}((\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log \frac{1}{\delta}}}{n\epsilon})^{\frac{\theta}{\theta-1}})$ for $(\epsilon, \delta)$-DP when $\theta\geq 2$, where $n$ is the sample size and $d$ is the dimension of the space.
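For orientation, instantiating the quoted bound at $\theta = 2$ is pure algebra on the stated expression; the assumptions under which the bound actually holds are given in the paper.

```latex
% Setting \theta = 2 in the excess-risk bound quoted above:
\left(\frac{1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n\epsilon}\right)^{\frac{\theta}{\theta-1}}
\Bigg|_{\theta=2}
= \left(\frac{1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n\epsilon}\right)^{2}
\le \frac{2}{n} + \frac{2\, d\log(1/\delta)}{n^{2}\epsilon^{2}},
% using (a+b)^2 \le 2(a^2 + b^2); constants are absorbed by \tilde{O}(\cdot).
```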

High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data

no code implementations 23 Jul 2021 Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang

To better understand the challenges arising from irregular data distribution, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space.

Sparse Learning Stochastic Optimization

Spectral-Spatial Graph Reasoning Network for Hyperspectral Image Classification

no code implementations 26 Jun 2021 Di Wang, Bo Du, Liangpei Zhang

Finally, by combining the extracted spatial and spectral graph contexts, we obtain the SSGRN, which achieves high classification accuracy.

Classification Hyperspectral Image Classification

UniKeyphrase: A Unified Extraction and Generation Framework for Keyphrase Prediction

1 code implementation Findings (ACL) 2021 Huanqin Wu, Wei Liu, Lei LI, Dan Nie, Tao Chen, Feng Zhang, Di Wang

The Keyphrase Prediction (KP) task aims at predicting several keyphrases that can summarize the main idea of the given document.

Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits

no code implementations 4 Jun 2021 Youming Tao, Yulian Wu, Peng Zhao, Di Wang

Finally, we establish the lower bound to show that the instance-dependent regret of our improved algorithm is optimal.

Multi-Armed Bandits

$\ell_2$-norm Flow Diffusion in Near-Linear Time

no code implementations 30 May 2021 Li Chen, Richard Peng, Di Wang

Diffusion is a fundamental graph procedure and has been a basic building block in a wide range of theoretical and empirical applications such as graph partitioning and semi-supervised learning on graphs.

Graph Clustering Graph Learning +2
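As a generic illustration of diffusion as a graph primitive (the paper's flow diffusion solves an $\ell_2$ optimization problem rather than iterating a random walk), a toy PageRank-style diffusion from a seed set might look like this sketch; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def diffuse(adj, seed_nodes, steps=10, alpha=0.15):
    """Lazy random-walk diffusion: spread mass along edges while repeatedly
    re-injecting a fraction alpha of the mass at the seed set."""
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]                     # row-stochastic transition matrix
    s = np.zeros(adj.shape[0])
    s[seed_nodes] = 1.0 / len(seed_nodes)
    x = s.copy()
    for _ in range(steps):
        x = alpha * s + (1 - alpha) * (P.T @ x)
    return x                                   # high mass ~ nodes near the seeds

# Path graph on 5 nodes, seeded at node 0
adj = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
scores = diffuse(adj, seed_nodes=[0])
```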

GSA-Forecaster: Forecasting Graph-Based Time-Dependent Data with Graph Sequence Attention

no code implementations 13 Apr 2021 Yang Li, Di Wang, José M. F. Moura

This task is challenging as models need not only to capture spatial dependency and temporal dependency within the data, but also to leverage useful auxiliary information for accurate predictions.

3DMNDT: 3D multi-view registration method based on the normal distributions transform

no code implementations 20 Mar 2021 Jihua Zhu, Di Wang, Jiaxi Mu, Huimin Lu, Zhiqiang Tian, Zhongyu Li

Under the NDT framework, this paper proposes a novel multi-view registration method, named 3D multi-view registration based on the normal distributions transform (3DMNDT), which integrates the K-means clustering and Lie algebra solver to achieve multi-view registration.

Minimum Cost Flows, MDPs, and $\ell_1$-Regression in Nearly Linear Time for Dense Instances

no code implementations 14 Jan 2021 Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang

In the special case of the minimum cost flow problem on $n$-vertex $m$-edge graphs with integer polynomially-bounded costs and capacities, we obtain a randomized method which solves the problem in $\tilde{O}(m+n^{1.5})$ time.

Data Structures and Algorithms Optimization and Control

CARE: Commonsense-Aware Emotional Response Generation with Latent Concepts

no code implementations 15 Dec 2020 Peixiang Zhong, Di Wang, Pengfei Li, Chen Zhang, Hao Wang, Chunyan Miao

Experimental results on two large-scale datasets support our hypothesis and show that our model can produce more accurate and commonsense-aware emotional responses and achieve better human ratings than state-of-the-art models that only specialize in one aspect.

Response Generation

Third ArchEdge Workshop: Exploring the Design Space of Efficient Deep Neural Networks

no code implementations 22 Nov 2020 Fuxun Yu, Dimitrios Stamoulis, Di Wang, Dimitrios Lymberopoulos, Xiang Chen

This paper gives an overview of our ongoing work on the design space exploration of efficient deep neural networks (DNNs).

Empirical Risk Minimization in the Non-interactive Local Model of Differential Privacy

no code implementations 11 Nov 2020 Di Wang, Marco Gaboardi, Adam Smith, Jinhui Xu

In our second attempt, we show that for any $1$-Lipschitz generalized linear convex loss function, there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$.

Deep Learning Analysis and Age Prediction from Shoeprints

1 code implementation 7 Nov 2020 Muhammad Hassan, Yan Wang, Di Wang, Daixi Li, Yanchun Liang, You Zhou, Dong Xu

We collected 100,000 shoeprints of subjects ranging from 7 to 80 years old and used the data to develop a deep learning end-to-end model ShoeNet to analyze age-related patterns and predict age.

Differentially Private (Gradient) Expectation Maximization Algorithm with Statistical Guarantees

no code implementations 22 Oct 2020 Di Wang, Jiahao Ding, Lijie Hu, Zejun Xie, Miao Pan, Jinhui Xu

To address this issue, we propose in this paper the first DP version of (Gradient) EM algorithm with statistical guarantees.
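The following sketch shows the generic pattern of privatizing a gradient-style update with the Gaussian mechanism (clip per-sample gradients, average, add calibrated noise). It is an illustration of the flavour of a DP gradient step under assumed parameters, not the paper's Gradient EM algorithm or its noise calibration.

```python
import numpy as np

def private_gradient_step(theta, per_sample_grads, lr, clip, sigma, rng):
    """One illustrative DP gradient step via the Gaussian mechanism."""
    n, d = per_sample_grads.shape
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy_mean = clipped.mean(axis=0) + rng.normal(0.0, sigma * clip / n, size=d)
    return theta - lr * noisy_mean

rng = np.random.default_rng(0)
theta = np.zeros(3)
grads = rng.normal(size=(1000, 3))                 # toy per-sample gradients
theta = private_gradient_step(theta, grads, lr=0.1, clip=1.0, sigma=1.0, rng=rng)
```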

On Differentially Private Stochastic Convex Optimization with Heavy-tailed Data

no code implementations ICML 2020 Di Wang, Hanshen Xiao, Srini Devadas, Jinhui Xu

For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data.
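A minimal sketch of the classic sample-and-aggregate pattern referenced above, under the simplifying assumption that the averaged block estimates have a known sensitivity bound `sens`; the paper's aggregation for heavy-tailed data is considerably more careful than this.

```python
import numpy as np

def sample_and_aggregate(data, estimator, k, epsilon, sens, rng):
    """Split data into k disjoint blocks, estimate on each block, then release
    a noisy average of the block estimates (Laplace noise scaled by sens/epsilon)."""
    blocks = np.array_split(rng.permutation(data), k)
    estimates = np.array([estimator(b) for b in blocks])
    noise = rng.laplace(0.0, sens / epsilon, size=estimates.shape[1:])
    return estimates.mean(axis=0) + noise

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=(5000, 2))        # heavy-tailed toy data
theta_hat = sample_and_aggregate(data, lambda b: b.mean(axis=0),
                                 k=50, epsilon=1.0, sens=0.05, rng=rng)
```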

Estimating Stochastic Linear Combination of Non-linear Regressions Efficiently and Scalably

no code implementations 19 Oct 2020 Di Wang, Xiangyu Guo, Chaowen Guan, Shi Li, Jinhui Xu

To the best of our knowledge, this is the first work that studies and provides theoretical guarantees for the stochastic linear combination of non-linear regressions model.

Robust High Dimensional Expectation Maximization Algorithm via Trimmed Hard Thresholding

no code implementations 19 Oct 2020 Di Wang, Xiangyu Guo, Shi Li, Jinhui Xu

In this paper, we study the problem of estimating latent variable models with arbitrarily corrupted samples in high dimensional space (i.e., $d\gg n$) where the underlying parameter is assumed to be sparse.
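Two small helpers illustrate the ingredients named in the title: hard thresholding (keep the $s$ largest-magnitude coordinates) and a coordinate-wise trimmed mean as a stand-in for trimming corrupted samples. How the paper combines them inside its EM iterations is not shown here, and the trimming rule below is only an assumption for illustration.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude coordinates of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def trimmed_mean(samples, trim_frac=0.1):
    """Coordinate-wise trimmed mean: drop the most extreme values per coordinate."""
    lo = np.quantile(samples, trim_frac, axis=0)
    hi = np.quantile(samples, 1 - trim_frac, axis=0)
    kept = np.where((samples >= lo) & (samples <= hi), samples, np.nan)
    return np.nanmean(kept, axis=0)

v = np.array([0.1, -3.0, 0.05, 2.0, -0.2])
print(hard_threshold(v, s=2))   # -> [ 0. -3.  0.  2.  0.]
```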

Learning Robust Algorithms for Online Allocation Problems Using Adversarial Training

no code implementations 16 Oct 2020 Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar

In this paper, we focus on the AdWords problem, which is a classical online budgeted matching problem of both theoretical and practical significance.

ECG Beats Fast Classification Base on Sparse Dictionaries

1 code implementation 8 Sep 2020 Nanyu Li, Yujuan Si, Di Wang, Tong Liu, Jinrun Yu

In the VQ method, a set of dictionaries corresponding to segments of ECG beats is trained, and VQ codes are used to represent each heartbeat.

Classification Dictionary Learning +3
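To make the VQ coding step described in the entry above concrete, this sketch assigns each fixed-length segment to its nearest codeword; the codebook here is random for illustration, whereas the paper trains segment-specific dictionaries.

```python
import numpy as np

def vq_encode(segments, codebook):
    """Assign each segment to its nearest codeword (Euclidean distance);
    a heartbeat is then represented by the sequence of codeword indices."""
    d2 = ((segments[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 32))        # 16 codewords of length 32
beat_segments = rng.normal(size=(8, 32))    # one beat split into 8 segments
codes = vq_encode(beat_segments, codebook)  # 8 small integers describe the beat
```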

Heterogeneous Federated Learning

no code implementations 15 Aug 2020 Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, ChenChen Liu, Zhi Tian, Xiang Chen

Specifically, we design a feature-oriented regulation method ({$\Psi$-Net}) to ensure explicit feature information allocation in different neural network structures.

Federated Learning

AntiDote: Attention-based Dynamic Optimization for Neural Network Runtime Efficiency

no code implementations 14 Aug 2020 Fuxun Yu, ChenChen Liu, Di Wang, Yanzhi Wang, Xiang Chen

Based on the neural network attention mechanism, we propose a comprehensive dynamic optimization framework including (1) testing-phase channel and column feature map pruning, as well as (2) training-phase optimization by targeted dropout.

Raising Expectations: Automating Expected Cost Analysis with Types

no code implementations 24 Jun 2020 Di Wang, David M Kahn, Jan Hoffmann

The effectiveness of the technique is evaluated by analyzing the sample complexity of discrete distributions and with a novel average-case estimation for deterministic programs that combines expected cost analysis with statistical methods.

Programming Languages

$p$-Norm Flow Diffusion for Local Graph Clustering

2 code implementations 20 May 2020 Kimon Fountoulakis, Di Wang, Shenghao Yang

Local graph clustering and the closely related seed set expansion problem are primitives on graphs that are central to a wide range of analytic and learning tasks such as local clustering, community detection, node ranking and feature inference.

Community Detection Graph Clustering

Towards Assessment of Randomized Smoothing Mechanisms for Certifying Adversarial Robustness

no code implementations 15 May 2020 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of additive noise required by these mechanisms and the lower bounds (criteria).

Adversarial Robustness
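For context on the Gaussian mechanism being assessed in the entry above, a minimal randomized-smoothing prediction loop (noise the input, majority-vote the base classifier) looks as follows; certification radii and the paper's assessment framework are out of scope for this sketch.

```python
import numpy as np

def smoothed_predict(f, x, sigma, n_samples, rng):
    """Smoothed classifier: return the class the base classifier f predicts
    most often under isotropic Gaussian input noise."""
    counts = {}
    for _ in range(n_samples):
        y = f(x + rng.normal(0.0, sigma, size=x.shape))
        counts[y] = counts.get(y, 0) + 1
    return max(counts, key=counts.get)

f = lambda z: int(z[0] > 0)                 # toy base classifier
rng = np.random.default_rng(0)
label = smoothed_predict(f, x=np.array([0.3, -1.2]), sigma=0.5,
                         n_samples=200, rng=rng)
```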

Distributed Kernel Ridge Regression with Communications

no code implementations 27 Mar 2020 Shao-Bo Lin, Di Wang, Ding-Xuan Zhou

This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.

Learning Theory

Robust Feature-Based Point Registration Using Directional Mixture Model

no code implementations 25 Nov 2019 Saman Fahandezh-Saadi, Di Wang, Masayoshi Tomizuka

This paper presents a robust probabilistic point registration method for estimating the rigid transformation (i.e., rotation matrix and translation vector) between two point cloud datasets.

Translation
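As background for the rigid-transformation estimate mentioned in the entry above, here is the classic closed-form Kabsch/Procrustes solution assuming known correspondences; the paper's contribution is precisely to avoid that assumption via a directional mixture model, so this is background only.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate rotation R and translation t with R @ P_i + t ~= Q_i,
    given matched 3-D point sets P and Q."""
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    H = (P - mu_p).T @ (Q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_transform(P, Q)              # recovers R_true and t
```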

Unsupervised Domain Adaptation for Object Detection via Cross-Domain Semi-Supervised Learning

no code implementations 17 Nov 2019 Fuxun Yu, Di Wang, Yinpeng Chen, Nikolaos Karianakis, Tong Shen, Pei Yu, Dimitrios Lymberopoulos, Sidi Lu, Weisong Shi, Xiang Chen

In this work, we show that such adversarial-based methods can only reduce the domain style gap, but cannot address the domain content distribution gap that is shown to be important for object detectors.

Object Detection Unsupervised Domain Adaptation

Facility Location Problem in Differential Privacy Model Revisited

no code implementations NeurIPS 2019 Yunus Esencayi, Marco Gaboardi, Shi Li, Di Wang

On the negative side, we show that the approximation ratio of any $\epsilon$-DP algorithm is lower bounded by $\Omega(\frac{1}{\sqrt{\epsilon}})$, even for instances on HST metrics with uniform facility cost, under the super-set output setting.

Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data

no code implementations 1 Oct 2019 Di Wang, Lijie Hu, Huanyu Zhang, Marco Gaboardi, Jinhui Xu

Then with high probability, the sample complexity of the public and private data, for the algorithm to achieve an $\alpha$ estimation error (in $\ell_\infty$-norm), is $O(p^2\alpha^{-2})$ and $O(p^2\alpha^{-2}\epsilon^{-2})$, respectively, if $\alpha$ is not too small (i.e., $\alpha\geq \Omega(\frac{1}{\sqrt{p}})$), where $p$ is the dimensionality of the data.

Faster width-dependent algorithm for mixed packing and covering LPs

no code implementations NeurIPS 2019 Digvijay Boob, Saurabh Sawlani, Di Wang

As a special case of our result, we report a $1+\epsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\epsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.

Combinatorial Optimization

A Unified framework for randomized smoothing based certified defenses

no code implementations 25 Sep 2019 Tianhang Zheng, Di Wang, Baochun Li, Jinhui Xu

We answer the above two questions by first demonstrating that the Gaussian mechanism and the Exponential mechanism are the (near) optimal options to certify the $\ell_2$- and $\ell_\infty$-normed robustness.

YaoGAN: Learning Worst-case Competitive Algorithms from Self-generated Inputs

no code implementations 25 Sep 2019 Goran Zuzic, Di Wang, Aranyak Mehta, D. Sivakumar

To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework.

Combinatorial Optimization online learning

Heterogeneous-Temporal Graph Convolutional Networks: Make the Community Detection Much Better

no code implementations 23 Sep 2019 Yaping Zheng, Shiyi Chen, Xinni Zhang, Xiaofeng Zhang, Xiaofei Yang, Di Wang

Community detection has long been an important yet challenging task to analyze complex networks with a focus on detecting topological structures of graph data.

Community Detection

Distributed Equivalent Substitution Training for Large-Scale Recommender Systems

no code implementations 10 Sep 2019 Haidong Rong, Yangzihao Wang, Feihu Zhou, Junjie Zhai, Haiyang Wu, Rui Lan, Fan Li, Han Zhang, Yuekui Yang, Zhenyu Guo, Di Wang

We present Distributed Equivalent Substitution (DES) training, a novel distributed training framework for large-scale recommender systems with dynamic sparse features.

Recommendation Systems

Compact Autoregressive Network

no code implementations 6 Sep 2019 Di Wang, Feiqing Huang, Jingyu Zhao, Guodong Li, Guangjian Tian

Autoregressive networks can achieve promising performance in many sequence modeling tasks with short-range dependence.

EEG-Based Emotion Recognition Using Regularized Graph Neural Networks

1 code implementation 18 Jul 2019 Peixiang Zhong, Di Wang, Chunyan Miao

Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.

EEG Emotion Recognition

Single-Path Mobile AutoML: Efficient ConvNet Design and NAS Hyperparameter Optimization

1 code implementation 1 Jul 2019 Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu

In this work, we alleviate the NAS search cost down to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints.

Hyperparameter Optimization Image Classification +1

Neural Learning of Online Consumer Credit Risk

no code implementations 5 Jun 2019 Di Wang, Qi Wu, Wen Zhang

This paper takes a deep learning approach to understand consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases.

Time Series

Single-Path NAS: Device-Aware Efficient ConvNet Design

no code implementations 10 May 2019 Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu

Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the latency constraint of a mobile device?

General Classification Image Classification +1

Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours

5 code implementations 5 Apr 2019 Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu

Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device?

General Classification Image Classification +1

Density Matching for Bilingual Word Embedding

1 code implementation NAACL 2019 Chunting Zhou, Xuezhe Ma, Di Wang, Graham Neubig

Recent approaches to cross-lingual word embedding have generally been based on linear transformations between the sets of embedding vectors in the two languages.

Bilingual Lexicon Induction Word Embeddings +1
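The linear-transformation baseline referred to in the excerpt above is typically an orthogonal Procrustes fit to a seed dictionary, sketched below; the paper's density-matching approach replaces this supervised fit, so the code is only the point of comparison.

```python
import numpy as np

def procrustes_map(X_src, Y_tgt):
    """Orthogonal Procrustes: the rotation W minimizing ||X_src W - Y_tgt||_F,
    mapping source-language embeddings onto their target-language counterparts."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 300))              # source embeddings of seed pairs
W_true, _ = np.linalg.qr(rng.normal(size=(300, 300)))
Y = X @ W_true                                # aligned target embeddings (toy)
W = procrustes_map(X, Y)                      # recovers an orthogonal map ~ W_true
```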

Differentially Private High Dimensional Sparse Covariance Matrix Estimation

no code implementations 18 Jan 2019 Di Wang, Jinhui Xu

In this paper, we study the problem of estimating the covariance matrix under differential privacy, where the underlying covariance matrix is assumed to be sparse and of high dimensions.

Expander Decomposition and Pruning: Faster, Stronger, and Simpler

1 code implementation 21 Dec 2018 Thatchaphol Saranurak, Di Wang

Our result achieves both a nearly linear running time and a strong expander guarantee for the clusters.

Data Structures and Algorithms

Noninteractive Locally Private Learning of Linear Models via Polynomial Approximations

no code implementations 17 Dec 2018 Di Wang, Adam Smith, Jinhui Xu

For the case of \emph{generalized linear losses} (such as hinge and logistic losses), we give an LDP algorithm whose sample complexity is only linear in the dimensionality $p$ and quasipolynomial in other terms (the privacy parameters $\epsilon$ and $\delta$, and the desired excess risk $\alpha$).

Empirical Risk Minimization in Non-interactive Local Differential Privacy Revisited

no code implementations NeurIPS 2018 Di Wang, Marco Gaboardi, Jinhui Xu

In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy.

Tackling Adversarial Examples in QA via Answer Sentence Selection

no code implementations WS 2018 Yuanhang Ren, Ye Du, Di Wang

Given a paragraph of an article and a corresponding query, instead of directly feeding the whole paragraph to the single BiDAF system, a sentence that most likely contains the answer to the query is first selected, which is done via a deep neural network based on TreeLSTM (Tai et al., 2015).

Question Answering Reading Comprehension

Neural Machine Translation with Key-Value Memory-Augmented Attention

no code implementations 29 Jun 2018 Fandong Meng, Zhaopeng Tu, Yong Cheng, Haiyang Wu, Junjie Zhai, Yuekui Yang, Di Wang

Although attention-based Neural Machine Translation (NMT) has achieved remarkable progress in recent years, it still suffers from issues of repeating and dropping translations.

Machine Translation Translation

Differentially Private Empirical Risk Minimization Revisited: Faster and More General

no code implementations NeurIPS 2017 Di Wang, Minwei Ye, Jinhui Xu

In this paper we study the differentially private Empirical Risk Minimization (ERM) problem in different settings.

Empirical Risk Minimization in Non-interactive Local Differential Privacy: Efficiency and High Dimensional Case

no code implementations NeurIPS 2018 Di Wang, Marco Gaboardi, Jinhui Xu

In the case of constant or low dimensionality ($p\ll n$), we first show that if the ERM loss function is $(\infty, T)$-smooth, then we can avoid a dependence of the sample complexity, to achieve error $\alpha$, on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in [smith 2017 interaction].

Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning

1 code implementation 9 Feb 2018 Di Wang, Jinhui Xu

In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization.
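One standard sketch-and-precondition recipe from the sketching literature the excerpt alludes to is shown below (Gaussian sketch, QR factorization, LSQR on the preconditioned system). It is a plausible unconstrained baseline under assumed parameters, not necessarily the paper's method; the constrained variants would add projection steps on top.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def sketch_precondition_lsqr(A, b, sketch_rows=None, seed=0):
    """Sketch A, take a QR of the sketch, and use R^{-1} as a right
    preconditioner so the iterative solver converges quickly."""
    n, d = A.shape
    s = sketch_rows or 4 * d
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(s, n)) / np.sqrt(s)     # dense Gaussian sketch (illustrative)
    _, R = np.linalg.qr(S @ A)
    R_inv = np.linalg.inv(R)
    y = lsqr(A @ R_inv, b)[0]                    # well-conditioned system in y
    return R_inv @ y                             # map back: x = R^{-1} y

rng = np.random.default_rng(1)
A = rng.normal(size=(20000, 50))
x_true = rng.normal(size=50)
b = A @ x_true + 0.01 * rng.normal(size=20000)
x_hat = sketch_precondition_lsqr(A, b)
```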

Steering Output Style and Topic in Neural Response Generation

1 code implementation EMNLP 2017 Di Wang, Nebojsa Jojic, Chris Brockett, Eric Nyberg

We propose simple and flexible training and decoding methods for influencing output style and topic in neural encoder-decoder based language generation.

Response Generation Text Generation

Capacity Releasing Diffusion for Speed and Locality

no code implementations 19 Jun 2017 Di Wang, Kimon Fountoulakis, Monika Henzinger, Michael W. Mahoney, Satish Rao

Thus, our CRD Process is the first local graph clustering algorithm that is not subject to the well-known quadratic Cheeger barrier.

Graph Clustering

The Language Application Grid

no code implementations LREC 2014 Nancy Ide, James Pustejovsky, Christopher Cieri, Eric Nyberg, Di Wang, Keith Suderman, Marc Verhagen, Jonathan Wright

The Language Application (LAPPS) Grid project is establishing a framework that enables language service discovery, composition, and reuse and promotes sustainability, manageability, usability, and interoperability of natural language processing (NLP) components.

Machine Translation Question Answering +1

Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors

no code implementations NeurIPS 2013 Xiaoqin Zhang, Di Wang, Zhengyuan Zhou, Yi Ma

In this context, the state-of-the-art algorithms "RASL" and "TILT" can be viewed as two special cases of our work, and yet each only performs part of the function of our method.
