no code implementations • 26 May 2022 • Wei Dai, Chuanfeng Ning, Shiyu Pei, Song Zhu, Xuesong Wang
As a randomized learner model, SCNs are remarkable in that their random weights and biases are assigned through a supervisory mechanism to ensure universal approximation and fast learning.
no code implementations • 23 May 2022 • Wei Dai, Mingcheng Zhang, Kin Huat Low
In this study, a framework for power consumption modeling of eVTOL aircraft was established.
2 code implementations • 9 May 2022 • Wei Dai, Rui Liu, Tianyi Wu, Min Wang, Jianqin Yin, Jun Liu
Accurate and unbiased examinations of skin lesions are critical for the early diagnosis and treatment of skin conditions and disorders.
no code implementations • 20 Apr 2022 • Ji Liu, Zheng Xu, Yanmei Zhang, Wei Dai, Hao Wu, Shiping Chen
Since the emergence of blockchain technology, its application in the financial market has always been an area of focus and exploration for all parties.
1 code implementation • 17 Mar 2022 • Hejie Cui, Wei Dai, Yanqiao Zhu, Xuan Kan, Antonio Aodong Chen Gu, Joshua Lukemire, Liang Zhan, Lifang He, Ying Guo, Carl Yang
To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs.
no code implementations • 11 Mar 2022 • Yanshuang Ao, Xinyu Zhou, Wei Dai
This novel algorithm leverages privileged information in SCN during the training stage, providing a new method to train SCNs.
no code implementations • 2 Mar 2022 • Wei Dai, Daniel Berleant
We created 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions.
no code implementations • 25 Oct 2021 • Cheng Cheng, Wei Dai
Dictionary learning aims at seeking a dictionary under which the training data can be sparsely represented.
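For illustration, the objective stated above can be sketched on a toy problem. This is a minimal, generic sketch (not any specific paper's algorithm): signals are built from 1-sparse codes under a random unit-norm dictionary, and a simple best-matching-atom coder recovers a sparse representation.

```python
import numpy as np

# Minimal sketch of the dictionary learning objective: represent data X
# sparsely under a dictionary D, i.e. make ||X - D A||_F small with sparse
# codes A. Here every signal uses exactly one atom (1-sparse codes).
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 4))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
A_true = np.zeros((4, 20))
for j in range(20):                             # each signal uses one atom
    A_true[rng.integers(4), j] = rng.normal()
X = D @ A_true

def sparse_code(X, D):
    """1-sparse coding: pick the best-matching atom for each column of X."""
    corr = D.T @ X
    idx = np.abs(corr).argmax(axis=0)
    A = np.zeros((D.shape[1], X.shape[1]))
    cols = np.arange(X.shape[1])
    A[idx, cols] = corr[idx, cols]
    return A

A = sparse_code(X, D)
residual = np.linalg.norm(X - D @ A)
print(residual)  # near zero: X is sparsely represented under D
```

Because the atoms have unit norm and distinct random directions, the best-matching atom is always the true one, so the residual is numerically zero.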
no code implementations • 13 Oct 2021 • Cheng Cheng, Wei Dai
Typical methods for dictionary update focus on refining both dictionary atoms and their corresponding sparse coefficients using the sparsity patterns obtained from the sparse coding stage, which makes dictionary update a non-convex bilinear inverse problem.
no code implementations • 5 Oct 2021 • Cheng Cheng, Wei Dai
In the literature, formulations of blind deconvolution are either convex programs obtained via a matrix lifting of the convolution, or the bilinear Lasso.
no code implementations • 12 Sep 2021 • Wei Dai, Boyeong Woo, Siyu Liu, Matthew Marques, Craig B. Engstrom, Peter B. Greer, Stuart Crozier, Jason A. Dowling, Shekhar S. Chandra
Direct automatic segmentation of objects from 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging as it often involves accurately identifying a number of individual objects with complex geometries within a large volume under investigation.
no code implementations • 27 Jul 2021 • Wei Dai, Bizhao Pang, Kin Huat Low
This paper aims at tackling the conflict-free path planning problem for UAM operation with consideration of four-dimensional airspace management.
no code implementations • 11 Jul 2021 • Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, Carl Yang
Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience.
no code implementations • 1 Jul 2021 • Wei Dai, Yuan An, Wen Long
Through in-depth analysis of ultra-high-frequency (UHF) stock price change data, this paper constructs more reasonable discrete dynamic distribution models.
no code implementations • 26 May 2021 • Yong Shi, Wei Dai, Wen Long, Bo Li
However, to the best of our knowledge, the deep kernel Gaussian Process has not been applied to forecast conditional returns and volatility in financial markets.
no code implementations • 13 Mar 2021 • Pankaj Topiwala, Wei Dai, Jiangfeng Pian, Katalina Biondi, Arvind Krovvidi
We investigate variants of the popular VMAF video quality assessment algorithm for the FR case, using both support vector regression and feedforward neural networks.
2 code implementations • 2 Mar 2021 • Wei Dai, Daniel Berleant
Also, we introduce a new four-quadrant statistical visualization tool, including minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking the robustness of DL classifiers.
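The four summary statistics named above are straightforward to compute. A minimal sketch follows; the function name and the accuracy values are made up for illustration.

```python
import statistics

# Sketch of the four summary statistics behind the four-quadrant
# visualization: min, max, mean accuracy, and coefficient of variation
# across perturbed test sets.
def robustness_summary(accuracies):
    mean = statistics.mean(accuracies)
    return {
        "min_accuracy": min(accuracies),
        "max_accuracy": max(accuracies),
        "mean_accuracy": mean,
        "coefficient_of_variation": statistics.pstdev(accuracies) / mean,
    }

# Hypothetical accuracies of one classifier across perturbed image sets:
summary = robustness_summary([0.91, 0.85, 0.78, 0.88])
print(summary["mean_accuracy"])  # ≈ 0.855
```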
no code implementations • 7 Jan 2021 • Yong Shi, Wei Dai, Wen Long, Bo Li
In the input sequence, the temporal positions that are most important for predicting the next duration can be efficiently highlighted via the added attention mechanism layer.
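The highlighting mechanism can be sketched with a softmax over per-position scores; the scores below are hypothetical stand-ins for learned relevance to the next duration.

```python
import math

# Sketch of attention weighting over temporal positions: a softmax turns
# relevance scores into importance weights that sum to one.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [0.1, 2.0, 0.3, 1.5]       # one score per temporal position
weights = softmax(scores)
print(weights.index(max(weights)))  # prints 1: that position is highlighted
```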
no code implementations • 29 Sep 2020 • Wenjia Xu, Jiuniu Wang, Yang Wang, Guangluan Xu, Wei Dai, Yirong Wu
We generate attribute-based textual explanations for the network and ground the attributes on the image to show visual explanations.
1 code implementation • 28 Jun 2020 • Siyu Liu, Wei Dai, Craig Engstrom, Jurgen Fripp, Peter B. Greer, Stuart Crozier, Jason A. Dowling, Shekhar S. Chandra
In this work, a novel 3D segmentation network, Fabric Image Representation Networks (FIRENet), is proposed to extract and encode generalisable feature representations from multiple medical image datasets in a large-scale manner.
1 code implementation • ICML 2020 • Jingwei Zhuo, Ziru Xu, Wei Dai, Han Zhu, Han Li, Jian Xu, Kun Gai
Retrieving relevant targets from an extremely large target set under computational limits is a common challenge for information retrieval and recommendation systems.
2 code implementations • 27 Dec 2019 • Roshan Dathathri, Blagovesta Kostova, Olli Saarikivi, Wei Dai, Kim Laine, Madanlal Musuvathi
We believe that EVA would enable a wider adoption of FHE by making it easier to develop FHE applications and domain-specific FHE compilers.
no code implementations • 20 Sep 2019 • M. Sadegh Riazi, Kim Laine, Blake Pelton, Wei Dai
Building on top of the NTT engine, we design a novel architecture for computation on homomorphically encrypted data.
no code implementations • IJCNLP 2019 • Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, Yadong Ding
Judgment prediction for legal cases has attracted considerable research effort for its practical use, of which the ultimate goal is prison term prediction.
1 code implementation • 5 Jul 2019 • Wei Dai, Daniel Berleant
This paper surveys benchmarking principles, machine learning devices including GPUs, FPGAs, and ASICs, and deep learning software frameworks.
1 code implementation • 24 Jun 2019 • Qi Yu, Wei Dai, Zoran Cvetkovic, Jubo Zhu
BLOTLESS updates a block of dictionary elements and the corresponding sparse coefficients simultaneously.
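One way a block of atoms and their coefficients can be refit jointly is via a truncated SVD of the residual. The sketch below illustrates the simultaneous-block-update idea only; it is not necessarily the exact BLOTLESS procedure, and all data are randomly generated.

```python
import numpy as np

# Hedged sketch of a simultaneous block update: refit a block of dictionary
# atoms and the matching coefficients from the residual left by the other
# atoms, using a truncated SVD.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 30))                   # training data
D = rng.normal(size=(10, 5))                    # current dictionary
A = rng.normal(size=(5, 30))                    # current coefficients
block = [0, 1]                                  # atoms updated jointly
rest = [i for i in range(5) if i not in block]

E = X - D[:, rest] @ A[rest, :]                 # residual without the block
U, s, Vt = np.linalg.svd(E, full_matrices=False)
k = len(block)
D[:, block] = U[:, :k]                          # new atoms (orthonormal)
A[block, :] = np.diag(s[:k]) @ Vt[:k, :]        # matching coefficients

print(np.linalg.norm(X - D @ A) < np.linalg.norm(E))  # True: error reduced
```

The truncated SVD gives the best rank-k fit to the residual, so the joint update can only decrease the approximation error.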
no code implementations • 16 Oct 2018 • Wei Dai, Kenji Yoshigoe, William Parsley
Traditional data quality control methods are based on users' experience or previously established business rules, which limits performance and makes the process very time-consuming, with lower-than-desirable accuracy.
no code implementations • ICLR 2019 • Wei Dai, Yi Zhou, Nanqing Dong, Hao Zhang, Eric P. Xing
Many distributed machine learning (ML) systems adopt the non-synchronous execution in order to alleviate the network communication bottleneck, resulting in stale parameters that do not reflect the latest updates.
no code implementations • 29 Jul 2018 • Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Zeya Wang, Wei Dai, Eric P. Xing
Motivated by the zoom-in operation of a pathologist using a digital microscope, RAZN learns a policy network to decide whether zooming is required in a given region of interest.
no code implementations • 10 Jul 2018 • Nanqing Dong, Michael Kampffmeyer, Xiaodan Liang, Zeya Wang, Wei Dai, Eric P. Xing
Specifically, we propose a model that enforces our intuition that prediction masks should be domain independent.
no code implementations • 11 Dec 2017 • Hao Zhang, Shizhen Xu, Graham Neubig, Wei Dai, Qirong Ho, Guangwen Yang, Eric P. Xing
Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes every example, such as sequences of variable lengths, trees, and graphs.
no code implementations • ICCV 2017 • Xiaodan Liang, Lisa Lee, Wei Dai, Eric P. Xing
To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the future-flow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows.
no code implementations • 11 Jun 2017 • Hao Zhang, Zeyu Zheng, Shizhen Xu, Wei Dai, Qirong Ho, Xiaodan Liang, Zhiting Hu, Jinliang Wei, Pengtao Xie, Eric P. Xing
We show that Poseidon enables Caffe and TensorFlow to achieve 15.5x speed-up on 16 single-GPU machines, even with limited bandwidth (10GbE) and the challenging VGG19-22K network for image classification.
no code implementations • 26 Mar 2017 • Wei Dai, Joseph Doyle, Xiaodan Liang, Hao Zhang, Nanqing Dong, Yuan Li, Eric P. Xing
Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes.
1 code implementation • 20 Mar 2017 • Juncheng Li, Wei Dai, Florian Metze, Shuhui Qu, Samarjit Das
On these features, we apply five models: Gaussian Mixture Model (GMM), Deep Neural Network (DNN), Recurrent Neural Network (RNN), Convolutional Deep Neural Network (CNN) and i-vector.
no code implementations • 29 Nov 2016 • Shuhui Qu, Juncheng Li, Wei Dai, Samarjit Das
Based on the procedure of log Mel-filter banks, we design a filter bank learning layer.
8 code implementations • 1 Oct 2016 • Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, Samarjit Das
Our CNNs, with up to 34 weight layers, are efficient to optimize over very long sequences (e.g., vectors of size 32000), necessary for processing acoustic waveforms.
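The basic operation such networks stack is a strided 1D convolution over the raw waveform, which shrinks very long sequences quickly. A minimal sketch (toy waveform and filter values, not the paper's learned filters):

```python
# Strided 1D convolution over a raw waveform: each output is a dot product
# of the kernel with a window of the signal, windows advancing by `stride`.
def conv1d(signal, kernel, stride):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

waveform = [0.0, 1.0, 0.0, -1.0] * 8000          # length-32000 toy waveform
out = conv1d(waveform, kernel=[0.5, 0.5], stride=4)
print(len(out))  # 8000: stride 4 reduces the sequence length fourfold
```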
no code implementations • 31 Dec 2015 • Eric P. Xing, Qirong Ho, Pengtao Xie, Wei Dai
Taking the view that Big ML systems can benefit greatly from ML-rooted statistical and algorithmic insights --- and that ML researchers should therefore not shy away from such systems design --- we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions.
1 code implementation • 4 Dec 2014 • Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric P. Xing, Tie-Yan Liu, Wei-Ying Ma
When building large-scale machine learning (ML) programs, such as big topic models or deep neural nets, one usually assumes such tasks can only be attempted with industrial-sized clusters with thousands of nodes, which are out of reach for most practitioners or academic researchers.
no code implementations • 29 Oct 2014 • Wei Dai, Abhimanu Kumar, Jinliang Wei, Qirong Ho, Garth Gibson, Eric P. Xing
As Machine Learning (ML) applications increase in data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands.
no code implementations • 22 Sep 2014 • Yu-Xiang Wang, Veeranjaneyulu Sadhanala, Wei Dai, Willie Neiswanger, Suvrit Sra, Eric P. Xing
We develop parallel and distributed Frank-Wolfe algorithms; the former on shared memory machines with mini-batching, and the latter in a delayed update framework.
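The serial iteration that these parallel and delayed-update variants build on can be sketched on a toy problem: Frank-Wolfe on the probability simplex with a quadratic objective (both the objective and the data below are illustrative, not from the paper).

```python
# Toy serial Frank-Wolfe loop on the probability simplex, minimizing
# f(x) = ||x - c||^2. Each step solves a linear minimization oracle (LMO)
# over the feasible set, then moves toward the returned vertex.
def frank_wolfe(c, steps=500):
    n = len(c)
    x = [1.0 / n] * n                             # start at simplex center
    for t in range(steps):
        grad = [2 * (x[i] - c[i]) for i in range(n)]
        s = min(range(n), key=lambda i: grad[i])  # LMO: best simplex vertex
        gamma = 2.0 / (t + 2)                     # standard FW step size
        x = [(1 - gamma) * xi for xi in x]        # convex combination step
        x[s] += gamma
    return x

x = frank_wolfe([0.7, 0.2, 0.1])
print(x)  # close to [0.7, 0.2, 0.1]
```

Each iterate stays in the simplex by construction, which is what makes the method attractive for constrained problems and for the mini-batched and delayed-update settings studied in the paper.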
no code implementations • 30 Dec 2013 • Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yao-Liang Yu
What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)?
no code implementations • 30 Dec 2013 • Jinliang Wei, Wei Dai, Abhimanu Kumar, Xun Zheng, Qirong Ho, Eric P. Xing
Many ML algorithms fall into the category of iterative convergent algorithms, which start from a randomly chosen initial point and converge to optima by iteratively repeating a set of procedures.
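A minimal instance of an iterative convergent algorithm in this sense is gradient descent on a toy objective (the objective and constants below are illustrative): a random initial point plus a repeated update rule.

```python
import random

# Iterative convergent algorithm sketch: gradient descent on f(x) = (x - 3)^2,
# starting from a random point and repeating the same update until convergence.
random.seed(42)
x = random.uniform(-10, 10)      # randomly chosen initial point
for _ in range(100):
    grad = 2 * (x - 3)           # the repeated procedure: a gradient step
    x -= 0.1 * grad
print(abs(x - 3) < 1e-6)         # True: converged to the optimum
```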