no code implementations • 11 Jun 2024 • Ling He, Yanxin Chen, Wenqi Wang, Shuting He, Xiaoqiang Hu
This study employs cutting-edge wearable monitoring technology to conduct high-precision, high-temporal-resolution (1-second interval) cognitive load assessment on electroencephalogram (EEG) data from the FP1 channel and heart rate variability (HRV) data of secondary vocational students.
1 code implementation • 15 Jan 2024 • Wenqi Wang, Jacob Bieker, Rossella Arcucci, César Quilodrán-Casas
In recent years, the convergence of data-driven machine learning models with Data Assimilation (DA) offers a promising avenue for enhancing weather forecasting.
no code implementations • CVPR 2024 • Yizhi Wang, Wallace Lira, Wenqi Wang, Ali Mahdavi-Amiri, Hao Zhang
For the former, we inject a learnable slice indicator code to assign each decoded image to a spatial slice location, while the slice generator is a denoising diffusion model operating on the entirety of slice images stacked on the input channels.
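A minimal sketch of how such a learnable slice indicator might be injected, assuming a PyTorch decoder whose feature maps are conditioned on an integer slice index (module names and dimensions here are hypothetical, not the paper's implementation):

```python
import torch
import torch.nn as nn

class SliceConditioner(nn.Module):
    """Adds a learnable per-slice embedding to decoder features (illustrative sketch)."""
    def __init__(self, num_slices: int, feat_dim: int):
        super().__init__()
        # one learnable "slice indicator code" per spatial slice location
        self.slice_embed = nn.Embedding(num_slices, feat_dim)

    def forward(self, feats: torch.Tensor, slice_idx: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) decoder features, slice_idx: (B,) integer slice locations
        code = self.slice_embed(slice_idx)        # (B, C)
        return feats + code[:, :, None, None]     # broadcast over spatial dims

# usage: condition a batch of decoder features on slice positions
cond = SliceConditioner(num_slices=8, feat_dim=64)
feats = torch.randn(4, 64, 32, 32)
out = cond(feats, torch.tensor([0, 2, 5, 7]))
```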
no code implementations • 3 Dec 2023 • Yizhi Wang, Wallace Lira, Wenqi Wang, Ali Mahdavi-Amiri, Hao Zhang
Our key observation is that object slicing is more advantageous than altering views to reveal occluded structures.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
no code implementations • 15 Jul 2023 • Ze Lu, Yalei Lv, Wenqi Wang, Pengfei Xiong
Specifically, we introduce an extra Frequency Branch and a Frequency Loss on top of the spatial-based network to impose direct supervision on the frequency information, and propose a Frequency-Spatial Cross-Attention Block (FSCAB) to fuse multi-domain features and combine their corresponding characteristics.
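The paper's exact loss is not reproduced here; as an illustration, a generic frequency-domain supervision term can be computed on 2-D FFT spectra of the prediction and target:

```python
import torch

def frequency_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Illustrative frequency-domain loss: L1 distance between 2-D FFT spectra.

    pred, target: (B, C, H, W) image tensors. A generic sketch, not the paper's
    exact Frequency Loss formulation.
    """
    pred_fft = torch.fft.fft2(pred, norm="ortho")
    target_fft = torch.fft.fft2(target, norm="ortho")
    # magnitude of the complex difference penalizes both amplitude and phase errors
    return (pred_fft - target_fft).abs().mean()
```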
no code implementations • 20 Nov 2019 • Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin
We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks into the same framework.
no code implementations • ICLR 2020 • Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin
To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate user feedback.
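Conceptually, the loop pairs a recommender with a simulated user that produces feedback on its own recommendations. A toy sketch under that reading, with a matrix-factorization recommender and a fixed simulator standing in for the paper's learned feedback generator (all names and details are hypothetical):

```python
import torch
import torch.nn as nn

n_users, n_items, dim = 100, 500, 16
user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)
opt = torch.optim.Adam(list(user_emb.parameters()) + list(item_emb.parameters()), lr=1e-2)

hidden_pref = torch.randn(n_users, n_items)  # hidden preferences used only to simulate feedback

for step in range(100):
    users = torch.randint(0, n_users, (32,))
    scores = user_emb(users) @ item_emb.weight.T          # (32, n_items) predicted preferences
    recs = scores.topk(10, dim=-1).indices                 # recommended items
    feedback = hidden_pref[users.unsqueeze(1), recs]       # synthetic feedback on the recommendations
    # fit predicted scores on recommended items to the simulated feedback
    loss = ((scores.gather(1, recs) - feedback) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```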
no code implementations • 20 Oct 2019 • Wenlin Wang, Hongteng Xu, Guoyin Wang, Wenqi Wang, Lawrence Carin
Specifically, we build a conditional generative model to generate features from seen-class attributes, and establish an optimal transport between the distribution of the generated features and that of the real features.
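As a rough sketch, an entropic optimal-transport cost between a batch of generated features and a batch of real features can be computed with a few Sinkhorn iterations (a generic illustration, not the paper's exact formulation):

```python
import numpy as np

def sinkhorn_ot(gen_feats: np.ndarray, real_feats: np.ndarray,
                reg: float = 0.1, n_iter: int = 200) -> float:
    """Entropic OT cost between two feature sets (rows are samples)."""
    n, m = len(gen_feats), len(real_feats)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)            # uniform marginals
    # squared Euclidean cost matrix between generated and real features
    C = ((gen_feats[:, None, :] - real_feats[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                                     # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                             # transport plan
    return float((P * C).sum())

# example: OT cost between random "generated" and "real" feature batches
cost = sinkhorn_ot(np.random.randn(64, 32), np.random.randn(80, 32))
```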
no code implementations • 28 Apr 2019 • Kai Wang, Fuyuan Shi, Wenqi Wang, Yibing Nan, Shiguo Lian
This paper presents an improved scheme for the generation and adaptation of synthetic images for the training of deep Convolutional Neural Networks (CNNs) to perform the object detection task in smart vending machines.
no code implementations • 12 Feb 2019 • Wenqi Wang, Run Wang, Lina Wang, Zhibo Wang, Aoshuang Ye
Recently, studies have revealed adversarial examples in the text domain, which can effectively evade various DNN-based text analyzers and further raise the threat of the proliferation of disinformation.
no code implementations • 13 Mar 2018 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron
Tensor train is a hierarchical tensor network structure that helps alleviate the curse of dimensionality by parameterizing large-scale multidimensional data via a network of low-rank tensors.
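For reference, the tensor-train format stores a d-way tensor as a chain of 3-way cores; a minimal NumPy sketch of reconstructing the full tensor from such cores (illustrative of the format only):

```python
import numpy as np

def tt_to_full(cores):
    """Contract TT cores G_k of shape (r_{k-1}, n_k, r_k) into the full tensor (r_0 = r_d = 1)."""
    full = cores[0]                                    # (1, n_1, r_1)
    for core in cores[1:]:
        # contract the trailing rank index of `full` with the leading rank index of `core`
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))                  # drop the boundary rank-1 modes

# example: random TT cores for a 3-way tensor of shape (4, 5, 6) with TT ranks (1, 2, 3, 1)
cores = [np.random.randn(1, 4, 2), np.random.randn(2, 5, 3), np.random.randn(3, 6, 1)]
full = tt_to_full(cores)   # shape (4, 5, 6)
```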
no code implementations • CVPR 2018 • Wenqi Wang, Yifan Sun, Brian Eriksson, Wenlin Wang, Vaneet Aggarwal
Deep neural networks have demonstrated state-of-the-art performance in a variety of real-world applications.
no code implementations • 28 Dec 2017 • Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, Lawrence Carin
The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-of-Experts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence.
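As an illustration of the mixture-of-experts idea (document topic probabilities weighting per-topic recurrent experts), a simplified PyTorch sketch follows; it mixes expert outputs rather than reproducing the paper's exact architecture, and all sizes are hypothetical:

```python
import torch
import torch.nn as nn

class TopicMoELM(nn.Module):
    """Simplified sketch: per-topic GRU experts mixed by document topic probabilities."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256, n_topics=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.experts = nn.ModuleList(nn.GRU(emb_dim, hidden, batch_first=True)
                                     for _ in range(n_topics))
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, topic_probs):
        # tokens: (B, T) word ids; topic_probs: (B, n_topics) from a neural topic model
        x = self.embed(tokens)
        expert_out = torch.stack([expert(x)[0] for expert in self.experts], dim=1)  # (B, K, T, H)
        mixed = (topic_probs[:, :, None, None] * expert_out).sum(dim=1)             # (B, T, H)
        return self.out(mixed)                                                      # next-word logits

model = TopicMoELM()
logits = model(torch.randint(0, 10000, (2, 12)), torch.softmax(torch.randn(2, 20), dim=-1))
```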
no code implementations • 3 Dec 2017 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron
In this paper, we propose a Tensor Train Neighborhood Preserving Embedding (TTNPE) to embed multi-dimensional tensor data into a low-dimensional tensor subspace.
no code implementations • ICCV 2017 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron
Using the matrix product state (MPS) representation of the recently proposed tensor ring decompositions, in this paper we propose a tensor completion algorithm based on alternating minimization over the factors in the MPS representation.
no code implementations • 14 Nov 2016 • Wenlin Wang, Changyou Chen, Wenqi Wang, Piyush Rai, Lawrence Carin
Unlike most existing methods for early classification of time series data, which are designed under the assumption that a good set of pre-defined (often hand-crafted) features is available, our framework can jointly perform feature learning (by learning a deep hierarchy of shapelets capturing the salient characteristics of each time series), along with a dynamic truncation model that helps our deep feature learning architecture focus on the early parts of each time series.
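As background, a shapelet-based feature is typically the minimum distance between a short pattern and all sliding windows of a series, which can also be evaluated on a truncated prefix for early classification. A small NumPy illustration (not the paper's deep architecture; the shapelet here is fixed rather than learned):

```python
import numpy as np

def shapelet_distance(series: np.ndarray, shapelet: np.ndarray) -> float:
    """Minimum Euclidean distance between a shapelet and all sliding windows of a series."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)   # (T - L + 1, L)
    return float(np.sqrt(((windows - shapelet) ** 2).sum(axis=1)).min())

# early classification: evaluate the shapelet feature on a truncated prefix of the series
series = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * np.random.randn(200)
shapelet = np.sin(np.linspace(0, np.pi, 20))
feature_full = shapelet_distance(series, shapelet)
feature_early = shapelet_distance(series[:60], shapelet)            # first 30% only
```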
no code implementations • 15 Oct 2016 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron
Similar to the Union of Subspaces (UOS) model, where each data point in each subspace is generated from an (unknown) basis, in the UOPC model each data point in each cone is assumed to be generated from a finite number of (unknown) extreme rays. To cluster data under this model, we consider several algorithms: (a) Sparse Subspace Clustering by Non-negative constraints Lasso (NCL), (b) Least Squares Approximation (LSA), and (c) the K-nearest neighbor (KNN) algorithm, to arrive at affinities between data points.
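For instance, the KNN-based affinity can be realized by building a symmetrized k-nearest-neighbor graph and feeding it to spectral clustering; a brief scikit-learn sketch (illustrative, not the paper's exact setup):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def knn_affinity_clustering(X: np.ndarray, n_clusters: int, k: int = 10) -> np.ndarray:
    """Cluster rows of X via a symmetrized KNN affinity graph and spectral clustering."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    A = 0.5 * (A + A.T)                                # symmetrize the affinity
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(A.toarray())

# toy example: two well-separated point clouds
X = np.vstack([np.random.randn(50, 5), np.random.randn(50, 5) + 5.0])
labels = knn_affinity_clustering(X, n_clusters=2)
```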
no code implementations • 19 Sep 2016 • Wenqi Wang, Vaneet Aggarwal, Shuchin Aeron
Using the matrix product state (MPS) representation of tensor train decompositions, in this paper we propose a tensor completion algorithm that alternates over the matrices (tensors) in the MPS representation.
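The general flavor of alternating over MPS cores can be sketched as follows, using simple gradient steps per core on the masked reconstruction error rather than the closed-form least-squares updates an actual implementation would likely use; this is a generic skeleton, not the paper's algorithm:

```python
import torch

torch.manual_seed(0)
shape, ranks = (8, 9, 10), (1, 3, 3, 1)
cores = [torch.randn(ranks[k], shape[k], ranks[k + 1], requires_grad=True) for k in range(3)]

def reconstruct(cores):
    """Contract the MPS cores into the full tensor."""
    t = cores[0]
    for c in cores[1:]:
        t = torch.tensordot(t, c, dims=([t.dim() - 1], [0]))
    return t.squeeze(0).squeeze(-1)

target = torch.randn(*shape)
mask = torch.rand(*shape) < 0.3           # observed entries

for sweep in range(20):
    for k in range(3):                    # alternate over the MPS cores, one at a time
        opt = torch.optim.Adam([cores[k]], lr=0.05)
        for _ in range(25):
            loss = ((reconstruct(cores) - target)[mask] ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
```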
no code implementations • 11 Jul 2016 • Wenqi Wang, Shuchin Aeron, Vaneet Aggarwal
In this paper we present deterministic conditions for success of sparse subspace clustering (SSC) under missing data, when data is assumed to come from a Union of Subspaces (UoS) model.
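Recall that SSC represents each data point as a sparse combination of the other points; a compact sketch of that self-expression step with scikit-learn's Lasso (a generic fully-observed illustration, not the paper's missing-data analysis):

```python
import numpy as np
from sklearn.linear_model import Lasso

def ssc_coefficients(X: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Sparse self-expression matrix C with zero diagonal: x_i ~ X c_i, c_ii = 0.

    X has one data point per column. Illustrative only; SSC under missing data
    requires further modifications (see the paper).
    """
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[:, others], X[:, i])
        C[others, i] = lasso.coef_
    return C

# the affinity for spectral clustering would then be |C| + |C|^T
X = np.random.randn(20, 60)
C = ssc_coefficients(X)
```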
no code implementations • 15 Apr 2016 • Wenqi Wang, Shuchin Aeron, Vaneet Aggarwal
We provide an extensive set of simulation results for clustering, as well as completion of data under missing entries, under the UoS model.