1 code implementation • 18 Aug 2022 • Xiujun Shu, Wei Wen, Haoqian Wu, Keyu Chen, Yiran Song, Ruizhi Qiao, Bo Ren, Xiao Wang
To explore the fine-grained alignment, we further propose two implicit semantic alignment paradigms: multi-level alignment (MLA) and bidirectional mask modeling (BMM).
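The masking idea behind BMM can be illustrated with a short sketch. The helper below is a generic masked-modeling primitive, not the paper's implementation (the function name and 30% ratio are illustrative): it replaces a random subset of token embeddings with a learned [MASK] embedding; applying it to both the visual and the textual stream, and training each stream to reconstruct its masked positions conditioned on the other, gives the "bidirectional" part.

```python
import torch

def random_mask(tokens: torch.Tensor, mask_embedding: torch.Tensor, ratio: float = 0.3):
    """Replace a random subset of token embeddings with a [MASK] embedding.

    tokens: (batch, seq_len, dim); mask_embedding: (dim,).
    Returns the masked sequence and a boolean map of the hidden positions.
    """
    mask = torch.rand(tokens.shape[:2], device=tokens.device) < ratio
    masked = torch.where(mask.unsqueeze(-1), mask_embedding.expand_as(tokens), tokens)
    return masked, mask
```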
no code implementations • 12 Aug 2022 • Xiujun Shu, Wei Wen, Taian Guo, Sunan He, Chen Wu, Ruizhi Qiao
This technical report presents the 3rd-place solution for MTVG, a new task introduced in the 4th Person in Context (PIC) Challenge at ACM MM 2022.
2 code implementations • 14 Jul 2022 • Tunhou Zhang, Dehua Cheng, Yuchen He, Zhengxing Chen, Xiaoliang Dai, Liang Xiong, Feng Yan, Hai Li, Yiran Chen, Wei Wen
To overcome the data multi-modality and architecture heterogeneity challenges in the recommendation domain, NASRec establishes a large supernet (i.e., search space) to search for full architectures.
no code implementations • 12 May 2022 • Abdullah Alqarni, Wei Wen, Ben C. P. Lam, John D. Crawford, Perminder S. Sachdev, Jiyang Jiang
Generalised linear models were applied to examine 1) the main effects of vascular (body mass index, hip-to-waist ratio, pulse wave velocity, hypercholesterolemia, diabetes, hypertension, smoking status) and hormonal (testosterone levels, contraceptive pill, hormone replacement therapy, menopause) factors on WMH, and 2) the moderation effects of hormonal factors on the relationship between vascular risk factors and WMH volumes.
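In GLM terms, a moderation effect is an interaction term. A minimal sketch with statsmodels follows; the file and column names (and the log transform of WMH volume) are hypothetical stand-ins, not the study's actual variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wmh_cohort.csv")  # hypothetical per-participant table

# Main effect of a vascular factor on WMH volume, adjusting for age.
main = smf.glm("log_wmh ~ pulse_wave_velocity + age", data=df).fit()

# Moderation: the interaction term tests whether menopause status changes
# the slope of the vascular factor on WMH volume.
moderated = smf.glm("log_wmh ~ pulse_wave_velocity * menopause + age", data=df).fit()
print(moderated.summary())
```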
no code implementations • 4 Apr 2022 • Jiyang Jiang, Dadong Wang, Yang Song, Perminder S. Sachdev, Wei Wen
Cerebral small vessel disease (CSVD) is a major vascular contributor to cognitive impairment in ageing, including dementias.
no code implementations • 3 Feb 2022 • Tao Liu, Shu Guo, Hao Liu, Rui Kang, Mingyang Bai, Jiyang Jiang, Wei Wen, Xing Pan, Jun Tai, JianXin Li, Jian Cheng, Jing Jing, Zhenzhou Wu, Haijun Niu, Haogang Zhu, Zixiao Li, Yongjun Wang, Henry Brodaty, Perminder Sachdev, Daqing Li
Degeneration and adaptation are two competing sides of the same coin, namely resilience, in the progressive processes of brain aging or disease.
no code implementations • 27 Nov 2021 • Xiujun Shu, Yusheng Tao, Ruizhi Qiao, Bo Ke, Wei Wen, Bo Ren
It is by far the largest dataset for person search in media.
1 code implementation • 6 Jun 2021 • Jian Cheng, Ziyang Liu, Hao Guan, Zhenzhou Wu, Haogang Zhu, Jiyang Jiang, Wei Wen, Dacheng Tao, Tao Liu
In this paper, a novel 3D convolutional network, called two-stage-age-network (TSAN), is proposed to estimate brain age from T1-weighted MRI data.
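For shape intuition only, here is a minimal single-stage 3D CNN regressor in PyTorch; TSAN itself is a two-stage design with a more elaborate architecture (see the linked code for the real model), so everything below is an illustrative simplification:

```python
import torch
import torch.nn as nn

class BrainAge3DCNN(nn.Module):
    """Minimal 3D CNN that maps a T1-weighted volume to a scalar age."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted age in years

    def forward(self, x):  # x: (batch, 1, D, H, W)
        return self.head(self.features(x).flatten(1))

age = BrainAge3DCNN()(torch.randn(2, 1, 91, 109, 91))  # MNI-like volume size
```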
1 code implementation • 30 Apr 2020 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
The TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss, thus eliminating the fine-tuning process after low-rank decomposition.
1 code implementation • 20 Apr 2020 • Huanrui Yang, Minxue Tang, Wei Wen, Feng Yan, Daniel Hu, Ang Li, Hai Li, Yiran Chen
In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
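A minimal sketch of the idea for one linear layer, assuming (as the paper does) that the layer is kept in decomposed U, s, V form throughout training, with an orthogonality penalty on the factors and a sparsity penalty on the singular values; the class name and coefficients are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer trained directly in its decomposed U * diag(s) * V^T form."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        r = min(in_features, out_features)
        self.U = nn.Parameter(torch.randn(out_features, r) / r**0.5)
        self.s = nn.Parameter(torch.ones(r))
        self.V = nn.Parameter(torch.randn(in_features, r) / r**0.5)

    def forward(self, x):  # y = x V diag(s) U^T
        return (x @ self.V) * self.s @ self.U.t()

    def regularizer(self, ortho=1e-3, sparse=1e-4):
        # Orthogonality keeps U, V valid singular bases; the L1 term on s
        # drives small singular values to zero, i.e. toward low rank.
        eye = torch.eye(self.s.numel(), device=self.s.device)
        penalty = ((self.U.t() @ self.U - eye) ** 2).sum() \
                + ((self.V.t() @ self.V - eye) ** 2).sum()
        return ortho * penalty + sparse * self.s.abs().sum()
```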
2 code implementations • ECCV 2020 • Wei Wen, Hanxiao Liu, Hai Li, Yiran Chen, Gabriel Bender, Pieter-Jan Kindermans
First, we train N random architectures to generate N (architecture, validation accuracy) pairs, and use them to train a regression model that predicts accuracy from the architecture.
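The predictor step is ordinary supervised regression. A sketch with scikit-learn, where the random matrices stand in for real encoded architectures and their measured validation accuracies (the encoding and regressor choice here are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(100, 12))  # stand-in: 100 encoded architectures, 12 choice slots
y = rng.uniform(0.85, 0.95, size=100)   # stand-in: their measured validation accuracies

predictor = GradientBoostingRegressor().fit(X, y)

# Rank a large pool of unseen candidates by predicted accuracy.
pool = rng.integers(0, 4, size=(10000, 12))
best = pool[np.argmax(predictor.predict(pool))]
```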
1 code implementation • 9 Oct 2019 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Wenrui Dai, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations.
no code implementations • 25 Sep 2019 • Chunpeng Wu, Wei Wen, Yiran Chen, Hai Li
As such, training our GAN architecture requires far fewer high-quality images, supplemented by a small number of additional low-quality images.
1 code implementation • ICLR 2020 • Huanrui Yang, Wei Wen, Hai Li
Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant.
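The Hoyer-Square variant, for instance, is simply (||w||_1)^2 / ||w||_2^2 and can be added to the task loss like any weight regularizer; a minimal sketch (the epsilon and loss coefficient are illustrative):

```python
import torch

def hoyer_square(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Hoyer-Square regularizer: (||w||_1)^2 / ||w||_2^2.

    Scale-invariant (multiplying w by a nonzero constant leaves the value
    unchanged) and differentiable almost everywhere."""
    return w.abs().sum() ** 2 / (w.pow(2).sum() + eps)

# Added to the task loss with a small, illustrative coefficient:
# loss = criterion(model(x), y) + 1e-5 * sum(hoyer_square(p) for p in model.parameters())
```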
no code implementations • 19 Jun 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Hai Li
With the rapid scaling-up of deep neural networks (DNNs), extensive studies of network model compression, such as weight pruning, have been conducted to improve deployment efficiency.
1 code implementation • ICLR 2020 • Wei Wen, Feng Yan, Yiran Chen, Hai Li
Our AutoGrow is efficient.
no code implementations • ICLR 2019 • Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
With the rapid scaling-up of deep neural networks (DNNs), extensive studies of network model compression, such as weight pruning, have been conducted for efficient deployment.
1 code implementation • 26 Jan 2019 • Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Sujay Sanghavi, Mattan Erez
State-of-the-art convolutional neural networks (CNNs) used in vision applications have large models with numerous weights.
1 code implementation • 6 Dec 2018 • Yuhui Xu, Yuxi Li, Shuai Zhang, Wei Wen, Botao Wang, Yingyong Qi, Yiran Chen, Weiyao Lin, Hongkai Xiong
We propose Trained Rank Pruning (TRP), which iterates between low-rank approximation and training.
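A sketch of the low-rank projection step; the energy threshold, update period, and restriction to linear layers are illustrative, not the paper's settings:

```python
import torch

@torch.no_grad()
def truncate_rank(weight: torch.Tensor, energy: float = 0.98) -> torch.Tensor:
    """Project a matrix onto the smallest rank that keeps `energy` of the
    squared singular-value mass (threshold illustrative)."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    ratio = torch.cumsum(S**2, dim=0) / (S**2).sum()
    k = int((ratio < energy).sum()) + 1
    return (U[:, :k] * S[:k]) @ Vh[:k]

# TRP-style loop (sketch): periodically replace weights with their
# low-rank projection, then keep training so the network adapts.
# if step % period == 0:
#     for m in model.modules():
#         if isinstance(m, torch.nn.Linear):
#             m.weight.copy_(truncate_rank(m.weight))
```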
1 code implementation • 30 Sep 2018 • Sangkug Lym, Armand Behroozi, Wei Wen, Ge Li, Yongkee Kwon, Mattan Erez
Training convolutional neural networks (CNNs) requires intense computations and high memory bandwidth.
1 code implementation • 21 May 2018 • Wei Wen, Yandan Wang, Feng Yan, Cong Xu, Chunpeng Wu, Yiran Chen, Hai Li
It remains an open question whether escaping sharp minima can improve generalization.
no code implementations • ICLR 2018 • Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li
This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs.
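One standard way to shrink such structures jointly is a group-lasso penalty whose groups tie together all weights belonging to one hidden unit; the grouping below is a simplification of the paper's Intrinsic Sparse Structures, for illustration only:

```python
import torch

def hidden_unit_group_lasso(lstm: torch.nn.LSTM) -> torch.Tensor:
    """Group lasso over hidden units (simplified relative to the paper).

    Each group gathers a unit's input and recurrent weights across the
    four gates, so driving a group to zero removes the unit from input
    updates, gates, and states at once."""
    h = lstm.hidden_size
    w_ih, w_hh = lstm.weight_ih_l0, lstm.weight_hh_l0  # (4h, in), (4h, h)
    penalty = torch.zeros((), device=w_hh.device)
    for u in range(h):  # rows u, u+h, u+2h, u+3h are unit u's four gates
        group = torch.cat([w_ih[u::h].flatten(), w_hh[u::h].flatten()])
        penalty = penalty + group.norm(2)
    return penalty
```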
1 code implementation • NeurIPS 2017 • Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients.
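The core ternarization is a three-line operation: each gradient entry is stochastically mapped to {-s, 0, +s} with s = max|g|, with the keep probability chosen so the result is an unbiased estimate of the original gradient. A minimal sketch:

```python
import torch

def ternarize(grad: torch.Tensor) -> torch.Tensor:
    """Stochastically map each gradient entry to {-s, 0, +s}, s = max|g|.

    P(keep_i) = |g_i| / s, so E[output] equals the original gradient
    (unbiased); only signs and one shared scalar need to be sent."""
    s = grad.abs().max()
    if s == 0:
        return grad
    keep = torch.bernoulli(grad.abs() / s)
    return s * grad.sign() * keep
```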
3 code implementations • ICCV 2017 • Wei Wen, Cong Xu, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
Moreover, Force Regularization better initializes the low-rank DNNs, so that fine-tuning converges faster toward higher accuracy.
no code implementations • CVPR 2017 • Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, Hai Li
Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet.
no code implementations • 11 Feb 2017 • Yandan Wang, Wei Wen, Beiye Liu, Donald Chiarulli, Hai Li
Following rank clipping, group connection deletion further reduces the routing area of LeNet and ConvNet to 8.1% and 52.06%, respectively.
no code implementations • 7 Jan 2017 • Yandan Wang, Wei Wen, Linghao Song, Hai Li
Brain-inspired neuromorphic computing has demonstrated remarkable advantages over the traditional von Neumann architecture for its high energy efficiency and parallel data processing.
3 code implementations • NeurIPS 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li
SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; and (2) obtain hardware-friendly structured sparsity in the DNN to efficiently accelerate its evaluation.
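For convolutional layers, for instance, SSL's group lasso can place one group per output filter, so the penalty drives whole filters to exactly zero; a minimal sketch (the loss coefficient is illustrative):

```python
import torch

def filter_group_lasso(conv: torch.nn.Conv2d) -> torch.Tensor:
    """Group lasso with one group per output filter: summing the filters'
    L2 norms pushes entire filters to exactly zero, a structured sparsity
    that lets whole channels be removed at inference time."""
    w = conv.weight  # (out_channels, in_channels, kH, kW)
    return w.flatten(1).norm(2, dim=1).sum()

# Illustrative use in the training loss:
# loss = task_loss + 1e-4 * sum(filter_group_lasso(m) for m in model.modules()
#                               if isinstance(m, torch.nn.Conv2d))
```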
1 code implementation • 4 Aug 2016 • Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey
Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels.
no code implementations • 3 Apr 2016 • Wei Wen, Chunpeng Wu, Yandan Wang, Kent Nixon, Qing Wu, Mark Barnell, Hai Li, Yiran Chen
The IBM TrueNorth chip uses digital spikes to perform neuromorphic computing and achieves ultra-high execution parallelism and power efficiency.