Search Results for author: Junxiang Wang

Found 17 papers, 10 papers with code

POND: Multi-Source Time Series Domain Adaptation with Information-Aware Prompt Tuning

no code implementations • 19 Dec 2023 • Junxiang Wang, Guangji Bai, Wei Cheng, Zhengzhang Chen, Liang Zhao, Haifeng Chen

To tackle these challenges simultaneously, we introduce PrOmpt-based domaiN Discrimination (POND), the first framework to utilize prompts for time series domain adaptation.

Domain Adaptation • Human Activity Recognition • +3

Non-Euclidean Spatial Graph Neural Network

1 code implementation • 17 Dec 2023 • Zheng Zhang, Sirui Li, Jingcheng Zhou, Junxiang Wang, Abhinav Angirekula, Allen Zhang, Liang Zhao

Moreover, existing spatial network representation learning methods can only consider networks embedded in Euclidean space, and cannot fully exploit the rich geometric information carried by irregular and non-uniform non-Euclidean spaces.

Representation Learning

Deep Graph Representation Learning and Optimization for Influence Maximization

1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao

Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.

Graph Representation Learning
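
For context on the IM formulation above, here is a minimal greedy hill-climbing sketch under Monte-Carlo independent-cascade simulation, the classic baseline for this problem. It is not the paper's deep learning method, and the names (`simulate_ic`, `greedy_im`) and parameters (`p`, `rounds`) are illustrative assumptions.

```python
# Hedged sketch: classic greedy influence maximization under Monte-Carlo
# independent-cascade (IC) simulation. NOT the paper's deep method.
import random
import networkx as nx

def simulate_ic(G, seeds, p=0.1, rounds=100):
    """Monte-Carlo estimate of expected spread under the IC model."""
    total = 0
    for _ in range(rounds):
        active = set(seeds)
        frontier = set(seeds)
        while frontier:
            nxt = set()
            for u in frontier:
                for v in G.successors(u):
                    # Each newly active u tries once to activate inactive v.
                    if v not in active and v not in nxt and random.random() < p:
                        nxt.add(v)
            active |= nxt
            frontier = nxt
        total += len(active)
    return total / rounds

def greedy_im(G, k, p=0.1):
    """Pick k seeds, each time adding the node with the largest marginal gain."""
    seeds = set()
    for _ in range(k):
        best = max((v for v in G if v not in seeds),
                   key=lambda v: simulate_ic(G, seeds | {v}, p))
        seeds.add(best)
    return seeds

G = nx.gnp_random_graph(100, 0.05, directed=True)
print(greedy_im(G, k=3))
```

The greedy loop exploits the submodularity of the expected spread, which is what makes this simple strategy a strong classical baseline.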

DeepGAR: Deep Graph Learning for Analogical Reasoning

1 code implementation • 19 Nov 2022 • Chen Ling, Tanmoy Chowdhury, Junji Jiang, Junxiang Wang, Xuchao Zhang, Haifeng Chen, Liang Zhao

As the most well-known computational method of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both target and base subjects into relational graphs and models the cognitive process of analogical reasoning by finding a corresponding subgraph (i.e., a correspondence) in the target graph that is aligned with the base graph.

Graph Learning
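
To make the SMT description above concrete, the toy sketch below casts correspondence-finding as labeled subgraph isomorphism with networkx. The graphs and relation labels are invented for illustration; this is not DeepGAR itself, which replaces this combinatorial search with graph learning.

```python
# Toy illustration (not DeepGAR): SMT-style correspondence search as labeled
# subgraph isomorphism. Graphs and relation labels are made up.
import networkx as nx
from networkx.algorithms import isomorphism

base = nx.DiGraph()    # relational graph of the base subject
base.add_edge("sun", "planet", rel="attracts")
base.add_edge("planet", "sun", rel="revolves_around")

target = nx.DiGraph()  # relational graph of the target subject
target.add_edge("nucleus", "electron", rel="attracts")
target.add_edge("electron", "nucleus", rel="revolves_around")
target.add_edge("electron", "photon", rel="emits")

# Find subgraphs of the target whose relation structure aligns with the base.
gm = isomorphism.DiGraphMatcher(
    target, base,
    edge_match=isomorphism.categorical_edge_match("rel", None))
for mapping in gm.subgraph_isomorphisms_iter():
    print(mapping)  # e.g. {'nucleus': 'sun', 'electron': 'planet'}
```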

Source Localization of Graph Diffusion via Variational Autoencoders for Graph Inverse Problems

1 code implementation • 24 Jun 2022 • Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao

Different from most traditional source localization methods, this paper adopts a probabilistic approach to account for the uncertainty of different candidate sources.

An Invertible Graph Diffusion Neural Network for Source Localization

1 code implementation • 18 Jun 2022 • Junxiang Wang, Junji Jiang, Liang Zhao

This paper aims to establish a generic framework of invertible graph diffusion models for source localization on graphs, namely Invertible Validity-aware Graph Diffusion (IVGD), to handle three major challenges: 1) the difficulty of leveraging knowledge in graph diffusion models to model their inverse processes in an end-to-end fashion, 2) the difficulty of ensuring the validity of the inferred sources, and 3) the efficiency and scalability of source inference.

Misinformation

Edge Graph Neural Networks for Massive MIMO Detection

no code implementations • 22 May 2022 • Hongyi Li, Junxiang Wang, Yongchao Wang

Massive Multiple-Input Multiple-Output (MIMO) detection is an important problem in modern wireless communication systems.

Do Multi-Lingual Pre-trained Language Models Reveal Consistent Token Attributions in Different Languages?

no code implementations • 23 Dec 2021 • Junxiang Wang, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao

During the past several years, a surge of multi-lingual Pre-trained Language Models (PLMs) has emerged, achieving state-of-the-art performance in many cross-lingual downstream tasks.

A Convergent ADMM Framework for Efficient Neural Network Training

1 code implementation • 22 Dec 2021 • Junxiang Wang, Hongyi Li, Liang Zhao

As a well-known optimization framework, the Alternating Direction Method of Multipliers (ADMM) has achieved tremendous success in many classification and regression applications.

Efficient Neural Network

Towards Quantized Model Parallelism for Graph-Augmented MLPs Based on Gradient-Free ADMM Framework

1 code implementation • 20 May 2021 • Junxiang Wang, Hongyi Li, Zheng Chai, Yongchao Wang, Yue Cheng, Liang Zhao

Theoretical convergence of the pdADMM-G and pdADMM-G-Q algorithms to a (quantized) stationary point is established, with a sublinear convergence rate $o(1/k)$, where $k$ is the number of iterations.

Quantization
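
As a reading aid for the rate quoted above: $o(1/k)$ is slightly stronger than the usual $O(1/k)$ sublinear rate, in that the residual multiplied by $k$ must vanish. One standard way to state this (a gloss; the paper's exact residual quantity may differ):

```latex
r_k = o\!\left(\tfrac{1}{k}\right)
\;\Longleftrightarrow\;
\lim_{k \to \infty} k\, r_k = 0,
\qquad \text{e.g. } r_k = \min_{i \le k} \bigl\|\theta_{i+1} - \theta_i\bigr\|_2^2 .
```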

Sign-regularized Multi-task Learning

no code implementations • 22 Feb 2021 • Johnny Torres, Guangji Bai, Junxiang Wang, Liang Zhao, Carmen Vaca, Cristina Abad

Multi-task learning is a framework that enforces knowledge sharing among different learning tasks in order to improve their generalization performance.

Multi-Task Learning
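
To illustrate the knowledge-sharing idea above, here is a generic sketch of a linear multi-task objective with a hypothetical sign-consistency penalty that discourages task weight vectors from disagreeing in sign. It is written only to make the regularization idea concrete and is not necessarily the paper's formulation.

```python
# Hypothetical sign-consistency regularizer for linear multi-task models;
# an illustrative sketch, not the paper's actual objective.
import numpy as np

def mtl_loss(W, Xs, ys, lam=0.1):
    """W: (tasks, features); Xs, ys: per-task data; lam: sharing strength."""
    fit = sum(np.mean((X @ w - y) ** 2) for w, X, y in zip(W, Xs, ys))
    # Penalize feature weights whose signs conflict across task pairs:
    # -W[i] * W[j] is positive exactly where the two tasks disagree in sign.
    share = sum(np.sum(np.maximum(0.0, -W[i] * W[j]))
                for i in range(len(W)) for j in range(i + 1, len(W)))
    return fit + lam * share

rng = np.random.default_rng(0)
Xs = [rng.normal(size=(50, 5)) for _ in range(3)]
w_true = rng.normal(size=5)
ys = [X @ w_true + 0.1 * rng.normal(size=50) for X in Xs]
print(mtl_loss(rng.normal(size=(3, 5)), Xs, ys))
```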

pdADMM: parallel deep learning Alternating Direction Method of Multipliers

1 code implementation • 1 Nov 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

In this paper, we propose a novel parallel deep learning ADMM framework (pdADMM) to achieve layer parallelism: parameters in each layer of neural networks can be updated independently in parallel.
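
A minimal sketch of the layer-parallel idea, under the assumption of a hypothetical per-layer least-squares update (`update_layer`) decoupled by fixed auxiliary activations. The actual pdADMM subproblems and penalty terms differ, but the key property shown here, that per-layer solves are independent and can run concurrently, is the one the abstract describes.

```python
# Layer-parallel parameter updates: each layer depends only on its own
# (input, output) activation pair, so all updates can run in parallel.
# The quadratic update below is an illustrative stand-in for pdADMM's.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def update_layer(a_in, a_out, rho=1.0):
    """Ridge least squares: find W with W @ a_in ≈ a_out."""
    A = a_in @ a_in.T + rho * np.eye(a_in.shape[0])
    return np.linalg.solve(A, a_in @ a_out.T).T

rng = np.random.default_rng(0)
n_layers = 4
acts = [rng.normal(size=(8, 32)) for _ in range(n_layers + 1)]  # a_0..a_4

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(update_layer, acts[l], acts[l + 1])
               for l in range(n_layers)]
    layers = [f.result() for f in futures]
print([W.shape for W in layers])
```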

Tunable Subnetwork Splitting for Model-parallelism of Neural Network Training

1 code implementation • 9 Sep 2020 • Junxiang Wang, Zheng Chai, Yue Cheng, Liang Zhao

In this paper, we analyze the reason for this and propose to achieve a compelling trade-off between parallelism and accuracy via a reformulation called the Tunable Subnetwork Splitting Method (TSSM), which can tune the decomposition granularity of deep neural networks.

Gradient-free Neural Network Training by Multi-convex Alternating Optimization

no code implementations • 25 Sep 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao

To overcome these drawbacks, alternating minimization-based methods for deep neural network optimization have recently attracted rapidly increasing attention.

ADMM for Efficient Deep Learning with Global Convergence

1 code implementation • 31 May 2019 • Junxiang Wang, Fuxun Yu, Xiang Chen, Liang Zhao

However, several challenges remain in this emerging domain, including 1) the lack of global convergence guarantees, 2) slow convergence towards solutions, and 3) cubic time complexity with respect to feature dimensions.

Stochastic Optimization

Nonconvex Generalization of Alternating Direction Method of Multipliers for Nonlinear Equality Constrained Problems

no code implementations • 9 May 2017 • Junxiang Wang, Liang Zhao

The classic Alternating Direction Method of Multipliers (ADMM) is a popular framework to solve linear-equality constrained problems.

Optimization and Control • Social and Information Networks
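
For reference, the classic ADMM iteration being generalized here, for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$, in the standard scaled-dual textbook form (with dual variable $u = y/\rho$); this is the well-known baseline, not the paper's nonconvex extension:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2, \\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c .
\end{aligned}
```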
