no code implementations • ACL 2022 • Linhai Zhang, Xuemeng Hu, Boyu Wang, Deyu Zhou, Qian-Wen Zhang, Yunbo Cao
Recent years have witnessed growing interest in incorporating external knowledge such as pre-trained word embeddings (PWEs) or pre-trained language models (PLMs) into neural topic modeling.
no code implementations • 16 Feb 2024 • Junfei Xiao, Zheng Xu, Alan Yuille, Shen Yan, Boyu Wang
We thoroughly explore the state-of-the-art perceiver resampler architecture and build a strong baseline.
no code implementations • 12 Feb 2024 • Qiuhao Zeng, Wei Wang, Fan Zhou, Gezheng Xu, Ruizhi Pu, Changjian Shui, Christian Gagné, Shichun Yang, Boyu Wang, Charles X. Ling
By employing Koopman operators, we address the time-evolving distributions encountered in temporal domain generalization (TDG): measurement functions are sought under which the transitions between evolving domains become linear.
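As a rough sketch of the idea (not the paper's implementation; the PyTorch module and dimensions below are hypothetical), one can learn measurement functions g together with a linear operator K so that g of the next domain's features is approximated by K applied to g of the current domain's features:

```python
import torch
import torch.nn as nn

class KoopmanTransition(nn.Module):
    """Sketch: learn measurement functions g and a linear operator K
    such that g(x at time t+1) is approximated by K @ g(x at time t)."""
    def __init__(self, in_dim, koopman_dim):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                               nn.Linear(64, koopman_dim))        # measurement functions
        self.K = nn.Linear(koopman_dim, koopman_dim, bias=False)  # linear Koopman operator

    def forward(self, x_t, x_next):
        z_t, z_next = self.g(x_t), self.g(x_next)
        return ((self.K(z_t) - z_next) ** 2).mean()  # residual of the linear transition

model = KoopmanTransition(in_dim=16, koopman_dim=8)
x_t, x_next = torch.randn(32, 16), torch.randn(32, 16)
loss = model(x_t, x_next)  # would be minimized jointly with the task loss
```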
no code implementations • 25 Jan 2024 • Ehsan Hallaji, Roozbeh Razavi-Far, Mehrdad Saif, Boyu Wang, Qiang Yang
Federated learning has been rapidly evolving and gaining popularity in recent years due to its privacy-preserving features, among other advantages.
1 code implementation • 11 Jan 2024 • Chunlei Peng, Boyu Wang, Decheng Liu, Nannan Wang, Ruimin Hu, Xinbo Gao
To address this, we mask the clothing and color information in the personal attribute description extracted through an attribute detection model.
1 code implementation • 9 Jan 2024 • Yifan Xie, Boyu Wang, Shiqi Li, Jihua Zhu
In this paper, we propose a novel Iterative Feedback Network (IFNet) for unsupervised point cloud registration, in which the representation of low-level features is efficiently enriched by rerouting subsequent high-level features.
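A minimal sketch of the feedback idea, assuming high-level features from the previous iteration are fused back into the low-level stage (module names are illustrative, not IFNet's actual code):

```python
import torch
import torch.nn as nn

class FeedbackBlock(nn.Module):
    """Sketch: enrich low-level features with high-level features
    rerouted from the previous iteration."""
    def __init__(self, dim):
        super().__init__()
        self.low = nn.Linear(dim, dim)
        self.high = nn.Linear(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x, feedback=None):
        low = torch.relu(self.low(x))
        if feedback is not None:            # reroute previous high-level features
            low = self.fuse(torch.cat([low, feedback], dim=-1))
        return torch.relu(self.high(low))   # new high-level features

block, x = FeedbackBlock(32), torch.randn(8, 32)
high = None
for _ in range(3):                          # iterative refinement
    high = block(x, feedback=high)
```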
no code implementations • 22 Dec 2023 • Chengming Hu, Haolun Wu, Xuan Li, Chen Ma, Xi Chen, Jun Yan, Boyu Wang, Xue Liu
A simple neural network then learns the implicit mapping from the intra- and inter-sample relations to an adaptive, sample-wise knowledge fusion ratio in a bilevel-optimization manner.
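The mapping itself is easy to picture; below is a minimal sketch with hypothetical shapes and names, omitting the bilevel outer loop (in the paper's setup, the ratio network would be updated in the outer loop while the student trains in the inner loop):

```python
import torch
import torch.nn as nn

# Sketch: a small network maps per-sample relation features to a fusion
# ratio in [0, 1] that adaptively blends two knowledge sources.
ratio_net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                          nn.Linear(16, 1), nn.Sigmoid())

relations = torch.randn(32, 4)                        # intra-/inter-sample relation features
k_a, k_b = torch.randn(32, 10), torch.randn(32, 10)   # two knowledge signals to fuse
r = ratio_net(relations)                              # sample-wise fusion ratio
fused = r * k_a + (1 - r) * k_b                       # adaptive knowledge fusion
```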
no code implementations • 10 Dec 2023 • William Wei Wang, Dongqi Han, Xufang Luo, Yifei Shen, Charles Ling, Boyu Wang, Dongsheng Li
Empowering embodied agents, such as robots, with Artificial Intelligence (AI) has become increasingly important in recent years.
1 code implementation • 26 Nov 2023 • Jiaqi Li, Rui Wang, Yuanhao Lai, Changjian Shui, Sabyasachi Sahoo, Charles X. Ling, Shichun Yang, Boyu Wang, Christian Gagné, Fan Zhou
We conduct extensive experiments on various benchmarks, including a dataset with large-scale tasks, and compare our method against recent state-of-the-art methods to demonstrate its effectiveness and scalability.
no code implementations • 28 Jun 2023 • Ganyu Wang, Qingsong Zhang, Li Xiang, Boyu Wang, Bin Gu, Charles Ling
Meanwhile, the upstream model (server) is updated with first-order optimization (FOO) locally, which significantly improves the convergence rate and makes it feasible to train large models without compromising privacy and security.
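A toy sketch of this split (not the paper's actual protocol; in real vertical federated learning the parties never share raw data like this): the server head takes a first-order backprop step, while the client estimates its own gradient from two loss queries, so no gradients cross the party boundary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

server_head = nn.Linear(8, 2)                           # upstream (server) model
opt = torch.optim.SGD(server_head.parameters(), lr=0.1)

client_w = torch.randn(8, 4)                            # client-side parameters
x, y = torch.randn(16, 4), torch.randint(0, 2, (16,))

def loss_fn(w):
    return F.cross_entropy(server_head(x @ w.t()), y)

# Server: ordinary first-order (FOO) update via backprop.
loss = loss_fn(client_w)
opt.zero_grad(); loss.backward(); opt.step()

# Client: zeroth-order (ZOO) update from two loss queries only.
mu, lr = 1e-2, 0.05
u = torch.randn_like(client_w)
with torch.no_grad():
    g_est = (loss_fn(client_w + mu * u) - loss_fn(client_w)) / mu * u
    client_w -= lr * g_est
```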
no code implementations • 3 Mar 2023 • Yujiao Hao, Boyu Wang, Rong Zheng
In this work, we examine two in-the-wild HAR datasets and DivideMix, a state-of-the-art learning with noisy labels (LNL) method, to understand the extent and impacts of noisy labels in training data.
1 code implementation • CVPR 2023 • Yuting He, Guanyu Yang, Rongjun Ge, Yang Chen, Jean-Louis Coatrieux, Boyu Wang, Shuo Li
We propose a novel visual similarity learning paradigm, Geometric Visual Similarity Learning, which embeds the prior of topological invariance into the measurement of the inter-image similarity for consistent representation of semantic regions.
1 code implementation • 3 Feb 2023 • Pengcheng Xu, Boyu Wang, Charles Ling
We demonstrate that domain labels are not strictly necessary for BTDA if the categorical distributions of the various domains are sufficiently aligned, even in the face of domain imbalance and label distribution shift across classes.
Ranked #1 on Multi-target Domain Adaptation on Office-Home
no code implementations • 31 Jan 2023 • Li Yi, Gezheng Xu, Pengcheng Xu, Jiaqi Li, Ruizhi Pu, Charles Ling, A. Ian McLeod, Boyu Wang
We also prove that this difference renders existing LLN methods, which rely on such distribution assumptions, unable to address the label noise in SFDA.
1 code implementation • 19 Jan 2023 • Qiuhao Zeng, Wei Wang, Fan Zhou, Charles Ling, Boyu Wang
In this paper, we formulate such problems as Evolving Domain Generalization, where a model aims to generalize well on a target domain by discovering and leveraging the evolving pattern of the environment.
1 code implementation • CVPR 2023 • Wei Wang, Zhun Zhong, Weijie Wang, Xi Chen, Charles Ling, Boyu Wang, Nicu Sebe
In this paper, we study the application of test-time domain adaptation to semantic segmentation (TTDA-Seg), where both efficiency and effectiveness are crucial.
1 code implementation • 19 Oct 2022 • Changjian Shui, Gezheng Xu, Qi Chen, Jiaqi Li, Charles Ling, Tal Arbel, Boyu Wang, Christian Gagné
At the upper level, the fair predictor is updated to be close to all subgroup-specific predictors.
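A minimal sketch of that upper-level step (hypothetical linear predictors; the lower-level fitting of the subgroup predictors is omitted): the fair predictor's weights are pulled toward those of every subgroup-specific predictor:

```python
import torch
import torch.nn as nn

fair = nn.Linear(8, 2)                                # shared fair predictor
subgroup_preds = [nn.Linear(8, 2) for _ in range(3)]  # one predictor per subgroup
opt = torch.optim.SGD(fair.parameters(), lr=0.1)

# Penalize the distance from the fair predictor to each subgroup predictor.
closeness = sum(
    sum((p - q.detach()).pow(2).sum()
        for p, q in zip(fair.parameters(), g.parameters()))
    for g in subgroup_preds
)
opt.zero_grad(); closeness.backward(); opt.step()
```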
no code implementations • 29 Jun 2022 • Ghazal Farhani, Alexander Kazachek, Boyu Wang
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
no code implementations • 31 May 2022 • William Wei Wang, Gezheng Xu, Ruizhi Pu, Jiaqi Li, Fan Zhou, Changjian Shui, Charles Ling, Christian Gagné, Boyu Wang
Domain generalization aims to learn a predictive model from multiple different but related source tasks that can generalize well to a target task without access to any target data.
no code implementations • 26 May 2022 • Changjian Shui, Qi Chen, Jiaqi Li, Boyu Wang, Christian Gagné
We consider a fair representation learning perspective, where optimal predictors, on top of the data representation, are ensured to be invariant with respect to different sub-groups.
1 code implementation • CVPR 2022 • Li Yi, Sheng Liu, Qi She, A. Ian McLeod, Boyu Wang
To address this issue, we focus on learning robust contrastive representations of the data, on which it is hard for the classifier to memorize the label noise under the cross-entropy (CE) loss.
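For reference, a generic contrastive objective of the kind such representations are built on (a plain NT-Xent sketch, not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Plain NT-Xent loss between two augmented views of a batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))      # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)   # each view's positive is its pair

z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
loss = nt_xent(z1, z2)
```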
1 code implementation • CVPR 2022 • Lei Zhu, Qi She, Qian Chen, Yunfei You, Boyu Wang, Yanye Lu
To avoid this problem, this work provides a novel perspective that models WSOL as a domain adaptation (DA) task, where the score estimator trained on the source/image domain is tested on the target/pixel domain to locate objects.
no code implementations • 21 Feb 2022 • Yujiao Hao, Boyu Wang, Rong Zheng
With the prevalence of wearable devices, inertial measurement unit (IMU) data has been utilized in monitoring and assessment of human mobility such as human activity recognition (HAR).
no code implementations • 21 Feb 2022 • Qiuhao Zeng, Tianze Luo, Boyu Wang
Unsupervised domain adaptation (UDA) enables knowledge transfer from the labeled source domain to the unlabeled target domain by reducing the cross-domain discrepancy.
no code implementations • 26 Jan 2022 • Boyu Wang, Jorge Mendez, Changjian Shui, Fan Zhou, Di Wu, Gezheng Xu, Christian Gagné, Eric Eaton
Unlike existing measures which are used as tools to bound the difference of expected risks between tasks (e.g., $\mathcal{H}$-divergence or discrepancy distance), we theoretically show that the performance gap can be viewed as a data- and algorithm-dependent regularizer, which controls the model complexity and leads to finer guarantees.
no code implementations • 13 Nov 2021 • Yuan Zhou, Haiyang Wang, Shuwei Huo, Boyu Wang
Thus, it is natural to use NAS methods to automatically discover a better self-attention architecture.
no code implementations • 29 Sep 2021 • Wei Wang, Jiaqi Li, Ruizhi Pu, Gezheng Xu, Fan Zhou, Changjian Shui, Charles Ling, Boyu Wang
Domain generalization aims to learn a predictive model from multiple different but related source tasks that can generalize well to a target task without access to any target data.
no code implementations • 30 May 2021 • Changjian Shui, Boyu Wang, Christian Gagné
Our regularization is orthogonal to and can be straightforwardly adopted in existing domain generalization algorithms for invariant representation learning.
1 code implementation • 9 May 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
Multi-source domain adaptation aims to leverage the knowledge from multiple tasks to make predictions on a related target domain.
no code implementations • 3 Mar 2021 • Fan Zhou, Brahim Chaib-Draa, Boyu Wang
To confirm the effectiveness of the proposed method, we first compare the algorithm with several baselines on benchmark datasets and then test it under label-space shift conditions.
no code implementations • 1 Jan 2021 • Changjian Shui, Zijian Li, Jiaqi Li, Christian Gagné, Charles Ling, Boyu Wang
We study the label shift problem in multi-source transfer learning and derive new generic principles to control the target generalization risk.
no code implementations • 14 Dec 2020 • Yujiao Hao, Boyu Wang, Rong Zheng
In recent years, many deep models have been applied to HAR problems.
1 code implementation • 30 Sep 2020 • Viresh Ranjan, Boyu Wang, Mubarak Shah, Minh Hoai
We present sample selection strategies which make use of the density and uncertainty of predictions from the networks trained on one domain to select the informative images from a target domain of interest to acquire human annotation.
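A toy sketch of such a selection rule (the combined score below is illustrative, not the paper's exact criterion): rank target images by predicted density plus the disagreement between two source-trained networks, and annotate the top k:

```python
import numpy as np

def select_informative(density_a, density_b, k):
    """Pick the k most informative images from two networks' predicted counts."""
    density = (density_a + density_b) / 2          # average predicted count
    uncertainty = np.abs(density_a - density_b)    # disagreement as uncertainty
    score = density + uncertainty                  # simple combined criterion
    return np.argsort(-score)[:k]                  # indices of top-k images

d_a, d_b = np.random.rand(100) * 500, np.random.rand(100) * 500
to_label = select_informative(d_a, d_b, k=10)
```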
1 code implementation • NeurIPS 2020 • Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai
Existing crowd counting methods need to use a Gaussian to smooth each annotated dot or to estimate the likelihood of every pixel given the annotated point.
Ranked #3 on Crowd Counting on UCF CC 50
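For context, a sketch of the conventional pipeline this entry contrasts with (a common recipe, with hypothetical coordinates and sigma): place a unit impulse at each annotated head location and smooth it with a Gaussian to obtain a density-map target:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, shape, sigma=4.0):
    """Gaussian-smoothed density map whose integral ~= number of dots."""
    dm = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        dm[int(y), int(x)] += 1.0        # unit impulse per annotated person
    return gaussian_filter(dm, sigma)    # smooth each dot into a blob

dm = density_map([(30, 40), (120, 80)], shape=(256, 256))
print(dm.sum())                          # ~2.0, one unit of mass per dot
```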
no code implementations • 3 Sep 2020 • Xinyi Huang, Suphanut Jamonnak, Ye Zhao, Boyu Wang, Minh Hoai, Kevin Yager, Wei Xu
Existing interactive visualization tools for deep learning are mostly applied to the training, debugging, and refinement of neural network models working on natural images.
no code implementations • 30 Jul 2020 • Changjian Shui, Qi Chen, Jun Wen, Fan Zhou, Christian Gagné, Boyu Wang
We reveal the incoherence between the widely-adopted empirical domain adversarial training and its generally-assumed theoretical counterpart based on $\mathcal{H}$-divergence.
no code implementations • 21 Jul 2020 • Fan Zhou, Zhuqing Jiang, Changjian Shui, Boyu Wang, Brahim Chaib-Draa
Previous domain generalization approaches mainly focus on learning invariant features and stacking the features learned from each source domain to generalize to a new target domain, while ignoring the label information; this leads to indistinguishable features with an ambiguous classification boundary.
2 code implementations • NeurIPS 2020 • Jorge A. Mendez, Boyu Wang, Eric Eaton
Policy gradient methods have shown success in learning control policies for high-dimensional dynamical systems.
no code implementations • CVPR 2020 • Boyu Wang, Lihan Huang, Minh Hoai
We propose a method for early recognition of human actions, one that can take advantage of multiple cameras while satisfying the constraints imposed by limited communication bandwidth and processing power.
no code implementations • 24 May 2020 • Fan Zhou, Changjian Shui, Bincheng Huang, Boyu Wang, Brahim Chaib-Draa
To this end, we introduce a discriminative active learning approach for domain adaptation to reduce the efforts of data annotation.
1 code implementation • 20 Feb 2020 • Yao-Hui Chen, Mansour Ahmadi, Reza Mirzazade farkhani, Boyu Wang, Long Lu
Seed scheduling is a prominent factor in determining the yields of hybrid fuzzing.
no code implementations • 3 Dec 2019 • Juncheng Lv, Zhao Kang, Boyu Wang, Luping Ji, Zenglin Xu
Multi-view clustering is an important approach to analyze multi-view data in an unsupervised way.
1 code implementation • NeurIPS 2019 • Boyu Wang, Jorge Mendez, Mingbo Cai, Eric Eaton
We propose a new principle for transfer learning, based on a straightforward intuition: if two domains are similar to each other, the model trained on one domain should also perform well on the other domain, and vice versa.
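The intuition can be checked directly; here is a toy sketch with synthetic domains (hypothetical data, using scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Train on one domain, test on the other, in both directions; small
# bidirectional performance gaps suggest the domains are similar.
rng = np.random.default_rng(0)
Xa = rng.normal(0.0, 1.0, (200, 5))
Xb = rng.normal(0.2, 1.0, (200, 5))                  # slightly shifted domain
ya, yb = (Xa[:, 0] > 0).astype(int), (Xb[:, 0] > 0).astype(int)

acc_ab = LogisticRegression().fit(Xa, ya).score(Xb, yb)  # A -> B
acc_ba = LogisticRegression().fit(Xb, yb).score(Xa, ya)  # B -> A
print(acc_ab, acc_ba)   # both high here, so these domains behave similarly
```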
1 code implementation • 20 Nov 2019 • Changjian Shui, Fan Zhou, Christian Gagné, Boyu Wang
In this paper, we propose a unified and principled method for both the querying and training processes in deep batch active learning.
no code implementations • 21 Oct 2019 • Jiahao Xie, Zebang Shen, Chao Zhang, Boyu Wang, Hui Qian
This paper focuses on projection-free methods for solving smooth Online Convex Optimization (OCO) problems.
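As background, a generic projection-free step (an online Frank-Wolfe sketch over the L1 ball, not the paper's algorithm): a linear minimization oracle over the feasible set replaces the projection:

```python
import numpy as np

def lmo_l1(grad, radius=1.0):
    """Linear minimization oracle over the L1 ball: the minimizing vertex."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])   # vertex minimizing <grad, s>
    return s

x = np.zeros(10)
for t in range(1, 101):
    grad = 2 * (x - 0.3)                        # gradient of f_t(x) = ||x - 0.3||^2
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * lmo_l1(grad)  # convex combination, no projection
```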
no code implementations • 10 Oct 2019 • Xinyi Huang, Suphanut Jamonnak, Ye Zhao, Boyu Wang, Minh Hoai, Kevin Yager, Wei Xu
This extended abstract presents a visualization system, which is designed for domain scientists to visually understand their deep learning model of extracting multiple attributes in x-ray scattering images.
no code implementations • 21 May 2019 • Zhao Kang, Honghui Xu, Boyu Wang, Hongyuan Zhu, Zenglin Xu
A key step in graph-based approaches is the construction of the similarity graph.
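One common construction is a kNN graph with Gaussian-kernel weights (a standard choice, not necessarily the paper's):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(50, 8)                                   # 50 samples, 8 features
A = kneighbors_graph(X, n_neighbors=5, mode='distance').toarray()
W = np.where(A > 0, np.exp(-A ** 2 / (2 * 0.5 ** 2)), 0.0)  # distances -> similarities
W = np.maximum(W, W.T)                                      # symmetrize the graph
```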
1 code implementation • 21 Mar 2019 • Changjian Shui, Mahdieh Abbasi, Louis-Émile Robitaille, Boyu Wang, Christian Gagné
Hence, an important aspect of multitask learning is to understand the similarities within a set of tasks.
no code implementations • NeurIPS 2018 • Zijun Wei, Boyu Wang, Minh Hoai Nguyen, Jianming Zhang, Zhe Lin, Xiaohui Shen, Radomir Mech, Dimitris Samaras
Detecting segments of interest from an input sequence is a challenging problem which often requires not only good knowledge of individual target segments, but also contextual understanding of the entire input sequence and the relationships between the target segments.
no code implementations • 26 Jun 2018 • Qicheng Lao, Thomas Fevens, Boyu Wang
Unlike natural images, medical images often have intrinsic characteristics that can be leveraged for neural network learning.
no code implementations • 10 Nov 2016 • Boyu Wang, Kevin Yager, Dantong Yu, Minh Hoai
In this paper, we explore the use of deep learning to develop methods for automatically analyzing x-ray scattering images.
no code implementations • 30 Oct 2013 • Boyu Wang, Joelle Pineau
While both cost-sensitive learning and online learning have been studied extensively, little effort has been devoted to simultaneously dealing with these two issues.