1 code implementation • 18 Jun 2024 • Ruixin Hong, Hongming Zhang, Xiaoman Pan, Dong Yu, ChangShui Zhang
Abstract reasoning, the ability to reason from the abstract essence of a problem, serves as a key to generalization in human reasoning.
no code implementations • 16 May 2024 • Kunda Yan, Sen Cui, Abudukelimu Wuerkaixi, Jingfeng Zhang, Bo Han, Gang Niu, Masashi Sugiyama, ChangShui Zhang
Our framework aims to approximate an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity.
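A minimal sketch of how such cooperation weights might be computed, assuming cosine similarity between flattened parameters for model similarity and one minus mean feature cosine for complementarity (both illustrative choices, not the paper's exact formulation):

```python
# Hypothetical sketch: score every peer with a weighted sum of model similarity
# and feature complementarity, then normalize the scores into cooperation weights.
import torch
import torch.nn.functional as F

def cooperation_weights(own_params, peer_params, own_feats, peer_feats, lam=0.5):
    scores = []
    for p_params, p_feats in zip(peer_params, peer_feats):
        # Model similarity: cosine similarity between flattened parameters.
        sim = F.cosine_similarity(own_params.flatten(), p_params.flatten(), dim=0)
        # Feature complementarity: how far the peer's mean feature is from ours.
        comp = 1.0 - F.cosine_similarity(own_feats.mean(0), p_feats.mean(0), dim=0)
        scores.append(lam * sim + (1.0 - lam) * comp)
    return torch.softmax(torch.stack(scores), dim=0)
```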
1 code implementation • 29 Nov 2023 • Yinya Huang, Ruixin Hong, Hongming Zhang, Wei Shao, Zhicheng Yang, Dong Yu, ChangShui Zhang, Xiaodan Liang, Linqi Song
In this study, we delve into the realm of counterfactual reasoning capabilities of large language models (LLMs).
1 code implementation • 14 Nov 2023 • Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu, ChangShui Zhang
In this paper, we take a closer look at the self-verification abilities of LLMs in the context of logical reasoning, focusing on their ability to identify logical fallacies accurately.
1 code implementation • 8 Sep 2023 • Changming Xiao, Qi Yang, Feng Zhou, ChangShui Zhang
Experiments in various situations demonstrate the advantages of our method compared to strong baselines on this task.
Ranked #11 on Weakly-Supervised Semantic Segmentation on COCO 2014 val (using extra training data)
1 code implementation • 27 Jul 2023 • Sen Cui, Weishen Pan, ChangShui Zhang, Fei Wang
xOrder consistently achieves a better balance between algorithm utility and ranking fairness on a variety of datasets with different metrics.
1 code implementation • 4 May 2023 • Ruixin Hong, Hongming Zhang, Hong Zhao, Dong Yu, ChangShui Zhang
In this paper, we propose FAME (FAithful question answering with MontE-carlo planning) to answer questions based on faithful reasoning steps.
no code implementations • 22 Mar 2023 • Zhilong Liang, Zhenzhi Tan, Ruixin Hong, Wanli Ouyang, Jinying Yuan, ChangShui Zhang
Computer image recognition with machine learning methods can compensate for the shortcomings of manual judgment, providing accurate and quantitative assessments.
3 code implementations • 22 Oct 2022 • Yinya Huang, Hongming Zhang, Ruixin Hong, Xiaodan Liang, ChangShui Zhang, Dong Yu
To this end, we propose a comprehensive logical reasoning explanation form.
no code implementations • 21 Jun 2022 • Abudukelimu Wuerkaixi, You Zhang, Zhiyao Duan, ChangShui Zhang
This clarification of definition is motivated by our extensive experiments, through which we discover that existing ASD methods fail to model audio-visual synchronization and often classify unsynchronized videos as active speaking.
1 code implementation • 31 May 2022 • Peng Dai, Yiqiang Feng, Renliang Weng, ChangShui Zhang
The recent trend in multiple object tracking (MOT) is to leverage deep learning to boost tracking performance.
1 code implementation • 29 May 2022 • Xintong Yu, Hongming Zhang, Ruixin Hong, Yangqiu Song, ChangShui Zhang
In this paper, we propose VD-PCR, a novel framework to improve Visual Dialog understanding with Pronoun Coreference Resolution in both implicit and explicit ways.
3 code implementations • Findings (NAACL) 2022 • Ruixin Hong, Hongming Zhang, Xintong Yu, ChangShui Zhang
Advances on QA explanation propose to explain the answers with entailment trees composed of multiple entailment steps.
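A minimal sketch of the entailment-tree data structure described above; the field names are illustrative, not the benchmark's exact schema:

```python
# Each internal node is an entailment step whose premises (leaves or earlier
# conclusions) jointly entail its conclusion; the root explains the hypothesis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EntailmentStep:
    premises: List[str]   # facts or intermediate conclusions
    conclusion: str       # what the premises jointly entail

@dataclass
class EntailmentTree:
    hypothesis: str                                   # statement to explain
    steps: List[EntailmentStep] = field(default_factory=list)

tree = EntailmentTree(
    hypothesis="an eclipse blocks sunlight",
    steps=[EntailmentStep(
        premises=["the moon can move between the sun and the earth",
                  "an object between the sun and the earth blocks sunlight"],
        conclusion="an eclipse blocks sunlight")],
)
```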
1 code implementation • 25 Mar 2022 • Xiu Su, Shan You, Jiyang Xie, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
In BCNet, each channel is fairly trained and responsible for the same number of network widths, so each network width can be evaluated more accurately.
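One simplified reading of the bilateral idea, sketched for a fully connected layer (the output averaging and the left/right channel assignment are assumptions, not BCNet's full training procedure):

```python
# A width w is evaluated by taking the first w channels in one copy of the
# layer and the last w channels in a mirrored copy, so every channel serves
# the same number of widths.
import torch.nn as nn
import torch.nn.functional as F

class BilateralLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.left = nn.Linear(in_features, out_features)
        self.right = nn.Linear(in_features, out_features)

    def forward(self, x, width):
        y_l = F.linear(x, self.left.weight[:width], self.left.bias[:width])
        y_r = F.linear(x, self.right.weight[-width:], self.right.bias[-width:])
        return 0.5 * (y_l + y_r)   # hypothetical fusion of the two branches
```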
1 code implementation • 16 Mar 2022 • Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations.
Ranked #64 on Self-Supervised Image Classification on ImageNet
no code implementations • 28 Feb 2022 • Zhilong Liang, Zhiwei Li, Shuo Zhou, Yiwen Sun, ChangShui Zhang, Jinying Yuan
We present a new and general machine learning method for material property prediction.
no code implementations • 21 Dec 2021 • Ziang Li, Kailun Wu, Yiwen Guo, ChangShui Zhang
Drawing on theoretical insights, we advocate an error-based thresholding (EBT) mechanism for learned ISTA (LISTA), which utilizes a function of the layer-wise reconstruction error to suggest a specific threshold for each observation in the shrinkage function of each layer.
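A minimal sketch of one such layer; the linear form theta = a * ||residual|| + b is an illustrative choice of threshold function, not necessarily the paper's:

```python
import torch
import torch.nn as nn

def soft_threshold(z, theta):
    return torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)

class EBTLayer(nn.Module):
    """One LISTA-style layer whose threshold depends on the reconstruction error."""
    def __init__(self, A):
        super().__init__()
        self.A = A                                   # (m, n) dictionary
        self.W = nn.Parameter(A.t().clone())         # learned backprojection
        self.a = nn.Parameter(torch.tensor(0.1))     # threshold slope
        self.b = nn.Parameter(torch.tensor(0.01))    # threshold offset

    def forward(self, x, y):
        resid = y - x @ self.A.t()                   # per-observation residual
        theta = self.a * resid.norm(dim=1, keepdim=True) + self.b
        return soft_threshold(x + resid @ self.W.t(), theta)
```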
1 code implementation • 21 Dec 2021 • Ziang Li, Yiwen Guo, Haodi Liu, ChangShui Zhang
This paper serves as a complement and somewhat an extension to Guo et al.'s paper, by providing theoretical analyses on LinBP in neural-network-involved learning tasks, including adversarial attack and model training.
no code implementations • CVPR 2022 • Tao Huang, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
In this paper, we leverage an explicit path filter to capture the characteristics of paths and directly filter those weak ones, so that the search can be thus implemented on the shrunk space more greedily and efficiently.
no code implementations • 21 Oct 2021 • Wenzheng Hu, Zhengping Che, Ning Liu, Mingyang Li, Jian Tang, ChangShui Zhang, Jianqiang Wang
Deep convolutional neural networks have been shown to carry substantial parametric and computational redundancy in many application scenarios, and an increasing number of works have explored model pruning to obtain lightweight and efficient networks.
1 code implementation • ICCV 2021 • Mingkai Zheng, Fei Wang, Shan You, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Specifically, our proposed framework is based on two projection heads, one of which will perform the regular instance discrimination task.
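A minimal sketch of the two-head layout; the head widths and the auxiliary head's objective are assumptions:

```python
import torch.nn as nn

class TwoHeadModel(nn.Module):
    def __init__(self, backbone, feat_dim=2048, proj_dim=128):
        super().__init__()
        self.backbone = backbone
        self.head_inst = nn.Sequential(   # regular instance-discrimination head
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))
        self.head_aux = nn.Sequential(    # second head for the other objective
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.backbone(x)
        return self.head_inst(h), self.head_aux(h)
```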
no code implementations • 29 Sep 2021 • Sen Cui, Jingfeng Zhang, Jian Liang, Masashi Sugiyama, ChangShui Zhang
However, an ensemble still wastes the limited capacity of multiple models.
no code implementations • 29 Sep 2021 • Li Ziang, Yiwen Guo, Haodi Liu, ChangShui Zhang
In this paper, we study the very recent method called "linear backpropagation" (LinBP), which modifies the standard backpropagation and can improve transferability in black-box adversarial attacks.
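A minimal sketch of the core trick as applied to a ReLU: keep the nonlinearity in the forward pass but propagate gradients through it as if it were the identity (a simplified reading of LinBP):

```python
import torch

class LinBPReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0.0)   # standard ReLU forward

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out           # "linear" backward: no ReLU masking

linbp_relu = LinBPReLU.apply      # drop-in replacement for torch.relu
```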
no code implementations • 13 Sep 2021 • Weishen Pan, Sen Cui, Hongyi Wen, Kun Chen, ChangShui Zhang, Fei Wang
We empirically validated the existence of such user feedback-loop bias in real-world recommendation systems and compared the performance of our method with baseline models that either perform no de-biasing or use propensity scores estimated by other methods.
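A minimal sketch of the generic de-biasing pattern behind propensity-score methods, namely inverse-propensity-weighted training; the estimator of the propensities themselves is left abstract, and it is there that methods differ:

```python
import torch
import torch.nn.functional as F

def ips_weighted_loss(pred, label, propensity, eps=1e-6):
    """Re-weight each observed feedback by the inverse of its exposure probability."""
    per_item = F.binary_cross_entropy(pred, label, reduction="none")
    return (per_item / propensity.clamp(min=eps)).mean()
```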
1 code implementation • EMNLP 2021 • Xintong Yu, Hongming Zhang, Yangqiu Song, ChangShui Zhang, Kun Xu, Dong Yu
Resolving pronouns to their referents has long been studied as a fundamental natural language understanding problem.
1 code implementation • NeurIPS 2021 • Sen Cui, Weishen Pan, Jian Liang, ChangShui Zhang, Fei Wang
In this paper, we propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients (data sources).
1 code implementation • 18 Aug 2021 • Sen Cui, Jian Liang, Weishen Pan, Kun Chen, ChangShui Zhang, Fei Wang
Federated learning (FL) refers to the paradigm of learning models over a collaborative research network involving multiple clients without sacrificing privacy.
no code implementations • 11 Aug 2021 • Weishen Pan, Sen Cui, Jiang Bian, ChangShui Zhang, Fei Wang
Algorithmic fairness has recently attracted considerable interest in the data mining and machine learning communities.
1 code implementation • 27 Jul 2021 • Song Tang, Yan Yang, Zhiyuan Ma, Norman Hendrich, Fanyu Zeng, Shuzhi Sam Ge, ChangShui Zhang, Jianwei Zhang
To reach this goal, we construct the nearest neighborhood for every target data and take it as the fundamental clustering unit by building our objective on the geometry.
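A minimal sketch of using the nearest neighborhood as the clustering unit: predictions within each sample's k-NN neighborhood are pulled toward their consensus (cosine k-NN, k=5, and the KL objective are illustrative choices):

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency(feats, probs, k=5):
    """feats: (B, d) target features; probs: (B, C) class probabilities."""
    feats = F.normalize(feats, dim=1)
    with torch.no_grad():                      # neighbor search needs no gradient
        sim = feats @ feats.t()                # pairwise cosine similarity
        sim.fill_diagonal_(-1.0)               # exclude self-matches
        nn_idx = sim.topk(k, dim=1).indices    # each sample's k nearest neighbors
    nn_probs = probs[nn_idx].mean(dim=1)       # neighborhood consensus
    return F.kl_div(probs.log(), nn_probs.detach(), reduction="batchmean")
```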
2 code implementations • NeurIPS 2021 • Mingkai Zheng, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations.
Ranked #82 on Self-Supervised Image Classification on ImageNet
1 code implementation • 25 Jun 2021 • Xiu Su, Shan You, Jiyang Xie, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang, Chang Xu
Vision transformers (ViTs) have inherited the success of Transformers in NLP, but their structures have not been sufficiently investigated and optimized for visual tasks.
no code implementations • 11 Jun 2021 • Xiu Su, Shan You, Mingkai Zheng, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
The operation weight for each path is represented as a convex combination of items in a dictionary with a simplex code.
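A minimal sketch of that representation: a softmax turns a learnable code into simplex coefficients that mix the dictionary items into one operation weight (dictionary size and initialization are assumptions):

```python
import torch
import torch.nn as nn

class SimplexOp(nn.Module):
    def __init__(self, num_items, weight_shape):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(num_items, *weight_shape))
        self.code = nn.Parameter(torch.zeros(num_items))     # simplex logits

    def weight(self):
        coeff = torch.softmax(self.code, dim=0)              # lies on the simplex
        return torch.einsum("i,i...->...", coeff, self.dictionary)
```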
no code implementations • 29 May 2021 • Weishen Pan, ChangShui Zhang
As machine learning algorithms are adopted in an ever-increasing number of applications, interpretation has emerged as a crucial desideratum.
no code implementations • CVPR 2021 • Xiu Su, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
In BCNet, each channel is fairly trained and responsible for the same number of network widths, so each network width can be evaluated more accurately.
no code implementations • 25 Mar 2021 • Yiwen Guo, ChangShui Zhang
This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), arguably the most prominent machine learning models for large-scale data over the past decade.
1 code implementation • CVPR 2021 • Xiu Su, Tao Huang, Yanxi Li, Shan You, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
One-shot neural architecture search (NAS) methods significantly reduce the search cost by considering the whole search space as one network, which only needs to be trained once.
1 code implementation • CVPR 2021 • Peng Dai, Renliang Weng, Wongun Choi, ChangShui Zhang, Zhangping He, Wei Ding
In this paper, we propose a novel proposal-based learnable framework, which models MOT as a proposal generation, proposal scoring and trajectory inference paradigm on an affinity graph.
no code implementations • ICLR 2021 • Xiu Su, Shan You, Tao Huang, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
In this paper, to better evaluate each width, we propose a locally free weight sharing strategy (CafeNet) accordingly.
1 code implementation • 6 Feb 2021 • Haipeng Zhang, Zhong Cao, Ziang Yan, ChangShui Zhang
For visual object recognition tasks, the illumination variations can cause distinct changes in object appearance and thus confuse the deep neural network based recognition models.
Ranked #1 on Traffic Sign Recognition on TopLogo-10
no code implementations • 1 Jan 2021 • Zhong Cao, Jiang Lu, Jian Liang, ChangShui Zhang
Recently, self-supervised learning (SSL) algorithms have been applied to few-shot learning (FSL).
no code implementations • ICCV 2021 • Yuru Song, Zan Lou, Shan You, Erkun Yang, Fei Wang, Chen Qian, ChangShui Zhang, Xiaogang Wang
Concretely, we introduce a privileged parameter so that the optimization direction does not necessarily follow the gradient from the privileged tasks, but concentrates more on the target tasks.
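One plausible reading, sketched as a gradient-mixing step in which the privileged-task gradient is down-weighted by a parameter lambda (the mixing rule is an assumption, not the paper's exact mechanism):

```python
import torch

def mixed_update(params, g_target, g_priv, lam=0.2, lr=0.1):
    """g_target / g_priv: per-parameter gradients from target / privileged losses."""
    for p, gt, gp in zip(params, g_target, g_priv):
        direction = gt + lam * gp   # privileged tasks steer, but do not dominate
        p.data.add_(direction, alpha=-lr)
```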
no code implementations • 1 Jan 2021 • Weishen Pan, Sen Cui, ChangShui Zhang
In this paper, we focus on the unsupervised learning of disentanglement in a general setting which the generative factors may be correlated.
no code implementations • ICLR 2021 • Ziang Yan, Yiwen Guo, Jian Liang, ChangShui Zhang
To craft black-box adversarial examples, adversaries need to query the victim model and take proper advantage of its feedback.
no code implementations • 1 Jan 2021 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
Differentiable neural architecture search (NAS) has gained much success in discovering more flexible and diverse cell types.
1 code implementation • NeurIPS 2020 • Shangchen Du, Shan You, Xiaojie Li, Jianlong Wu, Fei Wang, Chen Qian, ChangShui Zhang
In this paper, we examine the diversity of teacher models in the gradient space and regard the ensemble knowledge distillation as a multi-objective optimization problem so that we can determine a better optimization direction for the training of student network.
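For two teachers the multi-objective view admits a classic closed form: the min-norm convex combination of the two distillation gradients is a common descent direction. A minimal sketch of that rule (more teachers would require a small QP solver):

```python
import torch

def min_norm_direction(g1, g2):
    """g1, g2: flattened student gradients from two teachers' distillation losses."""
    diff = g1 - g2
    gamma = torch.clamp((g2 - g1).dot(g2) / diff.dot(diff).clamp(min=1e-12), 0.0, 1.0)
    return gamma * g1 + (1.0 - gamma) * g2   # minimizes ||gamma*g1 + (1-gamma)*g2||
```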
1 code implementation • NeurIPS 2020 • Nan Jiang, Sheng Jin, Zhiyao Duan, ChangShui Zhang
An interaction reward model is trained on the duets formed from outer parts of Bach chorales to model counterpoint interaction, while a style reward model is trained on monophonic melodies of Chinese folk songs to model melodic patterns.
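A minimal sketch of how the two reward models could be combined during reinforcement learning; the equal weighting is an illustrative assumption:

```python
def combined_reward(interaction_model, style_model, duet, melody, w=0.5):
    """Mix the counterpoint (interaction) score with the melodic (style) score."""
    return w * interaction_model(duet) + (1.0 - w) * style_model(melody)
```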
no code implementations • 18 Nov 2020 • Tao Huang, Shan You, Yibo Yang, Zhuozhuo Tu, Fei Wang, Chen Qian, ChangShui Zhang
However, even for this consistent search, the searched cells often suffer from poor performance, especially for the supernet with fewer layers, as current DARTS methods are prone to wide and shallow cells, and this topology collapse induces sub-optimal searched cells.
no code implementations • 17 Nov 2020 • MingJie Sun, Jianguo Li, ChangShui Zhang
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures, whereas traditional robust visual features such as SIFT (scale-invariant feature transforms) are designed, in imitation of human perception, to be robust across a substantial range of affine distortion, noise addition, and other corruptions.
no code implementations • 28 Oct 2020 • Xiu Su, Shan You, Tao Huang, Hongyan Xu, Fei Wang, Chen Qian, ChangShui Zhang, Chang Xu
To deploy a well-trained CNN model on low-end computation edge devices, one usually needs to compress or prune the model under a certain computation budget (e.g., FLOPs).
1 code implementation • 26 Oct 2020 • Yuhai Song, Zhong Cao, Kailun Wu, Ziang Yan, ChangShui Zhang
The idea of unfolding iterative algorithms as deep neural networks has been widely applied in solving sparse coding problems, providing both solid theoretical analysis in convergence rate and superior empirical performance.
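A minimal sketch of the unfolding idea for ISTA: the classic update x <- soft(x + W(y - Ax)) becomes one layer of a depth-T network whose matrices and thresholds are trained end to end (per-layer untied weights are an illustrative choice):

```python
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    def __init__(self, A, T=8):
        super().__init__()
        self.A, self.T = A, T                  # A: (m, n) sensing matrix
        self.W = nn.ParameterList([nn.Parameter(A.t().clone()) for _ in range(T)])
        self.theta = nn.Parameter(torch.full((T,), 0.05))   # per-layer thresholds

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for t in range(self.T):
            z = x + (y - x @ self.A.t()) @ self.W[t].t()    # gradient-like step
            x = torch.sign(z) * torch.clamp(z.abs() - self.theta[t], min=0.0)
        return x
```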
no code implementations • 12 Oct 2020 • Jian Liang, Kun Chen, Ming Lin, ChangShui Zhang, Fei Wang
Finite mixture regression (FMR) is an effective scheme for handling sample heterogeneity, where a single regression model is not enough to capture the complexities of the conditional distribution of the observed samples given the features.
no code implementations • 6 Sep 2020 • Jiang Lu, Pinghua Gong, Jieping Ye, Jianwei Zhang, ChangShui Zhang
The ability to learn and generalize successfully from very few samples is a notable demarcation between artificial intelligence and human intelligence: humans can readily establish cognition of novel concepts from just a single example or a handful of examples, whereas machine learning algorithms typically require hundreds or thousands of supervised samples to guarantee generalization.
no code implementations • 25 Sep 2019 • Jianguo Li, MingJie Sun, ChangShui Zhang
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures, whereas traditional robust visual features such as SIFT (scale-invariant feature transforms) are designed, in imitation of human perception, to be robust across a substantial range of affine distortion, noise addition, and other corruptions.
1 code implementation • IEEE Transactions on Multimedia 2019 • Runpeng Cui, Hu Liu, ChangShui Zhang
In contrast, our proposed architecture adopts deep convolutional neural networks with stacked temporal fusion layers as the feature extraction module, and bi-directional recurrent neural networks as the sequence learning module.
Ranked #13 on Sign Language Recognition on RWTH-PHOENIX-Weather 2014
no code implementations • 27 Sep 2018 • Tianhong Li, Jianguo Li, Zhuang Liu, ChangShui Zhang
Under the assumption that both "teacher" and "student" have the same feature map sizes at each corresponding block, we add a $1\times 1$ conv-layer at the end of each block in the student-net, and align the block-level outputs between "teacher" and "student" by estimating the parameters of the added layer with limited samples.
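A minimal sketch of estimating the added layer from limited samples, here by solving a per-block least-squares problem whose solution becomes the weight of the $1\times 1$ conv (the least-squares fit is an illustrative choice for the estimation step):

```python
import torch
import torch.nn as nn

def fit_align_conv(student_feat, teacher_feat):
    """student_feat, teacher_feat: (N, C, H, W) block outputs of the same shape."""
    N, C, H, W = student_feat.shape
    X = student_feat.permute(0, 2, 3, 1).reshape(-1, C)   # (N*H*W, C)
    Y = teacher_feat.permute(0, 2, 3, 1).reshape(-1, C)
    M = torch.linalg.lstsq(X, Y).solution                 # min ||X M - Y||, (C, C)
    conv = nn.Conv2d(C, C, kernel_size=1, bias=False)
    conv.weight.data = M.t().reshape(C, C, 1, 1)          # M as a 1x1 conv weight
    return conv
```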