1 code implementation • 8 Jun 2024 • Zhaoru Ke, Hang Yu, Jianguo Li, Haipeng Zhang
Current directed graph embedding methods build upon undirected techniques but often inadequately capture directed edge information, leading to challenges such as: (1) Suboptimal representations for nodes with low in/out-degrees, due to insufficient neighbor interactions; (2) Limited inductive ability for representing new nodes post-training; (3) Narrow generalizability, as training is overly coupled with specific tasks.
1 code implementation • 16 Apr 2024 • Tiannuo Yang, Wen Hu, Wangqi Peng, Yusen Li, Jianguo Li, Gang Wang, Xiaoguang Liu
However, due to the inherent characteristics of VDMS, automatic performance tuning for VDMS faces several critical challenges, which cannot be well addressed by the existing auto-tuning methods.
no code implementations • 7 Mar 2024 • Kanglei Zhou, Liyuan Wang, Xingxing Zhang, Hubert P. H. Shum, Frederick W. B. Li, Jianguo Li, Xiaohui Liang
We propose Continual AQA (CAQA) to refine models using sparse new data.
1 code implementation • 14 Nov 2023 • Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao, Zi Gong, Hang Yu, Jianguo Li, Rui Wang
In this work we systematically review the recent advancements in software engineering with language models, covering 70+ models, 40+ evaluation tasks, 180+ datasets, and 900 related works.
1 code implementation • 4 Nov 2023 • Bingchang Liu, Chaoyu Chen, Cong Liao, Zi Gong, Huan Wang, Zhichao Lei, Ming Liang, Dajun Chen, Min Shen, Hailian Zhou, Hang Yu, Jianguo Li
Code LLMs have emerged as a specialized research field, with remarkable studies dedicated to enhancing models' coding capabilities through fine-tuning pre-trained models.
1 code implementation • NeurIPS 2023 • Zelin Ni, Hang Yu, Shizhan Liu, Jianguo Li, Weiyao Lin
Bases have become an integral part of modern deep learning-based models for time series forecasting due to their ability to act as feature extractors or future references.
no code implementations • 10 Oct 2023 • Peng Di, Jianguo Li, Hang Yu, Wei Jiang, Wenting Cai, Yang Cao, Chaoyu Chen, Dajun Chen, Hongwei Chen, Liang Chen, Gang Fan, Jie Gong, Zi Gong, Wen Hu, Tingting Guo, Zhichao Lei, Ting Li, Zheng Li, Ming Liang, Cong Liao, Bingchang Liu, Jiachen Liu, Zhiwei Liu, Shaojun Lu, Min Shen, Guangpei Wang, Huan Wang, Zhi Wang, Zhaogui Xu, Jiawei Yang, Qing Ye, Gehao Zhang, Yu Zhang, Zelin Zhao, Xunjin Zheng, Hailian Zhou, Lifu Zhu, Xianying Zhu
It is specifically designed for code-related tasks with both English and Chinese prompts and supports over 40 programming languages.
1 code implementation • 7 Oct 2023 • Jun Huang, Yang Yang, Hang Yu, Jianguo Li, Xiao Zheng
The MST graph provides a virtual representation of the status and scheduling relationships among service instances of a real-world microservice system.
1 code implementation • 15 Sep 2023 • Shiyi Zhu, Jing Ye, Wei Jiang, Siqiao Xue, Qi Zhang, Yifan Wu, Jianguo Li
In fact, our work unveils anomalous behaviors between Rotary Position Embedding (RoPE) and vanilla self-attention that harm long-context extrapolation.
1 code implementation • 31 Jan 2023 • Chaoyu Chen, Hang Yu, Zhichao Lei, Jianguo Li, Shaokang Ren, Tingkai Zhang, Silin Hu, Jianchao Wang, Wenhui Shi
In particular, we propose BALANCE (BAyesian Linear AttributioN for root CausE localization), which formulates the problem of RCA through the lens of attribution in XAI and seeks to explain the anomalies in the target KPIs by the behavior of the candidate root causes.
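The attribution idea behind BALANCE can be illustrated with a deliberately simplified, non-Bayesian stand-in: fit a regularized linear model that explains the target KPI from the candidate root-cause KPIs, and rank candidates by the magnitude of their weights. The function name, the ridge regularizer, and the standardization step below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def linear_attribution(candidates, target, lam=1.0):
    """Rank candidate root causes by the weight a ridge-regularized linear
    model assigns them when explaining the target KPI.
    candidates: (T, K) array of candidate KPI time series.
    target: (T,) target KPI series."""
    # Standardize so weights are comparable across KPIs with different scales.
    X = (candidates - candidates.mean(0)) / (candidates.std(0) + 1e-12)
    y = target - target.mean()
    k = X.shape[1]
    # Closed-form ridge solution: (X^T X + lam I)^-1 X^T y.
    w = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
    return np.argsort(-np.abs(w))  # candidate indices, strongest first
```

A Bayesian treatment as in the paper would additionally place priors on the weights and report posterior uncertainty; this sketch keeps only the linear-attribution core.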
1 code implementation • 12 Oct 2022 • Hongyuan Yu, Ting Li, Weichen Yu, Jianguo Li, Yan Huang, Liang Wang, Alex Liu
In this paper, we propose Regularized Graph Structure Learning (RGSL) model to incorporate both explicit prior structure and implicit structure together, and learn the forecasting deep networks along with the graph structure.
1 code implementation • 31 May 2022 • Siqiao Xue, Chao Qu, Xiaoming Shi, Cong Liao, Shiyi Zhu, Xiaoyu Tan, Lintao Ma, Shiyu Wang, Shijun Wang, Yun Hu, Lei Lei, Yangfei Zheng, Jianguo Li, James Zhang
Predictive autoscaling (autoscaling with workload forecasting) is an important mechanism that supports autonomous adjustment of computing resources in accordance with fluctuating workload demands in the Cloud.
no code implementations • 7 Apr 2022 • He Zhou, Haibo Zhou, Jianguo Li, Kai Yang, Jianping An, Xuemin Shen
By combining the PCP and MRWP model, the distributions of distances from a typical terminal to the BSs in different tiers are derived.
3 code implementations • ICLR 2022 • Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, Schahram Dustdar
Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time.
no code implementations • CVPR 2021 • Yuang Zhang, Huanyu He, Jianguo Li, Yuxi Li, John See, Weiyao Lin
Pedestrian detection in a crowd is a challenging task due to a high number of mutually-occluding human instances, which brings ambiguity and optimization difficulties to the current IoU-based ground truth assignment procedure in classical object detection methods.
no code implementations • 17 Nov 2020 • MingJie Sun, Jianguo Li, ChangShui Zhang
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures, whereas traditional robust visual features like SIFT (scale-invariant feature transform) are designed to remain robust across a substantial range of affine distortion, added noise, etc., mimicking the nature of human perception.
1 code implementation • 17 Aug 2020 • Kean Chen, Weiyao Lin, Jianguo Li, John See, Ji Wang, Junni Zou
This paper alleviates this issue by proposing a novel framework to replace the classification task in one-stage detectors with a ranking task, and adopting the Average-Precision loss (AP-loss) for the ranking problem.
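The quantity the ranking task targets is average precision over detection scores; AP itself is non-differentiable, which is why the paper introduces an error-driven update rather than plain backpropagation. As a minimal sketch (not the paper's optimization procedure), the metric being optimized can be computed as:

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision of a ranking: sort by descending score and
    average the precision at each positive's rank.
    scores: (N,) predicted scores; labels: (N,) binary ground truth."""
    order = np.argsort(-scores)          # rank samples by score, best first
    labels = labels[order]
    cum_pos = np.cumsum(labels)          # positives seen up to each rank
    precision = cum_pos / np.arange(1, len(labels) + 1)
    return (precision * labels).sum() / labels.sum()
```

A perfect ranking (all positives ahead of all negatives) gives AP = 1; the error-driven scheme in the paper pushes score pairs that violate this ordering.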
no code implementations • 25 Sep 2019 • Jianguo Li, MingJie Sun, ChangShui Zhang
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making them non-robust to adversarial perturbations over textures, whereas traditional robust visual features like SIFT (scale-invariant feature transform) are designed to remain robust across a substantial range of affine distortion, added noise, etc., mimicking the nature of human perception.
1 code implementation • 13 Jun 2019 • Shuyuan Li, Jianguo Li, Hanlin Tang, Rui Qian, Weiyao Lin
This paper tries to fill the gap by introducing a novel large-scale dataset, the Amur Tiger Re-identification in the Wild (ATRW) dataset.
1 code implementation • CVPR 2019 • Kean Chen, Jianguo Li, Weiyao Lin, John See, Ji Wang, Ling-Yu Duan, Zhibo Chen, Changwei He, Junni Zou
For this purpose, we develop a novel optimization algorithm, which seamlessly combines the error-driven update scheme in perceptron learning and backpropagation algorithm in deep networks.
1 code implementation • CVPR 2020 • Tianhong Li, Jianguo Li, Zhuang Liu, Chang-Shui Zhang
Deep neural network compression techniques such as pruning and weight tensor decomposition usually require fine-tuning to recover the prediction accuracy when the compression ratio is high.
no code implementations • 16 Nov 2018 • You Qiaoben, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, Jun Zhu
Binary neural networks have great resource and computing efficiency, but suffer from long training procedures and non-negligible accuracy drops compared to their full-precision counterparts.
no code implementations • 27 Sep 2018 • Tianhong Li, Jianguo Li, Zhuang Liu, ChangShui Zhang
Under the assumption that both "teacher" and "student" have the same feature-map sizes at each corresponding block, we add a $1\times 1$ conv-layer at the end of each block in the student-net and align the block-level outputs between "teacher" and "student" by estimating the parameters of the added layer from limited samples.
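Because a 1×1 convolution is just a linear map over channels applied at every spatial location, its parameters can be estimated from a few samples in closed form by least squares. The sketch below is an assumed simplification of that alignment step (function names and the least-squares estimator are illustrative, not the paper's code):

```python
import numpy as np

def fit_block_alignment(student_feat, teacher_feat):
    """Estimate a 1x1 conv (channel-wise linear map) aligning a student
    block's output to the teacher's, via least squares.
    Both inputs have shape (N, C, H, W) with equal H, W (the paper's
    same-feature-map-size assumption); channel counts may differ."""
    n, cs, h, w = student_feat.shape
    ct = teacher_feat.shape[1]
    # Flatten every pixel into a row: (N*H*W, C), then solve min_W ||S W - T||^2.
    S = student_feat.transpose(0, 2, 3, 1).reshape(-1, cs)
    T = teacher_feat.transpose(0, 2, 3, 1).reshape(-1, ct)
    W, *_ = np.linalg.lstsq(S, T, rcond=None)
    return W  # shape (Cs, Ct)

def apply_alignment(student_feat, W):
    """Apply the estimated 1x1 conv to student features."""
    n, cs, h, w = student_feat.shape
    S = student_feat.transpose(0, 2, 3, 1).reshape(-1, cs)
    return (S @ W).reshape(n, h, w, -1).transpose(0, 3, 1, 2)
```

Estimating the added layer this way needs no gradient steps, which is what makes alignment feasible with limited samples.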
1 code implementation • 25 Sep 2018 • Zhiqiang Shen, Zhuang Liu, Jianguo Li, Yu-Gang Jiang, Yurong Chen, xiangyang xue
Thus, a better solution to handle these critical problems is to train object detectors from scratch, which motivates our proposed method.
1 code implementation • 16 Aug 2018 • Jianbo Guo, Yuxi Li, Weiyao Lin, Yurong Chen, Jianguo Li
Depthwise separable convolution has shown great efficiency in network design, but requires a time-consuming training procedure with the full training set available.
1 code implementation • 29 Jul 2018 • Yuxi Li, Jiuwei Li, Weiyao Lin, Jianguo Li
Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD, dedicated to resource-restricted usage.
no code implementations • CVPR 2018 • Zhou Su, Chen Zhu, Yinpeng Dong, Dongqi Cai, Yurong Chen, Jianguo Li
The second is a mechanism for handling multiple knowledge facts expanded from question-answer pairs.
7 code implementations • CVPR 2018 • Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, Jianguo Li
To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks.
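The momentum iterative scheme can be sketched in a few lines: accumulate an L1-normalized gradient into a velocity term, take sign steps, and stay within the L-infinity ball. The sketch below fuses an ensemble by averaging per-model gradients, a simple stand-in for the paper's ensemble strategy; the function name and step-size rule are illustrative assumptions.

```python
import numpy as np

def mi_fgsm(x, grad_fns, eps=0.1, steps=10, mu=1.0):
    """Momentum iterative FGSM sketch.
    grad_fns: callables returning dLoss/dx for each model in the ensemble;
    their gradients are averaged. mu is the momentum decay factor."""
    alpha = eps / steps                    # per-step L_inf budget
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = np.mean([f(x_adv) for f in grad_fns], axis=0)
        # Normalize by L1 norm before accumulating, as in MI-FGSM.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv
```

Momentum stabilizes the update direction across iterations, which is what improves transferability to black-box models.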
no code implementations • ICCV 2017 • Tao Yu, Kaiwen Guo, Feng Xu, Yuan Dong, Zhaoqi Su, Jianhui Zhao, Jianguo Li, Qionghai Dai, Yebin Liu
To reduce the ambiguities of the non-rigid deformation parameterization on the surface graph nodes, we take advantage of the internal articulated motion prior for human performance and contribute a skeleton-embedded surface fusion (SSF) method.
12 code implementations • ICCV 2017 • Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Chang-Shui Zhang
For VGGNet, a multi-pass version of network slimming gives a 20x reduction in model size and a 5x reduction in computing operations.
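Network slimming selects channels by the magnitude of their batch-normalization scaling factors, pruned against a single global threshold; a multi-pass version simply repeats train-slim cycles. A minimal sketch of the channel-selection step (function name and the quantile-based threshold are assumptions for illustration):

```python
import numpy as np

def slim_channels(bn_gammas, prune_ratio=0.7):
    """Return per-layer keep masks by thresholding |gamma|, the BN scaling
    factors that network slimming uses as channel-importance scores.
    bn_gammas: list of per-layer gamma arrays; prune_ratio: global fraction
    of channels to remove across the whole network."""
    all_g = np.concatenate([np.abs(g) for g in bn_gammas])
    thresh = np.quantile(all_g, prune_ratio)  # one global threshold
    return [np.abs(g) > thresh for g in bn_gammas]
```

Because the threshold is global, layers whose channels are uniformly unimportant get pruned more aggressively than layers with large scaling factors.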
4 code implementations • ICCV 2017 • Zhiqiang Shen, Zhuang Liu, Jianguo Li, Yu-Gang Jiang, Yurong Chen, xiangyang xue
State-of-the-art object detectors rely heavily on off-the-shelf networks pre-trained on large-scale classification datasets like ImageNet, which incurs learning bias due to the differences in both the loss functions and the category distributions between classification and detection tasks.
1 code implementation • 3 Aug 2017 • Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Jun Zhu, Hang Su
This procedure can greatly compensate for the quantization error and thus yield better accuracy for low-bit DNNs.
no code implementations • CVPR 2017 • Zhiqiang Shen, Jianguo Li, Zhou Su, Minjun Li, Yurong Chen, Yu-Gang Jiang, xiangyang xue
This paper focuses on a novel and challenging vision task, dense video captioning, which aims to automatically describe a video clip with multiple informative and diverse caption sentences.
no code implementations • 8 Sep 2015 • Jianwei Luo, Jianguo Li, Jun Wang, Zhiguo Jiang, Yurong Chen
Results show that deep attribute approaches achieve state-of-the-art results and outperform existing peer methods by a significant margin, even though some benchmarks have little overlap of concepts with the pre-trained CNN models.
no code implementations • 6 Jul 2014 • Jianguo Li, Yurong Chen
Second, the face image is further represented by patches of picked channels, and we search the over-complete patch pool to activate only the most discriminative patches.
no code implementations • CVPR 2013 • Jianguo Li, Yimin Zhang
Third, we adopt AUC as a single criterion for the convergence test during cascade training, rather than the two trade-off criteria (false-positive rate and hit rate) in the VJ framework.