no code implementations • 31 Dec 2024 • Xiaolei Wang, Xiaoyang Wang, Huihui Bai, Eng Gee Lim, Jimin Xiao
Existing unsupervised distillation-based methods rely on the differences between encoded and decoded features to locate abnormal regions in test images.
no code implementations • 11 Dec 2024 • Fan Li, Xiaoyang Wang, Dawei Cheng, Cong Chen, Ying Zhang, Xuemin Lin
iii) Current state-of-the-art dynamic graph generators are based on temporal random walks, making the simulation process time-consuming.
no code implementations • 19 Nov 2024 • Xiaoyang Wang, Xin Chen
With the fast-growing penetration of power inverter-interfaced renewable generation, power systems face significant challenges in maintaining power balance and the nominal frequency.
no code implementations • 18 Oct 2024 • Xingyu Tan, Xiaoyang Wang, Qing Liu, Xiwei Xu, Xin Yuan, Wenjie Zhang
To improve efficiency, PoG first prunes irrelevant information from the graph exploration and introduces an efficient three-step pruning technique that incorporates graph structures, LLM prompting, and a pre-trained language model (e.g., SBERT) to effectively narrow down the explored candidate paths (sketched below).
Ranked #1 on Knowledge Base Question Answering on WebQuestions
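The SBERT-based scoring step mentioned above can be illustrated with a minimal sketch; the model choice, `prune_paths` helper, and top-k cutoff are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch of semantic path pruning with a pre-trained
# sentence encoder; much simplified relative to PoG's three-step pruning.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def prune_paths(question: str, candidate_paths: list[str], top_k: int = 3) -> list[str]:
    """Keep only the top-k candidate paths most similar to the question."""
    q_emb = model.encode(question, convert_to_tensor=True)
    p_embs = model.encode(candidate_paths, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, p_embs)[0]           # cosine similarity per path
    keep = scores.argsort(descending=True)[:top_k]    # indices of the best paths
    return [candidate_paths[i] for i in keep]

paths = [
    "Barack Obama -> born_in -> Honolulu",
    "Barack Obama -> spouse -> Michelle Obama",
]
print(prune_paths("Where was Obama born?", paths, top_k=1))
```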
1 code implementation • 17 Oct 2024 • Shwai He, Tao Ge, Guoheng Sun, Bowei Tian, Xiaoyang Wang, Ang Li, Dong Yu
Traditional transformer models often allocate a fixed amount of computational resources to every input token, leading to inefficient and unnecessary computation.
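As a rough illustration of per-token dynamic compute, a Mixture-of-Depths-style router can let only the highest-scoring tokens pass through a layer; the router, capacity, and layer below are assumptions, not this paper's exact design:

```python
# Sketch: a learned router sends only the highest-scoring tokens through a
# Transformer layer; the remaining tokens skip it, saving computation.
import torch
import torch.nn as nn

class RoutedLayer(nn.Module):
    def __init__(self, d_model: int, capacity: float = 0.5):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.router = nn.Linear(d_model, 1)   # per-token score
        self.capacity = capacity              # fraction of tokens processed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.router(x).squeeze(-1)              # (batch, seq)
        k = max(1, int(self.capacity * x.size(1)))
        top = scores.topk(k, dim=1).indices              # tokens that get compute
        out = x.clone()
        for b in range(x.size(0)):
            sel = x[b, top[b]].unsqueeze(0)              # selected tokens only
            out[b, top[b]] = self.block(sel).squeeze(0)  # others skip the layer
        return out

x = torch.randn(2, 16, 64)
print(RoutedLayer(64)(x).shape)  # torch.Size([2, 16, 64])
```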
no code implementations • 9 Oct 2024 • Fan Li, Xiaoyang Wang, Dawei Cheng, Wenjie Zhang, Ying Zhang, Xuemin Lin
With growing demands for data privacy and model robustness, graph unlearning (GU), which erases the influence of specific data on trained GNN models, has gained significant attention.
no code implementations • 2 Oct 2024 • Yebowen Hu, Xiaoyang Wang, Wenlin Yao, Yiming Lu, Daoan Zhang, Hassan Foroosh, Dong Yu, Fei Liu
In this paper, we introduce DeFine, a new framework that constructs probabilistic factor profiles from complex scenarios.
1 code implementation • 12 Sep 2024 • Liqiang Jing, Zhehui Huang, Xiaoyang Wang, Wenlin Yao, Wenhao Yu, Kaixin Ma, Hongming Zhang, Xinya Du, Dong Yu
To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks.
no code implementations • 25 Jul 2024 • Jianke Yu, Hanchen Wang, Chen Chen, Xiaoyang Wang, Wenjie Zhang, Ying Zhang
However, current research overlooks the real-world scenario of incomplete graphs. To address this gap, we introduce the Robust Incomplete Deep Attack Framework (RIDA).
no code implementations • 18 Jul 2024 • Qiuyu Zhu, Liang Zhang, Qianxiong Xu, Kaijun Liu, Cheng Long, Xiaoyang Wang
Based on this structure, we propose a novel Hierarchical Heterogeneous Graph Transformer (HHGT) model, which seamlessly integrates a Type-level Transformer for aggregating nodes of different types within each k-ring neighborhood, followed by a Ring-level Transformer for aggregating different k-ring neighborhoods in a hierarchical manner.
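A toy rendering of that two-level aggregation order (attend over node types within each k-ring, then over rings); the attention modules and mean pooling are placeholders, not HHGT's exact architecture:

```python
# Illustrative two-level aggregation: Type-level attention inside each
# k-ring neighborhood, then Ring-level attention across the rings.
import torch
import torch.nn as nn

class TwoLevelAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.type_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.ring_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, rings: list[torch.Tensor]) -> torch.Tensor:
        # rings[k]: (num_types, dim) type summaries of the k-ring neighborhood
        ring_reprs = []
        for types in rings:
            t = types.unsqueeze(0)              # (1, num_types, dim)
            h, _ = self.type_attn(t, t, t)      # Type-level Transformer
            ring_reprs.append(h.mean(dim=1))    # pool types -> (1, dim)
        r = torch.stack(ring_reprs, dim=1)      # (1, num_rings, dim)
        h, _ = self.ring_attn(r, r, r)          # Ring-level Transformer
        return h.mean(dim=1).squeeze(0)         # final node embedding

rings = [torch.randn(3, 64), torch.randn(3, 64)]  # k = 1, 2
print(TwoLevelAggregator(64)(rings).shape)        # torch.Size([64])
```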
1 code implementation • 11 Jul 2024 • Miao Yan, Ping Zhang, Haofei Zhang, Ruqian Hao, Juanxiu Liu, Xiaoyang Wang, Lin Liu
In this paper, to address these issues, we apply natural language modeling to TIR tracking and propose a coordinate-aware thermal infrared tracking model called NLMTrack, which enhances the utilization of coordinate and temporal information.
no code implementations • 3 Jul 2024 • Hakan Erdol, Xiaoyang Wang, Robert Piechocki, George Oikonomou, Arjun Parekh
Advances in machine learning (ML)-based decision-making algorithms have created various research and industrial opportunities.
3 code implementations • 28 Jun 2024 • Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, Dong Yu
We propose a novel persona-driven data synthesis methodology that leverages various perspectives within a large language model (LLM) to create diverse synthetic data.
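A minimal sketch of the persona-to-prompt step; the personas, `build_prompt` helper, and LLM call are placeholders, not the released pipeline:

```python
# Sketch: steer an LLM toward many different perspectives by prefixing
# the same task with different personas, diversifying synthetic data.
personas = [
    "a pediatric nurse explaining procedures to worried parents",
    "a structural engineer who inspects aging bridges",
]

def build_prompt(persona: str, task: str) -> str:
    """Combine one persona with the data-synthesis task."""
    return f"You are {persona}. {task}"

task = "Write a challenging math word problem grounded in your daily work."
for p in personas:
    prompt = build_prompt(p, task)
    # synthetic_sample = llm.generate(prompt)  # hypothetical LLM call
    print(prompt)
```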
1 code implementation • 17 Jun 2024 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Wenlin Yao, Hassan Foroosh, Dong Yu, Fei Liu
Finally, the effectiveness of reasoning is influenced by narrative complexity, information density, and domain-specific terms, highlighting the challenges in analytical reasoning tasks.
1 code implementation • 20 Apr 2024 • Yixuan Li, Xuelin Liu, Xiaoyang Wang, Bu Sung Lee, Shiqi Wang, Anderson Rocha, Weisi Lin
Meanwhile, large multimodal models (LMMs) have exhibited immense visual-text capabilities on various tasks, bringing the potential for explainable fake image detection.
no code implementations • 19 Apr 2024 • Chia-Hsuan Chang, Xiaoyang Wang, Christopher C. Yang
By focusing on the predictive modeling of sepsis-related mortality, we propose a method that learns a performance-optimized predictive model and then employs the transfer learning process to produce a model with better fairness.
1 code implementation • 18 Apr 2024 • Fan Li, Xiaoyang Wang, Dawei Cheng, Wenjie Zhang, Ying Zhang, Xuemin Lin
Self-supervised learning (SSL) provides a promising alternative for representation learning on hypergraphs without costly labels.
no code implementations • 4 Apr 2024 • Mary M. Lucas, Xiaoyang Wang, Chia-Hsuan Chang, Christopher C. Yang, Jacqueline E. Braughton, Quyen M. Ngo
Fairness of machine learning models in healthcare has drawn increasing attention from clinicians, researchers, and even the highest levels of government.
1 code implementation • 2 Apr 2024 • Yuanyuan Lei, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Ruihong Huang, Dong Yu
To address this issue and make the summarizer express both sides of opinions, we introduce the concept of polarity calibration, which aims to align the polarity of the output summary with that of the input text.
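A toy version of that polarity-matching idea; the keyword-based `polarity` scorer is a stand-in, and the paper's calibration mechanism is more involved:

```python
# Sketch: measure how far a summary's polarity drifts from the input's.
def polarity(text: str) -> float:
    """Placeholder sentiment scorer in [-1, 1]; swap in any real model."""
    pos = sum(w in text.lower() for w in ("good", "great", "praised"))
    neg = sum(w in text.lower() for w in ("bad", "poor", "criticized"))
    return (pos - neg) / max(1, pos + neg)

def polarity_gap(source: str, summary: str) -> float:
    """Smaller is better: the summary should mirror the input's polarity."""
    return abs(polarity(source) - polarity(summary))

src = "Reviews were mixed: some praised the acting, others criticized the plot."
print(polarity_gap(src, "Critics praised the acting."))             # one-sided
print(polarity_gap(src, "Some praised it; others criticized it."))  # balanced
```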
1 code implementation • CVPR 2024 • Xiaoyang Wang, Huihui Bai, Limin Yu, Yao Zhao, Jimin Xiao
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed light on the most promising direction for the segmentation classifier to explore, which is the regions with lower density.
no code implementations • CVPR 2024 • Yizheng Gong, Siyue Yu, Xiaoyang Wang, Jimin Xiao
Based on these findings, we propose CoMasTRe by disentangling continual segmentation into two stages: forgetting-resistant continual objectness learning and well-researched continual classification.
no code implementations • 6 Mar 2024 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu
Our analytical reasoning task has large language models count how many points each team scores in each quarter of NBA and NFL games.
no code implementations • 15 Feb 2024 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Dong Yu, Fei Liu
In this paper, we introduce four novel tasks centered around sports data analytics to evaluate the numerical reasoning and information fusion capabilities of LLMs.
no code implementations • 31 Jan 2024 • Sangwoo Cho, Kaiqiang Song, Chao Zhao, Xiaoyang Wang, Dong Yu
Multi-turn dialogues are characterized by their extended length and turn-taking conversational structure.
1 code implementation • 31 Jan 2024 • Hongpeng Guo, Haotian Gu, Xiaoyang Wang, Bo Chen, Eun Kyung Lee, Tamar Eilam, Deming Chen, Klara Nahrstedt
Federated learning (FL) is a machine learning paradigm that allows multiple clients to collaboratively train a shared model while keeping their data on-premise.
1 code implementation • 22 Jan 2024 • Xinqiao Zhao, Feilong Tang, Xiaoyang Wang, Jimin Xiao
Specifically, we leverage the class prototypes that carry positive shared features and propose a Multi-Scaled Distribution-Weighted (MSDW) consistency loss for narrowing the gap between the CAMs generated through classifier weights and class prototypes during training.
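A toy consistency term between the two CAM sources gives the flavor; the multi-scale distribution weighting that gives MSDW its name is omitted here, so this is an illustration only:

```python
# Sketch: average MSE between CAMs from classifier weights and from class
# prototypes, computed at several resolutions.
import torch
import torch.nn.functional as F

def cam_consistency(cam_w: torch.Tensor, cam_p: torch.Tensor,
                    scales=(1.0, 0.5)) -> torch.Tensor:
    """Penalize disagreement between the two CAMs across scales."""
    loss = 0.0
    for s in scales:
        a = F.interpolate(cam_w, scale_factor=s, mode="bilinear",
                          align_corners=False)
        b = F.interpolate(cam_p, scale_factor=s, mode="bilinear",
                          align_corners=False)
        loss = loss + F.mse_loss(a, b)
    return loss / len(scales)

cam_w = torch.rand(2, 21, 32, 32)   # CAMs from classifier weights
cam_p = torch.rand(2, 21, 32, 32)   # CAMs from class prototypes
print(cam_consistency(cam_w, cam_p))
```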
1 code implementation • 7 Jan 2024 • Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, PengFei Liu, Dong Yu
This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions.
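The core ratio is straightforward to sketch; the per-criterion judgments below are hypothetical, and the paper pairs the metric with carefully designed checks:

```python
# Sketch of the DRFR idea: decompose an instruction into checkable
# requirements, judge each one, and report the satisfied fraction.
def drfr(criteria_results: list[bool]) -> float:
    """Fraction of decomposed requirements the response satisfies."""
    return sum(criteria_results) / len(criteria_results)

# e.g., an instruction decomposed into three checkable requirements:
results = [
    True,   # "response is in English"
    True,   # "response contains exactly three bullet points"
    False,  # "response ends with a question"
]
print(f"DRFR = {drfr(results):.2f}")  # DRFR = 0.67
```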
no code implementations • CVPR 2024 • Xiaoyang Wang, Hongping Gan
Deep unfolding networks (DUNs), renowned for their interpretability and superior performance, have invigorated the realm of compressive sensing (CS).
no code implementations • 14 Dec 2023 • Kaiqiang Song, Xiaoyang Wang, Sangwoo Cho, Xiaoman Pan, Dong Yu
This paper introduces a novel approach to enhance the capabilities of Large Language Models (LLMs) in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information.
6 code implementations • 15 Nov 2023 • Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, Dong Yu
Recognizing the need for a comprehensive evaluation of LMM chart understanding, we also propose a MultiModal Chart Benchmark (MMC-Benchmark), a comprehensive human-annotated benchmark with nine distinct tasks evaluating reasoning capabilities over charts.
1 code implementation • 30 Sep 2023 • Xuansheng Wu, Wenlin Yao, Jianshu Chen, Xiaoman Pan, Xiaoyang Wang, Ninghao Liu, Dong Yu
In this work, we investigate how instruction tuning adjusts pre-trained models, with a focus on intrinsic changes.
no code implementations • 8 Sep 2023 • Haopeng Zhang, Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Hongwei Wang, Jiawei Zhang, Dong Yu
SRI balances the importance and diversity of a subset of sentences from the source documents and can be calculated in unsupervised and adaptive manners.
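An MMR-style greedy selector gives a feel for that importance/diversity trade-off; this is a sketch under that assumption, not SRI's exact formulation:

```python
# Sketch: greedily pick sentences, trading off an importance score
# against redundancy with sentences already chosen.
import numpy as np

def select_sentences(emb: np.ndarray, importance: np.ndarray,
                     k: int = 3, lam: float = 0.7) -> list[int]:
    """Return indices of k sentences balancing importance and diversity."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T                         # cosine similarity matrix
    chosen: list[int] = []
    for _ in range(k):
        best, best_score = -1, -np.inf
        for i in range(len(importance)):
            if i in chosen:
                continue
            redundancy = max((sim[i, j] for j in chosen), default=0.0)
            score = lam * importance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

emb = np.random.rand(10, 32)   # sentence embeddings
imp = np.random.rand(10)       # e.g., centrality-style importance scores
print(select_sentences(emb, imp))
```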
no code implementations • 1 Aug 2023 • Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, Jianshu Chen
We investigate how to elicit compositional generalization capabilities in large language models (LLMs).
Ranked #34 on Math Word Problem Solving on MATH
1 code implementation • 8 Jun 2023 • Shanshan Han, Baturalp Buyukates, Zijian Hu, Han Jin, Weizhao Jin, Lichao Sun, Xiaoyang Wang, Wenxuan Wu, Chulin Xie, Yuhang Yao, Kai Zhang, Qifan Zhang, Yuhui Zhang, Carlee Joe-Wong, Salman Avestimehr, Chaoyang He
This paper introduces FedSecurity, an end-to-end benchmark that serves as a supplementary component of the FedML library for simulating adversarial attacks and corresponding defense mechanisms in Federated Learning (FL).
no code implementations • 2 Jun 2023 • Canjia Li, Xiaoyang Wang, Dongdong Li, Yiding Liu, Yu Lu, Shuaiqiang Wang, Zhicong Cheng, Simiu Gu, Dawei Yin
In this work, we focus on ranking user satisfaction rather than relevance in web search, and propose a PLM-based framework, namely SAT-Ranker, which comprehensively models different dimensions of user satisfaction in a unified manner.
no code implementations • 24 May 2023 • Yebowen Hu, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Hassan Foroosh, Fei Liu
Human preference judgments are pivotal in guiding large language models (LLMs) to produce outputs that align with human values.
no code implementations • 24 Feb 2023 • Mohammud J. Bocus, Xiaoyang Wang, Robert J. Piechocki
This paper presents a novel approach for multimodal data fusion based on the Vector-Quantized Variational Autoencoder (VQVAE) architecture.
1 code implementation • CVPR 2023 • Xiaoyang Wang, Bingfeng Zhang, Limin Yu, Jimin Xiao
Inspired by density-based unsupervised clustering, we propose to leverage feature density to locate sparse regions within feature clusters defined by label and pseudo labels.
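A rough sketch of estimating feature density with a kernel density estimator; the KDE, bandwidth, and random features are illustrative, while the paper computes density within clusters defined by labels and pseudo-labels:

```python
# Sketch: score per-pixel features by density; low-density features mark
# the sparse regions the method targets.
import numpy as np
from sklearn.neighbors import KernelDensity

features = np.random.rand(500, 16)            # per-pixel feature vectors
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(features)
log_density = kde.score_samples(features)     # log p(x) for each feature

sparse_idx = np.argsort(log_density)[:50]     # the 50 sparsest features
print(f"lowest log-density: {log_density[sparse_idx[0]]:.2f}")
```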
no code implementations • ICCV 2023 • Zheng Fang, Xiaoyang Wang, Haocheng Li, Jiejie Liu, Qiugui Hu, Jimin Xiao
In this paper, we propose a few-shot anomaly detection strategy that works in a low-data regime and can generalize across products at no cost.
1 code implementation • 19 Dec 2022 • Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, Dong Yu
Specifically, zero/few-shot and fine-tuning results show that the model pre-trained on our corpus demonstrates strong aspect- or query-focused generation ability compared with the backbone model.
1 code implementation • 28 Oct 2022 • Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Fei Liu, Dong Yu
The problem is only exacerbated by a lack of segmentation in transcripts of audio/video recordings.
Ranked #6 on Text Summarization on PubMed
1 code implementation • 22 Oct 2022 • Fei Wang, Kaiqiang Song, Hongming Zhang, Lifeng Jin, Sangwoo Cho, Wenlin Yao, Xiaoyang Wang, Muhao Chen, Dong Yu
Recent literature adds extractive summaries as guidance for abstractive summarization models to provide hints of salient content and achieves better performance.
Ranked #7 on Abstractive Text Summarization on CNN / Daily Mail
1 code implementation • 21 Oct 2022 • Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen
Large-scale pretrained language models have made significant advances in solving downstream language understanding tasks.
Ranked #2 on Visual Commonsense Tests on ViComTe-color
no code implementations • 4 Oct 2022 • Xiaoyang Wang, Dimitrios Dimitriadis, Sanmi Koyejo, Shruti Tople
Federated learning enables training high-utility models across several clients without directly sharing their private data.
no code implementations • 13 Sep 2022 • Hakan Erdol, Xiaoyang Wang, Peizheng Li, Jonathan D. Thomas, Robert Piechocki, George Oikonomou, Rui Inacio, Abdelrahim Ahmad, Keith Briggs, Shipra Kapoor
In order to provide such services, 5G systems will support various combinations of access technologies such as LTE, NR, NR-U and Wi-Fi.
no code implementations • 31 Aug 2022 • Peizheng Li, Hakan Erdol, Keith Briggs, Xiaoyang Wang, Robert Piechocki, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Angela Doufexi, Arjun Parekh
The model will also be used as the base model for adaptive training in the new environment.
no code implementations • 3 Aug 2022 • Robert J. Piechocki, Xiaoyang Wang, Mohammud J. Bocus
In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks.
no code implementations • 27 Jun 2022 • Peizheng Li, Xiaoyang Wang, Robert Piechocki, Shipra Kapoor, Angela Doufexi, Arjun Parekh
Measuring customer experience on mobile data is of utmost importance for global mobile operators.
no code implementations • 8 Jun 2022 • Peizheng Li, Jonathan Thomas, Xiaoyang Wang, Hakan Erdol, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Arjun Parekh, Angela Doufexi, Arman Shojaeifard, Robert Piechocki
One of the main reasons is the modelling gap between the simulation and the real environment, which could make the RL agent trained by simulation ill-equipped for the real environment.
1 code implementation • ACL 2022 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Summarization of podcast transcripts is of practical benefit to both content providers and consumers.
no code implementations • 12 Nov 2021 • Peizheng Li, Jonathan Thomas, Xiaoyang Wang, Ahmed Khalil, Abdelrahim Ahmad, Rui Inacio, Shipra Kapoor, Arjun Parekh, Angela Doufexi, Arman Shojaeifard, Robert Piechocki
We provide a taxonomy for the challenges faced by ML/RL models throughout the development life-cycle: from the system specification to production deployment (data acquisition, model design, testing and management, etc.).
no code implementations • 1 Nov 2021 • Zhe Zhou, Cong Li, Xuechao Wei, Xiaoyang Wang, Guangyu Sun
However, realizing efficient GNN training is challenging, especially on large graphs.
no code implementations • 29 Sep 2021 • Xiaoyang Wang, Han Zhao, Klara Nahrstedt, Oluwasanmi O Koyejo
To this end, we propose a strategy to mitigate the effect of spurious features based on our observation that the global model in the federated learning step has a low accuracy disparity due to statistical heterogeneity.
no code implementations • 8 Sep 2021 • Dan Su, Jiqiang Liu, Sencun Zhu, Xiaoyang Wang, Wei Wang, Xiangliang Zhang
In this work, we propose AppQ, a novel app quality grading and recommendation system that extracts inborn features of apps based on app source code.
1 code implementation • 24 Jul 2021 • Zhenguang Liu, Peng Qian, Xiaoyang Wang, Yuan Zhuang, Lin Qiu, Xun Wang
Then, we propose a novel temporal message propagation network to extract the graph feature from the normalized graph, and combine the graph feature with designed expert patterns to yield a final detection system.
no code implementations • 3 Mar 2021 • Xiaoyang Wang, Jonathan D Thomas, Robert J Piechocki, Shipra Kapoor, Raul Santos-Rodriguez, Arjun Parekh
Open Radio Access Network (ORAN) is being developed with an aim to democratise access and lower the cost of future mobile data networks, supporting network services with various QoS requirements, such as massive IoT and URLLC.
no code implementations • 3 Mar 2021 • Xiaoyang Wang, Chen Li, Jianqiao Zhao, Dong Yu
To facilitate the research on this corpus, we provide results of several benchmark models.
no code implementations • 15 Jan 2021 • Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt
However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.
1 code implementation • 1st Conference on Causal Learning and Reasoning 2022 • Xiaoyang Wang, Klara Nahrstedt, Oluwasanmi O Koyejo
Current approaches for learning disentangled representations assume that independent latent variables generate the data through a single data generation process.
no code implementations • 9 Nov 2020 • Kaiqiang Song, Chen Li, Xiaoyang Wang, Dong Yu, Fei Liu
Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting important segments from transcripts to serve as input to the summarizer; (ii) striking a balance between the amount and quality of training instances; (iii) the appropriate summary length and start/end points.
1 code implementation • TKDE 2020 • Dawei Cheng, Xiaoyang Wang, Ying Zhang, Liqing Zhang
But manually generating features requires domain knowledge and may lag behind the modus operandi of fraud, which means we need to automatically focus on the most relevant fraudulent behavior patterns in the online detection system.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
no code implementations • ACL 2020 • Zhenyi Wang, Xiaoyang Wang, Bang An, Dong Yu, Changyou Chen
Text generation from a knowledge base aims to translate knowledge triples to natural language descriptions.
no code implementations • 4 Oct 2019 • Hong Jiang, Jong-Hoon Ahn, Xiaoyang Wang
We will develop a theoretical framework to characterize the signals that can be robustly recovered from their observations by an ML algorithm, and establish a Lipschitz condition on signals and observations that is both necessary and sufficient for the existence of a robust recovery.
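A generic form of such a condition, for orientation only; the paper's precise statement, constants, and observation model differ:

```latex
% Illustrative Lipschitz-type robustness condition: a recovery map is
% robust on a signal class iff nearby observations imply nearby signals.
\exists\, L > 0:\quad
\| x_1 - x_2 \| \;\le\; L \,\| y_1 - y_2 \|
\qquad \text{for all signals } x_1, x_2 \text{ with observations } y_1, y_2 .
```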
no code implementations • 1 Jul 2019 • Xiaoyang Wang, Ioannis Mavromatis, Andrea Tassi, Raul Santos-Rodriguez, Robert J. Piechocki
Future Connected and Automated Vehicles (CAV), and more generally ITS, will form a highly interconnected system.
no code implementations • 9 Nov 2018 • Yang Fu, Xiaoyang Wang, Yunchao Wei, Thomas Huang
Thus, a more robust clip-level feature representation can be generated according to a weighted sum operation guided by the mined 2-D attention score matrix.
Large-Scale Person Re-Identification • Video-Based Person Re-Identification
no code implementations • 25 Jun 2018 • Shujian Yu, Xiaoyang Wang, Jose C. Principe
In this paper, a novel Hierarchical Hypothesis Testing framework with Request-and-Reverify strategy is developed to detect concept drifts by requesting labels only when necessary.
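The request-and-reverify flow can be sketched as a two-layer loop; the statistics, thresholds, and error rates below are illustrative stand-ins:

```python
# Sketch: a cheap unsupervised test flags suspect windows; labels are
# requested only then, and a supervised test confirms or rejects drift.
import numpy as np

def unsupervised_alarm(ref: np.ndarray, win: np.ndarray, thr: float = 0.5) -> bool:
    """Layer 1: flag a window whose mean drifts from the reference data."""
    return abs(win.mean() - ref.mean()) > thr

def supervised_confirm(err_ref: float, err_win: float, thr: float = 0.1) -> bool:
    """Layer 2: after requesting labels, confirm via error-rate change."""
    return err_win - err_ref > thr

ref = np.random.normal(0.0, 1.0, 1000)
win = np.random.normal(0.8, 1.0, 200)          # shifted window
if unsupervised_alarm(ref, win):
    # request labels for this window only (the costly step)...
    err_ref, err_win = 0.10, 0.32              # hypothetical error rates
    print("drift confirmed:", supervised_confirm(err_ref, err_win))
```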
no code implementations • CVPR 2015 • Xiaoyang Wang, Qiang Ji
Video event recognition still faces great challenges due to large intra-class variation and low image resolution, in particular for surveillance videos.
no code implementations • CVPR 2014 • Xiaoyang Wang, Qiang Ji
These three levels of context provide crucial bottom-up, middle-level, and top-down information that can benefit the recognition task itself.
Ranked #1 on Action Recognition on VIRAT Ground 2.0