1 code implementation • ACL 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
Based on the analysis, we propose an efficient two-stage search algorithm, KGTuner, which explores HP configurations on a small subgraph in the first stage and transfers the top-performing configurations for fine-tuning on the large full graph in the second stage.
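A minimal sketch of the two-stage idea, under illustrative assumptions: the helper names `sample_subgraph` and `train_and_eval`, the search space, and the budgets are placeholders rather than KGTuner's actual interface.

```python
import random

# Hypothetical search space; the real HP space covers many more dimensions.
SEARCH_SPACE = {
    "lr": [1e-4, 1e-3, 1e-2],
    "dim": [100, 200, 500],
    "batch_size": [256, 512, 1024],
}

def sample_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def two_stage_search(full_graph, sample_subgraph, train_and_eval,
                     budget_stage1=100, top_k=10):
    # Stage 1: cheap exploration of many configurations on a small subgraph.
    subgraph = sample_subgraph(full_graph, ratio=0.2)
    scored = []
    for _ in range(budget_stage1):
        cfg = sample_config()
        scored.append((train_and_eval(subgraph, cfg), cfg))

    # Stage 2: transfer the top-performing configurations and fine-tune them
    # on the large full graph, keeping the one that validates best there.
    top_cfgs = [cfg for _, cfg in
                sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]]
    return max(top_cfgs, key=lambda cfg: train_and_eval(full_graph, cfg))
```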
1 code implementation • 28 Mar 2025 • Zhanke Zhou, Zhaocheng Zhu, Xuan Li, Mikhail Galkin, Xiao Feng, Sanmi Koyejo, Jian Tang, Bo Han
We showcase this advantage by adapting our tool to a lightweight verifier that evaluates the correctness of reasoning paths.
no code implementations • 26 Feb 2025 • Qizhou Wang, Jin Peng Zhou, Zhanke Zhou, Saebyeol Shin, Bo Han, Kilian Q. Weinberger
Large language models (LLMs) should undergo rigorous audits to identify potential risks, such as copyright and privacy infringements.
1 code implementation • 20 Feb 2025 • Chentao Cao, Zhun Zhong, Zhanke Zhou, Tongliang Liu, Yang Liu, Kun Zhang, Bo Han
Leveraging the zero-shot capability of pre-trained vision-language models (VLMs), this paper introduces Zero-Shot Noisy TTA (ZS-NTTA), which focuses on adapting the model to target data containing noisy samples during test time in a zero-shot manner.
1 code implementation • 19 Dec 2024 • Yajing Wang, Zongwei Luo, Jingzhe Wang, Zhanke Zhou, Yongqiang Chen, Bo Han
In SCIE, the instructions are treated as the treatment, and textual features are used to process the natural language, so that causal relationships between instructions and downstream tasks are established through these treatments.
1 code implementation • 18 Dec 2024 • Xinyu Pang, Ruixin Hong, Zhanke Zhou, Fangrui Lv, Xinwei Yang, Zhilong Liang, Bo Han, ChangShui Zhang
Physics problems constitute a significant aspect of reasoning, necessitating sophisticated reasoning ability and abundant physics knowledge.
1 code implementation • 15 Nov 2024 • Zhanke Zhou, Jianing Zhu, Fengfei Yu, Xuan Li, Xiong Peng, Tongliang Liu, Bo Han
These attacks highlight the vulnerability of neural networks and raise awareness about the risk of privacy leakage within the research community.
2 code implementations • 31 Oct 2024 • Zhanke Zhou, Rong Tao, Jianing Zhu, Yiwen Luo, Zengmao Wang, Bo Han
Here, we propose the method of contrastive denoising with noisy chain-of-thought (CD-CoT).
1 code implementation • 16 Oct 2024 • Hongduan Tian, Feng Liu, Zhanke Zhou, Tongliang Liu, Chengqi Zhang, Bo Han
However, in this paper, we find that a gap, resembling the modality gap, naturally exists between the prototype and image instance embeddings extracted from the frozen pre-trained backbone. Simply applying the same transformation during the adaptation phase constrains the exploration of optimal representations and shrinks the gap between prototype and image representations.
1 code implementation • 2 Jun 2024 • Chentao Cao, Zhun Zhong, Zhanke Zhou, Yang Liu, Tongliang Liu, Bo Han
In this paper, we propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to Envision potential Outlier Exposure, termed EOE, without access to any actual OOD data.
1 code implementation • 15 Mar 2024 • Zhanke Zhou, Yongqi Zhang, Jiangchao Yao, Quanming Yao, Bo Han
To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query.
1 code implementation • 6 Nov 2023 • Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, Bo Han
Large language models (LLMs) have achieved significant success in various applications but remain susceptible to adversarial jailbreaks that void their safety guardrails.
1 code implementation • 2 Nov 2023 • Xuan Li, Zhanke Zhou, Jiangchao Yao, Yu Rong, Lu Zhang, Bo Han
To tackle this issue, we propose a method to abstract the collective information of atomic groups into a few $\textit{Neural Atoms}$ by implicitly projecting the atoms of a molecule.
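A minimal sketch of one way such a projection could look, assuming an attention-style pooling with a few learnable queries; the module name, dimensions, and single-head design are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NeuralAtomPooling(nn.Module):
    def __init__(self, d: int = 64, num_neural_atoms: int = 4):
        super().__init__()
        # One learnable query per "neural atom".
        self.queries = nn.Parameter(torch.randn(num_neural_atoms, d))
        self.key = nn.Linear(d, d)
        self.value = nn.Linear(d, d)

    def forward(self, atom_emb: torch.Tensor) -> torch.Tensor:
        # atom_emb: (num_atoms, d) embeddings of one molecule, e.g. from a GNN layer.
        scores = self.queries @ self.key(atom_emb).T / atom_emb.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)          # (num_neural_atoms, num_atoms)
        # Each neural atom is a weighted aggregation over all atoms, so it can
        # carry group-level, longer-range information.
        return attn @ self.value(atom_emb)            # (num_neural_atoms, d)

# Usage: pool 20 atom embeddings of one molecule into 4 neural atoms.
pooled = NeuralAtomPooling()(torch.randn(20, 64))
```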
1 code implementation • NeurIPS 2023 • Zhanke Zhou, Jiangchao Yao, Jiaxu Liu, Xiawei Guo, Quanming Yao, Li He, Liang Wang, Bo Zheng, Bo Han
To address this dilemma, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
1 code implementation • 17 Oct 2023 • Wei Yao, Zhanke Zhou, Zhicong Li, Bo Han, Yong Liu
To mitigate such bias while achieving comparable accuracy, a promising approach is to introduce surrogate functions of the concerned fairness definition and solve a constrained optimization problem.
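A minimal sketch of this recipe under simple assumptions: a demographic-parity-style surrogate and a fixed penalty weight, both chosen here for illustration rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def dp_surrogate(logits: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    # Differentiable stand-in for the demographic parity gap:
    # absolute difference of mean predicted scores between two groups.
    scores = torch.sigmoid(logits)
    return (scores[group == 0].mean() - scores[group == 1].mean()).abs()

def penalized_loss(logits, labels, group, eps=0.05, lam=10.0):
    # Accuracy objective plus a penalty for violating the fairness constraint
    # dp_surrogate(logits, group) <= eps. `labels` is a float tensor in {0, 1}.
    task_loss = F.binary_cross_entropy_with_logits(logits, labels)
    violation = torch.clamp(dp_surrogate(logits, group) - eps, min=0.0)
    return task_loss + lam * violation
```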
1 code implementation • 15 Jun 2023 • Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han
Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored.
2 code implementations • 30 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, Bo Han
An important design component of GNN-based KG reasoning methods is the propagation path, which contains the set of entities involved in each propagation step.
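A minimal sketch of how such a propagation path can be grown from a query entity, assuming full breadth-first expansion for illustration; the paper's methods select the involved entities adaptively rather than expanding everything.

```python
from collections import defaultdict

def propagation_path(triples, query_entity, num_steps=3):
    # triples: iterable of (head, relation, tail) facts in the KG.
    neighbors = defaultdict(set)
    for h, _, t in triples:
        neighbors[h].add(t)

    visited = {query_entity}
    path = [{query_entity}]                 # entities involved at each step
    for _ in range(num_steps):
        frontier = set().union(*(neighbors[e] for e in path[-1])) - visited
        if not frontier:
            break
        visited |= frontier
        path.append(frontier)
    return path

# Example on a tiny KG: the path grows outward from entity "a".
kg = [("a", "r1", "b"), ("b", "r2", "c"), ("a", "r1", "d")]
print(propagation_path(kg, "a"))  # [{'a'}, {'b', 'd'}, {'c'}]
```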
2 code implementations • 5 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
While hyper-parameters (HPs) are important for knowledge graph (KG) learning, existing methods fail to search them efficiently.