Search Results for author: Denghui Zhang

Found 20 papers, 8 papers with code

Internal Activation as the Polar Star for Steering Unsafe LLM Behavior

no code implementations • 3 Feb 2025 • Peixuan Han, Cheng Qian, Xiusi Chen, Yuji Zhang, Denghui Zhang, Heng Ji

Large language models (LLMs) have demonstrated exceptional capabilities across a wide range of tasks but also pose significant risks due to their potential to generate harmful content.

EscapeBench: Pushing Language Models to Think Outside the Box

1 code implementation • 18 Dec 2024 • Cheng Qian, Peixuan Han, Qinyu Luo, Bingxiang He, Xiusi Chen, Yuji Zhang, Hongyi Du, Jiarui Yao, Xiaocheng Yang, Denghui Zhang, Yunzhu Li, Heng Ji

Language model agents excel in long-session planning and reasoning, but existing benchmarks primarily focus on goal-oriented tasks with explicit objectives, neglecting creative adaptation in unfamiliar environments.

Language Modeling • Language Modelling

Do LLMs Know to Respect Copyright Notice?

1 code implementation • 2 Nov 2024 • Jialiang Xu, Shenglan Li, Zhaozhuo Xu, Denghui Zhang

Prior studies show that LLMs sometimes generate content that violates copyright.

Measuring Copyright Risks of Large Language Model via Partial Information Probing

1 code implementation • 20 Sep 2024 • Weijie Zhao, Huajie Shao, Zhaozhuo Xu, Suzhen Duan, Denghui Zhang

Following this direction, we investigate and assess LLMs' capacity to generate infringing content when provided with partial information from copyrighted materials, and we use iterative prompting to elicit additional infringing content.
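A minimal sketch of this partial-information probing loop, assuming a hypothetical `query_model` stub in place of a real LLM endpoint and simple string overlap as the infringement signal; the paper's actual prompts and metrics may differ:

```python
from difflib import SequenceMatcher

def query_model(prompt: str) -> str:
    """Hypothetical stub; replace with a call to an actual LLM API."""
    raise NotImplementedError

def probe_copyright_risk(text: str, prefix_words: int = 50, rounds: int = 3) -> float:
    """Give the model a partial excerpt and iteratively prompt it to continue,
    returning the best overlap between its generations and the true continuation."""
    words = text.split()
    prefix = " ".join(words[:prefix_words])
    reference = " ".join(words[prefix_words:])
    context, best_overlap = prefix, 0.0
    for _ in range(rounds):
        continuation = query_model(
            f"Continue the following passage as faithfully as possible:\n{context}"
        )
        # Similarity to the held-out remainder as a rough infringement signal.
        overlap = SequenceMatcher(None, continuation, reference).ratio()
        best_overlap = max(best_overlap, overlap)
        context += " " + continuation  # iterative prompting: feed the output back in
    return best_overlap
```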

Language Modeling • Language Modelling +1

Make Graph Neural Networks Great Again: A Generic Integration Paradigm of Topology-Free Patterns for Traffic Speed Prediction

1 code implementation • 24 Jun 2024 • Yicheng Zhou, Pengfei Wang, Hao Dong, Denghui Zhang, Dingqi Yang, Yanjie Fu, Pengyang Wang

To tackle this challenge, we propose a generic model for enabling the current GNN-based methods to preserve topology-free patterns.

Boosting Urban Traffic Speed Prediction via Integrating Implicit Spatial Correlations

no code implementations • 25 Dec 2022 • Dongkun Wang, Wei Fan, Pengyang Wang, Pengfei Wang, Dongjie Wang, Denghui Zhang, Yanjie Fu

To tackle the challenge, we propose a generic model for enabling the current traffic speed prediction methods to preserve implicit spatial correlations.

Human-instructed Deep Hierarchical Generative Learning for Automated Urban Planning

no code implementations • 1 Dec 2022 • Dongjie Wang, Lingfei Wu, Denghui Zhang, Jingbo Zhou, Leilei Sun, Yanjie Fu

The third stage leverages multi-attention modules to model the zone-zone peer dependencies of the functionality projections and generate grid-level land-use configurations.
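As an illustration of the attention step, a minimal scaled dot-product self-attention over zone embeddings; `zone_attention` and the shapes are illustrative assumptions, not the paper's exact multi-attention design:

```python
import numpy as np

def zone_attention(zone_embs: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over zone embeddings (n_zones, d):
    each zone aggregates information from its peers according to pairwise affinity."""
    d = zone_embs.shape[-1]
    scores = zone_embs @ zone_embs.T / np.sqrt(d)            # zone-zone affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax over peer zones
    return weights @ zone_embs                                # peer-dependency-aware features

# Example: 4 zones with 8-dimensional functionality projections.
zones = np.random.randn(4, 8)
print(zone_attention(zones).shape)  # (4, 8)
```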

Graph Soft-Contrastive Learning via Neighborhood Ranking

no code implementations • 28 Sep 2022 • Zhiyuan Ning, Pengfei Wang, Pengyang Wang, Ziyue Qiao, Wei Fan, Denghui Zhang, Yi Du, Yuanchun Zhou

Moreover, as the neighborhood size expands exponentially with the number of hops considered, we propose neighborhood sampling strategies to improve learning efficiency.
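A minimal sketch of one common neighborhood sampling strategy (a fixed per-hop sample budget, GraphSAGE-style) that keeps the neighborhood from growing exponentially; the function and parameters are illustrative and not necessarily the strategies proposed in the paper:

```python
import random

def sample_multi_hop_neighborhood(adj, node, hops=2, per_hop=5, seed=0):
    """Sample at most `per_hop` new neighbors at each hop so the sampled
    neighborhood grows roughly linearly with depth instead of exponentially."""
    rng = random.Random(seed)
    frontier, visited = {node}, {node}
    sampled = {0: {node}}
    for h in range(1, hops + 1):
        next_frontier = set()
        for u in frontier:
            candidates = [v for v in adj.get(u, []) if v not in visited]
            picked = rng.sample(candidates, min(per_hop, len(candidates)))
            next_frontier.update(picked)
            visited.update(picked)
        sampled[h] = next_frontier
        frontier = next_frontier
    return sampled

# Example: a small graph given as an adjacency dict.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0, 5, 6], 3: [0], 4: [1], 5: [2], 6: [2]}
print(sample_multi_hop_neighborhood(adj, node=0, hops=2, per_hop=2))
```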

Contrastive Learning • Self-Supervised Learning

Learning to Walk with Dual Agents for Knowledge Graph Reasoning

1 code implementation • 23 Dec 2021 • Denghui Zhang, Zixuan Yuan, Hao Liu, Xiaodong Lin, Hui Xiong

Graph walking based on reinforcement learning (RL) has shown great success in navigating an agent to automatically complete various reasoning tasks over an incomplete knowledge graph (KG) by exploring multi-hop relational paths.
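A toy sketch of such graph walking over a small knowledge graph, with a placeholder scoring policy standing in for the learned RL policy (the graph, names, and policy are illustrative assumptions, not the paper's dual-agent method):

```python
import random

# Toy knowledge graph: head entity -> list of (relation, tail) edges.
KG = {
    "Alice": [("works_at", "AcmeCorp"), ("friend_of", "Bob")],
    "AcmeCorp": [("located_in", "Boston")],
    "Bob": [("lives_in", "Boston")],
}

def walk(start, target, max_hops=3, policy=None, seed=0):
    """Walk a multi-hop relational path from `start`, stopping if `target` is reached.
    `policy` scores candidate (relation, tail) actions; a random policy is the default."""
    rng = random.Random(seed)
    policy = policy or (lambda state, action: rng.random())
    node, path = start, []
    for _ in range(max_hops):
        actions = KG.get(node, [])
        if not actions:
            break
        relation, tail = max(actions, key=lambda a: policy(node, a))
        path.append((node, relation, tail))
        node = tail
        if node == target:
            break
    return path, node == target

print(walk("Alice", "Boston"))
```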

reinforcement-learning • Reinforcement Learning +1

Domain-oriented Language Pre-training with Adaptive Hybrid Masking and Optimal Transport Alignment

no code implementations • 1 Dec 2021 • Denghui Zhang, Zixuan Yuan, Yanchi Liu, Hao Liu, Fuzhen Zhuang, Hui Xiong, Haifeng Chen

Also, the word co-occurrence-guided semantic learning of pre-training models can be largely augmented by entity-level association knowledge.

Entity Alignment

Learning Disentangled Representations for Time Series

no code implementations • 17 May 2021 • Yuening Li, Zhengzhang Chen, Daochen Zha, Mengnan Du, Denghui Zhang, Haifeng Chen, Xia Hu

Motivated by the success of disentangled representation learning in computer vision, we study the possibility of learning semantic-rich time-series representations, which remains unexplored due to three main challenges: 1) the sequential data structure introduces complex temporal correlations and makes the latent representations hard to interpret, 2) sequential models suffer from the KL vanishing problem, and 3) interpretable semantic concepts for time series often rely on multiple factors instead of individual ones.

Disentanglement • Time Series +1

T$^2$-Net: A Semi-supervised Deep Model for Turbulence Forecasting

no code implementations • 26 Oct 2020 • Denghui Zhang, Yanchi Liu, Wei Cheng, Bo Zong, Jingchao Ni, Zhengzhang Chen, Haifeng Chen, Hui Xiong

Accurate air turbulence forecasting can help airlines avoid hazardous turbulence, guide the routes that keep passengers safe, maximize efficiency, and reduce costs.

Decoder

Job2Vec: Job Title Benchmarking with Collective Multi-View Representation Learning

no code implementations • 16 Sep 2020 • Denghui Zhang, Junming Liu, HengShu Zhu, Yanchi Liu, Lichen Wang, Pengyang Wang, Hui Xiong

However, it is still a challenging task since (1) the job title and job transition (job-hopping) data are messy, containing many subjective and non-standard naming conventions for the same position (e.g., Programmer, Software Development Engineer, SDE, Implementation Engineer), (2) a large amount of title/transition information is missing, and (3) each talent seeks only a limited number of jobs, which introduces incompleteness and randomness into the modeling of job transition patterns.

Benchmarking • Link Prediction +2

E-BERT: A Phrase and Product Knowledge Enhanced Language Model for E-commerce

no code implementations • 7 Sep 2020 • Denghui Zhang, Zixuan Yuan, Yanchi Liu, Fuzhen Zhuang, Haifeng Chen, Hui Xiong

Pre-trained language models such as BERT have achieved great success in a broad range of natural language processing tasks.

Aspect Extraction • Denoising +5

Path-Based Attention Neural Model for Fine-Grained Entity Typing

no code implementations • 29 Oct 2017 • Denghui Zhang, Pengshan Cai, Yantao Jia, Manling Li, Yuanzhuo Wang, Xue-Qi Cheng

Fine-grained entity typing aims to assign types arranged in a hierarchical structure to entity mentions in free text.
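A small illustration of what hierarchical types look like, assuming slash-separated type paths where a fine-grained type implies all of its ancestors (the type inventory here is hypothetical, not the paper's):

```python
# Toy hierarchical type inventory, written as slash-separated paths.
TYPE_PATHS = [
    "/person",
    "/person/artist",
    "/person/artist/musician",
    "/organization",
    "/organization/company",
]

def expand_to_ancestors(type_path):
    """A fine-grained type implies all of its ancestors in the hierarchy,
    e.g. /person/artist/musician also yields /person/artist and /person."""
    parts = type_path.strip("/").split("/")
    return ["/" + "/".join(parts[: i + 1]) for i in range(len(parts))]

# A mention typed as a musician is labeled with the whole path.
print(expand_to_ancestors("/person/artist/musician"))
# ['/person', '/person/artist', '/person/artist/musician']
```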

Entity Typing

Efficient Parallel Translating Embedding For Knowledge Graphs

1 code implementation • 30 Mar 2017 • Denghui Zhang, Manling Li, Yantao Jia, Yuanzhuo Wang, Xue-Qi Cheng

Knowledge graph embedding aims to embed entities and relations of knowledge graphs into low-dimensional vector spaces.
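For context, the translating-embedding principle underlying this family of models scores a triple by how well h + r ≈ t holds in the vector space; a minimal sketch of that scoring function (the paper itself focuses on parallelizing the training of such embeddings):

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Translation-based score: a plausible triple should satisfy h + r ≈ t,
    so a smaller L2 distance means a more plausible fact."""
    return float(np.linalg.norm(h + r - t))

rng = np.random.default_rng(0)
dim = 16
h, r = rng.normal(size=dim), rng.normal(size=dim)
t_true = h + r + 0.01 * rng.normal(size=dim)   # near-perfect translation
t_false = rng.normal(size=dim)                 # unrelated tail entity
print(transe_score(h, r, t_true) < transe_score(h, r, t_false))  # True
```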

Knowledge Graph Embedding • Knowledge Graphs +2
