no code implementations • Findings (EMNLP) 2021 • Zixuan Zhang, Hongwei Wang, Han Zhao, Hanghang Tong, Heng Ji
Relations in most traditional knowledge graphs (KGs) only reflect static and factual connections, but fail to represent the dynamic activities and state changes of entities.
1 code implementation • NAACL (ACL) 2022 • Xinya Du, Zixuan Zhang, Sha Li, Pengfei Yu, Hongwei Wang, Tuan Lai, Xudong Lin, Ziqi Wang, Iris Liu, Ben Zhou, Haoyang Wen, Manling Li, Darryl Hannan, Jie Lei, Hyounghun Kim, Rotem Dror, Haoyu Wang, Michael Regan, Qi Zeng, Qing Lyu, Charles Yu, Carl Edwards, Xiaomeng Jin, Yizhu Jiao, Ghazaleh Kazeminejad, Zhenhailong Wang, Chris Callison-Burch, Mohit Bansal, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, Martha Palmer, Heng Ji
We introduce RESIN-11, a new schema-guided event extraction & prediction framework that can be applied to a large variety of newsworthy scenarios.
no code implementations • 19 Jan 2025 • Chenlu Zhan, Yufei Zhang, Yu Lin, Gaoang Wang, Hongwei Wang
While advanced techniques like radiance fields and 3D Gaussian Splatting achieve high rendering quality and impressive efficiency with dense view inputs, they suffer from significant geometric reconstruction errors when applied to sparse input views.
1 code implementation • 16 Jan 2025 • Hanrong Zhang, Yifei Yao, Zixuan Wang, Jiayuan Su, Mengxuan Li, Peng Peng, Hongwei Wang
Class-incremental fault diagnosis requires a model to adapt to new fault classes while retaining previous knowledge.
1 code implementation • 28 Dec 2024 • Zhaopeng Feng, Jiayuan Su, Jiamei Zheng, Jiahan Ren, Yan Zhang, Jian Wu, Hongwei Wang, Zuozhu Liu
Recent advancements in large language models (LLMs) have given rise to the LLM-as-a-judge paradigm, showcasing their potential to deliver human-like judgments.
1 code implementation • 15 Dec 2024 • Kairong Yu, Tianqing Zhang, Hongwei Wang, Qi Xu
Spiking Neural Networks (SNNs) are emerging as a promising alternative to Artificial Neural Networks (ANNs) due to their inherent energy efficiency. Owing to the inherent sparsity in spike generation within SNNs, the in-depth analysis and optimization of intermediate output spikes are often neglected. This oversight significantly restricts the inherent energy efficiency of SNNs and diminishes their advantages in spatiotemporal feature extraction, resulting in a lack of accuracy and unnecessary energy expenditure. In this work, we analyze the inherent spiking characteristics of SNNs from both temporal and spatial perspectives. In terms of spatial analysis, we find that shallow layers tend to focus on learning vertical variations, while deeper layers gradually learn horizontal variations of features. Regarding temporal analysis, we observe that there is not a significant difference in feature learning across different time steps. This suggests that increasing the time steps has a limited effect on feature learning. Based on the insights derived from these analyses, we propose a Frequency-based Spatial-Temporal Attention (FSTA) module to enhance feature learning in SNNs. This module aims to improve the feature learning capabilities by suppressing redundant spike features. The experimental results indicate that the introduction of the FSTA module significantly reduces the spike firing rate of SNNs, demonstrating superior performance compared to state-of-the-art baselines across multiple datasets. Our source code is available at https://github.com/yukairong/FSTA-SNN.
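The abstract does not spell out the FSTA module itself; as a rough PyTorch sketch of the general idea of gating channels by their frequency content (tensor shapes and layer sizes are assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class FrequencyChannelAttention(nn.Module):
    """Toy sketch: weight the channels of a spike feature map by the magnitude
    of their spatial-frequency content. Purely illustrative; not the FSTA module."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: [T, B, C, H, W] spikes
        freq = torch.fft.rfft2(x.float(), dim=(-2, -1))    # spatial FFT per channel
        energy = freq.abs().mean(dim=(-2, -1))             # [T, B, C] frequency energy
        weights = self.fc(energy)                          # per-channel gates in (0, 1)
        return x * weights.unsqueeze(-1).unsqueeze(-1)

spikes = (torch.rand(4, 2, 8, 16, 16) > 0.8).float()       # T=4 time steps of binary spikes
attn = FrequencyChannelAttention(channels=8)
print(attn(spikes).shape)                                  # torch.Size([4, 2, 8, 16, 16])
```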
no code implementations • 25 Nov 2024 • Feifei Shao, Ping Liu, Zhao Wang, Yawei Luo, Hongwei Wang, Jun Xiao
We identify inter-task and intra-task sensitivity issues in current ICL methods for PCP, which we attribute to inflexible sampling strategies lacking context adaptation at the point and prompt levels.
1 code implementation • 18 Nov 2024 • Mengxuan Li, Ke Liu, Hongyang Chen, Jiajun Bu, Hongwei Wang, Haishuai Wang
Specifically, we adopt INR to parameterize time series data as a continuous function and employ a transformer-based architecture to predict the INR of given data.
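As a minimal sketch of the INR half of that sentence, an MLP can parameterize a single series as a continuous function of time; the transformer that predicts INR parameters for new series is not shown, and all names below are illustrative:

```python
import torch
import torch.nn as nn

# Fit one observed series with an implicit neural representation f(t) -> value.
series = torch.sin(torch.linspace(0, 6.28, 100)) + 0.1 * torch.randn(100)
t = torch.linspace(0, 1, 100).unsqueeze(-1)                # normalized time coordinates

inr = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((inr(t).squeeze(-1) - series) ** 2).mean()
    loss.backward()
    opt.step()

# Because the fit is a continuous function, it can be queried off-grid,
# e.g. at twice the original sampling resolution.
t_fine = torch.linspace(0, 1, 200).unsqueeze(-1)
upsampled = inr(t_fine).squeeze(-1)
```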
no code implementations • 15 Nov 2024 • Guodong Sun, Qixiang Ma, Liqiang Zhang, Hongwei Wang, Zixuan Gao, Haotian Zhang
Atmospheric turbulence introduces severe spatial and geometric distortions, challenging traditional image restoration methods.
no code implementations • 24 Oct 2024 • Jiashun Cheng, Zinan Zheng, Yang Liu, Jianheng Tang, Hongwei Wang, Yu Rong, Jia Li, Fugee Tsung
Graph Anomaly Detection (GAD) is a challenging and practical research topic where Graph Neural Networks (GNNs) have recently shown promising results.
1 code implementation • 14 Oct 2024 • Di Wu, Hongwei Wang, Wenhao Yu, Yuwei Zhang, Kai-Wei Chang, Dong Yu
Recent large language model (LLM)-driven chat assistant systems have integrated memory components to track user-assistant chat histories, enabling more accurate and personalized responses.
no code implementations • 10 Oct 2024 • Hanrong Zhang, Xinyue Wang, Jiabao Pan, Hongwei Wang
We prove the feasibility of the semi-automatic KG construction method on the SAKA platform.
1 code implementation • 3 Oct 2024 • Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang
Our benchmark results reveal critical vulnerabilities in different stages of agent operation, including system prompt, user prompt handling, tool usage, and memory retrieval, with the highest average attack success rate reaching 84.30%, while current defenses show only limited effectiveness, highlighting important work still to be done on agent security for the community.
1 code implementation • 16 Sep 2024 • Hongming Zhang, Xiaoman Pan, Hongwei Wang, Kaixin Ma, Wenhao Yu, Dong Yu
Cognitive Kernel adopts a model-centric design.
no code implementations • 13 Aug 2024 • Hongwei Wang, Jun Fang, Hongbin Li, Geert Leus
To overcome the noise sensitivity of higher-order difference-based methods, we explore the properties of the first-order difference of modulo samples, and develop two line spectral estimation algorithms based on first-order difference, which are robust against noise.
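A small numerical sketch of the property such methods rely on: when the signal changes by less than the modulo range between samples, wrapping the first-order difference of the folded samples recovers the true difference (parameter names below are assumptions, not the paper's notation):

```python
import numpy as np

lam = 1.0                                       # modulo threshold
n = np.arange(200)
x = 3.0 * np.cos(2 * np.pi * 0.01 * n)          # unfolded signal (exceeds the threshold)
y = np.mod(x + lam, 2 * lam) - lam              # modulo-folded (self-reset) samples

dy = np.diff(y)
dy_wrapped = np.mod(dy + lam, 2 * lam) - lam    # wrap differences back into [-lam, lam)
dx = np.diff(x)

print(np.max(np.abs(dy_wrapped - dx)))          # ~0 whenever |dx| < lam
```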
no code implementations • 1 Jul 2024 • Jiabao Pan, Yan Zhang, Chen Zhang, Zuozhu Liu, Hongwei Wang, Haizhou Li
Large language models (LLMs) have demonstrated emergent capabilities across diverse reasoning tasks via popular Chains-of-Thought (COT) prompting.
1 code implementation • 25 Jun 2024 • Tingyu Xie, Jian Zhang, Yan Zhang, Yuanyuan Liang, Qi Li, Hongwei Wang
The strong capability of large language models (LLMs) has been applied to information extraction (IE) through either retrieval augmented prompting or instruction tuning (IT).
no code implementations • 17 Jun 2024 • Zhonghan Zhao, Wenhao Chai, Xuan Wang, Ke Ma, Kewei Chen, Dongxu Guo, Tian Ye, Yanting Zhang, Hongwei Wang, Gaoang Wang
We begin our exploration with a vanilla large language model, augmenting it with a vision encoder and an action codebase trained on our collected high-quality dataset STEVE-21K.
1 code implementation • 23 May 2024 • Hanrong Zhang, Zhenting Wang, Tingxu Han, Mingyu Jin, Chenlu Zhan, Mengnan Du, Hongwei Wang, Shiqing Ma
In this paper, we propose an imperceptible and effective backdoor attack against self-supervised models.
no code implementations • 15 May 2024 • Shurong Wang, Yufei Zhang, Xuliang Huang, Hongwei Wang
Our results underscore the effectiveness of human-designed DP in the task of LP, emphasizing the pivotal role of collaboration between humans and AI on KG.
no code implementations • 8 Apr 2024 • Hongwei Wang, Jun Fang, Huiping Duan, Hongbin Li
In this paper, we consider the problem of hybrid near/far-field channel estimation by taking spherical wave propagation into account.
no code implementations • 6 Apr 2024 • Zhonghan Zhao, Ke Ma, Wenhao Chai, Xuan Wang, Kewei Chen, Dongxu Guo, Yanting Zhang, Hongwei Wang, Gaoang Wang
After distillation, embodied agents can complete complex, open-ended tasks without additional expert guidance, utilizing the performance and knowledge of a versatile MLM.
no code implementations • 30 Mar 2024 • Ben Zhou, Hongming Zhang, Sihao Chen, Dian Yu, Hongwei Wang, Baolin Peng, Dan Roth, Dong Yu
Conceptual reasoning, the ability to reason in abstract and high-level perspectives, is key to generalization in human cognition.
no code implementations • CVPR 2024 • Chenlu Zhan, Yu Lin, Gaoang Wang, Hongwei Wang, Jian Wu
Medical generative models, acknowledged for their high-quality sample generation ability, have accelerated the fast growth of medical applications.
no code implementations • 2 Mar 2024 • Jiayuan Su, Jing Luo, Hongwei Wang, Lu Cheng
This study aims to address the pervasive challenge of quantifying uncertainty in large language models (LLMs) without logit-access.
no code implementations • 18 Dec 2023 • Chenlu Zhan, Yufei Zhang, Yu Lin, Gaoang Wang, Hongwei Wang
Medical vision-language pre-training (Med-VLP) models have recently accelerated the fast-growing field of medical diagnostic applications.
3 code implementations • 11 Dec 2023 • Tong Chen, Hongwei Wang, Sihao Chen, Wenhao Yu, Kaixin Ma, Xinran Zhao, Hongming Zhang, Dong Yu
We discover that the retrieval unit choice significantly impacts the performance of both retrieval and downstream tasks.
no code implementations • 15 Nov 2023 • Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, Dong Yu
In response to these challenges, we introduce Chain-of-Noting (CoN), a novel approach aimed at improving the robustness of RALMs in facing noisy, irrelevant documents and in handling unknown scenarios.
1 code implementation • 15 Nov 2023 • Tingyu Xie, Qi Li, Yan Zhang, Zuozhu Liu, Hongwei Wang
Exploring the application of powerful large language models (LLMs) on the named entity recognition (NER) task has drawn much attention recently.
1 code implementation • 7 Nov 2023 • Sihao Chen, Hongming Zhang, Tong Chen, Ben Zhou, Wenhao Yu, Dian Yu, Baolin Peng, Hongwei Wang, Dan Roth, Dong Yu
We introduce sub-sentence encoder, a contrastively-learned contextual embedding model for fine-grained semantic representation of text.
no code implementations • 23 Oct 2023 • Hongwei Wang, Hongming Zhang, Dong Yu
Therefore, we propose a two-step training method for sentence representation learning models, wherein the encoder and the pooler are optimized separately to mitigate the overall performance loss in low-dimension scenarios.
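A rough sketch of the two-step idea, with placeholder models and a generic contrastive objective rather than the paper's actual architecture:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(300, 768), nn.ReLU(), nn.Linear(768, 768))
pooler = nn.Linear(768, 64)                    # low-dimensional sentence vector

def contrastive_loss(a, b, temperature=0.05):
    a = nn.functional.normalize(a, dim=-1)
    b = nn.functional.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    return nn.functional.cross_entropy(logits, torch.arange(a.size(0)))

x1, x2 = torch.randn(32, 300), torch.randn(32, 300)   # two views of a batch

# Step 1: optimize the encoder only, in the full embedding dimension.
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_enc.zero_grad()
contrastive_loss(encoder(x1), encoder(x2)).backward()
opt_enc.step()

# Step 2: freeze the encoder and optimize only the low-dimensional pooler.
for p in encoder.parameters():
    p.requires_grad_(False)
opt_pool = torch.optim.Adam(pooler.parameters(), lr=1e-4)
opt_pool.zero_grad()
with torch.no_grad():
    h1, h2 = encoder(x1), encoder(x2)
contrastive_loss(pooler(h1), pooler(h2)).backward()
opt_pool.step()
```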
no code implementations • 20 Oct 2023 • Zixuan Wang, Haoran Tang, Haibo Wang, Bo Qin, Mark D. Butala, Weiming Shen, Hongwei Wang
Despite the remarkable results that can be achieved by data-driven intelligent fault diagnosis techniques, they presuppose the same distribution of training and test data as well as sufficient labeled data.
1 code implementation • 16 Oct 2023 • Tingyu Xie, Qi Li, Jian Zhang, Yan Zhang, Zuozhu Liu, Hongwei Wang
Large language models (LLMs) have exhibited powerful capabilities in various natural language processing tasks.
2 code implementations • 6 Oct 2023 • Abe Bohan Hou, Jingyu Zhang, Tianxing He, Yichen Wang, Yung-Sung Chuang, Hongwei Wang, Lingfeng Shen, Benjamin Van Durme, Daniel Khashabi, Yulia Tsvetkov
Existing watermarking algorithms are vulnerable to paraphrase attacks because of their token-level design.
1 code implementation • 15 Sep 2023 • Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiaoman Pan, Wenhao Yu, Dong Yu
We evaluate our proposed LLM Agent with State-Space ExploRation (LASER) on both the WebShop task and amazon.com.
no code implementations • 8 Sep 2023 • Haopeng Zhang, Sangwoo Cho, Kaiqiang Song, Xiaoyang Wang, Hongwei Wang, Jiawei Zhang, Dong Yu
SRI balances the importance and diversity of a subset of sentences from the source documents and can be calculated in unsupervised and adaptive manners.
no code implementations • 30 Aug 2023 • Jun Li, Jingjian Wang, Hongwei Wang, Xing Deng, Jielong Chen, Bing Cao, Zekun Wang, Guanjie Xu, Ge Zhang, Feng Shi, Hualei Liu
(ii) Integrate Network (IN) builds a new integrated sequence by utilizing spatial-temporal interaction on MSS, and captures a comprehensive spatial-temporal representation by modeling the integrated sequence with a complex attention mechanism.
no code implementations • 27 Jun 2023 • Xingyue Wang, Hanrong Zhang, Xinlong Qiao, Ke Ma, Shuting Tao, Peng Peng, Hongwei Wang
Additionally, a unified fault diagnosis method based on internal contrastive learning and Mahalanobis distance is put forward to underpin the proposed generalized framework.
no code implementations • 26 Jun 2023 • Zixuan Wang, Bo Qin, Mengxuan Li, Chenlu Zhan, Mark D. Butala, Peng Peng, Hongwei Wang
The proposed method employs cosine similarity to identify hard samples and subsequently, leverages supervised contrastive learning to learn more discriminative representations by constructing hard sample pairs.
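The exact pairing strategy is not reproduced here, but a minimal sketch of the two ingredients named above (cosine-similarity-based hard-sample mining followed by a supervised contrastive loss) could look like this, with made-up shapes and thresholds:

```python
import torch
import torch.nn.functional as F

feats = F.normalize(torch.randn(64, 128), dim=-1)       # embeddings of a batch
labels = torch.randint(0, 4, (64,))

sim = feats @ feats.t()                                  # cosine similarity matrix
diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)
hardness = sim.masked_fill(~diff_class, -1.0).max(dim=1).values  # closest other-class sample
hard_idx = hardness.topk(16).indices                     # keep the 16 hardest samples

def supcon_loss(z, y, temperature=0.1):
    z = F.normalize(z, dim=-1)
    logits = z @ z.t() / temperature
    logits.fill_diagonal_(-1e9)                          # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    pos.fill_diagonal_(0)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

loss = supcon_loss(feats[hard_idx], labels[hard_idx])
```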
no code implementations • 22 May 2023 • Siyi Liu, Hongming Zhang, Hongwei Wang, Kaiqiang Song, Dan Roth, Dong Yu
However, none of the existing methods have explicitly addressed the issue of framing bias that is inherent in news articles.
no code implementations • 1 Mar 2023 • Wenhao Hu, Yingying Liu, Xuanyu Chen, Wenhao Chai, Hangyue Chen, Hongwei Wang, Gaoang Wang
With the development of computer-assisted techniques, research communities in biochemistry and deep learning have devoted themselves to the drug discovery field for over a decade.
no code implementations • CVPR 2023 • Hao Ren, Shoudong Han, Huilin Ding, Ziwen Zhang, Hongwei Wang, Faquan Wang
This fine-grained representation requires high feature resolution and precise semantic information.
no code implementations • 13 Feb 2023 • Mengxuan Li, Peng Peng, Min Wang, Hongwei Wang
The novelty of HDLCNN lies in its capability to process tabular data with features in arbitrary order without seeking the optimal order, owing to the ability of feature clustering to agglomerate correlated features and the large receptive field of dilated convolution.
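A toy sketch of those two ingredients, under assumed details: hierarchically cluster the feature correlation matrix so correlated columns sit next to each other, then run a 1-D dilated convolution over the reordered feature vector (not the HDLCNN architecture itself):

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.cluster.hierarchy import linkage, leaves_list

X = np.random.randn(500, 32)                             # 500 samples, 32 tabular features
corr = np.corrcoef(X, rowvar=False)
dist = 1.0 - np.abs(corr)                                 # correlated features -> small distance
order = leaves_list(linkage(dist[np.triu_indices(32, k=1)], method='average'))
X_ordered = X[:, order]                                   # correlated features now adjacent

conv = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, dilation=1, padding=1),
    nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, dilation=2, padding=2),  # dilation enlarges receptive field
    nn.ReLU(),
)
feats = conv(torch.tensor(X_ordered, dtype=torch.float32).unsqueeze(1))
print(feats.shape)                                        # torch.Size([500, 16, 32])
```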
no code implementations • 12 Feb 2023 • Peng Peng, Hanrong Zhang, Mengxuan Li, Gongzhuang Peng, Hongwei Wang, Weiming Shen
Finally, the model decision is biased toward the new classes due to the class imbalance.
no code implementations • 3 Feb 2023 • Mengxuan Li, Peng Peng, Jingxin Zhang, Hongwei Wang, Weiming Shen
The comprehensive results demonstrate that the proposed SCCAM method can achieve better performance compared with the state-of-the-art methods on fault classification and root cause analysis.
no code implementations • 14 Jan 2023 • Jiacheng He, Hongwei Wang, Gang Wang, Shan Zhong, Bei Peng
Outliers and impulsive disturbances often cause heavy-tailed distributions in practical applications, and these will degrade the performance of Gaussian approximation smoothing algorithms.
no code implementations • 21 Dec 2022 • Chenlu Zhan, Peng Peng, Hongsen Wang, Tao Chen, Hongwei Wang
Moreover, for grasping the unified semantic representation, we extend the adversarial masking data augmentation to the contrastive representation learning of vision and text in a unified manner.
no code implementations • 19 Oct 2022 • Shuting Tao, Peng Peng, Qi Li, Hongwei Wang
To solve this problem, we propose a Supervised Contrastive Learning (SCL) method with Tree-structured Parzen Estimator (TPE) technique for imbalanced tabular datasets.
no code implementations • 4 Oct 2022 • Xi Zheng, Jun Fang, Hongwei Wang, Peilan Wang, Hongbin Li
Also, by utilizing the singular value decomposition-like structure of the effective channel, this paper develops a joint active and passive beamforming method based on the estimated cascade channels.
no code implementations • 6 Jun 2022 • Hongwei Wang, Zixuan Zhang, Sha Li, Jiawei Han, Yizhou Sun, Hanghang Tong, Joseph P. Olive, Heng Ji
Existing link prediction or graph completion methods have difficulty dealing with event graphs because they are usually designed for a single large graph such as a social network or a knowledge graph, rather than multiple small dynamic event graphs.
no code implementations • 17 Apr 2022 • Hongwei Wang, Jun Fang, Huiping Duan, Hongbin Li
We consider the problem of spatial channel covariance matrix (CCM) estimation for intelligent reflecting surface (IRS)-assisted millimeter wave (mmWave) communication systems.
1 code implementation • ICLR 2022 • Hongwei Wang, Weijiang Li, Xiaomeng Jin, Kyunghyun Cho, Heng Ji, Jiawei Han, Martin D. Burke
Molecule representation learning (MRL) methods aim to embed molecules into a real vector space.
no code implementations • 10 Jul 2021 • Hongwei Wang, Lantao Yu, Zhangjie Cao, Stefano Ermon
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems.
no code implementations • 10 May 2021 • En Yu, Zhuoling Li, Shoudong Han, Hongwei Wang
Existing online multiple object tracking (MOT) algorithms often consist of two subtasks, detection and re-identification (ReID).
no code implementations • 10 Sep 2020 • Shoudong Han, Piao Huang, Hongwei Wang, En Yu, Donghaisheng Liu, Xiaofeng Pan, Jun Zhao
Modern multi-object tracking (MOT) systems usually model the trajectories by associating per-frame detections.
2 code implementations • NeurIPS 2020 • Pan Li, Yanbang Wang, Hongwei Wang, Jure Leskovec
DE captures the distance between the node set whose representation is to be learned and each node in the graph.
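As a minimal illustration of that sentence (shortest-path distances only; the paper also covers random-walk landing probabilities), a distance encoding with respect to a target node set can be computed as follows; the graph and node set are arbitrary examples:

```python
import networkx as nx

G = nx.karate_club_graph()
S = [0, 33]                                   # node set whose representation is to be learned

def distance_encoding(G, S, max_dist=5):
    enc = {}
    for s in S:
        d = nx.single_source_shortest_path_length(G, s, cutoff=max_dist)
        for v in G.nodes:
            enc.setdefault(v, []).append(d.get(v, max_dist + 1))
    return enc                                # node -> [distance to each node in S]

enc = distance_encoding(G, S)
print(enc[7])                                 # distances from node 7 to nodes 0 and 33
```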
no code implementations • 16 Mar 2020 • Piao Huang, Shoudong Han, Jun Zhao, Donghaisheng Liu, Hongwei Wang, En Yu, Alex ChiChung Kot
Modern multi-object tracking (MOT) systems usually involve separate modules, such as a motion model for localization and an appearance model for data association.
4 code implementations • 17 Feb 2020 • Hongwei Wang, Hongyu Ren, Jure Leskovec
Specifically, two kinds of neighborhood topology are modeled for a given entity pair under the relational message passing framework: (1) Relational context, which captures the relation types of edges adjacent to the given entity pair; (2) Relational paths, which characterize the relative position between the given two entities in the knowledge graph.
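A tiny illustration of the two signals on a toy knowledge graph (the relational message passing network that aggregates them is not shown; the graph and relation names are made up):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge("alice", "acme", relation="works_for")
G.add_edge("acme", "nyc", relation="located_in")
G.add_edge("alice", "nyc", relation="lives_in")
UG = G.to_undirected()

head, tail = "alice", "nyc"

# (1) Relational context: relation types on edges adjacent to head and tail.
context = {n: [d["relation"] for _, _, d in UG.edges(n, data=True)] for n in (head, tail)}

# (2) Relational paths: relation sequences along paths connecting the pair.
paths = [[UG[a][b]["relation"] for a, b in zip(p, p[1:])]
         for p in nx.all_simple_paths(UG, head, tail, cutoff=3)]

print(context)
print(paths)   # e.g. [['lives_in'], ['works_for', 'located_in']] (order may vary)
```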
2 code implementations • 17 Feb 2020 • Hongwei Wang, Jure Leskovec
Both solve the task of node classification but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information.
Ranked #1 on Node Classification on Coauthor Phy
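A minimal label propagation sketch makes the contrast concrete: LPA pushes label distributions across edges and clamps the known labels, whereas a GCN would propagate feature vectors through the same normalized adjacency (a toy example, not the paper's unified model):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)        # row-normalized adjacency

Y = np.zeros((4, 2))
Y[0] = [1, 0]                                   # node 0 labeled class 0
Y[3] = [0, 1]                                   # node 3 labeled class 1
mask = np.array([True, False, False, True])     # which nodes carry known labels

F = Y.copy()
for _ in range(20):
    F = A_hat @ F                               # propagate label distributions over edges
    F[mask] = Y[mask]                           # clamp the labeled nodes
print(F.argmax(axis=1))                         # predicted classes for all four nodes
```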
no code implementations • 25 Sep 2019 • Daiheng Gao, Hongwei Wang, Hehui Zhang, Meng Wang, Zhenzhi Wu
Stemming from neuroscience, spiking neural networks (SNNs) are brain-inspired neural networks that offer a versatile solution for fault-tolerant and energy-efficient information processing, owing to their "event-driven" characteristic, which mirrors the behavior of biological neurons.
5 code implementations • 11 May 2019 • Hongwei Wang, Fuzheng Zhang, Mengdi Zhang, Jure Leskovec, Miao Zhao, Wenjie Li, Zhongyuan Wang
Here we propose Knowledge-aware Graph Neural Networks with Label Smoothness regularization (KGNN-LS) to provide better recommendations.
Ranked #1 on Recommendation Systems on Dianping-Food
8 code implementations • 18 Mar 2019 • Hongwei Wang, Miao Zhao, Xing Xie, Wenjie Li, Minyi Guo
To alleviate the sparsity and cold-start problems of collaborative filtering based recommender systems, researchers and engineers usually collect attributes of users and items, and design delicate algorithms to exploit this additional information.
Ranked #1 on Click-Through Rate Prediction on Book-Crossing
1 code implementation • 15 Feb 2019 • Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Wei-Nan Zhang, Yong Yu, Zhihua Zhang
By contrast, Wasserstein GAN (WGAN), where the discriminative function is restricted to 1-Lipschitz, does not suffer from such a gradient uninformativeness problem.
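For context, a standard WGAN critic step with weight clipping, the crude way the 1-Lipschitz constraint is commonly enforced; this is shown only to make the constraint concrete, not as this paper's proposed method:

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(128, 2) + 2.0               # samples from the data distribution
fake = torch.randn(128, 2)                     # samples from the generator

opt.zero_grad()
# Critic maximizes E[f(real)] - E[f(fake)], i.e. minimizes the negation.
loss = critic(fake).mean() - critic(real).mean()
loss.backward()
opt.step()
for p in critic.parameters():                  # keep f roughly 1-Lipschitz via clipping
    p.data.clamp_(-0.01, 0.01)
```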
3 code implementations • 23 Jan 2019 • Hongwei Wang, Fuzheng Zhang, Miao Zhao, Wenjie Li, Xing Xie, Minyi Guo
Collaborative filtering often suffers from sparsity and cold-start problems in real recommendation scenarios; therefore, researchers and engineers usually use side information to address these issues and improve the performance of recommender systems.
no code implementations • 19 Nov 2018 • Chenchen Li, Jialin Wang, Hongwei Wang, Miao Zhao, Wenjie Li, Xiaotie Deng
To enhance the emotion discriminativeness of words in textual feature extraction, we propose Emotional Word Embedding (EWE) to learn text representations by jointly considering their semantics and emotions.
no code implementations • 13 Nov 2018 • Chang Xu, Weiran Huang, Hongwei Wang, Gang Wang, Tie-Yan Liu
In this paper, we propose an improved variant of RNN, Multi-Channel RNN (MC-RNN), to dynamically capture and leverage local semantic structure information.
3 code implementations • ICLR 2019 • Zhiming Zhou, Qingru Zhang, Guansong Lu, Hongwei Wang, Wei-Nan Zhang, Yong Yu
Adam has been shown to be unable to converge to the optimal solution in certain cases.
1 code implementation • 2 Jul 2018 • Zhiming Zhou, Yuxuan Song, Lantao Yu, Hongwei Wang, Jiadong Liang, Wei-Nan Zhang, Zhihua Zhang, Yong Yu
In this paper, we investigate the underlying factor that leads to failure and success in the training of GANs.
9 code implementations • 9 Mar 2018 • Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, Minyi Guo
To address the sparsity and cold-start problems of collaborative filtering, researchers usually make use of side information, such as social networks or item attributes, to improve recommendation performance.
Ranked #2 on Click-Through Rate Prediction on Book-Crossing
4 code implementations • 25 Jan 2018 • Hongwei Wang, Fuzheng Zhang, Xing Xie, Minyi Guo
To solve the above problems, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation.
Ranked #5 on News Recommendation on MIND
1 code implementation • 3 Dec 2017 • Hongwei Wang, Jia Wang, Miao Zhao, Jiannong Cao, Minyi Guo
JTS-MF model calculates similarity among users and votings by combining their TEWE representation and structural information of social networks, and preserves this topic-semantic-social similarity during matrix factorization.
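A toy matrix-factorization sketch with a similarity-preservation regularizer gives the flavor of that last clause; the weighting, placeholder similarity matrix, and variable names are illustrative, not the JTS-MF formulation:

```python
import torch

n_users, n_votings, k = 50, 30, 8
R = (torch.rand(n_users, n_votings) < 0.1).float()   # observed user-voting interactions
S = torch.rand(n_users, n_users)
S = (S + S.t()) / 2                                   # symmetric placeholder user similarity

U = torch.randn(n_users, k, requires_grad=True)
V = torch.randn(n_votings, k, requires_grad=True)
opt = torch.optim.Adam([U, V], lr=0.01)

for _ in range(500):
    opt.zero_grad()
    recon = ((U @ V.t() - R) ** 2).mean()             # reconstruct the interaction matrix
    sim_reg = (S * torch.cdist(U, U) ** 2).mean()     # similar users -> nearby embeddings
    (recon + 0.1 * sim_reg).backward()
    opt.step()
```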
1 code implementation • 3 Dec 2017 • Hongwei Wang, Fuzheng Zhang, Min Hou, Xing Xie, Minyi Guo, Qi Liu
First, due to the lack of explicit sentiment links in mainstream social networks, we establish a labeled heterogeneous sentiment dataset, which consists of users' sentiment relations, social relations and profile knowledge, using an entity-level sentiment extraction method.
5 code implementations • 22 Nov 2017 • Hongwei Wang, Jia Wang, Jialin Wang, Miao Zhao, Wei-Nan Zhang, Fuzheng Zhang, Xing Xie, Minyi Guo
The goal of graph representation learning is to embed each vertex in a graph into a low-dimensional vector space.
Ranked #1 on Node Classification on Wikipedia
no code implementations • 25 Jul 2017 • Changbo Fu, Xiaopeng Zhou, Xun Chen, Yunhua Chen, Xiangyi Cui, Deqing Fang, Karl Giboni, Franco Giuliani, Ke Han, Xingtao Huang, Xiangdong Ji, Yonglin Ju, Siao Lei, Shaoli Li, Huaxuan Liu, Jianglai Liu, Yugang Ma, Yajun Mao, Xiangxiang Ren, Andi Tan, Hongwei Wang, Jimin Wang, Meng Wang, Qiuhong Wang, Siguang Wang, Xuming Wang, Zhou Wang, Shiyong Wu, Mengjiao Xiao, Pengwei Xie, Binbin Yan, Yong Yang, Jianfeng Yue, Hongguang Zhang, Tao Zhang, Li Zhao, Ning Zhou
We report new searches for the solar axions and galactic axion-like dark matter particles, using the first low-background data from PandaX-II experiment at China Jinping Underground Laboratory, corresponding to a total exposure of about $2.7\times 10^4$ kg$\cdot$day.
High Energy Physics - Experiment • Solar and Stellar Astrophysics • High Energy Physics - Phenomenology