no code implementations • SMM4H (COLING) 2022 • Antoine Lain, Wonjin Yoon, Hyunjae Kim, Jaewoo Kang, Ian Simpson
This paper describes our system developed for the Social Media Mining for Health (SMM4H) 2022 SocialDisNER task.
no code implementations • 20 Feb 2025 • Yein Park, Chanwoong Yoon, Jungwoo Park, Minbyul Jeong, Jaewoo Kang
While the ability of language models to elicit facts has been widely investigated, how they handle temporally changing facts remains underexplored.
1 code implementation • 31 Jan 2025 • Seungheun Baek, Soyon Park, Yan Ting Chok, Mogan Gim, Jaewoo Kang
One of the most effective ways to achieve explainability is incorporating the concept of gene regulatory networks (GRNs) in designing deep learning models such as VAEs.
no code implementations • 16 Jan 2025 • Hajung Kim, Chanhwi Kim, Jiwoong Sohn, Tim Beck, Marek Rei, Sunkyu Kim, T Ian Simpson, Joram M Posma, Antoine Lain, Mujeen Sung, Jaewoo Kang
The objective of BioCreative8 Track 3 is to extract phenotypic key medical findings embedded within EHR texts and subsequently normalize these findings to their Human Phenotype Ontology (HPO) terms.
1 code implementation • 5 Dec 2024 • Jungwoo Park, Young Jin Ahn, Kee-Eung Kim, Jaewoo Kang
Understanding the internal computations of large language models (LLMs) is crucial for aligning them with human values and preventing undesirable behaviors like toxic content generation.
no code implementations • 4 Nov 2024 • Hoonick Lee, Mogan Gim, Donghyeon Park, Donghee Choi, Jaewoo Kang
We introduce the ASH (authenticity, sensitivity, harmony) benchmark to evaluate LLMs' recipe generation abilities in the cuisine transfer task, assessing their cultural accuracy and creativity in the culinary domain.
1 code implementation • 1 Nov 2024 • Jiwoong Sohn, Yein Park, Chanwoong Yoon, Sihyeon Park, Hyeon Hwang, Mujeen Sung, Hyunjae Kim, Jaewoo Kang
Large language models (LLMs) hold significant potential for applications in biomedicine, but they struggle with hallucinations and outdated knowledge.
1 code implementation • 28 Oct 2024 • Kiwoong Yoo, Owen Oertell, Junhyun Lee, SangHoon Lee, Jaewoo Kang
Scaffold hopping is an efficient strategy that facilitates the identification of similar active compounds by strategically modifying the core structure of molecules, effectively narrowing the wide chemical space and enhancing the discovery of drug-like products.
1 code implementation • 22 Oct 2024 • Taewhoo Lee, Chanwoong Yoon, Kyochul Jang, Donghyeon Lee, Minju Song, Hyunjae Kim, Jaewoo Kang
To thoroughly examine the effectiveness of existing benchmarks, we introduce a new metric called information coverage (IC), which quantifies the proportion of the input context necessary for answering queries.
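As a rough illustration of what an information-coverage-style score measures, here is a minimal sketch that treats IC as the share of context tokens belonging to the evidence needed to answer the query; the whitespace tokenization and function name are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an information-coverage-style score: the fraction of
# context tokens that belong to the evidence actually needed to answer a query.
# Whitespace tokenization is a simplifying assumption, not the paper's setup.

def information_coverage(context: str, evidence_spans: list[str]) -> float:
    """Return the share of context tokens covered by the necessary evidence."""
    context_tokens = context.split()
    evidence_tokens = sum(len(span.split()) for span in evidence_spans)
    if not context_tokens:
        return 0.0
    return min(evidence_tokens / len(context_tokens), 1.0)

if __name__ == "__main__":
    ctx = "Paris is the capital of France. France borders Spain. " * 50
    gold = ["Paris is the capital of France."]
    print(f"IC = {information_coverage(ctx, gold):.3f}")  # small value -> mostly filler context
```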
1 code implementation • 13 Oct 2024 • Yein Park, Chanwoong Yoon, Jungwoo Park, Donghyeon Lee, Minbyul Jeong, Jaewoo Kang
Large language models (LLMs) have brought significant changes to many aspects of our lives.
1 code implementation • 9 Sep 2024 • Seungheun Baek, Soyon Park, Yan Ting Chok, Junhyun Lee, Jueon Park, Mogan Gim, Jaewoo Kang
Predicting cellular responses to various perturbations is a critical focus in drug discovery and personalized therapeutics, with deep learning models playing a significant role in this endeavor.
1 code implementation • 29 Aug 2024 • Chanhwi Kim, Hyunjae Kim, Sihyeon Park, Jiwoo Lee, Mujeen Sung, Jaewoo Kang
However, these models are usually trained only with positive samples--entities that match the input mention's identifier--and do not explicitly learn from hard negative samples, which are entities that look similar but have different meanings.
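To make the contrast with positive-only training concrete, here is a minimal sketch (not the authors' code) of a contrastive objective that scores a mention against one positive entity and several hard negatives; all tensor and function names are illustrative.

```python
# Minimal sketch: contrast a mention embedding against one positive entity and
# several hard negative entities that look similar but carry different identifiers.
import torch
import torch.nn.functional as F

def hard_negative_loss(mention_emb, pos_emb, neg_embs, temperature=0.05):
    """mention_emb: (d,), pos_emb: (d,), neg_embs: (k, d)."""
    candidates = torch.cat([pos_emb.unsqueeze(0), neg_embs], dim=0)  # (k+1, d)
    scores = candidates @ mention_emb / temperature                  # (k+1,) similarity scores
    target = torch.tensor(0)                                         # index of the positive entity
    return F.cross_entropy(scores.unsqueeze(0), target.unsqueeze(0))

if __name__ == "__main__":
    d, k = 128, 8
    loss = hard_negative_loss(torch.randn(d), torch.randn(d), torch.randn(k, d))
    print(loss.item())
```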
no code implementations • 19 Jul 2024 • Heedou Kim, Dain Kim, Jiwoo Lee, Chanwoong Yoon, Donghee Choi, Mogan Gim, Jaewoo Kang
An AI-assisted criminal investigation system that provides prompt yet precise legal counsel is needed by police officers.
no code implementations • 18 Jul 2024 • Donghee Choi, Jinkyu Kim, Mogan Gim, Jinho Lee, Jaewoo Kang
To integrate the forecasting model into a deep reinforcement learning-driven portfolio selection framework, we introduced a two-step strategy: first, pre-training the time-series model on market data, followed by fine-tuning the portfolio selection architecture using this model.
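A schematic sketch of such a two-step recipe is shown below; every module, loss, and data tensor is a toy placeholder rather than the paper's architecture.

```python
# Toy sketch of a two-step strategy: (1) pre-train a time-series forecaster on
# market data, (2) fine-tune it jointly with a portfolio-selection head.
import torch
import torch.nn as nn

forecaster = nn.GRU(input_size=8, hidden_size=32, batch_first=True)  # placeholder forecaster
decoder = nn.Linear(32, 8)      # predicts next-step market features (pre-training only)
policy_head = nn.Linear(32, 5)  # allocation logits over 5 assets

x = torch.randn(64, 30, 8)      # batch of 64 windows, 30 days, 8 features

# Step 1: pre-train the forecaster with a next-step prediction loss.
opt1 = torch.optim.Adam(list(forecaster.parameters()) + list(decoder.parameters()), lr=1e-3)
hidden, _ = forecaster(x[:, :-1, :])
loss = nn.functional.mse_loss(decoder(hidden), x[:, 1:, :])
opt1.zero_grad(); loss.backward(); opt1.step()

# Step 2: fine-tune forecaster and policy jointly with a (placeholder) portfolio objective.
opt2 = torch.optim.Adam(list(forecaster.parameters()) + list(policy_head.parameters()), lr=1e-4)
hidden, _ = forecaster(x)
weights = torch.softmax(policy_head(hidden[:, -1, :]), dim=-1)  # portfolio weights
returns = torch.randn(64, 5)                                    # placeholder next-day asset returns
reward = (weights * returns).sum(dim=-1).mean()
opt2.zero_grad(); (-reward).backward(); opt2.step()
```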
1 code implementation • 12 Jul 2024 • Chanwoong Yoon, Taewhoo Lee, Hyeon Hwang, Minbyul Jeong, Jaewoo Kang
Retrieval-augmented generation helps language models strengthen their factual grounding by providing external contexts.
1 code implementation • 15 Jun 2024 • Yu Yin, Hyunjae Kim, Xiao Xiao, Chih Hsuan Wei, Jaewoo Kang, Zhiyong Lu, Hua Xu, Meng Fang, Qingyu Chen
Specifically, our models consistently outperformed the baseline models in six out of eight entity types, achieving an average improvement of 0.9% over the best baseline performance across eight entities.
no code implementations • 22 May 2024 • Hajung Kim, Chanhwi Kim, Hoonick Lee, Kyochul Jang, Jiwoo Lee, Kyungjae Lee, Gangwoo Kim, Jaewoo Kang
Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases.
1 code implementation • 21 May 2024 • Minbyul Jeong, Hyeon Hwang, Chanwoong Yoon, Taewhoo Lee, Jaewoo Kang
We also propose OLAPH, a simple and novel framework that utilizes cost-effective and multifaceted automatic evaluation to construct a synthetic preference set and answers questions in our preferred manner.
no code implementations • 1 May 2024 • Donghee Choi, Mogan Gim, Donghyeon Park, Mujeen Sung, Hyunjae Kim, Jaewoo Kang, Jihun Choi
This paper introduces CookingSense, a descriptive collection of knowledge assertions in the culinary domain extracted from various sources, including web data, scientific papers, and recipes, from which knowledge covering a broad range of aspects is acquired.
no code implementations • 30 Mar 2024 • Hyunjae Kim, Hyeon Hwang, Jiwoo Lee, Sihyeon Park, Dain Kim, Taewhoo Lee, Chanwoong Yoon, Jiwoong Sohn, Donghee Choi, Jaewoo Kang
While recent advancements in commercial large language models (LLMs) have shown promising results in medical tasks, their closed-source nature poses significant privacy and security concerns, hindering their widespread use in the medical field.
Ranked #6 on Zero-Shot Learning on MedConceptsQA
no code implementations • 23 Feb 2024 • Hyunjae Kim, Seunghyun Yoon, Trung Bui, Handong Zhao, Quan Tran, Franck Dernoncourt, Jaewoo Kang
Contrastive language-image pre-training (CLIP) models have demonstrated considerable success across various vision-language tasks, such as text-to-image retrieval, where the model is required to effectively process natural language input to produce an accurate visual output.
no code implementations • 19 Feb 2024 • Chanwoong Yoon, Gangwoo Kim, Byeongguk Jeon, Sungdong Kim, Yohan Jo, Jaewoo Kang
Furthermore, we fine-tune a smaller LM using this dataset to align it with the retrievers' preferences as feedback.
no code implementations • 16 Feb 2024 • Junhyun Lee, Wooseong Yang, Jaewoo Kang
In the evolving landscape of machine learning, the adaptation of pre-trained models through prompt tuning has become increasingly prominent.
1 code implementation • 30 Jan 2024 • Mogan Gim, Jueon Park, Soyon Park, SangHoon Lee, Seungheun Baek, Junhyun Lee, Ngoc-Quang Nguyen, Jaewoo Kang
Molecular core structures and R-groups are essential concepts in drug development.
1 code implementation • 27 Jan 2024 • Minbyul Jeong, Jiwoong Sohn, Mujeen Sung, Jaewoo Kang
To address challenges that still cannot be handled with the encoded knowledge of LLMs, various retrieval-augmented generation (RAG) methods have been developed by searching documents from the knowledge corpus and appending them unconditionally or selectively to the input of LLMs for generation.
no code implementations • ECCV 2020 • Bumsoo Kim, Taeho Choi, Jaewoo Kang, Hyunwoo J. Kim
This is a major bottleneck in HOI detection inference time.
1 code implementation • 23 Oct 2023 • Gangwoo Kim, Sungdong Kim, Byeongguk Jeon, Joonsuk Park, Jaewoo Kang
To cope with the challenge, we propose a novel framework, Tree of Clarifications (ToC): It recursively constructs a tree of disambiguations for the AQ -- via few-shot prompting leveraging external knowledge -- and uses it to generate a long-form answer.
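The recursion can be pictured with the following hedged sketch; `ask_llm` stands in for any few-shot prompted model call and is not the paper's prompt or API.

```python
# Hedged sketch of a tree-of-clarifications-style recursion: expand an ambiguous
# question into disambiguated children, recurse to a fixed depth, then collect the
# leaves to be merged into one long-form answer.
from dataclasses import dataclass, field

def ask_llm(prompt: str) -> list[str]:
    """Placeholder for a few-shot prompted LLM call returning disambiguations."""
    return [f"{prompt} (interpretation {i})" for i in range(2)]

@dataclass
class Node:
    question: str
    children: list["Node"] = field(default_factory=list)

def build_tree(question: str, depth: int = 2) -> Node:
    node = Node(question)
    if depth > 0:
        for disambiguated in ask_llm(f"Disambiguate: {question}"):
            node.children.append(build_tree(disambiguated, depth - 1))
    return node

def collect_leaves(node: Node) -> list[str]:
    if not node.children:
        return [node.question]
    return [leaf for child in node.children for leaf in collect_leaves(child)]

if __name__ == "__main__":
    tree = build_tree("Who won the World Cup?")
    print(len(collect_leaves(tree)), "disambiguated questions to answer and merge")
```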
1 code implementation • 28 Jul 2023 • Junhyun Lee, Bumsoo Kim, Minji Jeon, Jaewoo Kang
Graph Neural Networks (GNNs) have proven to be effective in processing and learning from graph-structured data.
1 code implementation • 16 Jul 2023 • Hyunjun Lee, Junhyun Lee, Taehwa Choi, Jaewoo Kang, Sangbum Choi
The proposed method is a semiparametric approach to AFT modeling that does not impose any distributional assumptions on the survival time distribution.
no code implementations • 10 Jul 2023 • Gangwoo Kim, Hajung Kim, Lei Ji, Seongsu Bae, Chanhwi Kim, Mujeen Sung, Hyunjae Kim, Kun Yan, Eric Chang, Jaewoo Kang
In this paper, we introduce CheXOFA, a new pre-trained vision-language model (VLM) for the chest X-ray domain.
1 code implementation • 21 Apr 2023 • Donghee Choi, Mogan Gim, Samy Badreddine, Hajung Kim, Donghyeon Park, Jaewoo Kang
We introduce KitchenScale, a fine-tuned Pre-trained Language Model (PLM) that predicts a target ingredient's quantity and measurement unit given its recipe context.
no code implementations • 11 Apr 2023 • Sumin Seo, Jaewoong Shin, Jaewoo Kang, Tae Soo Kim, Thijs Kooi
Deep learning has shown great potential in assisting radiologists in reading chest X-ray (CXR) images, but its need for expensive annotations for improving performance prevents widespread clinical application.
1 code implementation • 3 Feb 2023 • Seongyun Lee, Hyunjae Kim, Jaewoo Kang
Question answering (QA) models often rely on large-scale training datasets, which necessitates the development of a data generation framework to reduce the cost of manual annotations.
Ranked #1 on Question Answering on MultiSpanQA
1 code implementation • 1 Dec 2022 • Wonjin Yoon, Richard Jackson, Elliot Ford, Vladimir Poroshin, Jaewoo Kang
In order to assist the drug discovery/development process, pharmaceutical companies often apply biomedical NER and linking techniques over internal and public corpora.
1 code implementation • 24 Oct 2022 • Minbyul Jeong, Jaewoo Kang
A notable advantage of NER is its consistency in extracting biomedical entities in a document context.
Ranked #1 on Named Entity Recognition (NER) on Gellus
1 code implementation • 14 Oct 2022 • Mogan Gim, Donghee Choi, Kana Maruyama, Jihun Choi, Hajung Kim, Donghyeon Park, Jaewoo Kang
To perform this task, we developed RecipeMind, a food affinity score prediction model that quantifies the suitability of adding an ingredient to a set of other ingredients.
no code implementations • 14 Oct 2022 • Hyunjae Kim, Jaehyo Yoo, Seunghyun Yoon, Jaewoo Kang
Most weakly supervised named entity recognition (NER) models rely on domain-specific dictionaries provided by experts.
no code implementations • 29 Jun 2022 • Jinyoung Park, Seongjun Yun, Hyeonjin Park, Jaewoo Kang, Jisu Jeong, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
Transformer-based models have recently shown success in representation learning on graph-structured data beyond natural language processing and computer vision.
1 code implementation • NeurIPS 2021 • Seongjun Yun, Seoyoon Kim, Junhyun Lee, Jaewoo Kang, Hyunwoo J. Kim
Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data.
1 code implementation • 25 May 2022 • Mujeen Sung, Jungsoo Park, Jaewoo Kang, Danqi Chen, Jinhyuk Lee
In this paper, we introduce TOUR (Test-Time Optimization of Query Representations), which further optimizes instance-level query representations guided by signals from test-time retrieval results.
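A minimal sketch of test-time query optimization of this flavor is given below: the query embedding itself is updated by a few gradient steps toward passages marked relevant by a reranking signal. The names and the loss are illustrative, not the TOUR implementation.

```python
# Minimal sketch of test-time query-representation optimization: take a few
# gradient steps on the query embedding, pushing it toward passages that a
# reranking signal marks as relevant among the current top results.
import torch
import torch.nn.functional as F

def refine_query(query_emb, passage_embs, pseudo_labels, steps=3, lr=0.1):
    """query_emb: (d,), passage_embs: (n, d), pseudo_labels: (n,) in {0, 1}."""
    q = query_emb.clone().requires_grad_(True)
    opt = torch.optim.SGD([q], lr=lr)
    for _ in range(steps):
        scores = passage_embs @ q                               # retrieval scores for top-n passages
        loss = F.binary_cross_entropy_with_logits(scores, pseudo_labels.float())
        opt.zero_grad(); loss.backward(); opt.step()
    return q.detach()

if __name__ == "__main__":
    d, n = 64, 10
    q0 = torch.randn(d)
    passages = torch.randn(n, d)
    labels = torch.zeros(n); labels[0] = 1.0                    # reranker says passage 0 is relevant
    q1 = refine_query(q0, passages, labels)
    print("score gain on relevant passage:", (passages[0] @ q1 - passages[0] @ q0).item())
```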
1 code implementation • 25 May 2022 • Gangwoo Kim, Sungdong Kim, Kang Min Yoo, Jaewoo Kang
In this paper, we introduce a novel framework, SIMSEEK, (Simulating information-Seeking conversation from unlabeled documents), and compare its two variants.
no code implementations • 24 May 2022 • Jaehyo Yoo, Jaewoo Kang
While most training sentences are created via automatic techniques such as crawling and sentence alignment, the test sentences are annotated by humans with fluency in mind.
1 code implementation • 6 Jan 2022 • Mujeen Sung, Minbyul Jeong, Yonghwa Choi, Donghyeon Kim, Jinhyuk Lee, Jaewoo Kang
In biomedical natural language processing, named entity recognition (NER) and named entity normalization (NEN) are key tasks that enable the automatic extraction of biomedical entities (e.g., diseases and drugs) from the ever-growing biomedical literature.
Ranked #3 on Named Entity Recognition (NER) on BC4CHEMD
1 code implementation • 16 Dec 2021 • Hyunjae Kim, Jaehyo Yoo, Seunghyun Yoon, Jinhyuk Lee, Jaewoo Kang
Recent named entity recognition (NER) models often rely on human-annotated datasets, requiring significant professional knowledge of the target domain and entities.
no code implementations • 20 Nov 2021 • Hyunjae Kim, Mujeen Sung, Wonjin Yoon, Sungjoon Park, Jaewoo Kang
This paper is a technical report on our system submitted to the chemical identification task of the BioCreative VII Track 2 challenge.
no code implementations • 29 Sep 2021 • Hyunjun Lee, Junhyun Lee, Taehwa Choi, Jaewoo Kang, Sangbum Choi
Time-to-event analysis, also known as survival analysis, aims to predict the time of the first occurring event, conditional on a set of features.
1 code implementation • EMNLP 2021 • Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang
To this end, we create the BioLAMA benchmark, which is comprised of 49K biomedical factual knowledge triples for probing biomedical LMs.
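For intuition, LAMA-style probing can be approximated with a fill-mask prompt as in the sketch below; the prompt template and model name are placeholders, and the snippet assumes the `transformers` library is installed (the call downloads a model on first use).

```python
# Hedged example of probing factual knowledge with a masked-LM prompt, in the
# spirit of a LAMA-style benchmark. Prompt and model name are illustrative only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "Aspirin is commonly used to treat [MASK]."
for pred in fill(prompt, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```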
2 code implementations • ACL 2022 • Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi
Claims in FAVIQ are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
1 code implementation • ACL 2021 • Gangwoo Kim, Hyunjae Kim, Jungsoo Park, Jaewoo Kang
One of the main challenges in conversational question answering (CQA) is to resolve the conversational dependency, such as anaphora and ellipsis.
1 code implementation • 11 Jun 2021 • Seongjun Yun, Minbyul Jeong, Sungdong Yoo, Seunghun Lee, Sean S. Yi, Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim
Despite the success of GNNs, most existing GNNs are designed to learn node representations on the fixed and homogeneous graphs.
1 code implementation • CVPR 2021 • Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, Hyunwoo J. Kim
Human-Object Interaction (HOI) detection is a task of identifying "a set of interactions" in an image, which involves the i) localization of the subject (i.e., humans) and target (i.e., objects) of interaction, and ii) the classification of the interaction labels.
Ranked #16 on Human-Object Interaction Detection on V-COCO
1 code implementation • 15 Apr 2021 • Wonjin Yoon, Richard Jackson, Aron Lagerberg, Jaewoo Kang
Following general domain EQA models, current biomedical EQA (BioEQA) models utilize the single-span extraction setting with post-processing steps.
1 code implementation • NAACL 2022 • Jungsoo Park, Gyuwan Kim, Jaewoo Kang
Consistency training regularizes a model by enforcing predictions of original and perturbed inputs to be similar.
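A minimal sketch of a consistency regularizer of this kind follows; the perturbation (Gaussian input noise) and the KL direction are simplifying assumptions, not the paper's exact objective.

```python
# Minimal sketch of a consistency regularizer: penalize divergence between a
# model's predictions on an input and on a perturbed copy of that input.
import torch
import torch.nn.functional as F

def consistency_loss(model, x, noise_std=0.1):
    with torch.no_grad():
        p_orig = F.log_softmax(model(x), dim=-1).exp()      # reference prediction (no gradient)
    x_pert = x + noise_std * torch.randn_like(x)            # simple input perturbation
    log_p_pert = F.log_softmax(model(x_pert), dim=-1)
    return F.kl_div(log_p_pert, p_orig, reduction="batchmean")

if __name__ == "__main__":
    model = torch.nn.Linear(16, 4)
    x = torch.randn(8, 16)
    print(consistency_loss(model, x).item())
```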
1 code implementation • 15 Apr 2021 • Minbyul Jeong, Jaewoo Kang
Pre-trained language models (PLMs) are used to solve NER tasks and tend to be biased toward dataset patterns such as length statistics, surface form, and skewed class distribution.
Ranked #5 on Named Entity Recognition (NER) on WNUT 2017
no code implementations • EACL 2021 • Buru Chang, Inggeol Lee, Hyunjae Kim, Jaewoo Kang
Several machine learning-based spoiler detection models have been proposed recently to protect users from spoilers on review websites.
no code implementations • 1 Jan 2021 • Joel Jang, Yoonjeon Kim, Jaewoo Kang
Classification tasks require a balanced distribution of data to ensure that the learner is trained to generalize over all classes.
1 code implementation • 1 Jan 2021 • Hyunjae Kim, Jaewoo Kang
The amount of biomedical literature on new biomedical concepts is rapidly increasing, which necessitates a reliable biomedical named entity recognition (BioNER) model for identifying new and unseen entity mentions.
4 code implementations • ACL 2021 • Jinhyuk Lee, Mujeen Sung, Jaewoo Kang, Danqi Chen
Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).
Ranked #1 on Question Answering on Natural Questions (long)
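The entry above frames open-domain QA as retrieval over pre-encoded phrases; the toy sketch below shows the core lookup as maximum-inner-product search (real systems use approximate indexes such as FAISS over far larger phrase collections).

```python
# Toy sketch of phrase retrieval as maximum-inner-product search over a
# pre-computed phrase index; purely illustrative, not the released system.
import numpy as np

phrase_vecs = np.random.randn(1000, 128).astype("float32")  # pre-encoded phrase index
phrases = [f"phrase_{i}" for i in range(1000)]

def retrieve(query_vec, k=5):
    scores = phrase_vecs @ query_vec                         # inner-product scores
    top = np.argsort(-scores)[:k]
    return [(phrases[i], float(scores[i])) for i in top]

print(retrieve(np.random.randn(128).astype("float32")))
```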
1 code implementation • ECCV 2020 • Byungjoo Kim, Bryce Chudomelka, Jinyoung Park, Jaewoo Kang, Youngjoon Hong, Hyunwoo J. Kim
Motivated by the SSP property and a generalized Runge-Kutta method, we propose Strong Stability Preserving networks (SSP networks) which improve robustness against adversarial attacks.
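For reference, the classical third-order SSP Runge-Kutta step that motivates this construction can be written as a residual update, as in the sketch below; the inner layer `f` is a placeholder, not the paper's network.

```python
# Hedged sketch: the classical third-order strong-stability-preserving
# Runge-Kutta (SSP-RK3) step written as a residual block update.
import torch
import torch.nn as nn

class SSPRK3Block(nn.Module):
    def __init__(self, dim, dt=1.0):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.dt = dt

    def forward(self, u):
        u1 = u + self.dt * self.f(u)                              # first stage
        u2 = 0.75 * u + 0.25 * (u1 + self.dt * self.f(u1))        # second stage
        return u / 3.0 + (2.0 / 3.0) * (u2 + self.dt * self.f(u2))  # SSP-RK3 combination

if __name__ == "__main__":
    block = SSPRK3Block(dim=16)
    print(block(torch.randn(4, 16)).shape)
```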
no code implementations • 10 Jul 2020 • Jinho Lee, Raehyun Kim, Seok-Won Yi, Jaewoo Kang
Generating an investment strategy using advanced deep learning methods in stock markets has recently been a topic of interest.
2 code implementations • 1 Jul 2020 • Minbyul Jeong, Mujeen Sung, Gangwoo Kim, Donghyeon Kim, Wonjin Yoon, Jaehyo Yoo, Jaewoo Kang
We observe that BioBERT trained on the NLI dataset obtains better performance on Yes/No (+5.59%), Factoid (+0.53%), and List type (+13.58%) questions compared to performance obtained in a previous challenge (BioASQ 7B Phase B).
1 code implementation • EMNLP (NLP-COVID19) 2020 • Jinhyuk Lee, Sean S. Yi, Minbyul Jeong, Mujeen Sung, Wonjin Yoon, Yonghwa Choi, Miyoung Ko, Jaewoo Kang
The recent outbreak of the novel coronavirus is wreaking havoc on the world and researchers are struggling to effectively combat it.
3 code implementations • ACL 2020 • Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, Jaewoo Kang
In this way, we avoid the explicit pre-selection of negative samples from more than 400K candidates.
1 code implementation • EMNLP 2020 • Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, Jaewoo Kang
In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions.
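A small sketch of how such a position-skewed training subset could be constructed is shown below; the data format and helper name are hypothetical.

```python
# Hypothetical sketch: keep only training examples whose answer appears in the
# k-th sentence of the passage, producing a position-skewed subset.
def position_skewed_subset(examples, k=0):
    """examples: list of dicts with 'sentences' (list of str) and 'answer' (str)."""
    skewed = []
    for ex in examples:
        holder = [i for i, s in enumerate(ex["sentences"]) if ex["answer"] in s]
        if holder and holder[0] == k:          # answer lies in the k-th sentence
            skewed.append(ex)
    return skewed

data = [
    {"sentences": ["Paris is in France.", "Berlin is in Germany."], "answer": "Paris"},
    {"sentences": ["Rome is old.", "Madrid is in Spain."], "answer": "Madrid"},
]
print(len(position_skewed_subset(data, k=0)))  # only the first example survives
```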
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jungsoo Park, Mujeen Sung, Jinhyuk Lee, Jaewoo Kang
Exposing diverse subword segmentations to neural machine translation (NMT) models often improves the robustness of machine translation as NMT models can experience various subword candidates.
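One common way to expose diverse segmentations is to sample them from a trained SentencePiece model, as in the sketch below; the model file path is a placeholder and this is not necessarily the paper's exact setup.

```python
# Sketch of sampling diverse subword segmentations for the same sentence,
# assuming a trained SentencePiece model file exists (path is a placeholder).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spm.model")
sentence = "subword regularization improves robustness"
for _ in range(3):
    # enable_sampling draws a segmentation instead of returning only the best one
    print(sp.encode(sentence, out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1))
```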
no code implementations • 11 Apr 2020 • Hyunjae Kim, Yookyung Koh, Jinheon Baek, Jaewoo Kang
Also, we analyze how neural models solve spatial reasoning tests with visual aids.
1 code implementation • 18 Feb 2020 • Wonjin Yoon, Yoon Sun Yeo, Minbyul Jeong, Bong-Jun Yi, Jaewoo Kang
By harnessing pre-trained language models, summarization models have recently made rapid progress.
3 code implementations • ACL 2020 • Jinhyuk Lee, Minjoon Seo, Hannaneh Hajishirzi, Jaewoo Kang
Open-domain question answering can be formulated as a phrase retrieval problem, in which we can expect huge scalability and speed benefit but often suffer from low accuracy due to the limitation of existing phrase representation models.
1 code implementation • NeurIPS 2019 • Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, Hyunwoo J. Kim
In this paper, we propose Graph Transformer Networks (GTNs) that are capable of generating new graph structures, which involve identifying useful connections between unconnected nodes on the original graph, while learning effective node representation on the new graphs in an end-to-end fashion.
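The core mechanism can be sketched as a softmax-weighted selection over edge-type adjacency matrices whose products form meta-path adjacencies; the snippet below is illustrative, not the released implementation.

```python
# Illustrative sketch of the graph-structure-generation idea: softly select among
# edge-type adjacency matrices with learned weights, then multiply two selections
# to compose a new meta-path adjacency.
import torch
import torch.nn.functional as F

def soft_select(adjacencies, logits):
    """adjacencies: (num_edge_types, n, n); logits: (num_edge_types,)."""
    weights = F.softmax(logits, dim=0).view(-1, 1, 1)
    return (weights * adjacencies).sum(dim=0)          # (n, n) convex combination

num_types, n = 3, 5
A = torch.randint(0, 2, (num_types, n, n)).float()     # toy heterogeneous graph
w1 = torch.randn(num_types, requires_grad=True)
w2 = torch.randn(num_types, requires_grad=True)

A_new = soft_select(A, w1) @ soft_select(A, w2)        # length-2 meta-path structure
print(A_new.shape)                                     # learned new adjacency, (5, 5)
```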
3 code implementations • 18 Sep 2019 • Wonjin Yoon, Jinhyuk Lee, Donghyeon Kim, Minbyul Jeong, Jaewoo Kang
The recent success of question answering systems is largely attributed to pre-trained language models.
3 code implementations • 7 Aug 2019 • Raehyun Kim, Chan Ho So, Minbyul Jeong, Sang-Hoon Lee, Jinkyu Kim, Jaewoo Kang
Methods that use relational data for stock market prediction have been recently proposed, but they are still in their infancy.
1 code implementation • IEEE Access 2019 • Donghyeon Kim, Jinhyuk Lee, Chan Ho So, Hwisang Jeon, Minbyul Jeong, Yonghwa Choi, Wonjin Yoon, Mujeen Sung, Jaewoo Kang
Also, traditional text mining tools do not consider overlapping entities, which are frequently observed in multi-type named entity recognition results.
Ranked #4 on Named Entity Recognition (NER) on LINNAEUS
1 code implementation • 27 May 2019 • Seoungjun Yun, Raehyun Kim, Miyoung Ko, Jaewoo Kang
To deal with this problem, content-based recommendation models that use the auxiliary attributes of users and items have been proposed.
1 code implementation • 16 May 2019 • Donghyeon Park, Keonwoo Kim, Yonggyu Park, Jungwoon Shin, Jaewoo Kang
As a vast number of ingredients exist in the culinary world, there are countless food ingredient pairings, but only a small number of pairings have been adopted by chefs and studied by food researchers.
3 code implementations • 17 Apr 2019 • Junhyun Lee, Inyeop Lee, Jaewoo Kang
In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs.
Ranked #5 on Graph Classification on FRANKENSTEIN
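The entry above concerns redefining convolution and pooling for graphs; below is a hedged sketch of score-based top-k pooling in that spirit (not the paper's released code).

```python
# Hedged sketch of score-based top-k graph pooling: score nodes, keep the top
# fraction, and slice node features and the adjacency matrix accordingly.
import torch

def topk_pool(x, adj, scores, ratio=0.5):
    """x: (n, d) node features; adj: (n, n); scores: (n,) learned node scores."""
    k = max(1, int(ratio * x.size(0)))
    keep = torch.topk(scores, k).indices
    x_pooled = x[keep] * torch.sigmoid(scores[keep]).unsqueeze(-1)  # gate kept nodes by score
    adj_pooled = adj[keep][:, keep]
    return x_pooled, adj_pooled

if __name__ == "__main__":
    n, d = 8, 16
    x, adj = torch.randn(n, d), torch.randint(0, 2, (n, n)).float()
    scores = torch.randn(n)                    # e.g. output of a GNN scoring layer
    xp, ap = topk_pool(x, adj, scores)
    print(xp.shape, ap.shape)                  # (4, 16) (4, 4)
```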
1 code implementation • 25 Mar 2019 • Raehyun Kim, Hyunjae Kim, Janghyuk Lee, Jaewoo Kang
Second, they assumed that all transactions are equally important in predicting demographic attributes.
1 code implementation • 28 Feb 2019 • Jinho Lee, Raehyun Kim, Yookyung Koh, Jaewoo Kang
Moreover, the results show that future stock prices can be predicted even if the training and testing procedures are done in different countries.
19 code implementations • 25 Jan 2019 • Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang
Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows.
Ranked #1 on Named Entity Recognition (NER) on Species-800
2 code implementations • 9 Nov 2018 • Yonggyu Park, Junhyun Lee, Yookyung Koh, Inyeop Lee, Jinhyuk Lee, Jaewoo Kang
However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with many morphological variations, such as Chinese.
1 code implementation • EMNLP 2018 • Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, Jaewoo Kang
Recently, open-domain question answering (QA) has been combined with machine comprehension models to find answers in a large knowledge source.
2 code implementations • 21 Sep 2018 • Wonjin Yoon, Chan Ho So, Jinhyuk Lee, Jaewoo Kang
Our model has successfully reduced the number of misclassified entities and improved the performance by leveraging multiple datasets annotated for different entity types.
Ranked #15 on Named Entity Recognition (NER) on BC5CDR
1 code implementation • 5 Sep 2018 • Donghyeon Kim, Jinhyuk Lee, Donghee Choi, Jaehoon Choi, Jaewoo Kang
With online calendar services gaining popularity worldwide, calendar data has become one of the richest context sources for understanding human behavior.