1 code implementation • ACL 2022 • Ramit Sawhney, Megh Thakkar, Shrey Pandit, Ritesh Soun, Di Jin, Diyi Yang, Lucie Flek
Interpolation-based regularisation methods such as Mixup, which generate virtual training samples, have proven to be effective for various tasks and modalities. We extend Mixup and propose DMix, an adaptive distance-aware interpolative Mixup that selects samples based on their diversity in the embedding space.
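The excerpt does not give DMix's selection rule; purely as a sketch, a distance-aware mixup might pair each example with its most distant (most diverse) partner in the batch before interpolating, as below. The pairing heuristic and the Beta-sampled mixing weight are assumptions for illustration, not DMix's actual algorithm.

```python
import torch

def distance_aware_mixup(embeddings, labels, alpha=0.2):
    """Sketch: pair each sample with its most distant batch neighbour in
    embedding space, then mixup-interpolate inputs and (one-hot) labels."""
    normed = torch.nn.functional.normalize(embeddings, dim=1)
    dist = 1.0 - normed @ normed.T            # pairwise cosine distance (B, B)
    dist.fill_diagonal_(-1.0)                 # exclude self-pairing
    partner = dist.argmax(dim=1)              # most diverse partner per sample

    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed_x = lam * embeddings + (1 - lam) * embeddings[partner]
    mixed_y = lam * labels + (1 - lam) * labels[partner]
    return mixed_x, mixed_y
```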
1 code implementation • EMNLP 2021 • Ramit Sawhney, Megh Thakkar, Shivam Agarwal, Di Jin, Diyi Yang, Lucie Flek
Interpolation-based regularisation methods for data augmentation have proven to be effective for various tasks and modalities.
no code implementations • NAACL 2022 • Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, Dilek Hakkani-Tur
In this work, we propose to automatically convert the background knowledge documents into document semantic graphs and then perform knowledge selection over such graphs.
no code implementations • 1 Jun 2023 • Di Jin, Luzhi Wang, He Zhang, Yizhen Zheng, Weiping Ding, Feng Xia, Shirui Pan
As information filtering services, recommender systems have greatly enriched our daily lives by providing personalized suggestions and supporting decision-making, which makes them vital and indispensable to human society in the information era.
no code implementations • 24 May 2023 • Jingyuan Qi, Zhiyang Xu, Ying Shen, Minqian Liu, Di Jin, Qifan Wang, Lifu Huang
Socratic Questioning is driven by a Self-Questioning module that employs a large-scale language model to propose sub-problems related to the original problem as intermediate steps; it then recursively backtracks and answers the sub-problems until it reaches the original problem.
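As a rough sketch of the recursive loop this abstract describes, the function below decomposes a problem, answers the sub-problems, and backtracks to the original question. `ask_llm` is a hypothetical stand-in for a large language model call; the real method's prompting, stopping criteria, and backtracking are more elaborate.

```python
def socratic_answer(problem, ask_llm, depth=0, max_depth=3):
    """Recursively decompose `problem` into sub-problems, answer them,
    then use their answers to resolve the original problem."""
    if depth >= max_depth:
        return ask_llm(f"Answer directly: {problem}")

    subproblems = ask_llm(
        f"List the sub-problems needed to solve: {problem}"
    ).splitlines()

    # Answer each sub-problem first, recursing further if needed.
    facts = [socratic_answer(sp, ask_llm, depth + 1, max_depth)
             for sp in subproblems if sp.strip()]

    # Backtrack: answer the original problem given the gathered facts.
    return ask_llm("Given these facts:\n" + "\n".join(facts)
                   + f"\nNow solve: {problem}")
```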
1 code implementation • 20 May 2023 • Chao Zhao, Spandana Gella, Seokhwan Kim, Di Jin, Devamanyu Hazarika, Alexandros Papangelis, Behnam Hedayatnia, Mahdi Namazifar, Yang Liu, Dilek Hakkani-Tur
We hope this task and dataset can promote further research on TOD and subjective content understanding.
1 code implementation • 10 May 2023 • Di Jin, Luzhi Wang, Yizhen Zheng, Guojie Song, Fei Jiang, Xiang Li, Wei Lin, Shirui Pan
We design a dual-intent network to learn user intent from an attention mechanism and the distribution of historical data respectively, which can simulate users' decision-making process in interacting with a new item.
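A toy reading of that design, not the paper's architecture: one intent vector comes from attention over the interaction history, another from the empirical distribution of item categories in that history, and the two are fused. All module and parameter names below are illustrative.

```python
import torch
import torch.nn as nn

class DualIntent(nn.Module):
    def __init__(self, dim, num_categories):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.dist_proj = nn.Linear(num_categories, dim)

    def forward(self, history_emb, category_counts):
        # history_emb: (B, T, dim); category_counts: (B, num_categories) floats
        q = self.query.expand(history_emb.size(0), -1, -1)
        attn_intent, _ = self.attn(q, history_emb, history_emb)  # (B, 1, dim)
        dist = category_counts / category_counts.sum(-1, keepdim=True).clamp(min=1)
        return attn_intent.squeeze(1) + self.dist_proj(dist)     # fused intent
```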
no code implementations • 9 May 2023 • Jingbo Zhou, Yixuan Du, Ruqiong Zhang, Di Jin, Carl Yang, Rui Zhang
Based on this, we propose a sampling-based node-level residual module (SNR) that can achieve a more flexible utilization of different hops of subgraph aggregation by introducing node-level parameters sampled from a learnable distribution.
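One plausible reading, sketched under assumptions: each node draws its own residual gate from a learnable Gaussian via the reparameterization trick and uses it to mix shallow and deep hop aggregations. The actual SNR module's distribution and wiring may differ.

```python
import torch
import torch.nn as nn

class SampledNodeResidual(nn.Module):
    def __init__(self, num_nodes):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_nodes, 1))
        self.log_sigma = nn.Parameter(torch.zeros(num_nodes, 1))

    def forward(self, h_shallow, h_deep):
        # h_shallow/h_deep: (N, D) aggregations from different hop depths.
        eps = torch.randn_like(self.mu)                  # reparameterization
        gate = torch.sigmoid(self.mu + eps * self.log_sigma.exp())  # (N, 1)
        # Each node mixes shallow and deep aggregation with its own weight.
        return gate * h_shallow + (1 - gate) * h_deep
```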
no code implementations • 22 Feb 2023 • Zhizhi Yu, Di Jin, Cuiying Huo, Zhiqiang Wang, Xiulong Liu, Heng Qi, Jia Wu, Lingfei Wu
Graph neural networks for trust evaluation typically adopt straightforward encodings such as one-hot vectors or node2vec to represent node characteristics, which ignores the valuable semantic knowledge attached to nodes.
no code implementations • 10 Feb 2023 • Yen-Ting Lin, Alexandros Papangelis, Seokhwan Kim, Sungjin Lee, Devamanyu Hazarika, Mahdi Namazifar, Di Jin, Yang Liu, Dilek Hakkani-Tur
This work focuses on in-context data augmentation for intent detection.
Ranked #1 on Intent Detection on HWU64 (5-shot)
no code implementations • 2 Feb 2023 • Nicholas Meade, Spandana Gella, Devamanyu Hazarika, Prakhar Gupta, Di Jin, Siva Reddy, Yang Liu, Dilek Hakkani-Tür
For instance, using automatic evaluation, we find that our best fine-tuned baseline generates safe responses to unsafe dialogue contexts from DiaSafety only 4.04% more often than our approach.
no code implementations • 26 Jan 2023 • Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, Nanyun Peng
Dialogue state tracking (DST) is an important step in dialogue management to keep track of users' beliefs.
no code implementations • 24 Dec 2022 • Cuiying Huo, Di Jin, Yawen Li, Dongxiao He, Yu-Bin Yang, Lingfei Wu
A key premise for the remarkable performance of GNNs is a complete and trustworthy initial graph description (i.e., node features and graph structure), a condition that is often not satisfied, since real-world graphs are frequently incomplete due to various unavoidable factors.
1 code implementation • 20 Dec 2022 • Prakhar Gupta, Yang Liu, Di Jin, Behnam Hedayatnia, Spandana Gella, Sijia Liu, Patrick Lange, Julia Hirschberg, Dilek Hakkani-Tur
These guidelines provide information about the context they are applicable to and what should be included in the response, allowing the models to generate responses that are more closely aligned with the developer's expectations and intent.
1 code implementation • 26 Oct 2022 • Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur
Prefix-tuning, or more generally continuous prompt tuning, has become an essential paradigm of parameter-efficient transfer learning.
no code implementations • SIGDIAL (ACL) 2022 • Behnam Hedayatnia, Di Jin, Yang Liu, Dilek Hakkani-Tur
In this work, we curated a dataset where responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) and inappropriate (negative).
1 code implementation • SIGDIAL (ACL) 2022 • Di Jin, Sijia Liu, Yang Liu, Dilek Hakkani-Tur
Previous work has treated contradiction detection in bot responses as a task similar to natural language inference, e.g., detecting the contradiction between a pair of bot utterances.
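To make the NLI framing concrete, here is a minimal stand-in using an off-the-shelf MNLI model from Hugging Face; the paper's dialogue-level task setup is richer than plain pairwise NLI, and pipeline details vary across transformers versions.

```python
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def contradicts(utterance_a, utterance_b, threshold=0.5):
    """Score whether two bot utterances contradict each other."""
    result = nli({"text": utterance_a, "text_pair": utterance_b}, top_k=None)
    scores = {r["label"]: r["score"] for r in result}
    return scores.get("CONTRADICTION", 0.0) > threshold

print(contradicts("I love cats.", "I have never liked cats."))
```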
no code implementations • 28 Jun 2022 • Di Jin, Rui Wang, Meng Ge, Dongxiao He, Xiang Li, Wei Lin, Weixiong Zhang
Because these methods rely on the homophily assumption of Graph Convolutional Networks (GCNs), they are not suitable for heterophily graphs, where nodes with different labels or dissimilar attributes tend to be adjacent.
no code implementations • 22 Jun 2022 • Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou
This problem is especially pertinent in natural language generation which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims.
no code implementations • 15 Jun 2022 • Sha Li, Mahdi Namazifar, Di Jin, Mohit Bansal, Heng Ji, Yang Liu, Dilek Hakkani-Tur
Providing conversation models with background knowledge has been shown to make open-domain dialogues more informative and engaging.
no code implementations • 15 Jun 2022 • Zhizhi Yu, Di Jin, Jianguo Wei, Ziyang Liu, Yue Shang, Yun Xiao, Jiawei Han, Lingfei Wu
Graph Neural Networks (GNNs) have gained great popularity in tackling various analytical tasks on graph-structured data (i.e., networks).
1 code implementation • 1 Jun 2022 • Yifan Chen, Tianning Xu, Dilek Hakkani-Tur, Di Jin, Yun Yang, Ruoqing Zhu
This paper revisits the approach from a matrix approximation perspective, and identifies two issues in the existing layer-wise sampling methods: suboptimal sampling probabilities and estimation biases induced by sampling without replacement.
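The estimation-bias point can be illustrated generically: sampling neighbours with replacement and importance-weighting each draw by 1/(s·p_i) gives an unbiased Monte Carlo estimate of the full aggregation, whereas naive without-replacement schemes do not. This is a textbook estimator, not the paper's specific sampling distribution.

```python
import numpy as np

def sampled_aggregate(features, probs, num_samples):
    """Unbiased estimate of features.sum(axis=0): draw indices WITH
    replacement from `probs` (a 1-D array summing to 1) and weight each
    draw by 1 / (num_samples * p_i)."""
    idx = np.random.choice(len(probs), size=num_samples, replace=True, p=probs)
    weights = 1.0 / (num_samples * probs[idx])   # importance weights
    return (features[idx] * weights[:, None]).sum(axis=0)
```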
1 code implementation • 30 May 2022 • Di Jin, Luzhi Wang, Yizhen Zheng, Xiang Li, Fei Jiang, Wei Lin, Shirui Pan
While most existing graph neural networks yield effective representations of a single graph, little effort has been made to jointly learn two graph representations and calculate their similarity score.
no code implementations • 25 May 2022 • Cuiying Huo, Di Jin, Chundong Liang, Dongxiao He, Tie Qiu, Lingfei Wu
In this work, we propose a new GNN based trust evaluation method named TrustGNN, which integrates smartly the propagative and composable nature of trust graphs into a GNN framework for better trust evaluation.
no code implementations • insights (ACL) 2022 • Hyounghun Kim, Aishwarya Padmakumar, Di Jin, Mohit Bansal, Dilek Hakkani-Tur
Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes.
1 code implementation • Findings (NAACL) 2022 • Yifan Chen, Devamanyu Hazarika, Mahdi Namazifar, Yang Liu, Di Jin, Dilek Hakkani-Tur
The massive number of trainable parameters in pre-trained language models (PLMs) makes them hard to deploy to multiple downstream tasks.
no code implementations • 30 Apr 2022 • Di Jin, Cuiying Huo, Jianwu Dang, Peican Zhu, Weixiong Zhang, Witold Pedrycz, Lingfei Wu
However, existing contrastive learning methods are inadequate for heterogeneous graphs because they construct contrastive views based only on data perturbation or pre-defined structural properties (e.g., meta-paths) in the graph data, while ignoring the noise that may exist in both node attributes and graph topologies.
no code implementations • 22 Mar 2022 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
In many real-world settings, machine learning models need to identify user inputs that are out-of-domain (OOD) so as to avoid performing wrong actions.
no code implementations • 17 Mar 2022 • Ruiteng Zhang, Jianguo Wei, Xugang Lu, Wenhuan Lu, Di Jin, Junhai Xu, Lin Zhang, Yantao Ji, Jianwu Dang
Therefore, in most current state-of-the-art network architectures, only a few branches corresponding to a limited number of temporal scales can be designed for speaker embeddings.
no code implementations • 14 Feb 2022 • Xin Zheng, Yixin Liu, Shirui Pan, Miao Zhang, Di Jin, Philip S. Yu
Recent years have witnessed fast developments of graph neural networks (GNNs) that have benefited myriads of graph analytic tasks and applications.
no code implementations • 27 Dec 2021 • Tao Wang, Rui Wang, Di Jin, Dongxiao He, Yuxiao Huang
To address this problem, in this paper we design a novel propagation mechanism, which can automatically change the propagation and aggregation process according to homophily or heterophily between node pairs.
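A toy version of such a mechanism (illustrative only, not the paper's formulation) gates each message by the feature similarity of the endpoints, so similar neighbours are averaged in while dissimilar ones contribute with a negative sign:

```python
import torch

def adaptive_propagate(x, edge_index):
    """x: (N, D) node features; edge_index: (2, E) long tensor of edges."""
    src, dst = edge_index
    sim = torch.cosine_similarity(x[src], x[dst], dim=1)   # per-edge gate
    msg = sim.unsqueeze(1) * x[src]                        # signed messages
    out = torch.zeros_like(x).index_add_(0, dst, msg)
    deg = torch.zeros(x.size(0), device=x.device).index_add_(
        0, dst, torch.ones_like(sim))
    return x + out / deg.clamp(min=1).unsqueeze(1)         # residual update
```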
1 code implementation • NAACL 2022 • Yifan Chen, Qi Zeng, Dilek Hakkani-Tur, Di Jin, Heng Ji, Yun Yang
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
no code implementations • 5 Dec 2021 • Di Jin, Elena Sergeeva, Wei-Hung Weng, Geeticka Chauhan, Peter Szolovits
In this review, we focus on the interpretability of the DL models in healthcare.
1 code implementation • NeurIPS 2021 • Di Jin, Zhizhi Yu, Cuiying Huo, Rui Wang, Xiao Wang, Dongxiao He, Jiawei Han
So can we reasonably utilize these segmentation rules to design a universal propagation mechanism independent of the network structural assumption?
1 code implementation • 9 Nov 2021 • Fatemeh Vahedian, Ruiyu Li, Puja Trivedi, Di Jin, Danai Koutra
Understanding the training dynamics of deep neural networks (DNNs) is important as it can lead to improved training efficiency and task performance.
1 code implementation • 27 Oct 2021 • Di Jin, Bunyamin Sisman, Hao Wei, Xin Luna Dong, Danai Koutra
AdaMEL models the attribute importance that is used to match entities through an attribute-level self-attention mechanism, and leverages the massive unlabeled data from new data sources through domain adaptation to make it generic and data-source agnostic.
no code implementations • 29 Sep 2021 • Yifan Chen, Tianning Xu, Dilek Hakkani-Tur, Di Jin, Yun Yang, Ruoqing Zhu
To accelerate the training of graph convolutional networks (GCN), many sampling-based methods have been developed for approximating the embedding aggregation.
1 code implementation • 28 Sep 2021 • Seokhwan Kim, Yang Liu, Di Jin, Alexandros Papangelis, Karthik Gopalakrishnan, Behnam Hedayatnia, Dilek Hakkani-Tur
Most prior work in dialogue modeling has focused on written conversations, largely because of the availability of existing datasets.
1 code implementation • EMNLP (NLP4ConvAI) 2021 • Di Jin, Shuyang Gao, Seokhwan Kim, Yang Liu, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to supporting domain APIs.
no code implementations • ACL (dialdoc) 2021 • Di Jin, Seokhwan Kim, Dilek Hakkani-Tur
Most prior work on task-oriented dialogue systems is restricted to limited coverage of domain APIs.
no code implementations • 26 May 2021 • Xing Su, Shan Xue, Fanzhen Liu, Jia Wu, Jian Yang, Chuan Zhou, Wenbin Hu, Cecile Paris, Surya Nepal, Di Jin, Quan Z. Sheng, Philip S. Yu
A community reveals the features and connections of its members that are different from those in other communities in a network.
no code implementations • 31 Jan 2021 • Matthew B. A. McDermott, Brendan Yap, Harry Hsu, Di Jin, Peter Szolovits
Recent developments in Natural Language Processing (NLP) demonstrate that large-scale, self-supervised pre-training can be extremely beneficial for downstream tasks.
1 code implementation • 14 Jan 2021 • Junchen Jin, Mark Heimann, Di Jin, Danai Koutra
While most network embedding techniques model the proximity between nodes in a network, recently there has been significant interest in structural embeddings that are based on node equivalences, a notion rooted in sociology: equivalences or positions are collections of nodes that have similar roles, i.e., similar functions, ties or interactions with nodes in other positions, irrespective of their distance or reachability in the network.
no code implementations • 13 Jan 2021 • Ziyang Liu, Zhaomeng Cheng, Yunjiang Jiang, Yue Shang, Wei Xiong, Sulong Xu, Bo Long, Di Jin
In this paper, we propose a novel Second-order Relevance, fundamentally different from the previous First-order Relevance, to improve result relevance prediction.
no code implementations • 3 Jan 2021 • Di Jin, Zhizhi Yu, Pengfei Jiao, Shirui Pan, Dongxiao He, Jia Wu, Philip S. Yu, Weixiong Zhang
We conclude with discussions of the challenges of the field and suggestions of possible directions for future research.
no code implementations • COLING 2020 • Amit Jindal, Arijit Ghosh Chowdhury, Aniket Didolkar, Di Jin, Ramit Sawhney, Rajiv Ratn Shah
Models with a large number of parameters are prone to over-fitting and often fail to capture the underlying input distribution.
2 code implementations • CL (ACL) 2022 • Di Jin, Zhijing Jin, Zhiting Hu, Olga Vechtomova, Rada Mihalcea
Text style transfer is an important task in natural language generation, which aims to control certain attributes in the generated text, such as politeness, emotion, humor, and many others.
no code implementations • 23 Oct 2020 • Di Jin, Xiangchen Song, Zhizhi Yu, Ziyang Liu, Heling Zhang, Zhaomeng Cheng, Jiawei Han
We propose BiTe-GCN, a novel GCN architecture with bidirectional convolution of both topology and features on text-rich networks, to address these limitations.
1 code implementation • 22 Oct 2020 • Wenzhong Yan, Di Jin, Zhidi Lin, Feng Yin
In this work, we adopt GNN for a classic but challenging nonlinear regression problem, namely the network localization.
no code implementations • 20 Oct 2020 • Yunjiang Jiang, Yue Shang, Ziyang Liu, Hongwei Shen, Yun Xiao, Wei Xiong, Sulong Xu, Weipeng Yan, Di Jin
Relevance has a significant impact on user experience and business profit for e-commerce search platforms.
2 code implementations • 28 Sep 2020 • Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, Peter Szolovits
Open domain question answering (OpenQA) tasks have recently been attracting increasing attention from the natural language processing (NLP) community.
no code implementations • 21 Sep 2020 • Di Jin, Sungchul Kim, Ryan A. Rossi, Danai Koutra
While previous work on dynamic modeling and embedding has focused on representing a stream of timestamped edges using a time-series of graphs based on a specific time-scale (e.g., 1 month), we propose the notion of an $\epsilon$-graph time-series that uses a fixed number of edges for each graph, and show its superiority over the time-scale representation used in previous work.
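The $\epsilon$-graph construction itself is easy to picture: instead of slicing a timestamped edge stream into fixed time windows, cut it into snapshots of a fixed number of edges. A minimal sketch (function and parameter names are mine):

```python
import networkx as nx

def epsilon_graph_series(edge_stream, edges_per_graph):
    """edge_stream: iterable of (u, v, timestamp); returns a list of graphs,
    each built from exactly `edges_per_graph` consecutive edges."""
    series, current = [], []
    for u, v, t in sorted(edge_stream, key=lambda e: e[2]):
        current.append((u, v))
        if len(current) == edges_per_graph:
            g = nx.Graph()
            g.add_edges_from(current)
            series.append(g)
            current = []
    return series
```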
1 code implementation • EMNLP 2020 • Xiaoyu Xing, Zhijing Jin, Di Jin, Bingning Wang, Qi Zhang, Xuanjing Huang
Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models.
2 code implementations • ECCV 2020 • Xuewen Yang, Heming Zhang, Di Jin, Yingru Liu, Chi-Hao Wu, Jianchao Tan, Dongliang Xie, Jue Wang, Xin Wang
The goal of this work is to develop a novel learning framework for accurate and expressive fashion captioning.
no code implementations • 6 Jul 2020 • Di Jin, Zhizhi Yu, Dongxiao He, Carl Yang, Philip S. Yu, Jiawei Han
Graph neural networks for HIN embeddings typically adopt a hierarchical attention (including node-level and meta-path-level attentions) to capture the information from meta-path-based neighbors.
no code implementations • ACL 2020 • Ming Yan, Hao Zhang, Di Jin, Joey Tianyi Zhou
Multiple-choice question answering (MCQA) is one of the most challenging tasks in machine reading comprehension since it requires more advanced reading comprehension skills such as logical reasoning, summarization, and arithmetic operations.
no code implementations • 11 Jun 2020 • Yingru Liu, Yucheng Xing, Xuewen Yang, Xin Wang, Jing Shi, Di Jin, Zhaoyue Chen
Learning continuous-time stochastic dynamics is a fundamental and essential problem in modeling sporadic time series, whose observations are irregular and sparse in both time and dimension.
2 code implementations • EMNLP 2020 • John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, Yanjun Qi
TextAttack also includes data augmentation and adversarial training modules for using components of adversarial attacks to improve model accuracy and robustness.
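For reference, TextAttack's documented recipe API makes running an attack a few lines; the snippet follows the project README, though exact names can shift between versions.

```python
import transformers
from textattack import Attacker
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

model_name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

wrapper = HuggingFaceModelWrapper(model, tokenizer)
attack = TextFoolerJin2019.build(wrapper)       # TextFooler attack recipe
dataset = HuggingFaceDataset("imdb", split="test")
Attacker(attack, dataset).attack_dataset()      # run and print results
```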
1 code implementation • WS 2020 • Shuyang Gao, Sanchit Agarwal, Tagyoung Chung, Di Jin, Dilek Hakkani-Tur
In this paper, we propose using machine reading comprehension (RC) in state tracking from two perspectives: model architectures and datasets.
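The RC framing can be illustrated with an off-the-shelf extractive QA model: each dialogue-state slot becomes a natural-language question over the dialogue history. The slot names and questions below are made up for illustration; the paper's models and formulations differ.

```python
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

dialogue = ("User: I need a cheap hotel in the north of town. "
            "Agent: Sure, for how many nights?")
slot_questions = {
    "hotel-pricerange": "What price range does the user want?",
    "hotel-area": "What area of town does the user want?",
}
state = {slot: qa(question=q, context=dialogue)["answer"]
         for slot, q in slot_questions.items()}
print(state)  # e.g. {'hotel-pricerange': 'cheap', 'hotel-area': 'north'}
```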
1 code implementation • ACL 2020 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Lisa Orii, Peter Szolovits
Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure.
1 code implementation • 22 Jan 2020 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits
State-of-the-art neural machine translation (NMT) systems are data-hungry and perform poorly on new domains with no supervised data.
2 code implementations • 1 Oct 2019 • Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, Dilek Hakkani-Tur
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligence systems to understand human language.
no code implementations • 22 Aug 2019 • Ryan A. Rossi, Di Jin, Sungchul Kim, Nesreen K. Ahmed, Danai Koutra, John Boaz Lee
Unfortunately, recent work has sometimes confused the notion of structural roles and communities (based on proximity) leading to misleading or incorrect claims about the capabilities of network embedding methods.
6 code implementations • 27 Jul 2019 • Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art models.
no code implementations • ACL 2019 • Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, Kenneth Kwok
We propose a new neural transfer method termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER).
no code implementations • ICLR 2019 • Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Rick Siow Mong Goh, Kenneth Kwok
We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER).
1 code implementation • 18 Apr 2019 • Di Jin, Mark Heimann, Ryan Rossi, Danai Koutra
Identity stitching, the task of identifying and matching various online references (e.g., sessions over different devices and timespans) to the same user in real-world web services, is crucial for personalization and recommendations.
2 code implementations • WS 2019 • Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, Matthew B. A. McDermott
Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months.
3 code implementations • IJCNLP 2019 • Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus
Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content.
1 code implementation • 11 Nov 2018 • Di Jin, Ryan Rossi, Danai Koutra, Eunyee Koh, Sungchul Kim, Anup Rao
Motivated by the computational and storage challenges that dense embeddings pose, we introduce the problem of latent network summarization that aims to learn a compact, latent representation of the graph structure with dimensionality that is independent of the input graph size (i.e., #nodes and #edges), while retaining the ability to derive node representations on the fly.
1 code implementation • 30 Oct 2018 • Di Jin, Peter Szolovits
One is the PubMed-PICO dataset, where our best results outperform the previous best by 5.5%, 7.9%, and 5.8% for P, I, and O elements in terms of F1 score, respectively.
1 code implementation • EMNLP 2018 • Di Jin, Peter Szolovits
Prevalent models based on artificial neural network (ANN) for sentence classification often classify sentences in isolation without considering the context in which sentences appear.
Ranked #1 on Sentence Classification on PubMed 20k RCT
no code implementations • COLING 2018 • Fengyu Guo, Ruifang He, Di Jin, Jianwu Dang, Longbiao Wang, Xiangang Li
In this paper, we propose a novel neural Tensor network framework with Interactive Attention and Sparse Learning (TIASL) for implicit discourse relation recognition.
no code implementations • COLING 2018 • Ruifang He, Xuefei Zhang, Di Jin, Longbiao Wang, Jianwu Dang, Xiangang Li
They ignore that a user discusses diverse topics when dynamically interacting with different people.
2 code implementations • WS 2018 • Di Jin, Peter Szolovits
Successful evidence-based medicine (EBM) applications rely on answering clinical questions by analyzing large medical literature databases.
no code implementations • SEMEVAL 2018 • Di Jin, Franck Dernoncourt, Elena Sergeeva, Matthew McDermott, Geeticka Chauhan
SemEval 2018 Task 7 tasked participants with building a system to classify the relation between two entities within a sentence into one of 6 possible relation types.
1 code implementation • 7 Jan 2018 • Hao Zhang, Xinlin Xie, Chunyu Fang, Yicong Yang, Di Jin, Peng Fei
We combine generative adversarial network (GAN) with light microscopy to achieve deep learning super-resolution under a large field of view (FOV).
no code implementations • SEMEVAL 2017 • I-Ta Lee, Mahak Goindani, Chang Li, Di Jin, Kristen Marie Johnson, Xiao Zhang, Maria Leonor Pacheco, Dan Goldwasser
Our proposed system consists of two subsystems and one regression model for predicting STS scores.
no code implementations • ACL 2017 • Kristen Johnson, Di Jin, Dan Goldwasser
Framing is a political strategy in which politicians carefully word their statements in order to control public perception of issues.