no code implementations • ACL 2022 • Chen Zhao, Yu Su, Adam Pauls, Emmanouil Antonios Platanios
Text-to-SQL parsers map natural language questions to programs that are executable over tables to generate answers, and are typically evaluated on large-scale datasets like Spider (Yu et al., 2018).
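The pipeline this entry describes, mapping a natural-language question to a SQL program and executing it over a table to produce the answer, can be illustrated with a minimal sketch; the table, question, and hand-written query below are hypothetical stand-ins, not drawn from Spider:

```python
import sqlite3

# Hypothetical toy table standing in for a Spider-style database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE singers (name TEXT, country TEXT, sales INTEGER)")
conn.executemany(
    "INSERT INTO singers VALUES (?, ?, ?)",
    [("Ana", "Spain", 120), ("Bo", "Sweden", 300), ("Cai", "Sweden", 150)],
)

# A text-to-SQL parser would produce the SQL program from the question;
# here the mapping is written by hand purely for illustration.
question = "How many singers are from Sweden?"
predicted_sql = "SELECT COUNT(*) FROM singers WHERE country = 'Sweden'"

# Executing the predicted program over the table yields the answer.
(answer,) = conn.execute(predicted_sql).fetchone()
print(answer)  # 2
```

Execution-based evaluation, as on Spider, compares the result of running the predicted program against the result of the gold program rather than comparing query strings.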
1 code implementation • 23 May 2024 • Boshi Wang, Xiang Yue, Yu Su, Huan Sun
The levels of generalization also vary across reasoning types: when faced with out-of-distribution examples, transformers fail to systematically generalize for composition but succeed for comparison.
1 code implementation • 23 May 2024 • Bernal Jiménez Gutiérrez, Yiheng Shu, Yu Gu, Michihiro Yasunaga, Yu Su
In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting.
no code implementations • 28 Mar 2024 • Kai Zhang, Yi Luan, Hexiang Hu, Kenton Lee, Siyuan Qiao, Wenhu Chen, Yu Su, Ming-Wei Chang
Image retrieval, i.e., finding desired images given a reference image, inherently encompasses rich, multi-faceted search intents that are difficult to capture solely using image-based measures.
1 code implementation • 7 Mar 2024 • Boshi Wang, Hao Fang, Jason Eisner, Benjamin Van Durme, Yu Su
We find that existing LLMs, including GPT-4 and open-source LLMs specifically fine-tuned for tool use, only reach a correctness rate in the range of 30% to 60%, far from reliable use in practice.
no code implementations • 22 Feb 2024 • Yu Gu, Yiheng Shu, Hao Yu, Xiao Liu, Yuxiao Dong, Jie Tang, Jayanth Srinivasa, Hugo Latapie, Yu Su
The applications of large language models (LLMs) have expanded well beyond the confines of text processing, signaling a new era where LLMs are envisioned as generalist language agents capable of operating within complex real-world environments.
1 code implementation • 16 Feb 2024 • Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
In this paper, we examine how large language models (LLMs) solve multi-step problems under a language agent framework with three components: a generator, a discriminator, and a planning method.
1 code implementation • 15 Feb 2024 • Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu Su, Chaowei Xiao, Huan Sun
There is a surprisingly large gap between the speed and scale of their development and deployment and our understanding of their safety risks.
no code implementations • 6 Feb 2024 • Jihyung Kil, Chan Hee Song, Boyuan Zheng, Xiang Deng, Yu Su, Wei-Lun Chao
Automatic web navigation aims to build a web agent that can follow language instructions to execute complex and diverse tasks on real-world websites.
1 code implementation • 2 Feb 2024 • Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, Yu Su
Are these language agents capable of planning in more complex settings that are out of the reach of prior AI agents?
1 code implementation • 31 Jan 2024 • Tinghui Zhu, Kai Zhang, Jian Xie, Yu Su
Recent advancements have significantly augmented the reasoning capabilities of Large Language Models (LLMs) through various methodologies, especially chain-of-thought (CoT) reasoning.
1 code implementation • 3 Jan 2024 • Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su
Recent developments in large multimodal models (LMMs), especially GPT-4V(ision) and Gemini, have been quickly expanding the capability boundaries of multimodal models beyond traditional tasks like image captioning and visual question answering.
no code implementations • 31 Dec 2023 • Vardaan Pahuja, Weidi Luo, Yu Gu, Cheng-Hao Tu, Hong-You Chen, Tanya Berger-Wolf, Charles Stewart, Song Gao, Wei-Lun Chao, Yu Su
In this work, we leverage the structured context associated with the camera trap images to improve out-of-distribution generalization for the task of species identification in camera traps.
no code implementations • 5 Dec 2023 • Renze Lou, Kai Zhang, Jian Xie, Yuxuan Sun, Janice Ahn, Hanzi Xu, Yu Su, Wenpeng Yin
In the realm of large language models (LLMs), enhancing instruction-following capability often involves curating expansive training data.
1 code implementation • 30 Nov 2023 • Samuel Stevens, Jiaman Wu, Matthew J Thompson, Elizabeth G Campolongo, Chan Hee Song, David Edward Carlyn, Li Dong, Wasila M Dahdul, Charles Stewart, Tanya Berger-Wolf, Wei-Lun Chao, Yu Su
We then develop BioCLIP, a foundation model for the tree of life, leveraging the unique properties of biology captured by TreeOfLife-10M, namely the abundance and variety of images of plants, animals, and fungi, together with the availability of rich structured biological knowledge.
2 code implementations • 27 Nov 2023 • Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.
1 code implementation • 25 Nov 2023 • Bernal Jimenez Gutierrez, Yuqing Mao, Vinh Nguyen, Kin Wah Fung, Yu Su, Olivier Bodenreider
In this work, we study the case of UMLS vocabulary insertion, an important real-world task in which hundreds of thousands of new terms, referred to as atoms, are added to the UMLS, one of the most comprehensive open-source biomedical knowledge bases.
1 code implementation • 7 Nov 2023 • Dipanjyoti Paul, Arpita Chowdhury, Xinqi Xiong, Feng-Ju Chang, David Carlyn, Samuel Stevens, Kaiya L. Provost, Anuj Karpatne, Bryan Carstens, Daniel Rubenstein, Charles Stewart, Tanya Berger-Wolf, Yu Su, Wei-Lun Chao
We present a novel usage of Transformers to make image classification interpretable.
1 code implementation • 20 Oct 2023 • Yuxiao Qu, Jinmeng Rao, Song Gao, Qianheng Zhang, Wei-Lun Chao, Yu Su, Michelle Miller, Alfonso Morales, Patrick Huber
This paper proposes FLEE-GNN, a novel Federated Learning System for Edge-Enhanced Graph Neural Network, designed to overcome these challenges and enhance the analysis of the geospatial resilience of multicommodity food flow networks, a type of spatial network.
1 code implementation • 4 Oct 2023 • Yuxuan Sun, Kai Zhang, Yu Su
In addition, the effectiveness of our framework transfers successfully to the few-shot setting, enabling LMMs at the 10B-parameter scale to be competitive with, or even outperform, much larger language models such as ChatGPT and GPT-4.
1 code implementation • 11 Sep 2023 • Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, Wenhu Chen
The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset.
1 code implementation • 1 Sep 2023 • Jiatong Li, Qi Liu, Fei Wang, Jiayu Liu, Zhenya Huang, Fangzhou Yao, Linbo Zhu, Yu Su
However, we notice that this paradigm leads to two inevitable problems, non-identifiability and explainability overfitting, which harm both the quantification of learners' cognitive states and the quality of web learning services.
1 code implementation • 7 Aug 2023 • Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang
We present AgentBench, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities in a multi-turn open-ended generation setting.
1 code implementation • 29 Jul 2023 • Lingbo Mo, Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Sunit Singh, Samuel Stevens, Chang-You Tai, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
We introduce TacoBot, a user-centered task-oriented digital assistant designed to guide users through complex real-world tasks with multiple steps.
no code implementations • 23 Jul 2023 • Shicong Liu, Zhen Gao, Gaojie Chen, Yu Su, Lu Peng
The Space-Air-Ground-Sea integrated network calls for more robust and secure transmission techniques against jamming.
no code implementations • 6 Jul 2023 • Zhen Gao, Shicong Liu, Yu Su, Zhongxiang Li, Dezhi Zheng
Moreover, based on the acquired channel semantics, we further propose a knowledge-driven deep-unfolding multi-user beamformer, which is capable of achieving good spectral efficiency with robustness to imperfect CSI in outdoor XR scenarios.
1 code implementation • 30 Jun 2023 • Bernal Jiménez Gutiérrez, Huan Sun, Yu Su
As opposed to general English, many concepts in biomedical terminology have been designed in recent history by biomedical professionals with the goal of being precise and concise.
1 code implementation • NeurIPS 2023 • Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, Yu Su
To address this issue, we introduce MagicBrush (https://osu-nlp-group.github.io/MagicBrush/), the first large-scale, manually annotated dataset for instruction-guided real image editing that covers diverse scenarios: single-turn, multi-turn, mask-provided, and mask-free editing.
1 code implementation • NeurIPS 2023 • Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, Yu Su
We introduce Mind2Web, the first dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.
1 code implementation • 26 May 2023 • Tianshu Zhang, Changchang Liu, Wei-Han Lee, Yu Su, Huan Sun
By leveraging data from multiple clients, the FL paradigm can be especially beneficial for clients that have little training data to develop a data-hungry neural semantic parser on their own.
1 code implementation • 23 May 2023 • Shijie Chen, Ziru Chen, Huan Sun, Yu Su
Despite remarkable progress in text-to-SQL semantic parsing in recent years, the performance of existing parsers is still far from perfect.
1 code implementation • 22 May 2023 • Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, Yu Su
By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory.
1 code implementation • 22 May 2023 • Ziru Chen, Shijie Chen, Michael White, Raymond Mooney, Ali Payani, Jayanth Srinivasa, Yu Su, Huan Sun
Thus, we propose a novel representation for SQL queries and their edits that adheres more closely to the pre-training corpora of language models of code.
1 code implementation • 18 May 2023 • Kai Zhang, Bernal Jiménez Gutiérrez, Yu Su
Recent work has shown that fine-tuning large language models (LLMs) on large-scale instruction-following datasets substantially improves their performance on a wide range of NLP tasks, especially in the zero-shot setting.
Ranked #1 on Relation Extraction on SemEval-2010 Task 8
1 code implementation • 15 May 2023 • Samuel Stevens, Yu Su
Over-parameterized neural language models (LMs) can memorize and recite long sequences of training data.
1 code implementation • 10 May 2023 • Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, Yu Su, Huan Sun
We manually curate a set of test examples covering 12 domains from a generative search engine, New Bing.
1 code implementation • 7 May 2023 • Agus Sudjianto, Aijun Zhang, Zebin Yang, Yu Su, Ningzhou Zeng
PiML (read $\pi$-ML, /ˈpaɪ ˈem ˈel/) is an integrated and open-access Python toolbox for interpretable machine learning model development and model diagnostics.
1 code implementation • 2 May 2023 • Tianle Li, Xueguang Ma, Alex Zhuang, Yu Gu, Yu Su, Wenhu Chen
On GrailQA and WebQSP, our model is also on par with other fully-trained models.
no code implementations • 5 Apr 2023 • Shuanghong Shen, Enhong Chen, Bihan Xu, Qi Liu, Zhenya Huang, Linbo Zhu, Yu Su
In this paper, we present the Quiz-based Knowledge Tracing (QKT) model to monitor students' knowledge states according to their quiz-based learning interactions.
no code implementations • 20 Dec 2022 • FatemehSadat Mireshghallah, Yu Su, Tatsunori Hashimoto, Jason Eisner, Richard Shin
Task-oriented dialogue systems often assist users with personal or confidential matters.
2 code implementations • 19 Dec 2022 • Yu Gu, Xiang Deng, Yu Su
Most existing work for grounded language understanding uses LMs to directly generate plans that can be executed in the environment to achieve the desired effects.
1 code implementation • 19 Dec 2022 • Vardaan Pahuja, Boshi Wang, Hugo Latapie, Jayanth Srinivasa, Yu Su
To address the limitations of existing KG link prediction frameworks, we propose a novel retrieve-and-read framework, which first retrieves a relevant subgraph context for the query and then jointly reasons over the context and the query with a high-capacity reader.
Ranked #2 on Link Prediction on FB15k-237
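The retrieval step of the retrieve-and-read framework described above, gathering a relevant subgraph context around the query entity before a high-capacity reader reasons over it, can be sketched as a simple k-hop expansion; the toy triples and entity names below are hypothetical, and a real system would use learned retrieval rather than plain breadth-first expansion:

```python
from collections import deque

# Hypothetical toy knowledge graph as (head, relation, tail) triples.
triples = [
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Marie_Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
    ("Poland", "continent", "Europe"),
    ("Physics", "subfield", "Nuclear_Physics"),
]

def retrieve_subgraph(entity, hops=2):
    """Collect all triples reachable within `hops` of the query entity;
    a reader model would then reason jointly over this context and the query."""
    frontier, seen, context = deque([(entity, 0)]), {entity}, []
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue  # do not expand beyond the hop budget
        for h, r, t in triples:
            if h == node and (h, r, t) not in context:
                context.append((h, r, t))
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, depth + 1))
    return context

# Returns the 4 triples within 2 hops; the 3-hop triple
# ("Poland", "continent", "Europe") is excluded.
print(retrieve_subgraph("Marie_Curie"))
```

Restricting the reader to a compact retrieved context is what lets a high-capacity model be applied without scoring every candidate triple in the full graph.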
no code implementations • ICCV 2023 • Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, Yu Su
In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models to do few-shot planning for embodied agents.
no code implementations • 25 Sep 2022 • Chenglong Li, Qiwen Zhu, Tubiao Liu, Jin Tang, Yu Su
To address this issue, we design a multi-stage convolution-transformer network for step segmentation.
no code implementations • 12 Sep 2022 • Yu Gu, Vardaan Pahuja, Gong Cheng, Yu Su
In this survey, we situate KBQA in the broader literature of semantic parsing and give a comprehensive account of how existing KBQA approaches attempt to address the unique challenges.
no code implementations • 11 Jul 2022 • Shijie Chen, Ziru Chen, Xiang Deng, Ashley Lewis, Lingbo Mo, Samuel Stevens, Zhen Wang, Xiang Yue, Tianshu Zhang, Yu Su, Huan Sun
We present TacoBot, a task-oriented dialogue system built for the inaugural Alexa Prize TaskBot Challenge, which assists users in completing multi-step cooking and home improvement tasks.
1 code implementation • 24 May 2022 • Elias Stengel-Eskin, Emmanouil Antonios Platanios, Adam Pauls, Sam Thomson, Hao Fang, Benjamin Van Durme, Jason Eisner, Yu Su
Rejecting class imbalance as the sole culprit, we reveal that the trend is closely associated with an effect we call source signal dilution, where strong lexical cues for the new symbol become diluted as the training dataset grows.
1 code implementation • COLING 2022 • Yu Gu, Yu Su
Question answering on knowledge bases (KBQA) poses a unique challenge for semantic parsing research due to two intertwined challenges: large search space and ambiguities in schema linking.
1 code implementation • 16 Mar 2022 • Bernal Jiménez Gutiérrez, Nikolas McNeal, Clay Washington, You Chen, Lang Li, Huan Sun, Yu Su
In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two highly representative biomedical information extraction tasks, named entity recognition and relation extraction.
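The few-shot in-context learning setting compared in this study amounts to packing labeled demonstrations into the model's prompt rather than updating its weights. A minimal sketch of how such a prompt might be assembled is below; the instruction, example sentences, and label format are hypothetical, not the paper's actual prompts:

```python
# Hypothetical few-shot demonstrations for biomedical NER
# (drug and disease mentions), written purely for illustration.
few_shot_examples = [
    ("Aspirin reduced the risk of stroke.",
     "Drug: Aspirin | Disease: stroke"),
    ("Metformin is prescribed for type 2 diabetes.",
     "Drug: Metformin | Disease: type 2 diabetes"),
]
query = "Ibuprofen may cause gastric ulcers."

def build_prompt(examples, query):
    """Concatenate an instruction, labeled demonstrations, and the
    unlabeled query; the LLM is expected to continue after 'Entities:'."""
    parts = ["Extract drug and disease entities from each sentence."]
    for sentence, labels in examples:
        parts.append(f"Sentence: {sentence}\nEntities: {labels}")
    parts.append(f"Sentence: {query}\nEntities:")
    return "\n\n".join(parts)

prompt = build_prompt(few_shot_examples, query)
print(prompt)
```

By contrast, the fine-tuning baseline updates a BERT-sized model's parameters on the same handful of examples instead of placing them in the input.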
1 code implementation • CVPR 2022 • Chan Hee Song, Jihyung Kil, Tai-Yu Pan, Brian M. Sadler, Wei-Lun Chao, Yu Su
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions to complete the underlying task.
no code implementations • 9 Dec 2021 • Saghar Hosseini, Ahmed Hassan Awadallah, Yu Su
We define new compositional generalization tasks for NL2API which explore the models' ability to extrapolate from simple API calls in the training set to new and more complex API calls in the inference phase.
1 code implementation • EMNLP 2021 • Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, Huan Sun
We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
Ranked #1 on Semantic Parsing on GraphQuestions
no code implementations • SEMEVAL 2021 • Genyu Zhang, Yu Su, Changhong He, Lei Lin, Chengjie Sun, Lili Shan
This paper describes the winning system in the End-to-end Pipeline phase for the NLPContributionGraph task.
1 code implementation • ACL 2021 • Vardaan Pahuja, Yu Gu, Wenhu Chen, Mehdi Bahrami, Lei Liu, Wei-Peng Chen, Yu Su
Knowledge bases (KBs) and text often contain complementary knowledge: KBs store structured knowledge that can support long range reasoning, while text stores more comprehensive and timely knowledge in an unstructured way.
no code implementations • NAACL 2021 • Pengcheng Yin, Hao Fang, Graham Neubig, Adam Pauls, Emmanouil Antonios Platanios, Yu Su, Sam Thomson, Jacob Andreas
We describe a span-level supervised attention loss that improves compositional generalization in semantic parsers.
no code implementations • 15 Jan 2021 • Haoyang Bi, Haiping Ma, Zhenya Huang, Yu Yin, Qi Liu, Enhong Chen, Yu Su, Shijin Wang
In this paper, we study a novel model-agnostic CAT problem, where we aim to propose a flexible framework that can adapt to different cognitive models.
no code implementations • 15 Dec 2020 • Yifeng Guo, Yu Su, Zebin Yang, Aijun Zhang
In this paper, we propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions (GAMMLI).
2 code implementations • EMNLP (BlackboxNLP) 2021 • Samuel Stevens, Yu Su
Pre-trained language models (PLMs) like BERT are being used for almost all language-related tasks, but interpreting their behavior still remains a significant challenge and many important questions remain largely unanswered.
1 code implementation • 16 Nov 2020 • Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, Yu Su
To facilitate the development of KBQA models with stronger generalization, we construct and release a new large-scale, high-quality dataset with 64,331 questions, GrailQA, and provide evaluation settings for all three levels of generalization.
no code implementations • 21 Oct 2020 • Shutang You, Yilu Liu, Hongyu Li, Shengyuan Liu, Kaiqi Sun, Yinfeng Zhao, Huangqing Xiao, Jiaojiao Dong, Yu Su, Weikang Wang, Yi Cui
Power grid data are rapidly growing in volume with the deployment of various sensors.
no code implementations • 10 Oct 2020 • Yao Wang, Yu Su, Rui-Xue Xu, Xiao Zheng, YiJing Yan
In this work, on the basis of the thermodynamic solvation potentials analysis, we reexamine Marcus' formula with respect to the Rice-Ramsperger-Kassel-Marcus (RRKM) theory.
1 code implementation • EMNLP 2020 • Wenhu Chen, Yu Su, Xifeng Yan, William Yang Wang
We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text.
Ranked #8 on KG-to-Text Generation on WebNLG 2.0 (Unconstrained)
1 code implementation • 24 Sep 2020 • Semantic Machines, Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, Hao Fang, Alan Guo, David Hall, Kristin Hayes, Kellie Hill, Diana Ho, Wendy Iwaszuk, Smriti Jha, Dan Klein, Jayant Krishnamurthy, Theo Lanman, Percy Liang, Christopher H Lin, Ilya Lintsbakh, Andy McGovern, Aleksandr Nisnevich, Adam Pauls, Dmitrij Petters, Brent Read, Dan Roth, Subhro Roy, Jesse Rusak, Beth Short, Div Slomin, Ben Snyder, Stephon Striplin, Yu Su, Zachary Tellman, Sam Thomson, Andrei Vorobev, Izabela Witoszko, Jason Wolfe, Abby Wray, Yuchen Zhang, Alexander Zotov
We describe an approach to task-oriented dialogue in which dialogue state is represented as a dataflow graph.
1 code implementation • NLP-COVID19 (ACL) 2020 • Bernal Jiménez Gutiérrez, Juncheng Zeng, Dong-dong Zhang, Ping Zhang, Yu Su
The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields.
1 code implementation • EMNLP 2020 • Ziyu Yao, Yiqi Tang, Wen-tau Yih, Huan Sun, Yu Su
Despite widely successful applications, bootstrapping and fine-tuning semantic parsers is still a tedious process, with challenges such as costly data annotation and privacy risks.
1 code implementation • ACL 2020 • Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, William Yang Wang
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences, as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference.
no code implementations • 27 Nov 2019 • Keke Tang, Peng Song, Yuexin Ma, Zhaoquan Gu, Yu Su, Zhihong Tian, Wenping Wang
High-level (e.g., semantic) features encoded in the latter layers of convolutional neural networks are extensively exploited for image classification, leaving low-level (e.g., color) features in the early layers underexplored.
2 code implementations • IJCNLP 2019 • Ziyu Yao, Yu Su, Huan Sun, Wen-tau Yih
As a promising paradigm, interactive semantic parsing has been shown to improve both semantic parsing accuracy and user confidence in the results.
1 code implementation • 7 Jun 2019 • Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu
In EERNN, we simply summarize each student's state into an integrated vector and trace it with a recurrent neural network, where we design a bidirectional LSTM to learn the encoding of each exercise's content.
1 code implementation • ACL 2019 • Zhiyu Chen, Hanwen Zha, Honglei Liu, Wenhu Chen, Xifeng Yan, Yu Su
Pre-trained embeddings such as word embeddings and sentence embeddings are fundamental tools facilitating a wide range of downstream NLP tasks.
no code implementations • 27 May 2019 • Yu Yin, Qi Liu, Zhenya Huang, Enhong Chen, Wei Tong, Shijin Wang, Yu Su
Then we propose a two-level hierarchical pre-training algorithm to learn better understanding of test questions in an unsupervised way.
1 code implementation • NAACL 2019 • Wenhu Chen, Yu Su, Yilin Shen, Zhiyu Chen, Xifeng Yan, William Wang
Under deep neural networks, a pre-defined vocabulary is required to vectorize text inputs.
no code implementations • 7 Nov 2018 • Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang
Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus, and do not generalize to open vocabulary scenarios.
no code implementations • EMNLP 2018 • Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan
The SQL queries in WikiSQL are simple: Each involves one relation and does not have any join operation.
1 code implementation • EMNLP 2018 • Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, William Yang Wang
Then, we pre-train a state tracker for the source language as a teacher, which is able to exploit easy-to-access parallel data.
no code implementations • ACL 2018 • Izzeddin Gur, Semih Yavuz, Yu Su, Xifeng Yan
The recent advance in deep learning and semantic parsing has significantly improved the translation accuracy of natural language questions to structured queries.
no code implementations • 1 Jan 2018 • Farzin Ghorban, Javier Marín, Yu Su, Alessandro Colombo, Anton Kummert
Convolutional neural networks (CNNs) have demonstrated their superiority in numerous computer vision tasks, yet their computational cost is prohibitive for many real-time applications such as pedestrian detection, which is usually performed on low-consumption hardware.
no code implementations • EMNLP 2017 • Jie Zhao, Yu Su, Ziyu Guan, Huan Sun
Given a question and a set of answer candidates, answer triggering determines whether the candidate set contains any correct answers.
no code implementations • EMNLP 2017 • Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan
The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes.
1 code implementation • EMNLP 2017 • Yu Su, Xifeng Yan
Existing studies on semantic parsing mainly focus on the in-domain setting.
2 code implementations • NAACL 2018 • Yu Su, Honglei Liu, Semih Yavuz, Izzeddin Gur, Huan Sun, Xifeng Yan
We study the problem of textual relation embedding with distant supervision.