1 code implementation • EMNLP (ACL) 2021 • Wenhao Yu, Meng Jiang, Zhiting Hu, Qingyun Wang, Heng Ji, Nazneen Rajani
Knowledge-enriched text generation poses unique challenges in modeling and learning, driving active research in several core directions: integrated modeling of neural representations and symbolic information in sequential, hierarchical, and graphical structures; learning without direct supervision, given the cost of structured annotation; efficient optimization and inference under massive, global constraints; language grounding in multiple modalities; and generative reasoning with implicit commonsense and background knowledge.
no code implementations • 30 Jan 2025 • Yumeng Wang, Zhiyuan Fan, Qingyun Wang, May Fung, Heng Ji
To address this, we explore the Cross-Lingual Self-Aligning ability of Language Models (CALM) to align knowledge across languages.
no code implementations • 24 Oct 2024 • Sha Li, Revanth Gangi Reddy, Khanh Duy Nguyen, Qingyun Wang, May Fung, Chi Han, Jiawei Han, Kartik Natarajan, Clare R. Voss, Heng Ji
Complex news events, such as natural disasters and socio-political conflicts, require swift responses from the government and society.
no code implementations • 24 Oct 2024 • Kexuan Xin, Qingyun Wang, Junyu Chen, Pengfei Yu, Huimin Zhao, Heng Ji
In the rapidly evolving field of metabolic engineering, the quest for efficient and precise gene target identification for metabolite production enhancement presents significant challenges.
1 code implementation • 9 Oct 2024 • Cheng Li, May Fung, Qingyun Wang, Chi Han, Manling Li, Jindong Wang, Heng Ji
In this paper, we introduce MentalArena, a self-play framework to train language models by generating domain-specific personalized data, where we obtain a better model capable of making a personalized diagnosis and treatment (as a therapist) and providing information (as a patient).
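A self-play data-generation loop of this kind can be sketched in miniature as follows. This is a toy illustration only: the role functions below are stand-ins for actual language-model calls, and the prompts and dialogue format are invented assumptions, not components of MentalArena itself.

```python
# Toy sketch of self-play data generation: one "model" plays both the
# patient and therapist roles, and each simulated exchange becomes a
# new (input, target) training pair. Real systems would replace the
# string templates below with language-model generations.

def patient_turn(symptom):
    # Stand-in for the model acting as a patient describing a symptom.
    return f"I have been experiencing {symptom} lately."

def therapist_turn(statement):
    # Stand-in for the model acting as a therapist responding to it.
    symptom = statement.removeprefix("I have been experiencing ").removesuffix(" lately.")
    return f"Based on your report of {symptom}, let's discuss coping strategies."

def self_play(symptoms):
    # Collect the generated dialogues as supervised training pairs.
    data = []
    for s in symptoms:
        utterance = patient_turn(s)
        response = therapist_turn(utterance)
        data.append((utterance, response))
    return data

pairs = self_play(["insomnia", "low mood"])
for inp, out in pairs:
    print(inp, "->", out)
```

The collected pairs would then be fed back into fine-tuning, which is what lets the model improve through its own play rather than through externally annotated data.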
1 code implementation • 5 Oct 2024 • Jiayi He, Hehai Lin, Qingyun Wang, Yi Fung, Heng Ji
While Vision-Language Models (VLMs) have shown remarkable abilities in visual and language reasoning tasks, they can still generate flawed responses.
no code implementations • 18 Sep 2024 • Shuowen Liang, Sisi Li, Qingyun Wang, Cen Zhang, Kaiquan Zhu, Tian Yang
To enrich the sources of skeleton images, recent works have investigated generating pose skeletons from natural language.
1 code implementation • 26 Aug 2024 • Ruochen Li, Teerth Patel, Qingyun Wang, Xinya Du
Machine learning research, crucial for technological advancements and innovation, often faces significant challenges due to its inherent complexity, slow pace of experimentation, and the necessity for specialized expertise.
1 code implementation • 22 Feb 2024 • Carl Edwards, Qingyun Wang, Lawrence Zhao, Heng Ji
Language-molecule models have emerged as an exciting direction for molecular discovery and understanding.
1 code implementation • 19 Jan 2024 • Hongyi Liu, Qingyun Wang, Payam Karisani, Heng Ji
In our experiments, we observed that such a model is prone to mislabeling the source entities, which can often appear in the text, as the target entities.
1 code implementation • 18 Jan 2024 • Qingyun Wang, Zixuan Zhang, Hongxiang Li, Xuan Liu, Jiawei Han, Huimin Zhao, Heng Ji
Fine-grained few-shot entity extraction in the chemical domain faces two unique challenges.
1 code implementation • 23 May 2023 • Qingyun Wang, Doug Downey, Heng Ji, Tom Hope
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
Contextualized Literature-based Discovery • Link Prediction • +1
1 code implementation • 25 Aug 2022 • Qingyun Wang, Manling Li, Hou Pong Chan, Lifu Huang, Julia Hockenmaier, Girish Chowdhary, Heng Ji
Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities.
1 code implementation • ACL 2021 • Qingyun Wang, Semih Yavuz, Victoria Lin, Heng Ji, Nazneen Rajani
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.
Ranked #3 on Data-to-Text Generation on WebNLG Full (using extra training data)
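Graph-to-text generation with a PLM typically starts by linearizing the input graph's triples into a plain token sequence the model can consume as text. A minimal sketch of that step is below; the `<H>`/`<R>`/`<T>` tags follow a convention common in this line of work and are an assumption here, not necessarily this paper's exact input format, and the example triples are invented.

```python
def linearize_graph(triples):
    """Flatten (head, relation, tail) triples into one tagged string that a
    sequence-to-sequence pre-trained language model can take as input text."""
    parts = []
    for head, relation, tail in triples:
        parts.append(f"<H> {head} <R> {relation} <T> {tail}")
    return " ".join(parts)

# Hypothetical WebNLG-style input graph.
triples = [("Alan Bean", "occupation", "astronaut"),
           ("Alan Bean", "mission", "Apollo 12")]
print(linearize_graph(triples))
```

The resulting string is then fed to the PLM like any other source sentence, which is what lets a text-to-text model stand in for a structured graph encoder.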
1 code implementation • INLG (ACL) 2020 • Qingyun Wang, Qi Zeng, Lifu Huang, Kevin Knight, Heng Ji, Nazneen Fatema Rajani
To assist the human review process, we build a novel ReviewRobot that automatically assigns a review score and writes comments for multiple categories such as novelty and meaningful comparison.
3 code implementations • 9 Oct 2020 • Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang
To address this issue, researchers have considered incorporating various forms of knowledge beyond the input text into the generation models.
no code implementations • NAACL 2021 • Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Yi R. Fung, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare Voss, Cynthia Schneider, Boyan Onyshkevych
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions.
2 code implementations • ACL 2019 • Qingyun Wang, Lifu Huang, Zhiying Jiang, Kevin Knight, Heng Ji, Mohit Bansal, Yi Luan
We present PaperRobot, an automatic research assistant that (1) conducts deep understanding of a large collection of human-written papers in a target domain and constructs comprehensive background knowledge graphs (KGs); (2) creates new ideas by predicting links from the background KGs, combining graph attention and contextual text attention; and (3) incrementally writes key elements of a new paper with memory-attention networks: from the input title and predicted related entities it generates an abstract, from the abstract a conclusion and future work, and from the future work a title for a follow-on paper.
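The link-prediction step in a pipeline like this can be illustrated with a toy embedding-based scorer. Dot-product/cosine scoring here is a generic stand-in for the paper's graph-attention model, and the entities and vectors are invented for illustration.

```python
import math

# Toy link prediction over a background KG: rank candidate entities to
# connect to a query entity by embedding similarity. The fixed vectors
# below are illustrative placeholders, not learned representations.
embeddings = {
    "aspirin":      [1.0, 0.0, 0.5],
    "inflammation": [0.9, 0.1, 0.4],
    "fracture":     [0.0, 1.0, 0.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def predict_links(head, candidates):
    # Rank candidates by similarity to the head entity; the top-ranked
    # pair is proposed as a plausible new edge (i.e., a "new idea").
    scores = {c: cosine(embeddings[head], embeddings[c]) for c in candidates}
    return sorted(scores, key=scores.get, reverse=True)

ranking = predict_links("aspirin", ["inflammation", "fracture"])
print(ranking)  # most plausible new link first
```

In the full system, the proposed links would seed downstream generation (abstract, conclusion, follow-on title) rather than being an end in themselves.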
1 code implementation • WS 2018 • Qingyun Wang, Xiaoman Pan, Lifu Huang, Boliang Zhang, Zhiying Jiang, Heng Ji, Kevin Knight
We aim to automatically generate natural language descriptions about an input structured knowledge base (KB).
2 code implementations • ACL 2018 • Qingyun Wang, Zhi-Hao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, Kevin Knight
We present a paper abstract writing system based on an attentive neural sequence-to-sequence model that can take a title as input and automatically generate an abstract.
Ranked #1 on Paper generation on ACL Title and Abstract Dataset
1 code implementation • 1st Proceedings of Alexa Prize (Alexa Prize 2017) 2017 • Jieming Ji, Qingyun Wang, Zev Battad, Jiashun Gou, Jingfei Zhou, Rahul Divekar, Craig Carlson, Mei Si
We experimented with a range of conversational activities, such as providing news and playing games, and strategies for controlling the dialogue flow.
1 code implementation • 2017 Computing in Cardiology (CinC) 2017 • Shenda Hong, Meng Wu, Yuxi Zhou, Qingyun Wang, Junyuan Shang, Hongyan Li, Junqing Xie
We propose ENCASE, which combines expert features and DNNs (Deep Neural Networks) for ECG classification.
Ranked #1 on Time Series Classification on Physionet 2017 Atrial Fibrillation (F1 (Hidden Test Set) metric)
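The core idea of combining hand-crafted expert features with learned deep features before classification can be sketched generically. The specific features below, and the pooled-statistics stand-in for the DNN encoder, are assumptions for illustration rather than ENCASE's actual feature set.

```python
import statistics

def expert_features(rr_intervals):
    # Hand-crafted features a cardiologist might inspect: mean RR interval
    # and its variability (irregular rhythm is a hallmark of atrial
    # fibrillation).
    return [statistics.mean(rr_intervals), statistics.pstdev(rr_intervals)]

def deep_features(signal):
    # Stand-in for a DNN encoder: here just coarse pooled averages over
    # the first and second halves of the raw signal.
    half = len(signal) // 2
    return [sum(signal[:half]) / half, sum(signal[half:]) / (len(signal) - half)]

def combined_vector(rr_intervals, signal):
    # ENCASE-style fusion: concatenate both views into a single feature
    # vector for a downstream classifier to consume.
    return expert_features(rr_intervals) + deep_features(signal)

vec = combined_vector([0.8, 0.82, 0.79, 1.1], [0.1, 0.2, 0.15, 0.05])
print(len(vec))  # 4 features: 2 expert + 2 "deep"
```

The appeal of this design is that the classifier can lean on domain knowledge where the DNN is data-starved and on learned representations where hand-crafted features fall short.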
no code implementations • 29 May 2015 • Feifei Shen, Zhenjian Song, Congrui Wu, Jiaqi Geng, Qingyun Wang
Studying general-purpose computation on GPUs (Graphics Processing Units) can improve the image-processing capability of microcomputer systems.