1 code implementation • 2 Aug 2023 • Kanzhi Cheng, Zheng Ma, Shi Zong, Jianbing Zhang, Xinyu Dai, Jiajun Chen
Generating visually grounded image captions with specific linguistic styles using unpaired stylistic corpora is a challenging task, especially as stylized captions are expected to exhibit a wide variety of stylistic patterns.
no code implementations • 14 May 2023 • Josh Seltzer, Jiahua Pan, Kathy Cheng, Yuxiao Sun, Santosh Kolagati, Jimmy Lin, Shi Zong
Market research surveys are a powerful methodology for understanding consumer perspectives at scale, but are limited in the depth of understanding and insight they can provide.
no code implementations • 17 Jan 2023 • Shi Zong, Josh Seltzer, Jiahua Pan, Kathy Cheng, Jimmy Lin
Industry practitioners routinely face the problem of choosing the appropriate model for deployment under different considerations, such as maximizing a metric that is crucial for production, or reducing the total cost given financial concerns.
no code implementations • 18 Oct 2022 • Zheng Ma, Shi Zong, Mianzhi Pan, Jianbing Zhang, ShuJian Huang, Xinyu Dai, Jiajun Chen
In recent years, vision and language pre-training (VLP) models have advanced the state-of-the-art results in a variety of cross-modal downstream tasks.
no code implementations • 2 Oct 2022 • Zhihuan Kuang, Shi Zong, Jianbing Zhang, Jiajun Chen, Hongfu Liu
In this paper, we consider a novel research problem: music-to-text synaesthesia.
1 code implementation • Findings (NAACL) 2022 • Ming Fang, Shi Zong, Jing Li, Xinyu Dai, ShuJian Huang, Jiajun Chen
Furthermore, we conduct a comprehensive linguistic analysis around complaints, including the connections between complaints and sentiment, and a cross-lingual comparison of complaint expressions used by Chinese and English speakers.
1 code implementation • ACL 2022 • Xiaoxin Lu, Yubo Zhang, Jing Li, Shi Zong
Huge volumes of patient queries are generated daily on online health forums, rendering manual doctor allocation a labor-intensive task.
no code implementations • 15 Dec 2021 • Shuhe Wang, Jiwei Li, Yuxian Meng, Rongbin Ouyang, Guoyin Wang, Xiaoya Li, Tianwei Zhang, Shi Zong
The core idea of Faster $k$NN-MT is to use a hierarchical clustering strategy to approximate the distance between the query and a data point in the datastore. This distance is decomposed into two parts: the distance between the query and the center of the cluster that the data point belongs to, and the distance between the data point and the cluster center.
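The decomposition above can be sketched numerically. This is a minimal, hedged illustration (flat k-means in NumPy rather than the paper's hierarchical clustering; `build_clusters` and `approx_distances` are hypothetical helper names): the point-to-center distances can be precomputed offline, so at query time only one distance per *cluster* is needed instead of one per data point.

```python
import numpy as np

def build_clusters(datastore, n_clusters, n_iters=10, seed=0):
    """Toy k-means over datastore vectors (a stand-in for the
    paper's hierarchical clustering)."""
    rng = np.random.default_rng(seed)
    centers = datastore[rng.choice(len(datastore), n_clusters, replace=False)]
    for _ in range(n_iters):
        # assign each point to its nearest center, then recompute centers
        d = np.linalg.norm(datastore[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for c in range(n_clusters):
            members = datastore[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers, assign

def approx_distances(query, datastore, centers, assign):
    """Approximate dist(query, x) as
    dist(query, center(x)) + dist(x, center(x))."""
    q_to_c = np.linalg.norm(centers - query, axis=1)               # one per cluster
    x_to_c = np.linalg.norm(datastore - centers[assign], axis=1)   # precomputable offline
    return q_to_c[assign] + x_to_c
```

By the triangle inequality this approximation is an upper bound on the true query-to-point distance, which is what makes it usable for pruning candidates before any exact search.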
1 code implementation • ICLR 2022 • Yuxian Meng, Shi Zong, Xiaoya Li, Xiaofei Sun, Tianwei Zhang, Fei Wu, Jiwei Li
Inspired by the notion that "to copy is easier than to memorize", in this work, we introduce GNN-LM, which extends the vanilla neural language model (LM) by allowing it to reference similar contexts in the entire training corpus.
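The general retrieval-augmented-LM idea behind this line of work can be sketched as follows. This is a simplified, hedged illustration only: GNN-LM aggregates the retrieved contexts with a graph neural network, whereas this sketch mixes a distance-weighted distribution over the retrieved contexts' recorded next tokens into the base LM distribution (a kNN-LM-style interpolation); `knn_augmented_probs` and its parameters are hypothetical names.

```python
import numpy as np

def knn_augmented_probs(query_hidden, base_probs, keys, next_tokens,
                        vocab_size, k=4, lam=0.5, temperature=1.0):
    """Retrieve the k stored contexts (rows of `keys`) closest to the
    current hidden state, build a distribution over their recorded next
    tokens, and interpolate it with the base LM distribution."""
    d = np.linalg.norm(keys - query_hidden, axis=1)
    nn = np.argsort(d)[:k]                      # indices of k nearest contexts
    w = np.exp(-d[nn] / temperature)
    w /= w.sum()                                # softmax over negative distances
    retrieval_probs = np.zeros(vocab_size)
    for idx, weight in zip(nn, w):
        retrieval_probs[next_tokens[idx]] += weight
    return lam * base_probs + (1 - lam) * retrieval_probs
```

Since both mixed distributions sum to one, the interpolated output is itself a valid distribution for any `lam` in [0, 1].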
1 code implementation • ACL 2020 • Shi Zong, Alan Ritter, Eduard Hovy
We present a number of linguistic metrics computed over text associated with people's predictions about the future, including uncertainty, readability, and emotion.
2 code implementations • COLING 2022 • Shi Zong, Ashutosh Baheti, Wei Xu, Alan Ritter
In this paper, we present a manually annotated corpus of 10,000 tweets containing public reports of five COVID-19 events, including positive and negative tests, deaths, denied access to testing, claimed cures and preventions.
1 code implementation • NAACL 2019 • Shi Zong, Alan Ritter, Graham Mueller, Evan Wright
In this paper, we investigate methods to analyze the severity of cybersecurity threats based on the language that is used to describe them online.
no code implementations • 25 Jan 2017 • Shi Zong, Branislav Kveton, Shlomo Berkovsky, Azin Ashkan, Nikos Vlassis, Zheng Wen
To the best of our knowledge, this is the first large-scale causal study of the impact of weather on TV watching patterns.
1 code implementation • 17 Mar 2016 • Shi Zong, Hao Ni, Kenny Sung, Nan Rosemary Ke, Zheng Wen, Branislav Kveton
In this work, we study cascading bandits, an online learning variant of the cascade model where the goal is to recommend $K$ most attractive items from a large set of $L$ candidate items.