no code implementations • NLP4ConvAI (ACL) 2022 • JianGuo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip Yu
Pre-trained Transformer-based models were reported to be robust in intent classification.
no code implementations • 17 Mar 2025 • Ye Liu, Kevin Qinghong Lin, Chang Wen Chen, Mike Zheng Shou
In this work, we introduce VideoMind, a novel video-language agent designed for temporal-grounded video understanding.
no code implementations • 20 Feb 2025 • Ye Liu, Yuqing Niu, Chengyan Ma, Ruidong Han, Wei Ma, Yi Li, Debin Gao, David Lo
Smart contracts are highly susceptible to manipulation attacks due to the leakage of sensitive information.
no code implementations • 17 Feb 2025 • Juantao Zhong, Daoyuan Wu, Ye Liu, Maoyi Xie, Yang Liu, Yi Li, Ning Liu
DeFi (Decentralized Finance) is one of the most important applications of today's cryptocurrencies and smart contracts.
no code implementations • 5 Feb 2025 • Fan Lyu, Hanyu Zhao, Ziqi Shi, Ye Liu, Fuyuan Hu, Zhang Zhang, Liang Wang
Continual Test-Time Adaptation (CTTA) aims to adapt models to sequentially changing domains during testing, relying on pseudo-labels for self-adaptation.
no code implementations • 25 Dec 2024 • Fanpu Cao, Shu Yang, Zhengjian Chen, Ye Liu, Laizhong Cui
In long-term time series forecasting, Transformer-based models have achieved great success due to their ability to capture long-range dependencies.
Computational Efficiency
Multivariate Time Series Forecasting
1 code implementation • 19 Dec 2024 • Jixuan He, Wanhua Li, Ye Liu, Junsik Kim, Donglai Wei, Hanspeter Pfister
As a common image editing operation, image composition involves integrating foreground objects into background scenes.
no code implementations • 12 Dec 2024 • Zihan Ji, Xuetao Tian, Ye Liu
Specifically, the optimal relation mapping between facial expression classes and deception samples is first quantified using the proposed H-OTKT module, which then transfers knowledge from the facial expression dataset to deception samples.
no code implementations • 19 Nov 2024 • Ye Liu, Rui Meng, Shafiq Joty, Silvio Savarese, Caiming Xiong, Yingbo Zhou, Semih Yavuz
This gap leaves existing models unable to effectively capture the diversity of programming languages and tasks across different domains, highlighting the need for more focused research in code retrieval.
no code implementations • 31 Oct 2024 • Tong Niu, Shafiq Joty, Ye Liu, Caiming Xiong, Yingbo Zhou, Semih Yavuz
Accurate document retrieval is crucial for the success of retrieval-augmented generation (RAG) applications, including open-domain question answering and code completion.
3 code implementations • 28 Oct 2024 • Mengke Li, Ye Liu, Yang Lu, Yiqun Zhang, Yiu-ming Cheung, Hui Huang
To address this issue, we propose a novel method called Random SAM prompt tuning (RSAM-PT) to improve model generalization, requiring only a one-step gradient computation at each step.
no code implementations • 13 Oct 2024 • Fei Wang, Li Shen, Liang Ding, Chao Xue, Ye Liu, Changxing Ding
By revisiting the Memory-efficient ZO (MeZO) optimizer, we discover that the full-parameter perturbation and updating processes consume over 50% of its overall fine-tuning time cost.
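The full-parameter perturbation cost discussed here comes from the two-point zeroth-order estimate at the heart of MeZO-style optimizers. Below is a minimal sketch of that update (function and parameter names are hypothetical, and this is not the paper's implementation): each step perturbs every parameter twice along the same random direction, which is exactly the loop the authors find dominates fine-tuning time.

```python
import numpy as np

def mezo_step(params, loss_fn, eps=1e-3, lr=1e-6, seed=0):
    """One MeZO-style step: perturb all parameters twice with the same
    random direction z, estimate the directional derivative from two
    loss evaluations, then update every parameter along z."""
    rng = np.random.default_rng(seed)
    z = [rng.standard_normal(p.shape) for p in params]

    plus = [p + eps * zi for p, zi in zip(params, z)]    # theta + eps*z
    minus = [p - eps * zi for p, zi in zip(params, z)]   # theta - eps*z
    g = (loss_fn(plus) - loss_fn(minus)) / (2 * eps)     # projected gradient

    # full-parameter update along z, scaled by the scalar estimate g
    return [p - lr * g * zi for p, zi in zip(params, z)]
```

Only two forward passes are needed per step, but both the perturbation and the update touch every parameter, which is the overhead the paper targets.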
no code implementations • 11 Oct 2024 • Simeng Han, Aaron Yu, Rui Shen, Zhenting Qi, Martin Riddell, Wenfei Zhou, Yujie Qiao, Yilun Zhao, Semih Yavuz, Ye Liu, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Dragomir Radev, Rex Ying, Arman Cohan
We show that human-written reasoning chains significantly boost the logical reasoning capabilities of LLMs via many-shot prompting and fine-tuning.
1 code implementation • 10 Oct 2024 • Xukai Liu, Ye Liu, Kai Zhang, Kehang Wang, Qi Liu, Enhong Chen
Entity Linking (EL) is the process of associating ambiguous textual mentions to specific entities in a knowledge base.
no code implementations • 5 Oct 2024 • Zhenwen Liang, Ye Liu, Tong Niu, Xiangliang Zhang, Yingbo Zhou, Semih Yavuz
Moreover, to leverage the unique strengths of different reasoning strategies, we propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
1 code implementation • 3 Oct 2024 • Rui Meng, Ye Liu, Lifu Tu, Daqing He, Yingbo Zhou, Semih Yavuz
Phrases are fundamental linguistic units through which humans convey semantics.
1 code implementation • 26 Sep 2024 • Ye Liu, Zongyang Ma, Zhongang Qi, Yang Wu, Ying Shan, Chang Wen Chen
We introduce E.T. Bench (Event-Level & Time-Sensitive Video Understanding Benchmark), a large-scale and high-quality benchmark for open-ended event-level video understanding.
1 code implementation • 6 Aug 2024 • Yanghai Zhang, Ye Liu, Shiwei Wu, Kai Zhang, Xukai Liu, Qi Liu, Enhong Chen
The rapid increase in multimedia data has spurred advancements in Multimodal Summarization with Multimodal Output (MSMO), which aims to produce a multimodal summary that integrates both text and relevant images.
no code implementations • 25 Jul 2024 • Haoyu Tang, Ye Liu, Xukai Liu, Kai Zhang, Yanghai Zhang, Qi Liu, Enhong Chen
Recent advancements in machine learning, particularly in Natural Language Processing (NLP), have led to the development of sophisticated models trained on extensive datasets, while also raising concerns about the potential leakage of sensitive information.
1 code implementation • 23 Jul 2024 • Jihyung Kil, Zheda Mai, Justin Lee, Zihe Wang, Kerrie Cheng, Lemeng Wang, Ye Liu, Arpita Chowdhury, Wei-Lun Chao
In this paper, we introduce MLLM-CompBench, a benchmark designed to evaluate the comparative reasoning capability of multimodal large language models (MLLMs).
no code implementations • 12 Jul 2024 • Ye Liu, Jiajun Zhu, Kai Zhang, Haoyu Tang, Yanghai Zhang, Xukai Liu, Qi Liu, Enhong Chen
To address these shortcomings, we propose a Dual-perspective Augmented Fake News Detection (DAFND) model, designed to enhance LLMs from both inside and outside perspectives.
1 code implementation • 12 Jul 2024 • Ye Liu, Kai Zhang, Aoran Gan, Linan Yue, Feng Hu, Qi Liu, Enhong Chen
Specifically, DSARE innovatively injects the prior knowledge of LLMs into traditional RE models, and conversely enhances LLMs' task-specific aptitude for RE through relation extraction augmentation.
2 code implementations • 23 May 2024 • Ziqi Shi, Fan Lyu, Ye Liu, Fanhua Shang, Fuyuan Hu, Wei Feng, Zhang Zhang, Liang Wang
Continual Test-Time Adaptation (CTTA) is an emerging and challenging task where a model trained in a source domain must adapt to continuously changing conditions during testing, without access to the original source data.
1 code implementation • 20 May 2024 • Ye Liu, Xuelei Lin, Yejia Chen, Reynold Cheng
In this paper, we propose a multi-order graph clustering model (MOGC) to integrate multiple higher-order structures and edge connections at the node level.
1 code implementation • 4 May 2024 • Ye Liu, Yue Xue, Daoyuan Wu, Yuqiang Sun, Yi Li, Miaolei Shi, Yang Liu
With recent advances in large language models (LLMs), this paper explores the potential of leveraging state-of-the-art LLMs, such as GPT-4, to transfer existing human-written properties (e.g., those from Certora auditing reports) and automatically generate customized properties for unknown code.
no code implementations • 29 Apr 2024 • Ye Liu, Jie-Ying Li, Li-Sheng Zhang, Lei-Lei Guo, Zhi-Yong Zhang
Specifically, for the forward problem, we first deploy the symmetry group to generate dividing lines with known solution information, which can be adjusted flexibly and are used to divide the whole training domain into a finite number of non-overlapping sub-domains; we then utilize the PINN and symmetry-enhanced PINN methods to learn the solution in each sub-domain, and finally stitch them together into the overall solution of the PDEs.
no code implementations • 19 Apr 2024 • Hoang H. Nguyen, Chenwei Zhang, Ye Liu, Natalie Parde, Eugene Rohrbaugh, Philip S. Yu
Naively assuming English as a source language may hinder cross-lingual transfer for many languages by failing to consider the importance of language contact.
no code implementations • 13 Apr 2024 • Gang Liao, Ye Liu, Jianjun Chen, Daniel J. Abadi
The past two decades have witnessed significant success in applying columnar storage to data warehousing and analytics.
1 code implementation • 2 Apr 2024 • Ye Liu, Jixuan He, Wanhua Li, Junsik Kim, Donglai Wei, Hanspeter Pfister, Chang Wen Chen
Video temporal grounding (VTG) is a fine-grained video understanding problem that aims to ground relevant clips in untrimmed videos given natural language queries.
Ranked #6 on Highlight Detection on QVHighlights
1 code implementation • 10 Mar 2024 • Linan Yue, Qi Liu, Ye Liu, Weibo Gao, Fangzhou Yao, Wenfeng Li
To address these challenges, in this paper, we propose a Cooperative Classification and Rationalization (C2R) method, consisting of the classification and the rationalization module.
no code implementations • 17 Dec 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Qingyang Wu, Zhongfen Deng, Jiangshu Du, Shuaiqi Liu, Yunlong Xu, Philip S. Yu
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language, transforming them into structured outputs that combine elements of both natural language and intent/slot tags.
no code implementations • 31 Oct 2023 • Wenting Zhao, Ye Liu, Tong Niu, Yao Wan, Philip S. Yu, Shafiq Joty, Yingbo Zhou, Semih Yavuz
Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge base and text).
1 code implementation • 23 Oct 2023 • Hoang H. Nguyen, Ye Liu, Chenwei Zhang, Tao Zhang, Philip S. Yu
While Chain-of-Thought prompting is popular in reasoning tasks, its application to Large Language Models (LLMs) in Natural Language Understanding (NLU) is under-explored.
Abstract Meaning Representation
Natural Language Understanding
no code implementations • 18 Oct 2023 • Siyu An, Ye Liu, Haoyuan Peng, Di Yin
Extracting structured information from videos is critical for numerous downstream applications in the industry.
no code implementations • 29 Sep 2023 • Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Riddell, Troy Feng, Rui Shen, Stephen Yin, Ye Liu, Semih Yavuz, Caiming Xiong, Shafiq Joty, Yingbo Zhou, Dragomir Radev, Arman Cohan
Recently, large language models (LLMs), especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs in a few-shot or even zero-shot manner.
1 code implementation • ICCV 2023 • Yijun Yang, Angelica I. Aviles-Rivero, Huazhu Fu, Ye Liu, Weiming Wang, Lei Zhu
In this work, we propose the first framework for restoring videos from all adverse weather conditions by developing a video adverse-weather-component suppression network (ViWS-Net).
no code implementations • 20 Sep 2023 • Wenting Zhao, Ye Liu, Yao Wan, Yibo Wang, Zhongfen Deng, Philip S. Yu
Furthermore, TAG-QA outperforms the end-to-end model T5 by 16% and 12% on BLEU-4 and PARENT F-score, respectively.
no code implementations • 15 Sep 2023 • Meghana Moorthy Bhat, Rui Meng, Ye Liu, Yingbo Zhou, Semih Yavuz
As we embark on a new era of LLMs, it becomes increasingly crucial to understand their capabilities, limitations, and differences.
2 code implementations • 15 Sep 2023 • Linan Yue, Qi Liu, Yichao Du, Weibo Gao, Ye Liu, Fangzhou Yao
To this end, in this paper, we propose the first Federated Legal Large Language Model (FedJudge) framework, which fine-tunes Legal LLMs efficiently and effectively.
1 code implementation • 7 Sep 2023 • Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, Senthil Purushwalkam, Tong Niu, Wojciech Kryściński, Lidiya Murakhovs'ka, Prafulla Kumar Choubey, Alex Fabbri, Ye Liu, Rui Meng, Lifu Tu, Meghana Bhat, Chien-Sheng Wu, Silvio Savarese, Yingbo Zhou, Shafiq Joty, Caiming Xiong
Most open-source LLMs, on the other hand, are limited in their ability to support longer sequence lengths, which is a key requirement for many tasks that require inference over an input context.
no code implementations • 24 Aug 2023 • Ye Liu, Semih Yavuz, Rui Meng, Meghana Moorthy, Shafiq Joty, Caiming Xiong, Yingbo Zhou
This paper aims to fill this gap by investigating different methods of combining retrieved passages with LLMs to enhance answer generation.
1 code implementation • 24 Aug 2023 • Fei Wang, Liang Ding, Jun Rao, Ye Liu, Li Shen, Changxing Ding
The multimedia community has shown significant interest in perceiving and representing the physical world with multimodal pretrained neural network models, and among them, visual-language pretraining (VLP) is currently the most captivating topic.
1 code implementation • 23 Aug 2023 • Zhen Zhao, Ye Liu, Meng Zhao, Di Yin, Yixuan Yuan, Luping Zhou
Studies on semi-supervised medical image segmentation (SSMIS) have seen fast progress recently.
1 code implementation • 9 Aug 2023 • Hoang H. Nguyen, Chenwei Zhang, Ye Liu, Philip S. Yu
Recent advanced methods in Natural Language Understanding for Task-oriented Dialogue (TOD) Systems (e.g., intent detection and slot filling) require a large amount of annotated data to achieve competitive performance.
no code implementations • 6 Aug 2023 • Ye Liu, Stefan Ultes, Wolfgang Minker, Wolfgang Maier
In this work, we study dialogue scenarios that start from chit-chat but eventually switch to task-related services, and investigate how a unified dialogue model, which can engage in both chit-chat and task-oriented dialogues, takes the initiative during the dialogue mode transition from chit-chat to task-oriented in a coherent and cooperative manner.
1 code implementation • 19 Jul 2023 • JianGuo Zhang, Kun Qian, Zhiwei Liu, Shelby Heinecke, Rui Meng, Ye Liu, Zhou Yu, Huan Wang, Silvio Savarese, Caiming Xiong
Despite advancements in conversational AI, language models encounter challenges to handle diverse conversational tasks, and existing dialogue dataset collections often lack diversity and comprehensiveness.
no code implementations • 13 Jul 2023 • Amandeep Singh, Ye Liu, Hema Yoganarasimhan
We demonstrate how non-parametric estimators like neural nets can easily approximate such functionals and overcome the curse of dimensionality that is inherent in the non-parametric estimation of choice functions.
no code implementations • 4 Jul 2023 • Ye Liu, Stefan Ultes, Wolfgang Minker, Wolfgang Maier
We contribute two efficient prompt models which can proactively generate a transition sentence to trigger system-initiated transitions in a unified dialogue model.
no code implementations • 20 May 2023 • Wei Ma, Shangqing Liu, ZhiHao Lin, Wenhan Wang, Qiang Hu, Ye Liu, Cen Zhang, Liming Nie, Li Li, Yang Liu
We break down the abilities needed for artificial intelligence (AI) models to address SE tasks related to code analysis into three categories: 1) syntax understanding, 2) static behavior understanding, and 3) dynamic behavior understanding.
no code implementations • 12 May 2023 • Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, Yingbo Zhou
It comprises two central pillars: (1) We parse the question of varying complexity into an intermediate representation, named H-expression, which is composed of simple questions as the primitives and symbolic operations representing the relationships among them; (2) To execute the resulting H-expressions, we design a hybrid executor, which integrates the deterministic rules to translate the symbolic operations with a drop-in neural reader network to answer each decomposed simple question.
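The parse-then-execute pipeline described here can be illustrated with a toy H-expression interpreter (a minimal sketch; the operation names, the table-lookup "reader", and the example question are all invented stand-ins, not the paper's grammar or neural reader): leaves are simple questions answered by the reader, and symbolic operations combine sub-answers with deterministic rules.

```python
# Toy H-expression: a nested tuple (op, arg1, arg2), where leaves are
# simple-question strings answered by a stand-in "reader" lookup, and
# symbolic ops combine the sub-answers with deterministic rules.

def execute(expr, reader):
    if isinstance(expr, str):          # primitive: a simple question
        return reader(expr)
    op, left, right = expr
    a, b = execute(left, reader), execute(right, reader)
    if op == "COMPARE_GT":             # e.g. "which value is larger?"
        return a if a > b else b
    if op == "SUM":
        return a + b
    raise ValueError(f"unknown op: {op}")

# Stand-in for the drop-in neural reader: answers questions from a table.
facts = {"population of A": 5, "population of B": 8}
reader = facts.__getitem__

expr = ("COMPARE_GT", "population of A", "population of B")
```

In the actual system the reader is a neural network over retrieved evidence, but the recursive execute-and-combine structure is the same idea.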
3 code implementations • 18 Apr 2023 • Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang, Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, JianHua Tao
The first Multimodal Emotion Recognition Challenge (MER 2023) was successfully held at ACM Multimedia.
1 code implementation • 9 Apr 2023 • Yan Luo, Haoyi Duan, Ye Liu, Fu-Lai Chung
In this paper, we revisit the problem of location recommendation and point out that explicitly modeling temporal information is a great help when the model needs to predict not only the next location but also further locations.
no code implementations • 22 Mar 2023 • Yan Luo, Ye Liu, Fu-Lai Chung, Yu Liu, Chang Wen Chen
History encoder is designed to model mobility patterns from historical check-in sequences, while query generator explicitly learns user preferences to generate user-specific intention queries.
no code implementations • 18 Mar 2023 • Wuyuan Xie, Shukang Wang, Sukun Tian, Lirong Huang, Ye Liu, Miaohui Wang
Just noticeable difference (JND) refers to the maximum visual change that human eyes cannot perceive, and it has a wide range of applications in multimedia systems.
no code implementations • 11 Feb 2023 • Zetian Zheng, Shaowei Huang, Jun Yan, Qiangsheng Bu, Chen Shen, Mingzhong Zheng, Ye Liu
The oscillation phenomena associated with the control of voltage source converters (VSCs) are widely concerning, and locating the source of these oscillations is crucial to suppressing them. This paper therefore presents a locating scheme based on the energy structure and nonlinearity detection.
no code implementations • CVPR 2023 • Liangdao Wang, Yan Pan, Cong Liu, Hanjiang Lai, Jian Yin, Ye Liu
This paper presents an optimization method that finds hash centers with a constraint on the minimal distance between any pair of hash centers, which is non-trivial due to the non-convex nature of the problem.
no code implementations • CVPR 2023 • Ye Liu, Lingfeng Qiao, Changchong Lu, Di Yin, Chen Lin, Haoyuan Peng, Bo Ren
An intuitive way to handle these two problems is to fulfill these tasks in two separate stages: aligning modalities followed by domain adaptation, or vice versa.
1 code implementation • CVPR 2023 • Qian Li, Yuxiao Hu, Ye Liu, Dongxiao Zhang, Xin Jin, Yuntian Chen
Classical adversarial attacks for Face Recognition (FR) models typically generate discrete examples for target identity with a single state image.
1 code implementation • 17 Dec 2022 • Rui Meng, Ye Liu, Semih Yavuz, Divyansh Agarwal, Lifu Tu, Ning Yu, JianGuo Zhang, Meghana Bhat, Yingbo Zhou
In this study, we aim to develop unsupervised methods for improving dense retrieval models.
no code implementations • 1 Dec 2022 • Ye Liu, Chen Shen
First, an equivalent-scenario-based method is proposed to evaluate the equivalent inertia provided by the droop control, which shows that the droop control with a constant droop coefficient provides time-variant equivalent inertia.
no code implementations • 23 Nov 2022 • Dongsheng Li, Chen Shen, Ye Liu, Ying Chen, Shaowei Huang
To reduce the complexity of simulating power systems that include large-scale wind farms, it is critical to develop dynamic equivalent methods for wind farms that are applicable to expected contingency analysis.
no code implementations • 14 Nov 2022 • Lingfeng Qiao, Chen Wu, Ye Liu, Haoyuan Peng, Di Yin, Bo Ren
In this paper, we propose a novel approach to graft the video encoder from the pre-trained video-language model on the generative pre-trained language model.
no code implementations • 9 Nov 2022 • Ye Liu, Semih Yavuz, Rui Meng, Dragomir Radev, Caiming Xiong, Yingbo Zhou
Parsing natural language questions into executable logical forms is a useful and interpretable way to perform question answering on structured data such as knowledge bases (KB) or databases (DB).
no code implementations • 9 Nov 2022 • Chen Lin, Ye Liu, Siyu An, Di Yin
In the scenario of unsupervised extractive summarization, learning high-quality sentence representations is essential to select salient sentences from the input document.
no code implementations • 6 Nov 2022 • Qianni Cao, Ye Liu, Chen Shen
This paper develops a fully data-driven linear quadratic regulator (LQR) for the HVDC to provide temporal frequency support.
no code implementations • 14 Oct 2022 • Ye Liu, Chen Shen, Zhaojian Wang
Both of these two control laws can guarantee transient stability constraints, restore system frequency and achieve the defined optimal control objective.
no code implementations • 29 Sep 2022 • Ye Liu, Wolfgang Maier, Wolfgang Minker, Stefan Ultes
The pre-trained conversational models still fail to capture the implicit commonsense (CS) knowledge hidden in dialogue interactions, even though they were pre-trained on enormous datasets.
1 code implementation • 2 Sep 2022 • Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, James Coady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo, Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski, Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying, Arman Cohan, Dragomir Radev
We present FOLIO, a human-annotated, logically complex and diverse dataset for reasoning in natural language (NL), equipped with first-order logic (FOL) annotations.
no code implementations • 4 Jul 2022 • Ye Liu, Lingfeng Qiao, Di Yin, Zhuoxuan Jiang, Xinghua Jiang, Deqiang Jiang, Bo Ren
In this paper, from an alternate perspective to overcome the above challenges, we unite these two tasks into one task by a new form of predicting shots link: a link connects two adjacent shots, indicating that they belong to the same scene or category.
no code implementations • 6 Jun 2022 • Ye Liu, Changchong Lu, Chen Lin, Di Yin, Bo Ren
However, to our knowledge, no existing work has focused on the second step, video text classification, which limits the guidance available to downstream tasks such as video indexing and browsing.
no code implementations • 28 May 2022 • Ye Liu, Chen Shen, Zhaojian Wang, Feng Liu
In multi-infeed hybrid AC-DC (MIDC) systems, the emergency frequency control (EFC) with LCC-HVDC systems participating is of vital importance for system frequency stability.
3 code implementations • CVPR 2022 • Ye Liu, Siyuan Li, Yang Wu, Chang Wen Chen, Ying Shan, XiaoHu Qie
Finding relevant moments and highlights in videos according to natural language queries is a natural and highly valuable common need in the current video content explosion era.
Ranked #5 on Video Grounding on QVHighlights
1 code implementation • CVPR 2022 • Ye Liu, Yaya Cheng, Lianli Gao, Xianglong Liu, Qilong Zhang, Jingkuan Song
Specifically, by observing that adversarial examples to a specific defense model follow some regularities in their starting points, we design an Adaptive Direction Initialization strategy to speed up the evaluation.
no code implementations • 8 Mar 2022 • Jibing Gong, Yao Wan, Ye Liu, Xuewen Li, Yi Zhao, Cheng Wang, YuTing Lin, Xiaohan Fang, Wenzheng Feng, Jingyi Zhang, Jie Tang
Despite the usefulness of this service, we argue that recommending courses to users directly may neglect their varying degrees of expertise.
1 code implementation • Findings (EMNLP) 2021 • Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu
Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
no code implementations • ICON 2021 • Ye Liu, Wolfgang Maier, Wolfgang Minker, Stefan Ultes
We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch; and leverage the immediate preceding user utterance for context generation in a model adapted from the pre-trained GPT-2.
no code implementations • 23 Nov 2021 • Ye Liu, Sophia J. Wagner, Tingying Peng
Annotating microscopy images for nuclei segmentation is laborious and time-consuming.
1 code implementation • 22 Nov 2021 • Ye Liu, Huifang Li, Chao Hu, Shuang Luo, Yan Luo, Chang Wen Chen
The proposed model exploits three lightweight plug-and-play modules, namely dense feature pyramid network (DenseFPN), spatial context pyramid (SCP), and hierarchical region of interest extractor (HRoIE), to aggregate global visual context at feature, spatial, and instance domains, respectively.
no code implementations • 6 Nov 2021 • Ye Liu, Rui Song, Wenbin Lu, Yanghua Xiao
A large number of models and algorithms have been proposed to perform link prediction, among which tensor factorization methods have proven to achieve state-of-the-art performance in terms of computational efficiency and prediction accuracy.
1 code implementation • Findings (EMNLP) 2021 • Ye Liu, Kazuma Hashimoto, Yingbo Zhou, Semih Yavuz, Caiming Xiong, Philip S. Yu
In this work, we propose Dense Hierarchical Retrieval (DHR), a hierarchical framework that can generate accurate dense representations of passages by utilizing both macroscopic semantics in the document and microscopic semantics specific to each passage.
1 code implementation • 15 Oct 2021 • Yinpeng Dong, Qi-An Fu, Xiao Yang, Wenzhao Xiang, Tianyu Pang, Hang Su, Jun Zhu, Jiayu Tang, Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Ye Liu, Qilong Zhang, Lianli Gao, Yunrui Yu, Xitong Gao, Zhe Zhao, Daquan Lin, Jiadong Lin, Chuanbiao Song, ZiHao Wang, Zhennan Wu, Yang Guo, Jiequan Cui, Xiaogang Xu, Pengguang Chen
Due to the vulnerability of deep neural networks (DNNs) to adversarial examples, a large number of defense techniques have been proposed to alleviate this problem in recent years.
1 code implementation • EMNLP 2021 • Ye Liu, Jian-Guo Zhang, Yao Wan, Congying Xia, Lifang He, Philip S. Yu
To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model.
no code implementations • RANLP 2021 • Ye Liu, Wolfgang Maier, Wolfgang Minker, Stefan Ultes
This paper presents an automatic method to evaluate the naturalness of natural language generation in dialogue systems.
no code implementations • 7 Sep 2021 • Ye Liu, Wolfgang Maier, Wolfgang Minker, Stefan Ultes
One challenge for dialogue agents is to recognize feelings of the conversation partner and respond accordingly.
1 code implementation • 6 Aug 2021 • Ye Liu, Lei Zhu, Shunda Pei, Huazhu Fu, Jing Qin, Qing Zhang, Liang Wan, Wei Feng
Our DID-Net predicts the three component maps by progressively integrating features across scales, and refines each map by passing an independent refinement network.
Ranked #8 on Image Dehazing on Haze4k
1 code implementation • 8 Jun 2021 • JianGuo Zhang, Kazuma Hashimoto, Yao Wan, Zhiwei Liu, Ye Liu, Caiming Xiong, Philip S. Yu
Pre-trained Transformer-based models were reported to be robust in intent classification.
no code implementations • EACL 2021 • Ye Liu, Yao Wan, JianGuo Zhang, Wenting Zhao, Philip Yu
In this paper, we claim that the syntactic and semantic structures among natural language are critical for non-autoregressive machine translation and can further improve the performance.
no code implementations • ICCV 2021 • Jianping Wu, Liang Zhang, Ye Liu, Ke Chen
We propose a novel approach that integrates under-parameterized RANSAC (UPRANSAC) with Hough Transform to detect vanishing points (VPs) from un-calibrated monocular images.
1 code implementation • EMNLP 2020 • Ye Liu, Sheng Zhang, Rui Song, Suo Feng, Yanghua Xiao
Effectively filtering out noisy articles as well as bad answers is the key to improving extraction accuracy.
1 code implementation • 26 Sep 2020 • Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
To promote the ability of commonsense reasoning for text generation, we propose a novel knowledge graph augmented pre-trained language generation model KG-BART, which encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
2 code implementations • 14 Aug 2020 • Ye Liu, Junsong Yuan, Chang Wen Chen
We consider the problem of Human-Object Interaction (HOI) Detection, which aims to locate and recognize HOI instances in the form of <human, action, object> in images.
no code implementations • 6 Aug 2020 • Ye Liu, Shaika Chowdhury, Chenwei Zhang, Cornelia Caragea, Philip S. Yu
Unlike most other QA tasks that focus on linguistic understanding, HeadQA requires deeper reasoning involving not only knowledge extraction, but also complex reasoning with healthcare knowledge.
no code implementations • SIGDIAL (ACL) 2020 • Ye Liu, Tao Yang, Zeyu You, Wei Fan, Philip S. Yu
Humans tackle reading comprehension not only based on the given context itself but also often rely on commonsense beyond it.
1 code implementation • 28 Feb 2020 • Yuxuan Liang, Kun Ouyang, Yiwei Wang, Ye Liu, Junbo Zhang, Yu Zheng, David S. Rosenblum
This framework consists of three parts: 1) a local feature extraction module to learn representations for each region; 2) a global context module to extract global contextual priors and upsample them to generate the global features; and 3) a region-specific predictor based on tensor decomposition to provide customized predictions for each region, which is very parameter-efficient compared to previous methods.
1 code implementation • 5 Feb 2020 • Kun Ouyang, Yuxuan Liang, Ye Liu, Zekun Tong, Sijie Ruan, Yu Zheng, David S. Rosenblum
To tackle these issues, we develop a model entitled UrbanFM which consists of two major parts: 1) an inference network to generate fine-grained flow distributions from coarse-grained inputs that uses a feature extraction module and a novel distributional upsampling module; 2) a general fusion subnet to further boost the performance by considering the influence of different external factors.
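The distributional upsampling idea described here, that the fine-grained cells inside one coarse cell should form a distribution whose total matches the coarse flow, can be sketched as follows (a minimal sketch assuming a fixed upscaling factor and plain softmax weights; not UrbanFM's actual network):

```python
import numpy as np

def distributional_upsample(coarse, logits, factor=2):
    """Upsample a coarse flow map so each block of fine cells sums exactly
    to its coarse cell: softmax the per-block logits into a distribution,
    then scale it by the coarse value."""
    H, W = coarse.shape
    fine = np.zeros((H * factor, W * factor))
    for i in range(H):
        for j in range(W):
            block = logits[i * factor:(i + 1) * factor,
                           j * factor:(j + 1) * factor]
            w = np.exp(block - block.max())        # stable softmax weights
            fine[i * factor:(i + 1) * factor,
                 j * factor:(j + 1) * factor] = coarse[i, j] * w / w.sum()
    return fine
```

The normalization makes the structural constraint (fine flows aggregate back to the coarse observation) hold by construction rather than being learned.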
no code implementations • 21 Oct 2019 • Junjun Pan, Michael K. Ng, Ye Liu, Xiongjun Zhang, Hong Yan
In this paper, we study the nonnegative tensor data and propose an orthogonal nonnegative Tucker decomposition (ONTD).
1 code implementation • 13 Aug 2019 • Ye Liu, Chenwei Zhang, Xiaohui Yan, Yi Chang, Philip S. Yu
To improve the quality and retrieval performance of the generated questions, we make two major improvements: 1) To better encode the semantics of ill-formed questions, we enrich the representation of questions with character embeddings and recently proposed contextual word embeddings such as BERT, in addition to traditional context-free word embeddings; 2) To make the model capable of generating the desired questions, we train it with deep reinforcement learning techniques that treat appropriate wording of the generation as an immediate reward and the correlation between the generated question and answer as a time-delayed long-term reward.
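The two-level reward scheme above, per-step wording rewards plus a single delayed question-answer correlation reward, amounts to a standard discounted return for policy-gradient training. A minimal sketch (the reward values and function name are hypothetical stand-ins, not the paper's reward model):

```python
def sequence_return(wording_rewards, retrieval_reward, gamma=0.9):
    """Combine per-token immediate rewards for appropriate wording with a
    single time-delayed reward reflecting question-answer correlation,
    yielding one discounted return for the generated sequence."""
    # the delayed reward arrives only after the full question is generated
    rewards = list(wording_rewards[:-1]) + [wording_rewards[-1] + retrieval_reward]
    # discounted return from the first generation step
    return sum(r * gamma ** t for t, r in enumerate(rewards))
```

The discount factor trades off immediate fluency signals against the end-of-sequence retrieval signal.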
no code implementations • 23 May 2019 • Ye Liu, Junjun Pan, Michael Ng
Deep neural networks have achieved a great success in solving many machine learning and computer vision problems.
5 code implementations • 13 Mar 2019 • Ye Liu, Hui Li, Alberto Garcia-Duran, Mathias Niepert, Daniel Onoro-Rubio, David S. Rosenblum
We present MMKG, a collection of three knowledge graphs that contain both numerical features and (links to) images for all entities as well as entity alignments between pairs of KGs.
1 code implementation • 6 Feb 2019 • Yuxuan Liang, Kun Ouyang, Lin Jing, Sijie Ruan, Ye Liu, Junbo Zhang, David S. Rosenblum, Yu Zheng
In this paper, we aim to infer the real-time and fine-grained crowd flows throughout a city based on coarse-grained observations.
Ranked #2 on Fine-Grained Urban Flow Inference on TaxiBJ-P4
no code implementations • 3 Feb 2019 • Hui Li, Ye Liu, Yan Qiu Chen
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years.
no code implementations • 11 Nov 2018 • Jian-Guo Zhang, Pengcheng Zou, Zhao Li, Yao Wan, Ye Liu, Xiuming Pan, Yu Gong, Philip S. Yu
Nowadays, an increasing number of customers are in favor of using E-commerce Apps to browse and purchase products.
no code implementations • 24 Oct 2018 • Ye Liu, Jiawei Zhang, Chenwei Zhang, Philip S. Yu
After a thorough investigation of an online movie knowledge library, a novel movie planning framework "Blockbuster Planning with Maximized Movie Configuration Acquaintance" (BigMovie) is introduced in this paper.
no code implementations • 11 Jul 2018 • Bo Jiang, Ye Liu, W. K. Chan
Decentralized cryptocurrencies feature the use of blockchain to transfer values among peers on networks without central agency.
Software Engineering • Cryptography and Security
no code implementations • 19 Jun 2018 • Ye Liu, Lifang He, Bokai Cao, Philip S. Yu, Ann B. Ragin, Alex D. Leow
Network analysis of human brain connectivity is critically important for understanding brain function and disease states.
no code implementations • ACL 2017 • Daniel Preo{\c{t}}iuc-Pietro, Ye Liu, Daniel Hopkins, Lyle Ungar
Automatic political orientation prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.
no code implementations • 7 Nov 2016 • Ye Liu, Liqiang Nie, Lei Han, Luming Zhang, David S. Rosenblum
As compared to simple actions, activities are much more complex, but semantically consistent with a human's real life.
no code implementations • CVPR 2015 • Hanjiang Lai, Yan Pan, Ye Liu, Shuicheng Yan
Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks.