no code implementations • Findings (EMNLP) 2021 • Yu Feng, Jing Zhang, Gaole He, Wayne Xin Zhao, Lemao Liu, Quan Liu, Cuiping Li, Hong Chen
Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs).
no code implementations • dialdoc (ACL) 2022 • Tianda Li, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu
When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately.
no code implementations • 6 Feb 2025 • Juming Xiong, Hou Xiong, Quan Liu, Ruining Deng, Regina N Tyree, Girish Hiremath, Yuankai Huo
Eosinophilic esophagitis (EoE) is a chronic esophageal disorder marked by eosinophil-dominated inflammation.
1 code implementation • 8 Jan 2025 • Kangsheng Yin, Quan Liu, Xuelin Shen, Yulin He, Wenhan Yang, Shiqi Wang
The image compression model has long struggled with adaptability and generalization, as the decoded bitstream typically serves only human or machine needs and fails to preserve information for unseen visual tasks.
no code implementations • 3 Jan 2025 • Yunzhe Li, Facheng Hu, Hongzi Zhu, Quan Liu, Xiaoke Zhao, Jiangang Shen, Shan Chang, Minyi Guo
Achieving uncontrolled online prediction on mobile devices, referred to as the flexible user perception (FUP) problem, is attractive but hard.
no code implementations • 27 Nov 2024 • Jialin Yue, Tianyuan Yao, Ruining Deng, Siqi Lu, Junlin Guo, Quan Liu, Mengmeng Yin, Juming Xiong, Haichun Yang, Yuankai Huo
Artificial intelligence (AI) has demonstrated significant success in automating the detection of glomeruli, the key functional units of the kidney, from whole slide images (WSIs) in kidney pathology.
1 code implementation • 25 Nov 2024 • Lining Yu, Mengmeng Yin, Ruining Deng, Quan Liu, Tianyuan Yao, Can Cui, Junlin Guo, Yu Wang, Yaohong Wang, Shilin Zhao, Haichun Yang, Yuankai Huo
In this study, we upgrade the Glo-In-One toolkit to version 2 with fine-grained segmentation capabilities, curating 14 distinct labels for tissue regions, cells, and lesions across a dataset of 23,529 annotated glomeruli from human and mouse histopathology data.
1 code implementation • 31 Oct 2024 • Junlin Guo, Siqi Lu, Can Cui, Ruining Deng, Tianyuan Yao, Zhewen Tao, Yizhe Lin, Marilyn Lionts, Quan Liu, Juming Xiong, Yu Wang, Shilin Zhao, Catie Chang, Mitchell Wilkes, Mengmeng Yin, Haichun Yang, Yuankai Huo
This study establishes a benchmark for the development and deployment of cell vision foundation models tailored for real-world data applications.
no code implementations • 14 Oct 2024 • Yaxuan Wang, Jiaheng Wei, Chris Yuhao Liu, Jinlong Pang, Quan Liu, Ankit Parag Shah, Yujia Bao, Yang Liu, Wei Wei
Existing approaches to LLM unlearning often rely on retain data or a reference LLM, yet they struggle to adequately balance unlearning performance with overall model utility.
1 code implementation • 14 Aug 2024 • Quan Liu, Zhenhong Zhou, Longzhu He, Yi Liu, Wei zhang, Sen Su
Large language models are susceptible to jailbreak attacks, which can result in the generation of harmful content.
4 code implementations • 9 Aug 2024 • Junlin Guo, Siqi Lu, Can Cui, Ruining Deng, Tianyuan Yao, Zhewen Tao, Yizhe Lin, Marilyn Lionts, Quan Liu, Juming Xiong, Yu Wang, Shilin Zhao, Catie Chang, Mitchell Wilkes, Mengmeng Yin, Haichun Yang, Yuankai Huo
Among the evaluated models, CellViT demonstrated superior performance in segmenting nuclei in kidney pathology.
no code implementations • 25 Jul 2024 • Lining Yu, Mengmeng Yin, Ruining Deng, Quan Liu, Tianyuan Yao, Can Cui, Yitian Long, Yu Wang, Yaohong Wang, Shilin Zhao, Haichun Yang, Yuankai Huo
To answer this question, we introduce GLAM, a deep learning study for fine-grained segmentation of human kidney lesions using a mouse model. It addresses mouse-to-human transfer learning by evaluating different strategies for segmenting human pathological lesions, namely zero-shot transfer learning and hybrid learning that leverages mouse samples.
no code implementations • 19 Jul 2024 • Muyang Li, Can Cui, Quan Liu, Ruining Deng, Tianyuan Yao, Marilyn Lionts, Yuankai Huo
Our extensive experiments across multiple medical datasets reveal that data distillation can significantly reduce dataset size while maintaining comparable model performance to that achieved with the full dataset, suggesting that a small, representative sample of images can serve as a reliable indicator of distillation success.
no code implementations • 13 Jul 2024 • Can Cui, Ruining Deng, Junlin Guo, Quan Liu, Tianyuan Yao, Haichun Yang, Yuankai Huo
The Vision Foundation Model has recently gained attention in medical image analysis.
no code implementations • 3 Jul 2024 • Yucheng Tang, Yufan He, Vishwesh Nath, Pengfei Guo, Ruining Deng, Tianyuan Yao, Quan Liu, Can Cui, Mengmeng Yin, Ziyue Xu, Holger Roth, Daguang Xu, Haichun Yang, Yuankai Huo
In this paper, we propose the holistic histopathology (HoloHisto) segmentation method to achieve end-to-end segmentation on gigapixel WSIs, whose maximum resolution is above 80,000$\times$70,000 pixels.
1 code implementation • 30 Jun 2024 • Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo
Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy.
1 code implementation • 27 Jun 2024 • Jialin Yue, Tianyuan Yao, Ruining Deng, Quan Liu, Juming Xiong, Junlin Guo, Haichun Yang, Yuankai Huo
Additionally, we assess the efficiency of two annotation methods, fully manual annotation and a human-in-the-loop (HITL) approach, in labeling 200,000 glomeruli.
no code implementations • 28 May 2024 • Quan Liu, Ruining Deng, Can Cui, Tianyuan Yao, Vishwesh Nath, Yucheng Tang, Yuankai Huo
Multi-modal learning adeptly integrates visual and textual data, but its application to histopathology image and text analysis remains challenging, particularly with large, high-resolution images like gigapixel Whole Slide Images (WSIs).
no code implementations • 27 May 2024 • Quan Liu, Brandon T. Swartz, Ivan Kravchenko, Jason G. Valentine, Yuankai Huo
In this paper, we propose a large kernel lightweight segmentation model, ExtremeMETA.
1 code implementation • CVPR 2024 • Quan Liu, Hongzi Zhu, Zhenxi Wang, Yunsong Zhou, Shan Chang, Minyi Guo
Registration of point clouds collected from a pair of distant vehicles provides a comprehensive and accurate 3D view of the driving scenario, which is vital for driving-safety-related applications; yet existing methods suffer from expensive pose-label acquisition and a limited ability to generalize to new data distributions.
no code implementations • CVPR 2024 • Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Jialin Yue, Juming Xiong, Lining Yu, Yifei Wu, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo
Understanding the anatomy of renal pathology is crucial for advancing disease diagnostics, treatment evaluation, and clinical research.
no code implementations • 27 Feb 2024 • Zhenhong Zhou, Jiuyang Xiang, Haopeng Chen, Quan Liu, Zherui Li, Sen Su
Large Language Models (LLMs) have been demonstrated to generate illegal or unethical responses, particularly when subjected to "jailbreak."
no code implementations • 15 Jan 2024 • Quan Liu, Jiawen Yao, Lisha Yao, Xin Chen, Jingren Zhou, Le Lu, Ling Zhang, Zaiyi Liu, Yuankai Huo
The contribution of the paper is three-fold: (1) $M^{2}$Fusion is the first pipeline of multi-level fusion on pathology WSI and 3D radiology CT images for MSI prediction; (2) CT images are integrated into multimodal fusion for CRC MSI prediction for the first time; (3) the feature-level fusion strategy is evaluated on both Transformer-based and CNN-based methods.
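To make the feature-level fusion step concrete, here is a minimal PyTorch sketch of fusing a WSI embedding with a CT embedding for MSI classification; the embedding dimensions, layer sizes, and two-class head are illustrative assumptions, not the $M^{2}$Fusion architecture.

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Minimal sketch of feature-level fusion for MSI prediction.

    Assumes pathology-WSI and CT embeddings have already been produced by
    separate backbones (e.g., a Transformer-based WSI encoder and a
    CNN-based CT encoder); this fusion head is hypothetical."""

    def __init__(self, wsi_dim=768, ct_dim=512, hidden_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(wsi_dim + ct_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # MSI-high vs. MSS/MSI-low
        )

    def forward(self, wsi_feat, ct_feat):
        # Concatenate modality embeddings and classify MSI status.
        return self.fuse(torch.cat([wsi_feat, ct_feat], dim=-1))

model = FeatureLevelFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```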
1 code implementation • 16 Oct 2023 • Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu
A new evaluation metric of reversibility is introduced, and a benchmark dubbed as Bidirectional Assessment for Knowledge Editing (BAKE) is constructed to evaluate the reversibility of edited models in recalling knowledge in the reverse direction of editing.
1 code implementation • 30 Sep 2023 • Ho Hin Lee, Quan Liu, Qi Yang, Xin Yu, Shunxing Bao, Yuankai Huo, Bennett A. Landman
We hypothesize that deformable convolution can be an exploratory alternative to combine all advantages from the previous operators, providing long-range dependency, adaptive spatial aggregation and computational efficiency as a foundation backbone.
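As a hedged illustration of the deformable-convolution idea described above, the sketch below pairs torchvision's DeformConv2d with a small offset-prediction conv so a large kernel can aggregate spatially adaptive context; the channel counts and kernel size are placeholders, not the paper's backbone configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Sketch of a deformable-convolution block: a plain conv predicts
    per-position sampling offsets, which the deformable conv uses for
    adaptive spatial aggregation with long-range reach."""

    def __init__(self, channels=64, kernel_size=7):
        super().__init__()
        pad = kernel_size // 2
        # 2 offsets (dy, dx) per kernel tap
        self.offset_pred = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):
        offsets = self.offset_pred(x)   # (N, 2*K*K, H, W)
        return self.deform(x, offsets)  # adaptive spatial aggregation

x = torch.randn(1, 64, 32, 32)
print(DeformableBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```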
no code implementations • 5 Sep 2023 • Muhao Liu, Chenyang Qi, Shunxing Bao, Quan Liu, Ruining Deng, Yu Wang, Shilin Zhao, Haichun Yang, Yuankai Huo
However, very few, if any, deep learning based approaches have been applied to kidney layer structure segmentation.
1 code implementation • 10 Aug 2023 • Jiayuan Chen, Yu Wang, Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Yilin Liu, Jianyong Zhong, Agnes B. Fogo, Haichun Yang, Shilin Zhao, Yuankai Huo
Podocytes, specialized epithelial cells that envelop the glomerular capillaries, play a pivotal role in maintaining renal health.
no code implementations • 26 Jul 2023 • Xumei Xi, Yuke Zhao, Quan Liu, Liwen Ouyang, Yang Wu
To this end, we train a farsighted recommender by using an offline RL algorithm with the policy network in our model architecture that has been initialized from a pre-trained transformer model.
no code implementations • 21 Jul 2023 • Quan Liu, Hanyu Zheng, Brandon T. Swartz, Ho Hin Lee, Zuhayr Asad, Ivan Kravchenko, Jason G. Valentine, Yuankai Huo
However, the digital design of the metamaterial neural network (MNN) is fundamentally constrained by physical limitations, such as precision, noise, and bandwidth during fabrication.
2 code implementations • ICCV 2023 • Quan Liu, Hongzi Zhu, Yunsong Zhou, Hongyang Li, Shan Chang, Minyi Guo
Registration of distant outdoor LiDAR point clouds is crucial to extending the 3D vision of collaborative autonomous vehicles, and yet is challenging due to small overlapping area and a huge disparity between observed point densities.
Ranked #1 on Point Cloud Registration on nuScenes (Distant PCR)
no code implementations • 1 Jul 2023 • Can Cui, Ruining Deng, Quan Liu, Tianyuan Yao, Shunxing Bao, Lucas W. Remedios, Yucheng Tang, Yuankai Huo
The Segment Anything Model (SAM) is a recently proposed prompt-based segmentation model in a generic zero-shot segmentation approach.
no code implementations • 12 Jun 2023 • Hanyu Zheng, Quan Liu, Ivan I. Kravchenko, Xiaomeng Zhang, Yuankai Huo, Jason G. Valentine
Rapid developments in machine vision have led to advances in a variety of industries, from medical image analysis to autonomous systems.
1 code implementation • 31 May 2023 • Ruining Deng, Yanwei Li, Peize Li, Jiacheng Wang, Lucas W. Remedios, Saydolimkhon Agzamkhodjaev, Zuhayr Asad, Quan Liu, Can Cui, Yaohong Wang, Yihan Wang, Yucheng Tang, Haichun Yang, Yuankai Huo
The contribution of this paper is threefold: (1) We proposed a molecular-empowered learning scheme for multi-class cell segmentation using partial labels from lay annotators; (2) The proposed method integrated Giga-pixel level molecular-morphology cross-modality registration, molecular-informed annotation, and molecular-oriented segmentation model, so as to achieve significantly superior performance via 3 lay annotators as compared with 2 experienced pathologists; (3) A deep corrective learning (learning with imperfect labels) method is proposed to further improve the segmentation performance using partially annotated noisy data.
1 code implementation • 22 May 2023 • Jia-Chen Gu, Chao-Hong Tan, Caiyuan Chu, Zhen-Hua Ling, Chongyang Tao, Quan Liu, Cong Liu
Given an MPC with a few addressee labels missing, existing methods fail to build a consecutively connected conversation graph, but only a few separate conversation fragments instead.
no code implementations • 21 May 2023 • Jun-Yu Ma, Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu
The proposed encoder is capable of interactively capturing complementary information between features and contextual information, to derive language-agnostic representations for various IE tasks.
1 code implementation • 16 May 2023 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Cong Liu, Guoping Hu
Addressing the issues of who saying what to whom in multi-party conversations (MPCs) has recently attracted a lot of research attention.
1 code implementation • 4 May 2023 • Quan Liu, Yunsong Zhou, Hongzi Zhu, Shan Chang, Minyi Guo
Such features are then used for online distant point cloud registration.
Ranked #3 on Point Cloud Registration on nuScenes (Distant PCR)
no code implementations • 4 May 2023 • Jun-Yu Ma, Jia-Chen Gu, Jiajun Qi, Zhen-Hua Ling, Quan Liu, Xiaoyi Zhao
A method named Statistical Construction and Dual Adaptation of Gazetteer (SCDAG) is proposed for Multilingual Complex NER.
no code implementations • 9 Apr 2023 • Ruining Deng, Can Cui, Quan Liu, Tianyuan Yao, Lucas W. Remedios, Shunxing Bao, Bennett A. Landman, Lee E. Wheless, Lori A. Coburn, Keith T. Wilson, Yaohong Wang, Shilin Zhao, Agnes B. Fogo, Haichun Yang, Yucheng Tang, Yuankai Huo
However, it does not consistently achieve satisfying performance for dense instance object segmentation, even with 20 prompts (clicks/boxes) on each image.
no code implementations • CVPR 2023 • Yunsong Zhou, Hongzi Zhu, Quan Liu, Shan Chang, Minyi Guo
Mobile monocular 3D object detection (Mono3D) (e.g., on a vehicle, a drone, or a robot) is an important yet challenging task.
no code implementations • 23 Mar 2023 • Yunsong Zhou, Quan Liu, Hongzi Zhu, Yunzhe Li, Shan Chang, Minyi Guo
To this end, we utilize a pose detection network to estimate the pose of the camera and then construct a feature map portraying pixel-level ground depth according to the 3D-to-2D perspective geometry.
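A minimal sketch of the pixel-level ground-depth idea, assuming a pinhole camera with zero pitch over a flat road (the paper instead conditions on a pose estimated by a detection network); the focal length, principal point, and camera height below are illustrative values only.

```python
import numpy as np

def ground_depth_map(H, W, fy, cy, cam_height):
    """Per-pixel depth of a flat ground plane under a pinhole camera:
    Z = cam_height * f_y / (v - c_y) for image rows v below the horizon."""
    v = np.arange(H, dtype=np.float32).reshape(-1, 1)   # pixel rows
    denom = v - cy                                       # > 0 below the horizon
    depth = np.full((H, 1), np.inf, dtype=np.float32)
    valid = denom > 1e-6
    depth[valid] = cam_height * fy / denom[valid]
    return np.repeat(depth, W, axis=1)                   # same depth along each row

dm = ground_depth_map(H=375, W=1242, fy=721.5, cy=187.0, cam_height=1.65)
print(dm[300, 0])  # ground depth (meters) at image row 300
```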
2 code implementations • 10 Mar 2023 • Ho Hin Lee, Quan Liu, Shunxing Bao, Qi Yang, Xin Yu, Leon Y. Cai, Thomas Li, Yuankai Huo, Xenofon Koutsoukos, Bennett A. Landman
We hypothesize that convolution with LK sizes is limited to maintain an optimal convergence for locality learning.
no code implementations • 9 Mar 2023 • Caiyuan Chu, Ya Li, Yifan Liu, Jia-Chen Gu, Quan Liu, Yongxin Ge, Guoping Hu
The key to automatic intention induction is that, for any given set of new data, the sentence representations obtained by the model can be well distinguished across different labels.
1 code implementation • 7 Dec 2022 • Jun-Yu Ma, Beiduo Chen, Jia-Chen Gu, Zhen-Hua Ling, Wu Guo, Quan Liu, Zhigang Chen, Cong Liu
In this study, a mixture of short-channel distillers (MSD) method is proposed to fully exploit the rich hierarchical information in the teacher model and to transfer knowledge to the student model sufficiently and efficiently.
no code implementations • 30 Aug 2022 • Tianyuan Yao, Chang Qu, Jun Long, Quan Liu, Ruining Deng, Yuanhan Tian, Jiachen Xu, Aadarsh Jha, Zuhayr Asad, Shunxing Bao, Mengyang Zhao, Agnes B. Fogo, Bennett A. Landman, Haichun Yang, Catie Chang, Yuankai Huo
In order to extract and separate compound figures into usable individual images for downstream learning, we propose a simple compound figure separation (SimCFS) framework without using the traditionally required detection bounding box annotations, with a new loss function and a hard case simulation.
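A toy sketch of what such compound-figure simulation could look like: single-class images are tiled onto a blank canvas and pseudo bounding boxes are recorded for detector training. The layout and sizing rules here are assumptions for illustration, not the SimCFS procedure.

```python
import numpy as np

def simulate_compound_figure(images, canvas_hw=(512, 512), cols=2):
    """Tile individual images on a blank canvas and record the pseudo
    bounding box of each sub-figure (a naive stand-in for hard-case
    simulation of compound figures)."""
    H, W = canvas_hw
    rows = int(np.ceil(len(images) / cols))
    cell_h, cell_w = H // rows, W // cols
    canvas = np.full((H, W, 3), 255, dtype=np.uint8)
    boxes = []
    for idx, img in enumerate(images):
        r, c = divmod(idx, cols)
        y0, x0 = r * cell_h, c * cell_w
        patch = img[:cell_h, :cell_w]                     # naive crop-to-cell
        canvas[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]] = patch
        boxes.append((x0, y0, x0 + patch.shape[1], y0 + patch.shape[0]))
    return canvas, boxes

imgs = [np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8) for _ in range(4)]
compound, pseudo_boxes = simulate_compound_figure(imgs)
print(compound.shape, pseudo_boxes[0])
```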
1 code implementation • 27 Jun 2022 • Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Jun Long, Zuhayr Asad, R. Michael Womick, Zheyu Zhu, Agnes B. Fogo, Shilin Zhao, Haichun Yang, Yuankai Huo
The contribution of this paper is three-fold: (1) a novel scale-aware controller is proposed to generalize the dynamic neural network from single-scale to multi-scale; (2) semi-supervised consistency regularization of pseudo-labels is introduced to model the inter-scale correlation of unannotated tissue types into a single end-to-end learning paradigm; and (3) superior scale-aware generalization is evidenced by directly applying a model trained on human kidney images to mouse kidney images, without retraining.
no code implementations • 2 Jun 2022 • Mingyuan Cheng, Xinru Liao, Quan Liu, Bin Ma, Jian Xu, Bo Zheng
Learning individual-level treatment effects is a fundamental problem in causal inference and has received increasing attention in many areas, especially in the user growth area, which concerns many internet companies.
no code implementations • 17 May 2022 • Beiduo Chen, Wu Guo, Quan Liu, Kun Tao
Multilingual BERT (mBERT), a language model pre-trained on large multilingual corpora, has impressive zero-shot cross-lingual transfer capabilities and performs surprisingly well on zero-shot POS tagging and Named Entity Recognition (NER), as well as on cross-lingual model transfer.
no code implementations • 8 Mar 2022 • Can Cui, Han Liu, Quan Liu, Ruining Deng, Zuhayr Asad, Yaohong Wang, Shilin Zhao, Haichun Yang, Bennett A. Landman, Yuankai Huo
Thus, there are still open questions on how to effectively predict brain cancer survival from the incomplete radiological, pathological, genomic, and demographic data (e.g., one or more modalities might not be collected for a patient).
1 code implementation • SemEval (NAACL) 2022 • Beiduo Chen, Jun-Yu Ma, Jiajun Qi, Wu Guo, Zhen-Hua Ling, Quan Liu
The proposed method is applied to several state-of-the-art Transformer-based NER models with a gazetteer built from Wikidata, and shows great generalization ability across them.
no code implementations • 26 Feb 2022 • Beiduo Chen, Wu Guo, Bin Gu, Quan Liu, Yongchao Wang
Cross-language pre-trained models such as multilingual BERT (mBERT) have achieved significant performance in various cross-lingual downstream NLP tasks.
no code implementations • 18 Jan 2022 • Qianqian Zhang, Xinru Liao, Quan Liu, Jian Xu, Bo Zheng
Advertisers play an essential role in many e-commerce platforms like Taobao and Amazon.
1 code implementation • 23 Dec 2021 • Ruining Deng, Quan Liu, Can Cui, Zuhayr Asad, Haichun Yang, Yuankai Huo
Computer-assisted quantitative analysis on Giga-pixel pathology images has provided a new avenue in histology examination.
no code implementations • 13 Dec 2021 • Jiachen Xu, Junlin Guo, James Zimmer-Dauphinee, Quan Liu, Yuxuan Shi, Zuhayr Asad, D. Mitchell Wilkes, Parker VanValkenburgh, Steven A. Wernke, Yuankai Huo
Recently, systematic manual survey of satellite and aerial imagery has enabled continuous distributional views of archaeological phenomena at interregional scales.
no code implementations • 1 Nov 2021 • Zongtao Liu, Bin Ma, Quan Liu, Jian Xu, Bo Zheng
In sponsored search, bid keyword recommendation is a fundamental service.
1 code implementation • EMNLP 2021 • Jia-Chen Gu, Zhen-Hua Ling, Yu Wu, Quan Liu, Zhigang Chen, Xiaodan Zhu
This is a many-to-many semantic matching task because both contexts and personas in SPD are composed of multiple sentences.
no code implementations • 19 Jul 2021 • Tianyuan Yao, Chang Qu, Quan Liu, Ruining Deng, Yuanhan Tian, Jiachen Xu, Aadarsh Jha, Shunxing Bao, Mengyang Zhao, Agnes B. Fogo, Bennett A. Landman, Catie Chang, Haichun Yang, Yuankai Huo
Our technical contribution is three-fold: (1) we introduce a new side loss that is designed for compound figure separation; (2) we introduce an intra-class image augmentation method to simulate hard cases; (3) the proposed framework enables efficient deployment to new classes of images, without requiring resource-intensive bounding box annotations.
no code implementations • 22 Jun 2021 • Mengyang Zhao, Quan Liu, Aadarsh Jha, Ruining Deng, Tianyuan Yao, Anita Mahadevan-Jansen, Matthew J. Tyska, Bryan A. Millis, Yuankai Huo
Recently, pixel embedding-based cell instance segmentation and tracking provided a neat and generalizable computing paradigm for understanding cellular dynamics.
1 code implementation • SEMEVAL 2021 • Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, ZhenHua Ling, Quan Liu, Si Wei, Xiaodan Zhu
Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in a cloze-style machine reading comprehension setup.
1 code implementation • 19 May 2021 • Jia-Chen Gu, Hui Liu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
Empirical studies on the Persona-Chat dataset show that the partner personas neglected in previous studies can improve the accuracy of response selection in the IMN- and BERT-based models.
1 code implementation • 9 Mar 2021 • Quan Liu, Peter C. Louis, Yuzhe Lu, Aadarsh Jha, Mengyang Zhao, Ruining Deng, Tianyuan Yao, Joseph T. Roland, Haichun Yang, Shilin Zhao, Lee E. Wheless, Yuankai Huo
The contribution of the paper is three-fold: (1) The proposed SimTriplet method takes advantage of the multi-view nature of medical images beyond self-augmentation; (2) The method maximizes both intra-sample and inter-sample similarities via triplets from positive pairs, without using negative samples; and (3) Mixed-precision training is employed to advance the training using only a single GPU with 16 GB memory.
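A minimal sketch of a triplet similarity objective over positive views only, in the spirit of the description above; the SimSiam-style predictor/stop-gradient pairing and the equal weighting of intra- and inter-sample terms are assumptions, not the exact SimTriplet loss.

```python
import torch
import torch.nn.functional as F

def simtriplet_loss(z1, z2, z3, p1, p2, p3):
    """Maximize cosine similarity between predictions p and stop-gradient
    targets z for three positive views (two augmentations of one patch plus
    an adjacent patch), with no negative samples."""
    def sim(p, z):
        return F.cosine_similarity(p, z.detach(), dim=-1).mean()
    # intra-sample (same patch, different augmentations) and
    # inter-sample (adjacent patch) similarity terms
    return -(sim(p1, z2) + sim(p2, z1) + sim(p1, z3) + sim(p3, z1)) / 4

z = [F.normalize(torch.randn(8, 128), dim=-1) for _ in range(3)]
p = [torch.randn(8, 128) for _ in range(3)]
print(simtriplet_loss(*z, *p).item())
```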
no code implementations • 4 Mar 2021 • Zhekun Shi, Di Tan, Quan Liu, Fandong Meng, Bo Zhu, Longjian Xue
Bioinspired structure adhesives have received increasing interest for many applications, such as climbing robots and medical devices.
Soft Condensed Matter
no code implementations • 3 Jan 2021 • Quan Liu, Isabella M. Gaeta, Mengyang Zhao, Ruining Deng, Aadarsh Jha, Bryan A. Millis, Anita Mahadevan-Jansen, Matthew J. Tyska, Yuankai Huo
The contribution of this paper is three-fold: (1) the proposed method aggregates adversarial simulations and single-stage pixel-embedding-based deep learning; (2) the method is assessed with both cellular (i.e., HeLa cells) and subcellular (i.e., microvilli) objects; and (3) to the best of our knowledge, this is the first study to explore annotation-free instance segmentation and tracking for microscope videos.
1 code implementation • 22 Dec 2020 • Chao-Hong Tan, Xiaoyu Yang, Zi'ou Zheng, Tianda Li, Yufei Feng, Jia-Chen Gu, Quan Liu, Dan Liu, Zhen-Hua Ling, Xiaodan Zhu
Task-oriented conversational modeling with unstructured knowledge access, as track 1 of the 9th Dialogue System Technology Challenges (DSTC 9), requests to build a system to generate response given dialogue history and knowledge access.
1 code implementation • COLING 2020 • Yufei Feng, Zi'ou Zheng, Quan Liu, Michael Greenspan, Xiaodan Zhu
We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components.
1 code implementation • 2 Nov 2020 • Ruining Deng, Quan Liu, Shunxing Bao, Aadarsh Jha, Catie Chang, Bryan A. Millis, Matthew J. Tyska, Yuankai Huo
Our contribution is three-fold: (1) we approach the weakly supervised segmentation from a novel codebook learning perspective; (2) the CaCL algorithm segments diffuse image patterns rather than focal objects; and (3) the proposed algorithm is implemented in a multi-task framework based on Vector Quantised-Variational AutoEncoder (VQ-VAE) via joint image reconstruction, classification, feature embedding, and segmentation.
no code implementations • 2 Nov 2020 • Quan Liu, Isabella M. Gaeta, Mengyang Zhao, Ruining Deng, Aadarsh Jha, Bryan A. Millis, Anita Mahadevan-Jansen, Matthew J. Tyska, Yuankai Huo
Instance object segmentation and tracking provide comprehensive quantification of objects across microscope videos.
no code implementations • 22 Oct 2020 • Quan Liu, Isabella M. Gaeta, Bryan A. Millis, Matthew J. Tyska, Yuankai Huo
To match the number of objects at the micro level, a novel fluorescence-based micro-level matching approach is presented.
1 code implementation • EMNLP 2020 • Xiaoyu Yang, Feng Nie, Yufei Feng, Quan Liu, Zhigang Chen, Xiaodan Zhu
Built on that, we construct the graph attention verification networks, which are designed to fuse different sources of evidences from verbalized program execution, program structures, and the original statements and tables, to make the final verification decision.
1 code implementation • 28 Jul 2020 • Mengyang Zhao, Aadarsh Jha, Quan Liu, Bryan A. Millis, Anita Mahadevan-Jansen, Le Lu, Bennett A. Landman, Matthew J. Tyska, Yuankai Huo
With both embedding simulation and empirical validation via the four cohorts from the ISBI cell tracking challenge, the proposed Faster Mean-shift algorithm achieved 7-10 times speedup compared to the state-of-the-art embedding based cell instance segmentation and tracking algorithm.
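For reference, a plain CPU mean-shift over pixel embeddings, shown only to illustrate the clustering step that Faster Mean-shift accelerates on GPU; the bandwidth and iteration count below are arbitrary choices.

```python
import numpy as np

def mean_shift(embeddings, bandwidth=0.5, iters=20):
    """Naive O(N^2) mean-shift: each point is repeatedly moved to the
    Gaussian-kernel weighted average of all embeddings, so points
    collapse onto cluster modes."""
    points = embeddings.copy()
    for _ in range(iters):
        d2 = ((points[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        points = (w[:, :, None] * embeddings[None, :, :]).sum(1) / w.sum(1, keepdims=True)
    return points

emb = np.concatenate([np.random.randn(50, 2) * 0.1,
                      np.random.randn(50, 2) * 0.1 + 3])
modes = mean_shift(emb)
print(np.round(modes[:2], 2), np.round(modes[-2:], 2))  # two well-separated modes
```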
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
The challenges of building knowledge-grounded retrieval-based chatbots lie in how to ground a conversation on its background knowledge and how to match response candidates with both context and knowledge simultaneously.
1 code implementation • 8 Apr 2020 • Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei
Disentanglement is a problem in which multiple conversations occur in the same channel simultaneously, and the listener should decide which utterance is part of the conversation he will respond to.
2 code implementations • 7 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots.
Ranked #2 on Conversational Response Selection on Ubuntu IRC
no code implementations • 4 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Xiaodan Zhu, Zhen-Hua Ling, Yu-Ping Ruan
The NOESIS II challenge, as the Track 2 of the 8th Dialogue System Technology Challenges (DSTC 8), is the extension of DSTC 7.
Ranked #1 on Conversation Disentanglement on irc-disentanglement
no code implementations • 1 Feb 2020 • Yu-Ping Ruan, Zhen-Hua Ling, Jia-Chen Gu, Quan Liu
We present our work on Track 4 in the Dialogue System Technology Challenges 8 (DSTC8).
1 code implementation • 16 Nov 2019 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu
The distances between context and response utterances are employed as a prior component when calculating the attention weights.
Ranked #10 on Conversational Response Selection on E-commerce
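A hedged sketch of the distance-as-prior idea from the entry above: an utterance-distance term is added to the attention logits before the softmax, so nearer utterances receive larger weights. The additive form and the scale factor are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distance_prior_attention(q, k, v, positions, alpha=0.1):
    """Scaled dot-product attention with a distance-based prior subtracted
    from the logits; positions index the utterances in the dialogue."""
    logits = q @ k.transpose(-1, -2) / q.size(-1) ** 0.5        # (L_q, L_k)
    dist = (positions[:, None] - positions[None, :]).abs().float()
    logits = logits - alpha * dist                               # distance prior
    return F.softmax(logits, dim=-1) @ v

q = k = v = torch.randn(6, 32)   # 6 utterances, 32-dim representations
pos = torch.arange(6)
print(distance_prior_attention(q, k, v, pos).shape)  # torch.Size([6, 32])
```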
1 code implementation • IJCNLP 2019 • Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, Quan Liu
Compared with previous persona fusion approaches which enhance the representation of a context by calculating its similarity with a given persona, the DIM model adopts a dual matching architecture, which performs interactive matching between responses and contexts and between responses and personas respectively for ranking response candidates.
1 code implementation • 27 Apr 2019 • Tianda Li, Xiaodan Zhu, Quan Liu, Qian Chen, Zhigang Chen, Si Wei
Natural language inference (NLI) is among the most challenging tasks in natural language understanding.
no code implementations • 24 Apr 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Nitin Indurkhya
This paper proposes a new model, called condition-transforming variational autoencoder (CTVAE), to improve the performance of conversation response generation using conditional variational autoencoders (CVAEs).
no code implementations • 22 Apr 2019 • Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei
Winograd Schema Challenge (WSC) was proposed as an AI-hard problem in testing computers' intelligence on common sense representation and reasoning.
no code implementations • 27 Jan 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Jia-Chen Gu, Xiaodan Zhu
At this stage, two different models are proposed, i.e., a variational generative (VariGen) model and a retrieval-based (Retrieval) model.
1 code implementation • 7 Jan 2019 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu
In this paper, we propose an interactive matching network (IMN) for the multi-turn response selection task.
Ranked #9 on Conversational Response Selection on E-commerce
no code implementations • IWSLT (EMNLP) 2018 • Dan Liu, Junhua Liu, Wu Guo, Shifu Xiong, Zhiqiang Ma, Rui Song, Chongliang Wu, Quan Liu
This paper describes the USTC-NEL system to the speech translation task of the IWSLT Evaluation 2018.
1 code implementation • 3 Dec 2018 • Jia-Chen Gu, Zhen-Hua Ling, Yu-Ping Ruan, Quan Liu
This paper presents an end-to-end response selection model for Track 1 of the 7th Dialogue System Technology Challenges (DSTC7).
Ranked #5 on Conversational Response Selection on DSTC7 Ubuntu
no code implementations • EMNLP 2017 • Joseph Sanu, MingBin Xu, Hui Jiang, Quan Liu
In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation.
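The FOFE recursion itself is simple: z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word and 0 < alpha < 1 is a forgetting factor, so word order is preserved in a fixed-size vector. A small NumPy sketch (the alpha value is a placeholder):

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally forgetting encoding: z_t = alpha * z_{t-1} + e_t."""
    z = np.zeros(vocab_size)
    for tok in token_ids:
        z = alpha * z
        z[tok] += 1.0
    return z

# "a b c" vs. "c b a" over a toy vocabulary {a: 0, b: 1, c: 2}
print(fofe_encode([0, 1, 2], vocab_size=3))  # [0.49 0.7  1.  ]
print(fofe_encode([2, 1, 0], vocab_size=3))  # [1.   0.7  0.49]
```

Reversing the sequence yields a different code, which illustrates the "almost unique" property of the encoding.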
no code implementations • 13 Nov 2016 • Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
The PDP task we investigate in this paper is a complex coreference resolution task which requires the utilization of commonsense knowledge.
Ranked #63 on Coreference Resolution on Winograd Schema Challenge
no code implementations • 24 Mar 2016 • Quan Liu, Zhen-Hua Ling, Hui Jiang, Yu Hu
The model proposed in this paper jointly optimizes word vectors and the POS relevance matrices.
no code implementations • 24 Mar 2016 • Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
We propose to use neural networks to model association between any two events in a domain.
Ranked #11 on Natural Language Understanding on PDP60
no code implementations • 7 Sep 2015 • Quan Liu, Wu Guo, Zhen-Hua Ling
The confidence measure of each term occurrence is then re-estimated through linear interpolation with the calculated document ranking weight to improve its reliability by integrating document-level information.
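The re-estimation step reduces to a linear interpolation; a one-line sketch is given below, where the interpolation weight is a placeholder, not a value from the paper.

```python
def reestimate_confidence(term_conf, doc_rank_weight, lam=0.8):
    """Interpolate a term occurrence's confidence with its document
    ranking weight to inject document-level information."""
    return lam * term_conf + (1.0 - lam) * doc_rank_weight

print(reestimate_confidence(term_conf=0.62, doc_rank_weight=0.90))  # 0.676
```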