no code implementations • ACL 2022 • Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Li, Mo Yu, Ying Xu
Existing question answering (QA) techniques are created mainly to answer questions asked by humans.
no code implementations • ACL (WebNLG, INLG) 2020 • Qipeng Guo, Zhijing Jin, Ning Dai, Xipeng Qiu, Xiangyang Xue, David Wipf, Zheng Zhang
Text verbalization of knowledge graphs is an important problem with wide application to natural language generation (NLG) systems.
no code implementations • EMNLP 2020 • Hui Su, Xiaoyu Shen, Zhou Xiao, Zheng Zhang, Ernie Chang, Cheng Zhang, Cheng Niu, Jie Zhou
In this work, we take a close look at the movie domain and present a large-scale high-quality corpus with fine-grained annotations in hope of pushing the limit of movie-domain chatbots.
no code implementations • COLING (LaTeCH-CLfL, CLFL, LaTeCH) 2020 • Alex Zhai, Zheng Zhang, Amel Fraisse, Ronald Jenn, Shelley Fisher Fishkin, Pierre Zweigenbaum
TL-Explorer is a digital humanities tool for mapping and analyzing translated literature, encompassing the World Map and the Translation Dashboard.
no code implementations • ECCV 2020 • Guo-Sen Xie, Li Liu, Fan Zhu, Fang Zhao, Zheng Zhang, Yazhou Yao, Jie Qin, Ling Shao
To exploit the progressive interactions among these regions, we represent them as a region graph, on which the parts relation reasoning is performed with graph convolutions, thus leading to our PRR branch.
1 code implementation • NAACL 2022 • Zheng Zhang, Zili Zhou, Yanna Wang
Furthermore, to combine syntactic structure and semantic information, we equip the attention score matrices by syntactic mask matrices.
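As an illustration of the masking step described above, here is a minimal NumPy sketch in which attention scores between tokens that are not linked in a syntactic (e.g., dependency) structure are suppressed before the softmax. The function name, the binary-mask convention, and the additive -1e9 trick are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def syntactically_masked_attention(scores, syntactic_mask):
    """Apply a syntactic mask to a token-by-token attention score matrix.

    scores:         (n, n) raw attention scores.
    syntactic_mask: (n, n) binary matrix, 1 where two tokens are connected
                    in the syntactic structure, 0 otherwise.
    """
    # Disallowed positions receive a large negative value so the softmax
    # assigns them (near-)zero weight.
    masked = np.where(syntactic_mask > 0, scores, -1e9)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

# Toy example: 3 tokens; token 0 may attend only to itself and token 1.
scores = np.random.randn(3, 3)
mask = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
attn = syntactically_masked_attention(scores, mask)
```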
no code implementations • ACL 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Li, Nora Bradford, Branda Sun, Tran Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
no code implementations • 18 Jan 2023 • Kezhao Huang, Haitian Jiang, Minjie Wang, Guangxuan Xiao, David Wipf, Xiang Song, Quan Gan, Zengfeng Huang, Jidong Zhai, Zheng Zhang
A key performance bottleneck when training graph neural network (GNN) models on large, real-world graphs is loading node features onto a GPU.
1 code implementation • 5 Jan 2023 • Jia Ning, Chen Li, Zheng Zhang, Zigang Geng, Qi Dai, Kun He, Han Hu
With these new techniques and other designs, we show that the proposed general-purpose task-solver can perform both instance segmentation and depth estimation well.
Ranked #1 on Monocular Depth Estimation on NYU-Depth V2
1 code implementation • 3 Jan 2023 • Sucheng Ren, Fangyun Wei, Zheng Zhang, Han Hu
Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget.
no code implementations • 25 Dec 2022 • Jiarui Jin, Yangkun Wang, Weinan Zhang, Quan Gan, Xiang Song, Yong Yu, Zheng Zhang, David Wipf
However, existing methods lack elaborate design regarding the distinctions between two tasks that have been frequently overlooked: (i) edges only constitute the topology in the node classification task but can be used as both the topology and the supervision (i.e., labels) in the edge prediction task; (ii) node classification makes a prediction over each individual node, while edge prediction is determined by each pair of nodes.
no code implementations • 7 Dec 2022 • Zheng Zhang, Qingrui Zhang, Bo Zhu, Xiaohan Wang, Tianjiang Hu
Though transfer learning is promising for increasing learning efficiency, existing methods still struggle with long-horizon tasks, especially when expert policies are sub-optimal and only partially useful.
no code implementations • 5 Dec 2022 • Xi Zhao, Wei Feng, Zheng Zhang, Jingjing Lv, Xin Zhu, Zhangang Lin, Jinghe Hu, Jingping Shao
Recently, segmentation-based methods are quite popular in scene text detection, which mainly contain two steps: text kernel segmentation and expansion.
no code implementations • 4 Dec 2022 • Qi Zhu, Fei Mi, Zheng Zhang, Yasheng Wang, Yitong Li, Xin Jiang, Qun Liu, Xiaoyan Zhu, Minlie Huang
For the former, the grounding knowledge consists of keywords extracted from the response.
1 code implementation • 30 Nov 2022 • Qi Zhu, Christian Geishauser, Hsien-Chin Lin, Carel van Niekerk, Baolin Peng, Zheng Zhang, Michael Heck, Nurul Lubis, Dazhen Wan, Xiaochen Zhu, Jianfeng Gao, Milica Gašić, Minlie Huang
To address this issue, we present ConvLab-3, a flexible dialogue system toolkit based on a unified TOD data format.
1 code implementation • 21 Nov 2022 • Zixin Zhu, Yixuan Wei, JianFeng Wang, Zhe Gan, Zheng Zhang, Le Wang, Gang Hua, Lijuan Wang, Zicheng Liu, Han Hu
The image captioning task is typically realized by an auto-regressive method that decodes the text tokens one by one.
no code implementations • 3 Nov 2022 • Yutong Lin, Ze Liu, Zheng Zhang, Han Hu, Nanning Zheng, Stephen Lin, Yue Cao
In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition.
Ranked #3 on Action Recognition In Videos on Kinetics-400
no code implementations • 31 Oct 2022 • Tengxiao Liu, Qipeng Guo, Xiangkun Hu, Yue Zhang, Xipeng Qiu, Zheng Zhang
RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules, from which the training signal is accumulated across the tree with an elaborately designed, aligned reward function that is consistent with the evaluation.
1 code implementation • 28 Oct 2022 • Qipeng Guo, Yuqing Yang, Hang Yan, Xipeng Qiu, Zheng Zhang
In this paper, we investigate the root cause of the underwhelming performance of the existing generative DocRE models and discover that the culprit is the inadequacy of the training paradigm, instead of the capacities of the models.
no code implementations • 27 Oct 2022 • Chengyu Huang, Zheng Zhang, Hao Fei, Lizi Liao
Conversation disentanglement aims to group utterances into detached sessions, which is a fundamental task in processing multi-party conversations.
no code implementations • 23 Oct 2022 • Jian Yao, Yuxin Hong, Chiyu Wang, Tianjun Xiao, Tong He, Francesco Locatello, David Wipf, Yanwei Fu, Zheng Zhang
The key intuition is that the occluded part of an object can be explained away if that part is visible in other frames, possibly deformed as long as the deformation can be reasonably learned.
no code implementations • 23 Oct 2022 • Xinling Yu, José E. C. Serrallés, Ilias I. Giannakopoulos, Ziyue Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang
Electrical properties (EP), namely permittivity and electric conductivity, dictate the interactions between electromagnetic waves and biological tissue.
no code implementations • 3 Oct 2022 • Weicong Liang, Yuhui Yuan, Henghui Ding, Xiao Luo, WeiHong Lin, Ding Jia, Zheng Zhang, Chao Zhang, Han Hu
Vision transformers have recently achieved competitive results across various vision tasks but still suffer from heavy computation costs when processing a large number of tokens.
no code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello
Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.
no code implementations • 20 Sep 2022 • Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, DaCheng Tao
We describe the JD Explore Academy's submission to the WMT 2022 general translation shared task.
1 code implementation • 16 Sep 2022 • Jia Zhang, Yukun Huang, Zheng Zhang, Yuhang Shi
There has been growing research interest in using deep learning-based methods to achieve fully automated segmentation of lesions in positron emission tomography/computed tomography (PET/CT) scans for the prognosis of various cancers.
no code implementations • 31 Aug 2022 • Kun Liu, Huiyuan Fu, Zheng Zhang, Huanpu Yin
This paper provides a brief overview of our submission to the no-interaction track of the SAPIEN ManiSkill Challenge 2021.
1 code implementation • 23 Aug 2022 • Lingfeng Li, Huaiwei Cong, Gangming Zhao, Junran Peng, Zheng Zhang, Jinpeng Li
However, due to tissue overlap, it is difficult for X-ray images to provide fine-grained features for early diagnosis.
1 code implementation • 23 Aug 2022 • Xin Wei, Huaiwei Cong, Zheng Zhang, Junran Peng, Guoping Chen, Jinpeng Li
Long-term vertebral fractures severely affect the life quality of patients, causing kyphosis, lumbar deformity, and even paralysis.
no code implementations • 18 Aug 2022 • Jinfeng Zhou, Chujie Zheng, Bo Wang, Zheng Zhang, Minlie Huang
Empathy is a trait that naturally manifests in human conversation.
1 code implementation • 17 Aug 2022 • Jie Wen, Zheng Zhang, Lunke Fei, Bob Zhang, Yong Xu, Zhao Zhang, Jinxing Li
However, in practical applications such as disease diagnosis, multimedia analysis, and recommendation systems, it is common that not all views of a sample are available, which causes conventional multi-view clustering methods to fail.
1 code implementation • 2 Aug 2022 • Eyal Shnarch, Alon Halfon, Ariel Gera, Marina Danilevsky, Yannis Katsis, Leshem Choshen, Martin Santillan Cooper, Dina Epelboim, Zheng Zhang, Dakuo Wang, Lucy Yip, Liat Ein-Dor, Lena Dankin, Ilya Shnayderman, Ranit Aharonov, Yunyao Li, Naftali Liberman, Philip Levin Slesarev, Gwilym Newton, Shila Ofek-Koifman, Noam Slonim, Yoav Katz
Text classification can be useful in many real-world scenarios, saving a lot of time for end users.
no code implementations • 11 Jul 2022 • Jie Qin, Shuaihang Yuan, Jiaxin Chen, Boulbaba Ben Amor, Yi Fang, Nhat Hoang-Xuan, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc, Thien-Tri Cao, Nhat-Khang Ngo, Tuan-Luc Huynh, Hai-Dang Nguyen, Minh-Triet Tran, Haoyang Luo, Jianning Wang, Zheng Zhang, Zihao Xin, Yang Wang, Feng Wang, Ying Tang, Haiqin Chen, Yan Wang, Qunying Zhou, Ji Zhang, Hongyuan Wang
We define two SBSR tasks and construct two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches in total.
no code implementations • 9 Jul 2022 • Lin Wu, Lingqiao Liu, Yang Wang, Zheng Zhang, Farid Boussaid, Mohammed Bennamoun
It is a challenging and practical problem since the query images often suffer from resolution degradation due to the different capturing conditions from real-world cameras.
no code implementations • 4 Jul 2022 • Canran Li, Dongnan Liu, Haoran Li, Zheng Zhang, Guangming Lu, Xiaojun Chang, Weidong Cai
In this work, we propose a novel deep neural network, namely Category-Aware feature alignment and Pseudo-Labelling Network (CAPL-Net) for UDA nuclei instance segmentation and classification.
no code implementations • 4 Jul 2022 • Ziyue Liu, Xinling Yu, Zheng Zhang
Physics-informed neural networks (PINNs) have been increasingly employed due to their capability of modeling complex physics systems.
1 code implementation • 26 Jun 2022 • Zhuotong Chen, Qianxiao Li, Zheng Zhang
While numerous attack and defense techniques have been developed, this work investigates the robustness issue from a new angle: can we design a self-healing neural network that can automatically detect and fix the vulnerability issue by itself?
1 code implementation • 14 Jun 2022 • Kounianhua Du, Weinan Zhang, Ruiwen Zhou, Yangkun Wang, Xilong Zhao, Jiarui Jin, Quan Gan, Zheng Zhang, David Wipf
Prediction over tabular data is an essential and fundamental problem in many important downstream tasks.
1 code implementation • 9 Jun 2022 • Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Yixuan Wei, Qi Dai, Han Hu
Our study reveals that: (i) Masked image modeling is also demanding on larger data.
no code implementations • 6 Jun 2022 • Mengqi Yao, Meghana Bharadwaj, Zheng Zhang, Baihong Jin, Duncan S. Callaway
Our data include historical ignition and wire-down points triggered by grid infrastructure, collected between 2015 and 2019 in Pacific Gas & Electric territory, along with various weather, vegetation, and very high resolution data on grid infrastructure including location, age, and materials.
1 code implementation • 27 May 2022 • Yixuan Wei, Han Hu, Zhenda Xie, Zheng Zhang, Yue Cao, Jianmin Bao, Dong Chen, Baining Guo
These properties, which we aggregately refer to as optimization friendliness, are identified and analyzed by a set of attention- and optimization-related diagnosis tools.
Ranked #2 on Instance Segmentation on COCO test-dev (using extra training data)
1 code implementation • 26 May 2022 • Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, Yue Cao
In this paper, we compare MIM with the long-dominant supervised pre-trained models from two perspectives, the visualizations and the experiments, to uncover their key representational differences.
Ranked #1 on Monocular Depth Estimation on KITTI Eigen split
no code implementations • 21 May 2022 • Ryan Solgi, Zichang He, William Jiahua Liang, Zheng Zhang
Various tensor decomposition methods have been proposed for data compression.
1 code implementation • 24 Apr 2022 • Zheng Zhang, Yingsheng Ji, Jiachen Shen, Xi Zhang, Guangwen Yang
Risk assessment is a substantial problem for financial institutions that has been extensively studied both for its methodological richness and its various practical applications.
1 code implementation • 23 Apr 2022 • Xiangkun Hu, Junqi Dai, Hang Yan, Yi Zhang, Qipeng Guo, Xipeng Qiu, Zheng Zhang
We propose Dialogue Meaning Representation (DMR), a pliable and easily extendable representation for task-oriented dialogue.
no code implementations • 22 Apr 2022 • Yixuan Wei, Yue Cao, Zheng Zhang, Zhuliang Yao, Zhenda Xie, Han Hu, Baining Guo
Second, we convert the image classification problem from learning parametric category classifier weights to learning a text encoder as a meta network to generate category classifier weights.
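A minimal PyTorch sketch of the second idea: the category classifier weights are not free parameters but are generated by a text encoder from the category names. The class name, the temperature value, and the encoder interfaces here are assumptions for illustration, not the paper's architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class TextGeneratedClassifier(nn.Module):
    """Image classifier whose weights are produced by a text meta-network."""

    def __init__(self, image_encoder, text_encoder, temperature=0.07):
        super().__init__()
        self.image_encoder = image_encoder   # maps images -> (B, D) features
        self.text_encoder = text_encoder     # maps tokenized class names -> (C, D)
        self.temperature = temperature

    def forward(self, images, class_name_tokens):
        img_feat = F.normalize(self.image_encoder(images), dim=-1)
        # The text embeddings of the C category names act as classifier weights.
        cls_weight = F.normalize(self.text_encoder(class_name_tokens), dim=-1)
        return img_feat @ cls_weight.t() / self.temperature   # (B, C) logits
```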
no code implementations • 16 Apr 2022 • Zheng Zhang, Liang Ding, Dazhao Cheng, Xuebo Liu, Min Zhang, DaCheng Tao
Data augmentations (DA) are at the core of achieving robust sequence-to-sequence learning on various natural language processing (NLP) tasks.
no code implementations • 28 Mar 2022 • Junyong You, Zheng Zhang
Meanwhile, representative features for image quality perception in the spatial and frequency domains can also be derived from the IQA model, which are then fed into another windowed transformer architecture for video quality assessment (VQA).
no code implementations • 26 Mar 2022 • Ying Xu, Dakuo Wang, Mo Yu, Daniel Ritchie, Bingsheng Yao, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer
Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.
Ranked #1 on Question Generation on FairytaleQA
1 code implementation • 17 Mar 2022 • Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Xiaoyan Zhu, Jie Tang, Minlie Huang
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems.
no code implementations • 9 Mar 2022 • Zheng Zhang, Xiaohan Wang, Qingrui Zhang, Tianjiang Hu
It is shown by numerical simulations that the proposed hybrid design outperforms the pursuit policies either learned from vanilla reinforcement learning or designed by the potential field method.
no code implementations • 13 Feb 2022 • Zheng Zhang, Ying Xu, Yanhao Wang, Bingsheng Yao, Daniel Ritchie, Tongshuang Wu, Mo Yu, Dakuo Wang, Toby Jia-Jun Li
Despite its benefits for children's skill development and parent-child bonding, many parents do not often engage in interactive storytelling by having story-related dialogues with their child due to limited availability or challenges in coming up with appropriate questions.
no code implementations • 24 Jan 2022 • Xiangkun Hu, Hang Yan, Qipeng Guo, Xipeng Qiu, Weinan Zhang, Zheng Zhang
Knowledge and expertise in the real world can be disjointedly owned.
1 code implementation • 18 Jan 2022 • Cole Hawkins, Alec Koppel, Zheng Zhang
A fundamental challenge in Bayesian inference is efficient representation of a target distribution.
no code implementations • 31 Dec 2021 • Zheng Zhang, Levent Yilmaz, Bo Liu
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle in adoption.
BIG-bench Machine Learning • Explainable artificial intelligence • +2
1 code implementation • 29 Dec 2021 • Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, Xiang Bai
However, semantic segmentation and the CLIP model operate at different visual granularities: semantic segmentation works on pixels while CLIP works on whole images.
no code implementations • NeurIPS 2021 • Longyuan Li, Jian Yao, Li Wenliang, Tong He, Tianjun Xiao, Junchi Yan, David Wipf, Zheng Zhang
Learning the distribution of future trajectories conditioned on the past is a crucial problem for understanding multi-agent systems.
1 code implementation • NeurIPS 2021 • Zheng Zhang, Liang Zhao
Specifically, a provably information-lossless and roto-translation invariant representation of spatial information on networks is presented.
2 code implementations • CVPR 2022 • Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu
We also leverage this approach to facilitate the training of a 3B model (SwinV2-G) that, using $40\times$ less data than in previous practice, achieves state-of-the-art results on four representative vision benchmarks.
Representation Learning • Self-Supervised Image Classification
14 code implementations • CVPR 2022 • Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo
Three main techniques are proposed: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images.
Ranked #3 on Instance Segmentation on COCO minival (using extra training data)
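A sketch of the first technique (scaled-cosine attention) listed in the entry above, written in PyTorch under the assumption that queries, keys, and values are shaped (B, heads, N, d) and that tau is a learnable temperature; the exact SwinV2 implementation differs in detail.

```python
import torch
import torch.nn.functional as F

def cosine_attention(q, k, v, tau, rel_pos_bias=None):
    """Scaled-cosine attention: similarity is the cosine of q and k divided by a
    learnable temperature tau (clamped from below), optionally shifted by a
    (log-spaced continuous) relative position bias."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    tau = torch.clamp(torch.as_tensor(tau, dtype=q.dtype), min=0.01)
    attn = (q @ k.transpose(-2, -1)) / tau
    if rel_pos_bias is not None:
        attn = attn + rel_pos_bias        # broadcasts over the batch dimension
    return F.softmax(attn, dim=-1) @ v
```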
1 code implementation • NeurIPS 2021 • Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Stephen Lin, Han Hu, Xiang Bai
We introduce MixTraining, a new training paradigm for object detection that can improve the performance of existing detectors for free.
no code implementations • ICLR 2022 • Yangkun Wang, Jiarui Jin, Weinan Zhang, Yongyi Yang, Jiuhai Chen, Quan Gan, Yong Yu, Zheng Zhang, Zengfeng Huang, David Wipf
In this regard, it has recently been proposed to use a randomly-selected portion of the training labels as GNN inputs, concatenated with the original node features for making predictions on the remaining labels.
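A minimal NumPy sketch of the label-as-input idea described above: one-hot labels of a randomly selected subset of training nodes are concatenated to the node features, and the model is trained to predict the held-out labels. The 50% default rate and all names here are illustrative assumptions.

```python
import numpy as np

def label_augmented_features(x, y, num_classes, train_idx, rate=0.5, rng=None):
    """x: (n, d) node features; y: (n,) integer labels; train_idx: training node ids."""
    rng = rng or np.random.default_rng()
    label_feat = np.zeros((x.shape[0], num_classes), dtype=x.dtype)
    observed = rng.choice(train_idx, size=int(rate * len(train_idx)), replace=False)
    label_feat[observed, y[observed]] = 1.0        # reveal these labels as inputs
    held_out = np.setdiff1d(train_idx, observed)   # supervise on the rest
    return np.concatenate([x, label_feat], axis=1), held_out
```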
no code implementations • ICLR 2022 • Jiarui Jin, Yangkun Wang, Kounianhua Du, Weinan Zhang, Zheng Zhang, David Wipf, Yong Yu, Quan Gan
Prevailing methods for relation prediction in heterogeneous graphs aim at learning latent representations (i.e., embeddings) of observed nodes and relations, and thus are limited to the transductive setting where the relation types must be known during training.
no code implementations • 29 Sep 2021 • Minjie Wang, Haoming Lu, Yu Gai, Lesheng Jin, Zihao Ye, Zheng Zhang
Despite substantial efforts from the deep learning system community to relieve researchers and practitioners from the burden of implementing models with ever-growing complexity, a considerable gap remains between developing models in the language of mathematics and implementing them in the languages of computers.
1 code implementation • 8 Sep 2021 • Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Mo Yu, Ying Xu
Existing question answering (QA) techniques are created mainly to answer questions asked by humans.
no code implementations • 7 Sep 2021 • Guan-Nan Dong, Chi-Man Pun, Zheng Zhang
To this end, we propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties in an adaptive manner to strengthen the facial details for effective unsupervised kinship verification.
no code implementations • 7 Sep 2021 • Guan-Nan Dong, Chi-Man Pun, Zheng Zhang
Specifically, we take parents and children as a whole to extract the expressive local and non-local features.
2 code implementations • 3 Aug 2021 • Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, Bosi Wen, Xiaoyan Zhu, Minlie Huang, Jie Tang
Although pre-trained language models have remarkably enhanced the generation ability of dialogue systems, open-domain Chinese dialogue systems are still limited by the dialogue data and the model size compared with English ones.
1 code implementation • 12 Jul 2021 • Bingzhi Chen, Yishu Liu, Zheng Zhang, Guangming Lu, Adams Wai Kin Kong
Accurate segmentation of organs or lesions from medical images is crucial for reliable diagnosis of diseases and organ morphometry.
2 code implementations • ICCV 2021 • Yifan Xing, Tong He, Tianjun Xiao, Yongxin Wang, Yuanjun Xiong, Wei Xia, David Wipf, Zheng Zhang, Stefano Soatto
Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level.
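A toy sketch of the merge step described above, assuming the edges predicted as "same cluster" at the current level are given: connected components are computed and each component becomes one super-node of the next-level graph. This uses networkx for brevity and ignores how node features are pooled.

```python
import networkx as nx

def merge_level(num_nodes, predicted_edges):
    """Collapse connected components induced by predicted edges into super-nodes."""
    g = nx.Graph()
    g.add_nodes_from(range(num_nodes))
    g.add_edges_from(predicted_edges)
    components = list(nx.connected_components(g))
    node_to_super = {v: i for i, comp in enumerate(components) for v in comp}
    return components, node_to_super

# Nodes {0,1,2} and {4,5} merge; node 3 stays a singleton super-node.
components, mapping = merge_level(6, [(0, 1), (1, 2), (4, 5)])
```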
no code implementations • 30 Jun 2021 • Yang Li, Yadan Luo, Zheng Zhang, Shazia W. Sadiq, Peng Cui
It aims at suggesting the next POI to a user in spatial and temporal context, which is a practical yet challenging task in various applications.
12 code implementations • CVPR 2022 • Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu
The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks.
Ranked #20 on Action Classification on Kinetics-600 (using extra training data)
5 code implementations • ICCV 2021 • Mengde Xu, Zheng Zhang, Han Hu, JianFeng Wang, Lijuan Wang, Fangyun Wei, Xiang Bai, Zicheng Liu
This paper presents an end-to-end semi-supervised object detection approach, in contrast to previous more complex multi-stage methods.
Ranked #4 on Semi-Supervised Object Detection on COCO 100% labeled data (using extra training data)
1 code implementation • 12 Jun 2021 • Ailiang Lin, Bingzhi Chen, Jiayu Xu, Zheng Zhang, Guangming Lu
To alleviate these problems, we propose a novel deep medical image segmentation framework called Dual Swin Transformer U-Net (DS-TransUNet), which might be the first attempt to concurrently incorporate the advantages of hierarchical Swin Transformer into both encoder and decoder of the standard U-shaped architecture to enhance the semantic segmentation quality of varying medical images.
3 code implementations • ACL 2021 • Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, Zheng Zhang
Aspect-based Sentiment Analysis (ABSA) aims to identify the aspect terms, their corresponding sentiment polarities, and the opinion terms.
Ranked #1 on Aspect Sentiment Triplet Extraction on SemEval
Aspect-oriented Opinion Extraction • Aspect Sentiment Triplet Extraction • +1
1 code implementation • ACL 2021 • Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, Xipeng Qiu
To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework.
Ranked #9 on Nested Named Entity Recognition on GENIA
1 code implementation • CVPR 2021 • Xunguang Wang, Zheng Zhang, Baoyuan Wu, Fumin Shen, Guangming Lu
However, deep hashing networks are vulnerable to adversarial examples, which is a practical security problem that is seldom studied in the hashing-based retrieval field.
1 code implementation • 12 May 2021 • Yansong Tang, Zhenyu Jiang, Zhenda Xie, Yue Cao, Zheng Zhang, Philip H. S. Torr, Han Hu
Previous cycle-consistency correspondence learning methods usually leverage image patches for training.
no code implementations • 11 May 2021 • Yao Chen, Cole Hawkins, Kaiqi Zhang, Zheng Zhang, Cong Hao
This paper emphasizes the importance and efficacy of training, quantization and accelerator design, and calls for more research breakthroughs in the area for AI on the edge.
3 code implementations • 10 May 2021 • Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu
We are witnessing a modeling shift from CNN to Transformers in computer vision.
Ranked #52 on Self-Supervised Image Classification on ImageNet
1 code implementation • Findings (EMNLP) 2021 • Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein
However, the next word prediction training objective is still misaligned with the target zero-shot learning objective.
3 code implementations • ICCV 2021 • Ze Liu, Zheng Zhang, Yue Cao, Han Hu, Xin Tong
Instead of grouping local points to each object candidate, our method computes the feature of an object from all the points in the point cloud with the help of an attention mechanism in Transformers, where the contribution of each point is automatically learned in the network training.
Ranked #3 on 3D Object Detection on SUN-RGBD
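A single-head sketch of the aggregation described in the entry above: an object candidate's feature is a learned, softmax-weighted sum over all points rather than over a local group. The actual model stacks multi-head transformer decoder layers; shapes and names here are illustrative.

```python
import torch
import torch.nn.functional as F

def candidate_feature_from_all_points(query, point_feats):
    """query: (B, D) candidate feature; point_feats: (B, N, D) per-point features."""
    d = query.shape[-1]
    scores = torch.einsum('bd,bnd->bn', query, point_feats) / d ** 0.5
    weights = F.softmax(scores, dim=-1)      # learned contribution of each point
    return torch.einsum('bn,bnd->bd', weights, point_feats)
```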
no code implementations • 31 Mar 2021 • Zichang He, Zheng Zhang
Recently, low-rank tensor methods have been developed to mitigate this issue, but two fundamental challenges remain open: how to automatically determine the tensor rank and how to adaptively pick the informative simulation samples.
61 code implementations • ICCV 2021 • Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision.
Ranked #2 on Image Classification on OmniBenchmark
1 code implementation • 24 Mar 2021 • Yangkun Wang, Jiarui Jin, Weinan Zhang, Yong Yu, Zheng Zhang, David Wipf
Over the past few years, graph neural networks (GNN) and label propagation-based methods have made significant progress in addressing node classification tasks on graphs.
Ranked #1 on Node Property Prediction on ogbn-proteins
1 code implementation • 10 Mar 2021 • Yongyi Yang, Tang Liu, Yangkun Wang, Jinjing Zhou, Quan Gan, Zhewei Wei, Zheng Zhang, Zengfeng Huang, David Wipf
Despite the recent success of graph neural networks (GNN), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks.
no code implementations • 16 Feb 2021 • Wei Huang, Oliver Linton, Zheng Zhang
We propose a general framework for the specification testing of continuous treatment effect models.
1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
1 code implementation • ICLR 2021 • Zhuotong Chen, Qianxiao Li, Zheng Zhang
We connect the robustness of neural networks with optimal control using the geometrical information of underlying data to design the control objective.
1 code implementation • ICCV 2021 • Zhi Chen, Yadan Luo, Ruihong Qiu, Sen Wang, Zi Huang, Jingjing Li, Zheng Zhang
Generalized zero-shot learning (GZSL) aims to classify samples under the assumption that some classes are not observable during training.
no code implementations • 9 Jan 2021 • Zhi Chen, Zi Huang, Jingjing Li, Zheng Zhang
To address these issues, in this paper, we propose a novel framework that leverages dual variational autoencoders with a triplet loss to learn discriminative latent features and applies the entropy-based calibration to minimize the uncertainty in the overlapped area between the seen and unseen classes.
no code implementations • 1 Jan 2021 • Xinyang Zhang, Zheng Zhang, Ting Wang
One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations.
no code implementations • 1 Jan 2021 • Jiarui Jin, Sijin Zhou, Weinan Zhang, Rasool Fakoor, David Wipf, Tong He, Yong Yu, Zheng Zhang, Alex Smola
In reinforcement learning, a map with states and transitions built based on historical trajectories is often helpful in exploration and exploitation.
no code implementations • 23 Dec 2020 • Zichang He, Bo Zhao, Zheng Zhang
In this paper, we introduce an active low-rank tensor model for fast MR imaging.
1 code implementation • 16 Dec 2020 • Ren Pang, Zheng Zhang, Xiangshan Gao, Zhaohan Xi, Shouling Ji, Peng Cheng, Xiapu Luo, Ting Wang
To bridge this gap, we design and implement TROJANZOO, the first open-source platform for evaluating neural backdoor attacks/defenses in a unified, holistic, and practical manner.
1 code implementation • 14 Dec 2020 • Qipeng Guo, Zhijing Jin, Ziyu Wang, Xipeng Qiu, Weinan Zhang, Jun Zhu, Zheng Zhang, David Wipf
Cycle-consistent training is widely used for jointly learning a forward and inverse mapping between two domains of interest without the cumbersome requirement of collecting matched pairs within each domain.
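For reference, the standard cycle-consistency objective (as popularized by CycleGAN) for a forward mapping F: X → Y and an inverse mapping G: Y → X; this is the textbook formulation, not necessarily the exact objective optimized in this paper.

```latex
\mathcal{L}_{\mathrm{cyc}}(F, G) =
  \mathbb{E}_{x \sim p_X}\!\left[ \lVert G(F(x)) - x \rVert_1 \right] +
  \mathbb{E}_{y \sim p_Y}\!\left[ \lVert F(G(y)) - y \rVert_1 \right]
```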
1 code implementation • COLING 2020 • Zhijing Jin, Qipeng Guo, Xipeng Qiu, Zheng Zhang
With a human-annotated test set, we provide this new benchmark dataset for future research on unsupervised text generation from knowledge graphs.
Ranked #1 on Unsupervised KG-to-Text Generation on GenWiki (Fine)
1 code implementation • 25 Nov 2020 • Jiarui Jin, Kounianhua Du, Weinan Zhang, Jiarui Qin, Yuchen Fang, Yong Yu, Zheng Zhang, Alexander J. Smola
Heterogeneous information network (HIN) has been widely used to characterize entities of various types and their complex relations.
7 code implementations • CVPR 2021 • Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, Han Hu
We argue that the power of contrastive learning has yet to be fully unleashed, as current methods are trained only on instance-level pretext tasks, leading to representations that may be sub-optimal for downstream tasks requiring dense pixel predictions.
no code implementations • 12 Nov 2020 • Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David Traum, Maxine Eskenazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, Rajen Subba
Interactive evaluation of dialog, and 4.
1 code implementation • 17 Oct 2020 • Cole Hawkins, Xing Liu, Zheng Zhang
This paper presents a novel end-to-end framework for low-rank tensorized training of neural networks.
1 code implementation • 11 Oct 2020 • Da Zheng, Chao Ma, Minjie Wang, Jinjing Zhou, Qidong Su, Xiang Song, Quan Gan, Zheng Zhang, George Karypis
To minimize the overheads associated with distributed computations, DistDGL uses a high-quality and light-weight min-cut graph partitioning algorithm along with multiple balancing constraints.
1 code implementation • COLING 2020 • Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, Zheng Zhang
With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models.
no code implementations • 15 Sep 2020 • Xiaohong Chen, Ying Liu, Shujie Ma, Zheng Zhang
The estimation of causal effects is a primary goal of behavioral, social, economic and biomedical sciences.
no code implementations • 8 Sep 2020 • Yan Zhang, Zhao Zhang, Yang Wang, Zheng Zhang, Li Zhang, Shuicheng Yan, Meng Wang
Nonnegative matrix factorization is usually powerful for learning the "shallow" parts-based representation, but it clearly fails to discover deep hierarchical information within both the basis and representation spaces.
no code implementations • 26 Aug 2020 • Yuwei Hu, Zihao Ye, Minjie Wang, Jiali Yu, Da Zheng, Mu Li, Zheng Zhang, Zhiru Zhang, Yida Wang
FeatGraph provides a flexible programming interface to express diverse GNN models by composing coarse-grained sparse templates with fine-grained user-defined functions (UDFs) on each vertex/edge.
1 code implementation • 1 Aug 2020 • Xinyang Zhang, Zheng Zhang, Shouling Ji, Ting Wang
Recent years have witnessed the emergence of a new paradigm of building natural language processing (NLP) systems: general-purpose, pre-trained language models (LMs) are composed with simple downstream models and fine-tuned for a variety of NLP tasks.
1 code implementation • 31 Jul 2020 • Yadan Luo, Zi Huang, Zijian Wang, Zheng Zhang, Mahsa Baktashmotlagh
To further enhance the model capacity and testify the robustness of the proposed architecture on difficult transfer tasks, we extend our model to work in a semi-supervised setting using an additional video-level bipartite graph.
Ranked #2 on Domain Adaptation on HMDB --> UCF (full)
1 code implementation • NeurIPS 2020 • Yihong Chen, Zheng Zhang, Yue Cao, Li-Wei Wang, Stephen Lin, Han Hu
Though RepPoints provides high performance, we find that its heavy reliance on regression for object localization leaves room for improvement.
Ranked #66 on Object Detection on COCO test-dev
1 code implementation • ECCV 2020 • Ze Liu, Han Hu, Yue Cao, Zheng Zhang, Xin Tong
Our investigation reveals that despite the different designs of these operators, all of these operators make surprisingly similar contributions to the network performance under the same network input and feature numbers and result in the state-of-the-art accuracy on standard benchmarks.
Ranked #3 on 3D Semantic Segmentation on PartNet
1 code implementation • 1 Jul 2020 • Jiarui Jin, Jiarui Qin, Yuchen Fang, Kounianhua Du, Wei-Nan Zhang, Yong Yu, Zheng Zhang, Alexander J. Smola
To the best of our knowledge, this is the first work providing an efficient neighborhood-based interaction model in the HIN-based recommendations.
no code implementations • NeurIPS 2020 • Yue Cao, Zhenda Xie, Bin Liu, Yutong Lin, Zheng Zhang, Han Hu
This paper presents parametric instance classification (PIC) for unsupervised visual feature learning.
4 code implementations • ECCV 2020 • Minghao Yin, Zhuliang Yao, Yue Cao, Xiu Li, Zheng Zhang, Stephen Lin, Han Hu
This paper first studies the non-local block in depth, where we find that its attention computation can be split into two terms, a whitened pairwise term accounting for the relationship between two pixels and a unary term representing the saliency of every pixel.
Ranked #13 on Semantic Segmentation on Cityscapes test
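One way to write the split mentioned above for dot-product attention, with $\mu_q$ and $\mu_k$ the means of the query and key vectors: the terms that do not depend on $j$ vanish under the softmax over $j$, leaving a whitened pairwise term and a unary term. This is a sketch of the decomposition; the paper's notation may differ.

```latex
q_i^{\top} k_j
  = \underbrace{(q_i - \mu_q)^{\top}(k_j - \mu_k)}_{\text{whitened pairwise term}}
  + \underbrace{\mu_q^{\top} k_j}_{\text{unary term}}
  + \underbrace{q_i^{\top}\mu_k - \mu_q^{\top}\mu_k}_{\text{constant in } j}
```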
1 code implementation • 10 Jun 2020 • Lei Zhu, Hui Cui, Zhiyong Cheng, Jingjing Li, Zheng Zhang
Specifically, we design a complementary dual-level semantic transfer mechanism to efficiently discover the potential semantics of tags and seamlessly transfer them into binary hash codes.
2 code implementations • ACL (WebNLG, INLG) 2020 • Qipeng Guo, Zhijing Jin, Xipeng Qiu, Wei-Nan Zhang, David Wipf, Zheng Zhang
Due to the difficulty and high cost of data collection, the supervised data available in the two fields are usually on the magnitude of tens of thousands, for example, 18K in the WebNLG 2017 dataset after preprocessing, which is far fewer than the millions of data points for other tasks such as machine translation.
1 code implementation • 5 Jun 2020 • Zhijing Jin, Yongyi Yang, Xipeng Qiu, Zheng Zhang
In natural language, often multiple entities appear in the same text.
no code implementations • 21 May 2020 • Xiangxiang Zeng, Xiang Song, Tengfei Ma, Xiaoqin Pan, Yadi Zhou, Yuan Hou, Zheng Zhang, George Karypis, Feixiong Cheng
While this study by no means recommends specific drugs, it demonstrates a powerful deep learning methodology to prioritize existing drugs for further investigation, which holds the potential of accelerating therapeutic development for COVID-19.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Zheng Zhang, Lizi Liao, Xiaoyan Zhu, Tat-Seng Chua, Zitao Liu, Yan Huang, Minlie Huang
Most existing approaches for goal-oriented dialogue policy learning use reinforcement learning, which focuses on the target agent's policy and simply treats the opposite agent's policy as part of the environment.
1 code implementation • 18 Apr 2020 • Da Zheng, Xiang Song, Chao Ma, Zeyuan Tan, Zihao Ye, Jin Dong, Hao Xiong, Zheng Zhang, George Karypis
Experiments on knowledge graphs consisting of over 86M nodes and 338M edges show that DGL-KE can compute embeddings in 100 minutes on an EC2 instance with 8 GPUs and 30 minutes on an EC2 cluster with 4 machines with 48 cores/machine.
Distributed, Parallel, and Cluster Computing
no code implementations • 1 Apr 2020 • Fengling Li, Tong Wang, Lei Zhu, Zheng Zhang, Xinhua Wang
Unlike previous cross-modal hashing approaches, our learning framework jointly optimizes semantic preservation, which transforms deep features of multimedia data into binary hash codes, and semantic regression, which directly regresses the query-modality representation to explicit labels.
1 code implementation • ECCV 2020 • Bin Liu, Yue Cao, Yutong Lin, Qi Li, Zheng Zhang, Mingsheng Long, Han Hu
This paper introduces a negative margin loss to metric learning based few-shot learning methods.
1 code implementation • ECCV 2020 • Zhenda Xie, Zheng Zhang, Xizhou Zhu, Gao Huang, Stephen Lin
In the feature maps of CNNs, there commonly exists considerable spatial redundancy that leads to much repetitive processing.
no code implementations • 17 Mar 2020 • Zheng Zhang, Ryuichi Takanobu, Qi Zhu, Minlie Huang, Xiaoyan Zhu
Due to the significance and value in human-computer interaction and natural language processing, task-oriented dialog systems are attracting more and more attention in both academic and industrial communities.
2 code implementations • TACL 2020 • Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, Minlie Huang
To advance multi-domain (cross-domain) dialogue modeling as well as alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset.
1 code implementation • 14 Feb 2020 • Chenguang Wang, Zihao Ye, Aston Zhang, Zheng Zhang, Alexander J. Smola
Transformer has been widely used thanks to its ability to capture sequence information in an efficient way.
1 code implementation • ACL 2020 • Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, Minlie Huang
We present ConvLab-2, an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems.
no code implementations • 23 Jan 2020 • Zhao Zhang, Zemin Tang, Yang Wang, Zheng Zhang, Choujun Zhan, ZhengJun Zha, Meng Wang
To construct FDRN, we propose a new fast residual dense block (f-RDB) to retain the local feature fusion and local residual learning abilities of the original RDB while reducing the computational cost.
2 code implementations • ECCV 2020 • Ze Yang, Yinghao Xu, Han Xue, Zheng Zhang, Raquel Urtasun, Li-Wei Wang, Stephen Lin, Han Hu
We present a new object representation, called Dense RepPoints, that utilizes a large set of points to describe an object at multiple levels, including both box level and pixel level.
no code implementations • 13 Dec 2019 • Zhao Zhang, Zemin Tang, Zheng Zhang, Yang Wang, Jie Qin, Meng Wang
But existing CNN-based frameworks still have several drawbacks: 1) the traditional pooling operation may lose important feature information and is unlearnable; 2) the traditional convolution operation optimizes slowly and the hierarchical features from different layers are not fully utilized.
no code implementations • 13 Dec 2019 • Yan Zhang, Zhao Zhang, Zheng Zhang, Mingbo Zhao, Li Zhang, Zheng-Jun Zha, Meng Wang
In this paper, we investigate the unsupervised deep representation learning issue and technically propose a novel framework called Deep Self-representative Concept Factorization Network (DSCF-Net), for clustering deep features.
1 code implementation • 4 Dec 2019 • Ziming Liu, Zheng Zhang
Hamiltonian Monte Carlo (HMC) is an efficient Bayesian sampling method that can make distant proposals in the parameter space by simulating a Hamiltonian dynamical system.
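For context, the leapfrog integrator is the standard way HMC simulates the Hamiltonian dynamics; below is a textbook NumPy sketch (unit mass matrix, Metropolis correction omitted), not the neural-network-based variant proposed in the paper.

```python
import numpy as np

def leapfrog(theta, momentum, grad_log_prob, step_size, n_steps):
    """Simulate one Hamiltonian trajectory with the leapfrog integrator."""
    theta, momentum = theta.copy(), momentum.copy()
    momentum += 0.5 * step_size * grad_log_prob(theta)       # initial half step
    for _ in range(n_steps - 1):
        theta += step_size * momentum                        # full position step
        momentum += step_size * grad_log_prob(theta)         # full momentum step
    theta += step_size * momentum
    momentum += 0.5 * step_size * grad_log_prob(theta)       # final half step
    return theta, momentum   # a Metropolis accept/reject step would follow
```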
no code implementations • 2 Dec 2019 • Qipeng Guo, Xipeng Qiu, Pengfei Liu, Xiangyang Xue, Zheng Zhang
In this paper, we introduce the prior knowledge, multi-scale structure, into self-attention modules.
no code implementations • 20 Nov 2019 • Yulin Sun, Zhao Zhang, Weiming Jiang, Zheng Zhang, Li Zhang, Shuicheng Yan, Meng Wang
In this paper, we propose a structured Robust Adaptive Dictionary Pair Learning (RA-DPL) framework for discriminative sparse representation learning.
1 code implementation • 17 Nov 2019 • Qin Zou, Zheng Zhang, Ling Cao, Long Chen, Song Wang
Given semantic annotations such as class labels and pairwise similarities of the training data, hashing methods can learn and generate effective and compact binary codes.
no code implementations • 12 Nov 2019 • Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Mahsa Baktashmotlagh, Yang Yang
Meta-learning for few-shot learning allows a machine to leverage previously acquired knowledge as a prior, thus improving the performance on novel tasks with only small amounts of data.
2 code implementations • 11 Nov 2019 • Zihao Ye, Qipeng Guo, Quan Gan, Xipeng Qiu, Zheng Zhang
The Transformer model is widely successful on many natural language processing tasks.
Ranked #1 on Machine Translation on IWSLT2015 Chinese-English
no code implementations • 5 Nov 2019 • Zijian Wang, Zheng Zhang, Yadan Luo, Zi Huang
Existing deep hashing approaches fail to fully explore semantic correlations and neglect the effect of linguistic context on visual attention learning, leading to inferior performance.
1 code implementation • 29 Oct 2019 • Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan Oseledets, Zheng Zhang
Secondly, we propose analyzing the vulnerability of a neural network using active subspace and finding an additive universal adversarial attack vector that can misclassify a dataset with a high probability.
no code implementations • 25 Sep 2019 • Mufei Li, Hao Zhang, Xingjian Shi, Minjie Wang, Yixing Guan, Zheng Zhang
Does attention matter and, if so, when and how?
no code implementations • 18 Sep 2019 • Zheng Zhang, Ruiqing Yin, Jun Zhu, Pierre Zweigenbaum
Recent work in cross-lingual contextual word embedding learning cannot handle multi-sense words well.
no code implementations • 7 Sep 2019 • Qiong Wu, Christopher G. Brinton, Zheng Zhang, Andrea Pizzoferrato, Zhenming Liu, Mihai Cucuringu
Pricing assets has attracted significant attention from the financial technology community.
7 code implementations • 3 Sep 2019 • Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, Tianjun Xiao, Tong He, George Karypis, Jinyang Li, Zheng Zhang
Advancing research in the emerging field of deep graph learning requires new tools to support tensor computation over graphs.
Ranked #32 on Node Classification on Cora
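A minimal usage sketch of DGL (assuming a recent release with the PyTorch backend): build a graph, attach node features, and run one graph convolution layer.

```python
import dgl
import torch
from dgl.nn import GraphConv

# Toy 4-node graph with edges 0->1, 1->2, 2->3; self-loops added so every
# node has at least one incoming message.
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])), num_nodes=4)
g = dgl.add_self_loop(g)

feat = torch.randn(4, 8)      # 8-dimensional node features
conv = GraphConv(8, 16)       # one graph convolution layer
h = conv(g, feat)             # message passing -> (4, 16) node embeddings
```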
no code implementations • 21 Aug 2019 • Zhao Zhang, Lei Wang, Sheng Li, Yang Wang, Zheng Zhang, Zheng-Jun Zha, Meng Wang
Specifically, AS-LRC performs the latent decomposition of given data into a low-rank reconstruction by a block-diagonal codes matrix, a group sparse locality-adaptive salient feature part and a sparse error part.
no code implementations • 21 Aug 2019 • Zhao Zhang, Yulin Sun, Zheng Zhang, Yang Wang, Guangcan Liu, Meng Wang
In this setting, our TP-DPL integrates the twin-incoherence based latent flexible DPL and the joint embedding of codes as well as salient features by twin-projection into a unified model in an adaptive neighborhood-preserving manner.
no code implementations • 1 Aug 2019 • Yadan Luo, Zi Huang, Zheng Zhang, Ziwei Wang, Jingjing Li, Yang Yang
Visual paragraph generation aims to automatically describe a given image from different perspectives and organize sentences in a coherent way.
no code implementations • 28 Jun 2019 • Kaiqi Zhang, Xiyuan Zhang, Zheng Zhang
This paper presents a hardware accelerator for a classical tensor computation framework, Tucker decomposition.
Signal Processing • Hardware Architecture
146 code implementations • 17 Jun 2019 • Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, Dahua Lin
In this paper, we introduce the various features of this toolbox.
no code implementations • 11 Jun 2019 • Zhao Zhang, Jiahuan Ren, Weiming Jiang, Zheng Zhang, Richang Hong, Shuicheng Yan, Meng Wang
We propose a joint subspace recovery and enhanced locality based robust flexible label consistent dictionary learning method called Robust Flexible Discriminative Dictionary Learning (RFDDL).
no code implementations • 25 May 2019 • Zhao Zhang, Weiming Jiang, Zheng Zhang, Sheng Li, Guangcan Liu, Jie Qin
More importantly, LC-PDL avoids using the complementary data matrix to learn the sub-dictionary over each class.
1 code implementation • 24 May 2019 • Cole Hawkins, Zheng Zhang
Tensor decomposition is an effective approach to compress over-parameterized neural networks and to enable their deployment on resource-constrained hardware platforms.
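The paper concerns tensor decompositions with automatic rank selection; as a simpler stand-in that illustrates the compression idea, here is a truncated-SVD sketch that factorizes one dense weight matrix into two low-rank factors. This is not the paper's method.

```python
import numpy as np

def low_rank_compress(weight, rank):
    """Factor an (m, n) weight matrix into (m, r) and (r, n) pieces,
    reducing storage from m*n to r*(m + n) parameters."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]

w = np.random.randn(512, 1024)
a, b = low_rank_compress(w, rank=32)
rel_error = np.linalg.norm(w - a @ b) / np.linalg.norm(w)
```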
no code implementations • ICLR 2019 • Yu Gai, Zheng Zhang, Kyunghyun Cho
Many important classification performance metrics, e.g., $F$-measure, are non-differentiable and non-decomposable, and are thus unfriendly to gradient-descent algorithms.
no code implementations • ICCV 2019 • Jiarui Xu, Yue Cao, Zheng Zhang, Han Hu
Recent progress in multiple object tracking (MOT) has shown that a robust similarity score is key to the success of trackers.
3 code implementations • ICCV 2019 • Han Hu, Zheng Zhang, Zhenda Xie, Stephen Lin
The convolution layer has been the dominant feature extractor in computer vision for years.
Ranked #738 on Image Classification on ImageNet
2 code implementations • ACL 2019 • Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Xiang Li, Yaoqin Zhang, Zheng Zhang, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, Jianfeng Gao
We present ConvLab, an open-source multi-domain end-to-end dialog system platform, that enables researchers to quickly set up experiments with reusable components and compare a large set of different approaches, ranging from conventional pipeline systems to end-to-end neural models, in common environments.
1 code implementation • ICCV 2019 • Xizhou Zhu, Dazhi Cheng, Zheng Zhang, Stephen Lin, Jifeng Dai
Attention mechanisms have become a popular component in deep neural networks, yet there has been little examination of how different influencing factors and methods for computing attention from these factors affect performance.