no code implementations • 11 May 2023 • Jia Li, Ge Li, Yongmin Li, Zhi Jin
A large-scale study revealed that writing programs requires programming thinking, i.e., analyzing and implementing requirements in programming logic (e.g., sequence, branch, loop).
no code implementations • 6 May 2023 • Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin
Inspired by the process of human programming, we propose a generate-and-edit approach named Self-Edit that utilizes execution results of the generated code from LLMs to improve the code quality on the competitive programming task.
no code implementations • 31 Mar 2023 • Jia Li, YunFei Zhao, Yongmin Li, Ge Li, Zhi Jin
In-context learning (ICL) with pre-trained language models (PTLMs) has shown great success in code generation.
1 code implementation • 14 Mar 2023 • Kechi Zhang, Zhuo Li, Zhi Jin, Ge Li
Furthermore, we propose the Hierarchy Transformer (HiT), a simple but effective sequence model to incorporate the complete hierarchical embeddings of source code into a Transformer model.
no code implementations • 5 Mar 2023 • Xiaodan Xi, Ge Li, Ye Wang, Yeonsoo Jeon, Michael Orshansky
We construct lattice PUF with a physically obfuscated key and an LWE decryption function block.
1 code implementation • ICLR 2023 • Ruyang Liu, Jingjia Huang, Ge Li, Thomas H. Li
Visual attention does not always capture the essential object representation desired for robust predictions.
Ranked #1 on Multi-Label Image Classification on MSCOCO
1 code implementation • CVPR 2023 • Ruyang Liu, Jingjia Huang, Ge Li, Jiashi Feng, Xinglong Wu, Thomas H. Li
In this paper, based on the CLIP model, we revisit temporal modeling in the context of image-to-video knowledge transferring, which is the key point for extending image-text pretrained models to the video domain.
Ranked #5 on Video Retrieval on MSR-VTT-1kA (using extra training data)
no code implementations • CVPR 2023 • Nan Zhang, Zhiyi Pan, Thomas H. Li, Wei Gao, Ge Li
Recently, self-attention networks achieve impressive performance in point cloud segmentation due to their superiority in modeling long-range dependencies.
no code implementations • CVPR 2023 • Rui Song, Chunyang Fu, Shan Liu, Ge Li
Learning an accurate entropy model is a fundamental way to remove the redundancy in point cloud compression.
1 code implementation • 3 Nov 2022 • Haojie Zhang, Ge Li, Jia Li, Zhongjin Zhang, Yuqi Zhu, Zhi Jin
Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently.
no code implementations • 2 Nov 2022 • Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin
CodePAD can leverage existing sequence-based models, and we show that it achieves a 100% grammatical correctness rate on these benchmark datasets.
no code implementations • 31 Oct 2022 • Jia Li, Zhuo Li, Huangzhao Zhang, Ge Li, Zhi Jin, Xing Hu, Xin Xia
The attackers aim to inject insidious backdoors into models by poisoning the training data with poison samples.
no code implementations • 31 Oct 2022 • Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu
Pre-trained models are first pre-trained with pre-training tasks and fine-tuned with the code editing task.
1 code implementation • 11 Oct 2022 • Xingyu Chen, Thomas H. Li, Ruonan Zhang, Ge Li
We present two versatile methods to generally enhance self-supervised monocular depth estimation (MDE) models.
no code implementations • 4 Oct 2022 • Ge Li, Zeqi Jin, Michael Volpp, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann
MPs can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic Movement Primitives (DMPs), and (b) probabilistic approaches that capture higher-order statistics of the motion, e.g., Probabilistic Movement Primitives (ProMPs).
1 code implementation • 2 Oct 2022 • Xingyu Chen, Ruonan Zhang, Ji Jiang, Yan Wang, Ge Li, Thomas H. Li
In this paper, we redesign the patch-based triplet loss in MDE to alleviate the ubiquitous edge-fattening issue.
Ranked #1 on Unsupervised Monocular Depth Estimation on Kitti Raw
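As an illustration only (shapes, sampling, and margin are assumptions, not the paper's exact design), a patch-level triplet loss of this kind pulls each anchor patch's feature closer to its positive patch than to its negative patch by at least a margin:

```python
import numpy as np

def patch_triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                       negative: np.ndarray, margin: float = 1.0) -> float:
    """anchor/positive/negative: (N, D) arrays of patch features.
    Returns the mean hinge-style triplet loss over the N patch triplets."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # distance to positive patch
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # distance to negative patch
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

The loss is zero once every negative is farther from the anchor than the positive by the margin, so gradients focus on the hard, edge-fattening-prone patches.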
no code implementations • 22 Aug 2022 • Yihong Dong, Ge Li, Zhi Jin
To evaluate the effectiveness of our proposed loss, we implement and train an Antecedent Prioritized Tree-based code generation model called APT.
no code implementations • 22 Aug 2022 • Sijie Shen, Xiang Zhu, Yihong Dong, Qizhi Guo, Yankun Zhen, Ge Li
However, in some domain-specific scenarios, building such a large paired corpus for code generation is difficult because no pairing data is directly available, and considerable manual effort is required to write code descriptions to construct a high-quality training dataset.
no code implementations • 18 Aug 2022 • Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu
Learning vector representations for programs is a critical step in applying deep learning techniques for program understanding tasks.
no code implementations • 11 Aug 2022 • Jia-Xin Zhuang, Xiansong Huang, Yang Yang, Jiancong Chen, Yue Yu, Wei Gao, Ge Li, Jie Chen, Tong Zhang
In this paper, we present OpenMedIA, an open-source toolbox library containing a rich set of deep learning methods for medical image analysis under heterogeneous Artificial Intelligence (AI) computing platforms.
1 code implementation • 25 Jul 2022 • Songlin Fan, Wei Gao, Ge Li
This paper investigates the previously unexplored task of point cloud salient object detection (SOD).
no code implementations • 18 Jul 2022 • Kechi Zhang, Ge Li, Zhi Jin
In the field of source code processing, transformer-based representation models have shown great power and have achieved state-of-the-art (SOTA) performance in many tasks.
1 code implementation • 29 May 2022 • Shangkun Sun, Yuanqi Chen, Yu Zhu, Guodong Guo, Ge Li
In this paper, we propose the Super Kernel Flow Network (SKFlow), a CNN architecture to ameliorate the impacts of occlusions on optical flow estimation.
1 code implementation • 19 May 2022 • Yang Xiang, Zhihua Wu, Weibao Gong, Siyu Ding, Xianjie Mo, Yuang Liu, Shuohuan Wang, Peng Liu, Yongshuai Hou, Long Li, Bin Wang, Shaohuai Shi, Yaqian Han, Yue Yu, Ge Li, Yu Sun, Yanjun Ma, dianhai yu
We took natural language processing (NLP) as an example to show how Nebula-I works in different training phases that include: a) pre-training a multilingual language model using two remote clusters; and b) fine-tuning a machine translation model using knowledge distilled from pre-trained models, which run through the most popular paradigm of recent deep learning.
1 code implementation • 29 Apr 2022 • Xiaoqing Fan, Ge Li, Dingquan Li, Yurui Ren, Wei Gao, Thomas H. Li
Point cloud compression plays a crucial role in reducing the huge cost of data storage and transmission.
1 code implementation • CVPR 2022 • Wenbo Zhao, Xianming Liu, Zhiwei Zhong, Junjun Jiang, Wei Gao, Ge Li, Xiangyang Ji
Most existing methods either adopt an end-to-end supervised learning manner, where large numbers of sparse-input and dense-ground-truth pairs are exploited as supervision information, or treat upscaling with different scale factors as independent tasks and have to build multiple networks to handle upsampling with varying factors.
1 code implementation • CVPR 2022 • Yurui Ren, Xiaoqing Fan, Ge Li, Shan Liu, Thomas H. Li
Our model is trained to predict human images in arbitrary poses, which encourages it to extract disentangled and expressive neural textures representing the appearance of different semantic entities.
1 code implementation • 12 Feb 2022 • Xianghao Zang, Ge Li, Wei Gao
To fuse multi-scale feature representation, this paper presents a pyramid structure containing global-level information and many pieces of local-level information from different scales.
1 code implementation • 12 Feb 2022 • Chunyang Fu, Ge Li, Rui Song, Wei Gao, Shan Liu
In point cloud compression, sufficient contexts are significant for modeling the point cloud distribution.
1 code implementation • 25 Jan 2022 • Sicen Liu, Xiaolong Wang, Yongshuai Hou, Ge Li, Hui Wang, Hui Xu, Yang Xiang, Buzhou Tang
As two important textual modalities in electronic health records (EHR), both structured data (clinical codes) and unstructured data (clinical narratives) have recently been increasingly applied to the healthcare domain.
no code implementations • 11 Jan 2022 • Zhengying Liu, Adrien Pavao, Zhen Xu, Sergio Escalera, Fabio Ferreira, Isabelle Guyon, Sirui Hong, Frank Hutter, Rongrong Ji, Julio C. S. Jacques Junior, Ge Li, Marius Lindauer, Zhipeng Luo, Meysam Madadi, Thomas Nierhoff, Kangning Niu, Chunguang Pan, Danny Stoll, Sebastien Treguer, Jin Wang, Peng Wang, Chenglin Wu, Youcheng Xiong, Arber Zela, Yang Zhang
Code submissions were executed on hidden tasks with limited time and computational resources, favoring solutions that obtain results quickly.
no code implementations • CVPR 2022 • Ruyang Liu, Hao liu, Ge Li, Haodi Hou, TingHao Yu, Tao Yang
As a common problem in the visual world, contextual bias means the recognition may depend on the co-occurrence context rather than the objects themselves, which is even more severe in multi-label tasks due to multiple targets and the absence of location.
Ranked #9 on Multi-Label Classification on MS-COCO
3 code implementations • 23 Dec 2021 • Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, Weibao Gong, Shikun Feng, Junyuan Shang, Yanbin Zhao, Chao Pang, Jiaxiang Liu, Xuyi Chen, Yuxiang Lu, Weixin Liu, Xi Wang, Yangfan Bai, Qiuliang Chen, Li Zhao, Shiyong Li, Peng Sun, dianhai yu, Yanjun Ma, Hao Tian, Hua Wu, Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge-enhanced models, and with it a model with 10 billion parameters was trained.
2 code implementations • 16 Dec 2021 • Yuxuan Yi, Ge Li, YaoWei Wang, Zongqing Lu
Inspired by the fact that sharing plays a key role in human's learning of cooperation, we propose LToS, a hierarchically decentralized MARL framework that enables agents to learn to dynamically share reward with neighbors so as to encourage agents to cooperate on the global objective through collectives.
1 code implementation • 8 Dec 2021 • Onur Celik, Dongzhuoran Zhou, Ge Li, Philipp Becker, Gerhard Neumann
This local and incremental learning results in a modular MoE model of high accuracy and versatility, where both properties can be scaled by adding more components on the fly.
1 code implementation • NeurIPS 2021 • Han Peng, Ge Li, Wenhan Wang, YunFei Zhao, Zhi Jin
Learning distributed representation of source code requires modelling its syntax and semantics.
no code implementations • 20 Nov 2021 • Zhehao Zhao, Bo Yang, Ge Li, Huai Liu, Zhi Jin
Based on that, we also designed a neural network that relies on the graph attention mechanism. Specifically, we introduced the syntactic structure of the basic block, i.e., its corresponding AST, into the source code model to provide sufficient information and fill the gap.
1 code implementation • 10 Nov 2021 • Xianghao Zang, Ge Li, Wei Gao, Xiujun Shu
In this way, the complex scenes in the ReID task are effectively disentangled, and the burden of each branch is relieved.
Ranked #2 on Person Re-Identification on P-DukeMTMC-reID
no code implementations • 9 Nov 2021 • Ziyi Liu, JiaQi Zhang, Yongshuai Hou, Xinran Zhang, Ge Li, Yang Xiang
Background: Electronic Health Records (EHRs) contain rich information of patients' health history, which usually include both structured and unstructured data.
1 code implementation • 9 Nov 2021 • Xianghao Zang, Ge Li, Wei Gao, Xiujun Shu
A local-aware module is employed to explore the potential of local-level features for unsupervised learning.
Ranked #1 on Unsupervised Person Re-Identification on PRID2011
1 code implementation • ICCV 2021 • Yurui Ren, Ge Li, Yuanqi Chen, Thomas H. Li, Shan Liu
The proposed model can generate photo-realistic portrait images with accurate movements according to intuitive modifications.
no code implementations • 28 Aug 2021 • Ge Li, Mohit Tiwari, Michael Orshansky
Spatial accelerators, which parallelize matrix/vector operations, are used to enhance the energy efficiency of DNN computation.
no code implementations • 4 Aug 2021 • Yurui Ren, Yubo Wu, Thomas H. Li, Shan Liu, Ge Li
Pose-guided person image synthesis aims to synthesize person images by transforming reference images into target poses.
1 code implementation • 24 Jul 2021 • Xiujun Shu, Ge Li, Xiao Wang, Weijian Ruan, Qi Tian
The key to this task is to exploit cloth-irrelevant cues.
1 code implementation • 31 May 2021 • Xiujun Shu, Xiao Wang, Xianghao Zang, Shiliang Zhang, Yuanqi Chen, Ge Li, Qi Tian
We also verified that models pre-trained on LaST can generalize well on existing datasets with short-term and cloth-changing scenarios.
no code implementations • 23 Apr 2021 • Cece Jin, Yuanqi Chen, Ge Li, Tao Zhang, Thomas Li
This paper aims to verify the existence of aliasing in TAL methods and investigates the use of low-pass filters to solve this problem by suppressing the high-frequency band.
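The anti-aliasing idea can be illustrated with a small sketch (assumptions of ours, not the paper's implementation): blur along the temporal axis with a simple low-pass kernel before downsampling, so content above the new Nyquist rate is attenuated instead of folding back as aliasing.

```python
import numpy as np

def lowpass_downsample(x: np.ndarray, stride: int = 2) -> np.ndarray:
    """x: (T, C) temporal feature sequence.
    Blur each channel with a [1, 2, 1]/4 low-pass kernel (edge padding),
    then subsample every `stride` frames."""
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    padded = np.pad(x, ((1, 1), (0, 0)), mode="edge")
    blurred = np.stack(
        [np.convolve(padded[:, c], kernel, mode="valid") for c in range(x.shape[1])],
        axis=1,
    )
    return blurred[::stride]
```

A constant signal passes through unchanged, while a frame-rate-alternating signal (pure high frequency) is strongly attenuated before the stride-2 subsampling can alias it.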
3 code implementations • 9 Feb 2021 • Shuai Lu, Daya Guo, Shuo Ren, JunJie Huang, Alexey Svyatkovskiy, Ambrosio Blanco, Colin Clement, Dawn Drain, Daxin Jiang, Duyu Tang, Ge Li, Lidong Zhou, Linjun Shou, Long Zhou, Michele Tufano, Ming Gong, Ming Zhou, Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
Benchmark datasets have a significant impact on accelerating research in programming language tasks.
Ranked #1 on Cloze Test on CodeXGLUE - CT-maxmin
no code implementations • ICCV 2021 • Munan Xu, Yuanqi Chen, Shan Liu, Thomas H. Li, Ge Li
Pose-guided virtual try-on task aims to modify the fashion item based on pose transfer task.
1 code implementation • 10 Dec 2020 • Yuanqi Chen, Ge Li, Cece Jin, Shan Liu, Thomas Li
This issue makes the generator lack the incentive from the discriminator to learn high-frequency content of data, resulting in a significant spectrum discrepancy between generated images and real images.
no code implementations • 8 Dec 2020 • Kechi Zhang, Wenhan Wang, Huangzhao Zhang, Ge Li, Zhi Jin
To address the information of node and edge types, we bring the idea of heterogeneous graphs to learning on source code and present a new formula of building heterogeneous program graphs from ASTs with additional type information for nodes and edges.
no code implementations • 29 Oct 2020 • Yueru Chen, Yiting shao, Jing Wang, Ge Li, C.-C. Jay Kuo
Inspired by the recently proposed successive subspace learning (SSL) principles, we develop a successive subspace graph transform (SSGT) to address point cloud attribute compression in this work.
1 code implementation • 9 Oct 2020 • Bolin Wei, Yongmin Li, Ge Li, Xin Xia, Zhi Jin
Inspired by the IR-based and template-based approaches, in this paper, we propose a neural comment generation approach where we use the existing comments of similar code snippets as exemplars to guide comment generation.
no code implementations • 18 Sep 2020 • Wenhan Wang, Sijie Shen, Ge Li, Zhi Jin
In this paper, we take a further step and discuss the possibility of directly completing a whole line of code instead of a single token.
1 code implementation • 27 Aug 2020 • Yurui Ren, Ge Li, Shan Liu, Thomas H. Li
We show that our framework can spatially transform the inputs in an efficient manner.
1 code implementation • 28 Jul 2020 • Yuanqi Chen, Xiaoming Yu, Shan Liu, Ge Li
Recent studies have shown remarkable success in unsupervised image-to-image translation.
no code implementations • 17 Jul 2020 • Zhipeng Luo, Ge Li, Zhiguang Zhang
This paper is a brief report to our submission to the VIPriors Image Classification Challenge.
no code implementations • 11 Mar 2020 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
The GID block breaks through local neighborhoods and captures long-range dependencies of pixels at both the global level and the instance level across the scene, helping to detect interactions between instances.
2 code implementations • CVPR 2020 • Yurui Ren, Xiaoming Yu, Junming Chen, Thomas H. Li, Ge Li
Finally, we warp the source features using a content-aware sampling method with the obtained local attention coefficients.
1 code implementation • 20 Feb 2020 • Wenhan Wang, Ge Li, Bo Ma, Xin Xia, Zhi Jin
To the best of our knowledge, we are the first to apply graph neural networks to the domain of code clone detection.
1 code implementation • 26 Jan 2020 • Wenjie Zhang, Zeyu Sun, Qihao Zhu, Ge Li, Shaowei Cai, Yingfei Xiong, Lu Zhang
However, in this method, the initialization is assigned in a random manner, which impacts the effectiveness of SLS solvers.
1 code implementation • 8 Nov 2019 • Wei-Hong Lin, Jia-Xing Zhong, Shan Liu, Thomas Li, Ge Li
Generic object detection algorithms have proven their excellent performance in recent years.
no code implementations • 30 Oct 2019 • Munan Xu, Junming Chen, Haiqiang Wang, Shan Liu, Ge Li, Zhiqiang Bai
However, video quality exhibits different characteristics from static image quality due to the existence of temporal masking effects.
2 code implementations • NeurIPS 2019 • Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, Zhi Jin
Code summarization (CS) and code generation (CG) are two crucial tasks in the field of automatic software development.
1 code implementation • NeurIPS 2019 • Xiaoming Yu, Yuanqi Chen, Thomas Li, Shan Liu, Ge Li
Recent advances of image-to-image translation focus on learning the one-to-many mapping from two aspects: multi-modal translation and multi-domain translation.
no code implementations • 16 Sep 2019 • Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, Zhi Jin
To enable the knowledge sharing between related tasks, we creatively propose a Multi-Task Learning (MTL) framework to learn two related tasks in code completion jointly.
no code implementations • 19 Aug 2019 • Dongming Yang, Yuexian Zou, Jian Zhang, Ge Li
Although two-stage detectors like Faster R-CNN have achieved big successes in object detection thanks to the strategy of extracting region proposals with a region proposal network, they show poor adaptation in real-world object detection because they do not consider mining hard samples when extracting region proposals.
1 code implementation • ICCV 2019 • Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H. Li, Shan Liu, Ge Li
Image inpainting techniques have shown significant improvements by using deep neural networks recently.
1 code implementation • 28 Jun 2019 • Zhangheng Li, Jia-Xing Zhong, Jingjia Huang, Tao Zhang, Thomas Li, Ge Li
In recent years, memory-augmented neural networks (MANNs) have shown promising power to enhance the memory ability of neural networks for sequential processing tasks.
no code implementations • 18 Apr 2019 • Wei Yan, Yiting shao, Shan Liu, Thomas H. Li, Zhu Li, Ge Li
Point cloud is a fundamental 3D representation which is widely used in real world applications such as autonomous driving.
1 code implementation • CVPR 2019 • Jia-Xing Zhong, Nannan Li, Weijie Kong, Shan Liu, Thomas H. Li, Ge Li
Remarkably, we obtain a frame-level AUC score of 82.12% on UCF-Crime.
no code implementations • 24 Feb 2019 • Yiwei Zhang, Chunbiao Zhu, Ge Li, Yuan Zhao, Haifeng Shen
A fast and effective motion deblurring method has great application values in real life.
1 code implementation • 14 Nov 2018 • Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang
In this paper, we propose a grammar-based structural convolutional neural network (CNN) for code generation.
no code implementations • 6 Nov 2018 • Weijie Kong, Nannan Li, Shan Liu, Thomas Li, Ge Li
Despite tremendous progress achieved in temporal action detection, state-of-the-art methods still suffer from the sharp performance deterioration when localizing the starting and ending temporal action boundaries.
1 code implementation • 11 Oct 2018 • Xiaoming Yu, Xing Cai, Zhenqiang Ying, Thomas Li, Ge Li
Besides, we explore variants of SingleGAN for different tasks, including one-to-many domain translation, many-to-many domain translation and one-to-one domain translation with multimodality.
1 code implementation • 30 Sep 2018 • Sangkug Lym, Armand Behroozi, Wei Wen, Ge Li, Yongkee Kwon, Mattan Erez
Training convolutional neural networks (CNNs) requires intense computations and high memory bandwidth.
no code implementations • 27 Sep 2018 • Zhangheng Li, Jia-Xing Zhong, Jingjia Huang, Tao Zhang, Thomas Li, Ge Li
Processing sequential data with long-term dependencies and learning complex transitions are two major challenges in many deep learning applications.
no code implementations • 9 Jul 2018 • Jia-Xing Zhong, Nannan Li, Weijie Kong, Tao Zhang, Thomas H. Li, Ge Li
Weakly supervised temporal action detection is a Herculean task in understanding untrimmed videos, since no supervisory signal except the video-level category label is available in the training data.
no code implementations • 26 Jun 2018 • Xiaoming Yu, Zhenqiang Ying, Thomas Li, Shan Liu, Ge Li
Recent advances in image-to-image translation have seen a rise in approaches generating diverse images through a single network.
no code implementations • 14 May 2018 • Chunbiao Zhu, Wen-Hao Zhang, Thomas H. Li, Ge Li
In this paper, we propose a novel salient object detection algorithm for RGB-D images using center-dark channel priors.
no code implementations • 13 May 2018 • Xiaochen Li, He Jiang, Zhilei Ren, Ge Li, Jing-Xuan Zhang
To answer these questions, we conduct a bibliography analysis on 98 research papers in SE that use deep learning techniques.
no code implementations • 28 Apr 2018 • Yiting Shao, Qi Zhang, Ge Li, Zhu Li
In intra-frame compression of point cloud color attributes, results demonstrate that our method outperforms the state-of-the-art region-adaptive hierarchical transform (RAHT) system, achieving an average BD-rate gain of 29.37%.
no code implementations • 26 Mar 2018 • Chunbiao Zhu, Ge Li
In this paper, we propose a multilayer backpropagation saliency detection algorithm based on depth mining by which we exploit depth cue from three different layers of images.
2 code implementations • 23 Mar 2018 • Chunbiao Zhu, Xing Cai, Kan Huang, Thomas H. Li, Ge Li
One is the lack of tremendous amount of annotated data to train a network.
no code implementations • 6 Dec 2017 • Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, Zhi Jin
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies?
no code implementations • 2 Nov 2017 • Zhenqiang Ying, Ge Li, Wen Gao
Inspired by human visual system, we design a multi-exposure fusion framework for low-light image enhancement.
no code implementations • 1 Nov 2017 • Kan Huang, Chunbiao Zhu, Ge Li
Automatic salient object detection has received tremendous attention from the research community and has become an increasingly important tool in many computer vision tasks.
no code implementations • 10 Oct 2017 • Chunbiao Zhu, Kan Huang, Ge Li
In this paper, we propose a novel bottom-up salient object detection framework for panoramic images.
no code implementations • 3 Aug 2017 • Zhenqiang Ying, Ge Li, Sixin Wen, Guozhen Tan
This paper is concerned with the detection and correction of the offset between the intersection and origin.
1 code implementation • 22 Jun 2017 • Jingjia Huang, Nannan Li, Tao Zhang, Ge Li
Existing action detection algorithms usually generate action proposals through an extensive search over the video at multiple temporal scales, which brings about huge computational overhead and deviates from the human perception procedure.
1 code implementation • ACL 2016 • Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, Zhi Jin
Such approaches are time- and memory-intensive because of the large numbers of parameters for word embeddings and the output layer.
no code implementations • 6 Oct 2016 • Wenhao Huang, Ge Li, Zhi Jin
Knowledge base completion aims to infer new relations from existing information.
no code implementations • 23 Aug 2016 • Nannan Li, Dan Xu, Zhenqiang Ying, Zhihao LI, Ge Li
In this paper, we address the problem of searching action proposals in unconstrained video clips.
no code implementations • COLING 2016 • Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Using neural networks to generate replies in human-computer dialogue systems has attracted increasing attention over the past few years.
no code implementations • EMNLP 2016 • Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
Transfer learning aims to make use of valuable knowledge in a source domain to help improve model performance in a target domain.
no code implementations • COLING 2016 • Yan Xu, Ran Jia, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks).
Ranked #2 on Relation Classification on SemEval 2010 Task 8
no code implementations • ACL 2016 • Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, Zhi Jin
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences.
Ranked #88 on Natural Language Inference on SNLI
no code implementations • 21 Dec 2015 • Lili Mou, Rui Yan, Ge Li, Lu Zhang, Zhi Jin
Provided a specific word, we use RNNs to generate previous words and future words, either simultaneously or asynchronously, resulting in two model variants.
no code implementations • 25 Oct 2015 • Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin
This paper envisions an end-to-end program generation scenario using recurrent neural networks (RNNs): users can express their intention in natural language; an RNN then automatically generates corresponding code in a character-by-character fashion.
no code implementations • EMNLP 2015 • Hao Peng, Lili Mou, Ge Li, Yunchuan Chen, Yangyang Lu, Zhi Jin
This paper aims to compare different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP.
no code implementations • 15 Aug 2015 • Xu Yan, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, Zhi Jin
Relation classification is an important research arena in the field of natural language processing (NLP).
Ranked #4 on Relation Classification on SemEval 2010 Task 8
no code implementations • 15 Jun 2015 • Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin
Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight neural networks with high performance are particularly in need in various resource-restricted systems.
no code implementations • EMNLP 2015 • Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
This paper proposes a tree-based convolutional neural network (TBCNN) for discriminative sentence modeling.
Ranked #6 on Text Classification on TREC-6
7 code implementations • 18 Sep 2014 • Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin
Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also aroused growing interest in the artificial intelligence community.
1 code implementation • 11 Sep 2014 • Lili Mou, Ge Li, Yuxuan Liu, Hao Peng, Zhi Jin, Yan Xu, Lu Zhang
In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis.