no code implementations • EAMT 2020 • Hao Yang, Minghan Wang, Ning Xie, Ying Qin, Yao Deng
Compared with the commonly used NuQE baseline, BAL-QE achieves performance improvements of 47% (En-Ru) and 75% (En-De).
no code implementations • IWSLT (ACL) 2022 • Zongyao Li, Jiaxin Guo, Daimeng Wei, Hengchao Shang, Minghan Wang, Ting Zhu, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Lizhi Lei, Hao Yang, Ying Qin
This paper presents our submissions to the IWSLT 2022 Isometric Spoken Language Translation task.
no code implementations • IWSLT (ACL) 2022 • Jiaxin Guo, Yinglu Li, Minghan Wang, Xiaosong Qiao, Yuxia Wang, Hengchao Shang, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
The paper presents HW-TSC's pipeline and results for the Offline Speech to Speech Translation task at IWSLT 2022.
no code implementations • IWSLT (ACL) 2022 • Minghan Wang, Jiaxin Guo, Yinglu Li, Xiaosong Qiao, Yuxia Wang, Zongyao Li, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
The cascade system is composed of a chunking-based streaming ASR model and the SimulMT model used in the T2T track.
no code implementations • IWSLT (ACL) 2022 • Minghan Wang, Jiaxin Guo, Xiaosong Qiao, Yuxia Wang, Daimeng Wei, Chang Su, Yimeng Chen, Min Zhang, Shimin Tao, Hao Yang, Ying Qin
For the machine translation part, we pretrained three translation models on the WMT21 dataset and fine-tuned them on in-domain corpora.
no code implementations • WMT (EMNLP) 2021 • Zhengzhe Yu, Daimeng Wei, Zongyao Li, Hengchao Shang, Xiaoyu Chen, Zhanglin Wu, Jiaxin Guo, Minghan Wang, Lizhi Lei, Min Zhang, Hao Yang, Ying Qin
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 Large-Scale Multilingual Translation Task.
no code implementations • WMT (EMNLP) 2021 • Zongyao Li, Daimeng Wei, Hengchao Shang, Xiaoyu Chen, Zhanglin Wu, Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Lizhi Lei, Min Zhang, Hao Yang, Ying Qin
This paper presents the submission of Huawei Translation Service Center (HW-TSC) to WMT 2021 Triangular MT Shared Task.
no code implementations • WMT (EMNLP) 2021 • Daimeng Wei, Zongyao Li, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Hengchao Shang, Jiaxin Guo, Minghan Wang, Lizhi Lei, Min Zhang, Hao Yang, Ying Qin
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 News Translation Shared Task.
no code implementations • INLG (ACL) 2021 • Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Daimeng Wei, Min Zhang, Shimin Tao, Hao Yang
Mask-predict CMLM (Ghazvininejad et al., 2019) has achieved stunning performance among non-autoregressive NMT models, but we find that the mechanism of predicting all of the target words depending only on the hidden state of [MASK] is neither effective nor efficient in the initial iterations of refinement, resulting in ungrammatical repetitions and slow convergence.
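As a rough illustration of the mask-predict refinement loop this abstract critiques, here is a minimal Python sketch; the `model` callable and the linear mask-ratio schedule are assumptions made for exposition, not the authors' implementation.

```python
import numpy as np

MASK = 0  # hypothetical mask token id

def mask_predict(model, length, iterations=4):
    """Iterative mask-predict decoding (Ghazvininejad et al., 2019), sketched.

    `model(tokens)` is assumed to return (pred_ids, pred_probs): the argmax
    token id and its probability at every position. Illustration only.
    """
    tokens = np.full(length, MASK, dtype=np.int64)   # start fully masked
    probs = np.zeros(length)
    for t in range(iterations):
        pred_ids, pred_probs = model(tokens)
        masked = tokens == MASK                      # only fill masked slots
        tokens = np.where(masked, pred_ids, tokens)
        probs = np.where(masked, pred_probs, probs)
        n_mask = int(length * (1 - (t + 1) / iterations))  # linearly decay re-masking
        if n_mask == 0:
            break
        remask = np.argsort(probs)[:n_mask]          # least confident positions
        tokens[remask] = MASK
        probs[remask] = 0.0
    return tokens
```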
no code implementations • EMNLP (BlackboxNLP) 2021 • Minghan Wang, Guo Jiaxin, Yuxia Wang, Yimeng Chen, Su Chang, Hengchao Shang, Min Zhang, Shimin Tao, Hao Yang
Length prediction is a special task in a series of NAT models where target length has to be determined before generation.
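For context, a toy length-prediction head of the kind NAT systems typically use is sketched below; pooling the encoder states and classifying a bounded length offset is a common recipe and an assumption here, not necessarily this paper's formulation.

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Toy length-prediction head for a NAT model (illustrative only)."""

    def __init__(self, d_model=512, max_offset=50):
        super().__init__()
        self.max_offset = max_offset
        self.proj = nn.Linear(d_model, 2 * max_offset + 1)

    def forward(self, enc_states, src_mask):
        # enc_states: (batch, src_len, d_model); src_mask: (batch, src_len) bool
        mask = src_mask.unsqueeze(-1).float()
        pooled = (enc_states * mask).sum(1) / mask.sum(1).clamp(min=1.0)
        logits = self.proj(pooled)                  # classify offsets in [-max, +max]
        offset = logits.argmax(-1) - self.max_offset
        tgt_len = src_mask.sum(-1) + offset         # predicted target length
        return logits, tgt_len.clamp(min=1)
```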
no code implementations • WMT (EMNLP) 2020 • Minghan Wang, Hao Yang, Hengchao Shang, Daimeng Wei, Jiaxin Guo, Lizhi Lei, Ying Qin, Shimin Tao, Shiliang Sun, Yimeng Chen, Liangyou Li
This paper presents our work in the WMT 2020 Word and Sentence-Level Post-Editing Quality Estimation (QE) Shared Task.
no code implementations • WMT (EMNLP) 2020 • Wei Peng, Jianfeng Liu, Minghan Wang, Liangyou Li, Xupeng Meng, Hao Yang, Qun Liu
This paper describes Huawei’s submissions to the WMT20 biomedical translation shared task.
no code implementations • WMT (EMNLP) 2020 • Hao Yang, Minghan Wang, Daimeng Wei, Hengchao Shang, Jiaxin Guo, Zongyao Li, Lizhi Lei, Ying Qin, Shimin Tao, Shiliang Sun, Yimeng Chen
The paper presents the submission by HW-TSC in the WMT 2020 Automatic Post Editing Shared Task.
no code implementations • WMT (EMNLP) 2020 • Daimeng Wei, Hengchao Shang, Zhanglin Wu, Zhengzhe Yu, Liangyou Li, Jiaxin Guo, Minghan Wang, Hao Yang, Lizhi Lei, Ying Qin, Shiliang Sun
We also conduct experiments with similar-language augmentation, which lead to positive results, although they are not used in our submission.
no code implementations • AACL (WAT) 2020 • Zhengzhe Yu, Zhanglin Wu, Xiaoyu Chen, Daimeng Wei, Hengchao Shang, Jiaxin Guo, Zongyao Li, Minghan Wang, Liangyou Li, Lizhi Lei, Hao Yang, Ying Qin
This paper describes our work in the WAT 2020 Indic Multilingual Translation Task.
no code implementations • EAMT 2020 • Minghan Wang, Hao Yang, Ying Qin, Shiliang Sun, Yao Deng
We propose a unified multilingual model for humor detection which can be trained under a transfer learning framework.
no code implementations • MTSummit 2021 • Minghan Wang, Jiaxin Guo, Yimeng Chen, Chang Su, Min Zhang, Shimin Tao, Hao Yang
The tendency of large-scale pretrained networks to easily overfit the limited labelled training data of multimodal translation (MMT) is a critical issue in MMT.
no code implementations • Findings (ACL) 2022 • Yuxia Wang, Minghan Wang, Yimeng Chen, Shimin Tao, Jiaxin Guo, Chang Su, Min Zhang, Hao Yang
Natural Language Inference (NLI) datasets contain examples with highly ambiguous labels due to their subjectivity.
no code implementations • WMT (EMNLP) 2021 • Yimeng Chen, Chang Su, Yingtao Zhang, Yuxia Wang, Xiang Geng, Hao Yang, Shimin Tao, Guo Jiaxin, Wang Minghan, Min Zhang, Yujia Liu, ShuJian Huang
This paper presents our work in WMT 2021 Quality Estimation (QE) Shared Task.
no code implementations • WMT (EMNLP) 2021 • Hao Yang, Zhanglin Wu, Zhengzhe Yu, Xiaoyu Chen, Daimeng Wei, Zongyao Li, Hengchao Shang, Minghan Wang, Jiaxin Guo, Lizhi Lei, Chuanfei Xu, Min Zhang, Ying Qin
This paper describes the submission of Huawei Translation Service Center (HW-TSC) to WMT21 biomedical translation task in two language pairs: Chinese↔English and German↔English (Our registered team name is HuaweiTSC).
no code implementations • WMT (EMNLP) 2021 • Hengchao Shang, Ting Hu, Daimeng Wei, Zongyao Li, Jianfei Feng, Zhengzhe Yu, Jiaxin Guo, Shaojun Li, Lizhi Lei, Shimin Tao, Hao Yang, Jun Yao, Ying Qin
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to WMT 2021 Efficiency Shared Task.
no code implementations • 4 May 2022 • Yi Liang, Shuai Zhao, Bo Cheng, Yuwei Yin, Hao Yang
Few-shot relation learning refers to inferring facts for relations with only a limited number of observed triples.
1 code implementation • 28 Apr 2022 • Chun Zeng, Jiangjie Chen, Tianyi Zhuang, Rui Xu, Hao Yang, Ying Qin, Shimin Tao, Yanghua Xiao
To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints.
no code implementations • 25 Apr 2022 • Hao Ouyang, Bo Zhang, Pan Zhang, Hao Yang, Jiaolong Yang, Dong Chen, Qifeng Chen, Fang Wen
We propose pose-guided multiplane image (MPI) synthesis which can render an animatable character in real scenes with photorealistic quality.
2 code implementations • 30 Mar 2022 • Dengpan Fu, Dongdong Chen, Hao Yang, Jianmin Bao, Lu Yuan, Lei Zhang, Houqiang Li, Fang Wen, Dong Chen
Since these ID labels automatically derived from tracklets inevitably contain noise, we develop a large-scale Pre-training framework utilizing Noisy Labels (PNL), which consists of three learning modules: supervised Re-ID learning, prototype-based contrastive learning, and label-guided contrastive learning (a sketch of such a combined objective appears below).
Ranked #5 on Person Re-Identification on CUHK03
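A minimal sketch of how the three learning signals named above could be combined into one objective; the tensor shapes, temperature, and supervised-contrastive form are our assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def pnl_loss(features, logits, noisy_labels, prototypes, temperature=0.1):
    """Illustrative combination of the three PNL learning signals.

    features: (B, D) L2-normalized embeddings
    logits:   (B, C) classifier outputs over noisy tracklet IDs
    noisy_labels: (B,) tracklet-derived ID labels (possibly noisy)
    prototypes:   (C, D) L2-normalized per-ID prototype embeddings
    """
    # 1) supervised Re-ID learning on the (noisy) ID labels
    ce = F.cross_entropy(logits, noisy_labels)

    # 2) prototype-based contrastive learning: pull each sample to its prototype
    proto_logits = features @ prototypes.t() / temperature          # (B, C)
    proto_loss = F.cross_entropy(proto_logits, noisy_labels)

    # 3) label-guided contrastive learning: samples sharing a label are positives
    sim = features @ features.t() / temperature                     # (B, B)
    diag = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(diag, float('-inf'))                      # drop self-pairs
    same = (noisy_labels.unsqueeze(0) == noisy_labels.unsqueeze(1)) & ~diag
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(diag, 0.0)                      # avoid -inf * 0
    pos_cnt = same.sum(1).clamp(min=1)
    label_loss = (-(log_prob * same.float()).sum(1) / pos_cnt).mean()

    return ce + proto_loss + label_loss
```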
no code implementations • 30 Mar 2022 • Pei Wang, Zhaowei Cai, Hao Yang, Gurumurthy Swaminathan, Nuno Vasconcelos, Bernt Schiele, Stefano Soatto
This is enabled by a unified architecture, Omni-DETR, based on recent progress in student-teacher frameworks and end-to-end transformer-based object detection.
1 code implementation • Findings (ACL) 2022 • Yang Wu, Yanyan Zhao, Hao Yang, Song Chen, Bing Qin, Xiaohuan Cao, Wenting Zhao
Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are recognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment models.
Automatic Speech Recognition • Multimodal Sentiment Analysis
no code implementations • 13 Jan 2022 • Lan Wang, Yusan Lin, Yuhang Wu, Huiyuan Chen, Fei Wang, Hao Yang
Today's cyber-world is vastly multivariate.
no code implementations • 1 Jan 2022 • Hao Yang, Min Wang, Zhengfei Yu, Yun Zhou
Extensive experiments on well-known white- and black-box attacks show that MFDV-SNN achieves a significant improvement over existing methods, which indicates that it is a simple but effective method to improve model robustness.
no code implementations • 22 Dec 2021 • Minghan Wang, Jiaxin Guo, Yuxia Wang, Daimeng Wei, Hengchao Shang, Chang Su, Yimeng Chen, Yinglu Li, Min Zhang, Shimin Tao, Hao Yang
In this paper, we aim to close the gap by preserving the original objective of AR and NAR under a unified framework.
no code implementations • 22 Dec 2021 • Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Yuxia Wang, Zongyao Li, Zhengzhe Yu, Zhanglin Wu, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
An effective training strategy to improve the performance of AT models is Self-Distillation Mixup (SDM) Training, which pre-trains a model on raw data, generates distilled data by the pre-trained model itself and finally re-trains a model on the combination of raw data and distilled data.
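The three-step recipe described above can be summarized in a short sketch; `train_fn` and `translate_fn` are hypothetical interfaces used only to make the pipeline concrete, not the authors' API.

```python
def self_distillation_mixup(train_fn, translate_fn, raw_data):
    """High-level sketch of Self-Distillation Mixup (SDM) training.

    raw_data: list of (src, tgt) parallel sentence pairs.
    1. pre-train a model on raw parallel data;
    2. use that model to re-translate the source side (self-distillation);
    3. re-train on the mixture of raw and distilled data.
    """
    teacher = train_fn(raw_data)                            # step 1: pre-train
    distilled = [(src, translate_fn(teacher, src))          # step 2: distill
                 for src, _ in raw_data]
    student = train_fn(raw_data + distilled)                # step 3: re-train on the mix
    return student
```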
no code implementations • 22 Dec 2021 • Zhengzhe Yu, Jiaxin Guo, Minghan Wang, Daimeng Wei, Hengchao Shang, Zongyao Li, Zhanglin Wu, Yuxia Wang, Yimeng Chen, Chang Su, Min Zhang, Lizhi Lei, Shimin Tao, Hao Yang
Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but translation quality reaches an upper bound when the number of encoder layers exceeds 18.
1 code implementation • 6 Dec 2021 • Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, Fang Wen
In this paper, we study the transfer performance of pre-trained models on face analysis tasks and introduce a framework, called FaRL, for general Facial Representation Learning in a visual-linguistic manner.
Ranked #1 on Face Parsing on CelebAMask-HQ (using extra training data)
1 code implementation • NeurIPS 2021 • Wenqing Zheng, Qiangqiang Guo, Hao Yang, Peihao Wang, Zhangyang Wang
This paper presents the Delayed Propagation Transformer (DePT), a new transformer-based model that specializes in the global modeling of CPS while taking into account the immutable constraints from the physical world.
no code implementations • 29 Sep 2021 • Hao Zhu, Mahashweta Das, Mangesh Bendre, Fei Wang, Hao Yang, Soha Hassoun
In this work, we propose an adversarial training based modification to current state-of-the-art link prediction methods to solve this problem.
no code implementations • ICCV 2021 • Yangyu Huang, Hao Yang, Chong Li, Jongyoo Kim, Fangyun Wei
AAM, on the other hand, is an attention module that produces an anisotropic attention mask focusing on the region around a landmark point and the local edge connecting it to adjacent points; the mask responds more strongly in the tangent direction than in the normal direction, which corresponds to relaxed constraints along the tangent.
Ranked #2 on Face Alignment on 300W
no code implementations • 31 Aug 2021 • Javid Ebrahimi, Hao Yang, Wei zhang
Adversarial training (AT) is one of the most reliable methods for defending against adversarial attacks in machine learning.
no code implementations • 15 Aug 2021 • Yuhang Wu, Mengting Gu, Lan Wang, Yusan Lin, Fei Wang, Hao Yang
Modeling inter-dependencies between time-series is the key to achieve high performance in anomaly detection for multivariate time-series data.
no code implementations • 9 Aug 2021 • Minghan Wang, Yuxia Wang, Chang Su, Jiaxin Guo, Yingtao Zhang, Yujia Liu, Min Zhang, Shimin Tao, Xingshan Zeng, Liangyou Li, Hao Yang, Ying Qin
This paper describes our work in participation of the IWSLT-2021 offline speech translation task.
no code implementations • 27 Jul 2021 • Hao Yang, Tavan Eftekhar, Chad Esselink, Yan Ding, Shiqi Zhang
Everyday tasks are characterized by their variety and variation, and are frequently not clearly specified to service agents.
1 code implementation • journal 2021 • Qing Ding, Liquan Shen, Liangwei Yu, Hao Yang, Mai Xu
To overcome these limitations, we propose a patch-wise spatial-temporal quality enhancement network which first extracts spatial and temporal features, then recalibrates and fuses them.
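A small sketch of the "recalibrate and fuse" step mentioned above; the squeeze-and-excite-style channel gating and the layer sizes are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecalibrateFuse(nn.Module):
    """Illustrative recalibration and fusion of spatial/temporal feature maps."""

    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, spatial_feat, temporal_feat):
        # both inputs: (B, C, H, W) feature maps for one patch
        x = torch.cat([spatial_feat, temporal_feat], dim=1)
        x = x * self.gate(x)          # channel-wise recalibration
        return self.fuse(x)           # fused enhancement features
```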
1 code implementation • NeurIPS 2021 • Prince Osei Aboagye, Jeff Phillips, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei zhang, Liang Wang, Hao Yang
Learning a good transfer function to map the word vectors from two languages into a shared cross-lingual word vector space plays a crucial role in cross-lingual NLP.
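For context, the classic orthogonal Procrustes mapping is the standard baseline transfer function in this setting; it is shown below as a concrete example and is not necessarily the method proposed in this paper.

```python
import numpy as np

def procrustes_map(X_src, Y_tgt):
    """Orthogonal Procrustes: W = argmin ||X W - Y||_F subject to W^T W = I.

    X_src, Y_tgt: (n, d) matrices of word vectors for n seed-dictionary pairs.
    """
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt  # (d, d) orthogonal mapping: x_mapped = x @ W

# usage: W = procrustes_map(X_en, X_fr); shared_space_en = X_en @ W
```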
1 code implementation • The Web Conference 2021 • Yang Liu, Xiang Ao, Zidi Qin, Jianfeng Chi, Jinghua Feng, Hao Yang, Qing He
Graph-based fraud detection approaches have attracted a lot of attention recently due to the abundant relational information of graph-structured data, which may be beneficial for the detection of fraudsters.
Ranked #3 on Node Classification on Amazon-Fraud
no code implementations • 1 Apr 2021 • Xu Wang, Shuai Zhao, Bo Cheng, Jiale Han, Yingting Li, Hao Yang, Ivan Sekulic, Guoshun Nan
Question Answering (QA) models over Knowledge Bases (KBs) are capable of providing more precise answers by utilizing relation information among entities.
no code implementations • 1 Apr 2021 • Hao Yang, Youzhi Jin, Ziyin Li, Deng-Bao Wang, Lei Miao, Xin Geng, Min-Ling Zhang
During the training process, DLT records the loss value of each sample and calculates dynamic loss thresholds.
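A minimal sketch of loss-based sample filtering with a data-driven threshold, the small-loss heuristic that dynamic loss thresholds build on; the quantile schedule here is an assumption, not the paper's exact DLT rule.

```python
import numpy as np

def dynamic_loss_threshold(sample_losses, keep_ratio=0.7):
    """Keep the `keep_ratio` fraction of samples with the smallest recorded loss.

    sample_losses: per-sample loss values recorded during the current epoch.
    Returns the data-driven cutoff and a mask of likely-clean samples.
    """
    losses = np.asarray(sample_losses)
    threshold = np.quantile(losses, keep_ratio)     # dynamic, epoch-dependent cutoff
    keep_mask = losses <= threshold
    return threshold, keep_mask

# usage: thr, mask = dynamic_loss_threshold(epoch_losses); train on data[mask]
```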
1 code implementation • CVPR 2021 • Chulin Xie, Chuxin Wang, Bo Zhang, Hao Yang, Dong Chen, Fang Wen
In this paper, we propose a novel Style-based Point Generator with Adversarial Rendering (SpareNet) for point cloud completion.
Ranked #1 on Point Cloud Completion on ShapeNet (Earth Mover's Distance metric)
no code implementations • 9 Feb 2021 • Liuyihan Song, Pan Pan, Kang Zhao, Hao Yang, Yiming Chen, Yingya Zhang, Yinghui Xu, Rong Jin
In the last decade, extreme classification has become an essential topic for deep learning.
no code implementations • ICCV 2021 • Ahmed Abusnaina, Yuhang Wu, Sunpreet Arora, Yizhen Wang, Fei Wang, Hao Yang, David Mohaisen
We present the first graph-based adversarial detection method that constructs a Latent Neighborhood Graph (LNG) around an input example to determine if the input example is adversarial.
no code implementations • ICLR 2021 • Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen
Various Position Embeddings (PEs) have been proposed in Transformer-based architectures (e.g., BERT) to model word order.
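As one concrete example of the PEs this line of work analyzes, the classic sinusoidal embedding of Vaswani et al. (2017) is sketched below; it illustrates how word order is injected into a Transformer and is not the paper's proposal.

```python
import numpy as np

def sinusoidal_pe(max_len, d_model):
    """Classic sinusoidal position embedding: sin on even dims, cos on odd dims."""
    pos = np.arange(max_len)[:, None]                       # (max_len, 1)
    dim = np.arange(d_model)[None, :]                       # (1, d_model)
    angle = pos / np.power(10000, (2 * (dim // 2)) / d_model)
    pe = np.where(dim % 2 == 0, np.sin(angle), np.cos(angle))
    return pe                                               # (max_len, d_model)
```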
no code implementations • 31 Dec 2020 • Yuhang Wu, Sunpreet S. Arora, Yanhong Wu, Hao Yang
Adversarial examples are input examples that are specifically crafted to deceive machine learning classifiers.
1 code implementation • CVPR 2021 • Dengpan Fu, Dongdong Chen, Jianmin Bao, Hao Yang, Lu Yuan, Lei Zhang, Houqiang Li, Dong Chen
In this paper, we present a large scale unlabeled person re-identification (Re-ID) dataset "LUPerson" and make the first attempt of performing unsupervised pre-training for improving the generalization ability of the learned person Re-ID feature representation.
Ranked #2 on Person Re-Identification on Market-1501 (using extra training data)
no code implementations • COLING 2020 • Xu Wang, Shuai Zhao, Jiale Han, Bo Cheng, Hao Yang, Jianchang Ao, Zhenzi Li
The structural information of Knowledge Bases (KBs) has proven effective to Question Answering (QA).
no code implementations • 27 Nov 2020 • Hao Yang, Wojciech Roga, Jonathan D. Pritchard, John Jeffers
We use the continuous-variable Gaussian quantum information formalism to show that quantum illumination is better for object detection compared with coherent states of the same mean photon number, even for simple direct photodetection.
Object Detection • Quantum Physics
no code implementations • 5 Aug 2020 • Weijian Huang, Hao Yang, Xinfeng Liu, Cheng Li, Ian Zhang, Rongpin Wang, Hairong Zheng, Shan-Shan Wang
Multi-contrast magnetic resonance (MR) image registration is useful in the clinic to achieve fast and accurate imaging-based disease diagnosis and treatment planning.
no code implementations • 2 Aug 2020 • Ruimin Ke, Zhiyong Cui, Yanlong Chen, Meixin Zhu, Hao Yang, Yinhai Wang
It is among the first efforts in applying edge computing for real-time traffic video analytics and is expected to benefit multiple sub-fields in smart transportation research and applications.
no code implementations • WS 2020 • Minghan Wang, Hao Yang, Yao Deng, Ying Qin, Lizhi Lei, Daimeng Wei, Hengchao Shang, Ning Xie, Xiaochun Li, Jiaxian Guo
The paper presents details of our system in the IWSLT Video Speech Translation evaluation.
no code implementations • 18 Jun 2020 • Hu Liu, Jing Lu, Hao Yang, Xiwei Zhao, Sulong Xu, Hao Peng, Zehua Zhang, Wenjie Niu, Xiaokun Zhu, Yongjun Bao, Weipeng Yan
Existing algorithms usually extract visual features using off-the-shelf Convolutional Neural Networks (CNNs) and late-fuse the visual and non-visual features to predict the final CTR.
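A minimal late-fusion sketch matching the setup the abstract describes, where off-the-shelf CNN features are combined with non-visual features only at the end; the feature dimensions and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LateFusionCTR(nn.Module):
    """Toy late-fusion CTR model: project each modality, concatenate, predict."""

    def __init__(self, visual_dim=2048, nonvisual_dim=64, hidden=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden)
        self.nonvisual_proj = nn.Linear(nonvisual_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, 1))

    def forward(self, visual_feat, nonvisual_feat):
        # visual_feat: (B, visual_dim) CNN features; nonvisual_feat: (B, nonvisual_dim)
        fused = torch.cat([self.visual_proj(visual_feat),
                           self.nonvisual_proj(nonvisual_feat)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)   # predicted CTR
```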
1 code implementation • 5 Jun 2020 • Aravind Sankar, Yanhong Wu, Yuhang Wu, Wei zhang, Hao Yang, Hari Sundaram
We study the problem of making item recommendations to ephemeral groups, which comprise users with limited or no historical activities together.
no code implementations • 21 May 2020 • Adit Krishnan, Mahashweta Das, Mangesh Bendre, Hao Yang, Hari Sundaram
The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems.
no code implementations • 13 May 2020 • Maryam Moosaei, Yusan Lin, Hao Yang
There are a few approaches that consider an entire outfit, but these approaches have limitations such as requiring rich semantic information, category labels, and fixed order of items.
no code implementations • 24 Mar 2020 • Dinh-Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang
While feasible, digital attacks have limited applicability in attacking deployed systems, including face recognition systems, where an adversary typically has access to the input and not the transmission channel.
no code implementations • 17 Mar 2020 • Hao Yang, Dan Yan, Li Zhang, Dong Li, YunDa Sun, ShaoDi You, Stephen J. Maybank
It transmits the high-level semantic features to the low-level layers and flows temporal information stage by stage to progressively model global spatial-temporal features for action recognition; and (3) the FGCN model provides early predictions.
Ranked #18 on Skeleton Based Action Recognition on NTU RGB+D 120
1 code implementation • ICLR 2020 • Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
Our findings challenge common practices of fine-tuning and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.
no code implementations • 13 Feb 2020 • Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
For the difficult cases, where the domain gaps and especially the category differences are large, we explore three different exemplar sampling methods and show that the proposed adaptive sampling method is effective at selecting diverse and informative samples from entire datasets, further preventing forgetting.
3 code implementations • CVPR 2020 • Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, Baining Guo
For this reason, face X-ray provides an effective way for detecting forgery generated by most existing face manipulation algorithms.
8 code implementations • 31 Dec 2019 • Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen
We propose a novel attributes encoder for extracting multi-level target face attributes, and a new generator with carefully designed Adaptive Attentional Denormalization (AAD) layers to adaptively integrate the identity and the attributes for face synthesis.
no code implementations • 22 Aug 2019 • Manoj Reddy Dareddy, Mahashweta Das, Hao Yang
Supervised machine learning tasks in networks such as node classification and link prediction require us to perform feature engineering that is known and agreed to be the key to success in applied machine learning.
no code implementations • ICCV 2019 • Hao Yang, Hao Wu, Hao Chen
However, these methods require fully annotated object bounding boxes for training, which are incredibly hard to scale up due to the high annotation cost.
1 code implementation • 23 Jul 2019 • Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, Xin Fan
Then, an attention mechanism is proposed to model the relations between the image region and the blocks and to generate a valuable position feature, which is further utilized to enhance the region representation and model a more reliable relationship between the visual image and the textual sentence.
1 code implementation • 16 Jul 2019 • Kehan Qi, Hao Yang, Cheng Li, Zaiyi Liu, Meiyun Wang, Qiegen Liu, Shan-Shan Wang
Recently, approaches based on deep learning and methods for contextual information extraction have served in many image segmentation tasks.
2 code implementations • 16 Jul 2019 • Hao Yang, Weijian Huang, Kehan Qi, Cheng Li, Xinfeng Liu, Meiyun Wang, Hairong Zheng, Shan-Shan Wang
To address these challenges, this paper proposes a Cross-Level fusion and Context Inference Network (CLCI-Net) for the chronic stroke lesion segmentation from T1-weighted MR images.
no code implementations • 16 Jul 2019 • Yusan Lin, Hao Yang
Fashion is a large and fast-changing industry.
1 code implementation • CVPR 2019 • Jinpeng Lin, Hao Yang, Dong Chen, Ming Zeng, Fang Wen, Lu Yuan
It uses a hierarchical local-based method for inner facial components and global methods for outer facial components.
1 code implementation • 31 May 2019 • Bonggun Shin, Hao Yang, Jinho D. Choi
Recent advances in deep learning have facilitated the demand of neural models for real applications.
Ranked #2 on Sentiment Analysis on MPQA
no code implementations • CVPR 2019 • Shuyang Gu, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen, Lu Yuan
Portrait editing is a popular subject in photo manipulation.
no code implementations • 4 Feb 2019 • Zhongliang Yang, Hao Yang, Yuting Hu, Yongfeng Huang, Yu-Jin Zhang
To solve these two challenges, in this paper we combine a sliding-window detection algorithm with Convolutional Neural Networks and propose a real-time VoIP steganalysis method based on multi-channel convolutional sliding windows.
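A toy multi-channel convolutional sliding-window detector in the spirit of the description above; the window length, kernel widths, and codeword embedding are assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class SlidingWindowSteganalyzer(nn.Module):
    """Classify each VoIP codeword window as cover vs. stego with multi-width 1D convs."""

    def __init__(self, vocab=256, emb=64, kernels=(3, 5, 7), channels=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # several convolution channels with different kernel widths
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, channels, k, padding=k // 2) for k in kernels])
        self.cls = nn.Linear(channels * len(kernels), 2)

    def forward(self, codewords):
        # codewords: (batch, window_len) integer codeword ids for one sliding window
        x = self.embed(codewords).transpose(1, 2)             # (B, emb, L)
        feats = [c(x).max(dim=-1).values for c in self.convs]  # max-pool over time
        return self.cls(torch.cat(feats, dim=-1))             # logits per window
```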
2 code implementations • 22 Dec 2018 • Aravind Sankar, Yanhong Wu, Liang Gou, Wei zhang, Hao Yang
Learning latent representations of nodes in graphs is an important and ubiquitous task with widespread applications such as link prediction, node classification, and graph visualization.
no code implementations • WS 2018 • Sizhen Li, Shuai Zhao, Bo Cheng, Hao Yang
With the huge amount of information generated every day on the web, fact checking is an important and challenging task which can help people identify the authenticity of most claims as well as provide evidence selected from knowledge sources like Wikipedia.
no code implementations • ECCV 2018 • Qingyi Tao, Hao Yang, Jianfei Cai
Object detection is one of the major problems in computer vision, and has been extensively studied.
no code implementations • 27 Jul 2017 • Qingyi Tao, Hao Yang, Jianfei Cai
Object detection without bounding box annotations, i.e., weakly supervised detection methods, is still lagging far behind.
Ranked #15 on Weakly Supervised Object Detection on PASCAL VOC 2012 test (using extra training data)
no code implementations • CVPR 2017 • Hao Yang, Joey Tianyi Zhou, Jianfei Cai, Yew Soon Ong
As the proposed PI loss is convex and SGD-compatible and the framework itself is a fully convolutional network, MIML-FCN+ can be easily integrated with state-of-the-art deep learning networks.
no code implementations • 4 Aug 2016 • Hao Yang, Joey Tianyi Zhou, Jianfei Cai
Experimental results demonstrate the effectiveness of the proposed semantic descriptor and the usefulness of incorporating the structured semantic correlations.
no code implementations • CVPR 2016 • Hao Yang, Hui Zhang
We propose a method to recover the shape of a 3D room from a full-view indoor panorama.
no code implementations • 18 Jan 2016 • Ying Huang, Hong Zheng, Haibin Ling, Erik Blasch, Hao Yang
Bird strikes present a huge risk for aircraft, especially since traditional airport bird surveillance is mainly dependent on inefficient human observation.
no code implementations • CVPR 2016 • Hao Yang, Joey Tianyi Zhou, Yu Zhang, Bin-Bin Gao, Jianxin Wu, Jianfei Cai
With strong labels, our framework is able to achieve state-of-the-art results in both datasets.
Ranked #13 on Multi-Label Classification on PASCAL VOC 2007
no code implementations • 19 May 2014 • Chao Zhang, Hong-cen Mei, Hao Yang
A large body of experimental data shows that the Support Vector Machine (SVM) algorithm has obvious advantages in text classification, handwriting recognition, image classification, bioinformatics, and some other fields.