Search Results for author: Yu Shi

Found 35 papers, 13 papers with code

Word Representation Based on Sememe Representation Learning (基于义原表示学习的词向量表示方法)

no code implementations CCL 2021 Ning Yu, Jiangping Wang, Yu Shi, Jianyi Liu

This paper leverages the knowledge in HowNet and transfers the structure and ideas of the Word2vec model to sememe representation learning, proposing a word vector representation method based on sememe representation learning. First, OpenHowNet is used to obtain all sememes in the sememe knowledge base, all Chinese words, and the sememe set associated with each Chinese word, which together form the experimental dataset. Then, a sememe representation learning model is trained on top of the Skip-gram model, from which word vectors are derived. Finally, the resulting word vectors are evaluated on word similarity, word sense disambiguation, word analogy, and inspection of nearest-neighbor sememes. Compared with baseline models, the proposed method is both efficient and accurate: it neither relies on large-scale corpora nor requires complex network structures and numerous parameters, yet it still improves accuracy on various natural language processing tasks.

Representation Learning
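The excerpt above describes a concrete pipeline: pull sememe annotations from OpenHowNet, train sememe embeddings with a Skip-gram objective, and compose word vectors from them. Below is a minimal sketch of that idea using gensim; the `word2sememes` mapping is a hypothetical stand-in for data exported from OpenHowNet, and composing a word vector as the mean of its sememe vectors is one simple choice, not necessarily the authors' exact formulation.

```python
from gensim.models import Word2Vec
import numpy as np

# Hypothetical export from OpenHowNet: each word mapped to its sememe set.
word2sememes = {
    "苹果": ["fruit", "computer", "able", "bring"],
    "香蕉": ["fruit"],
    "电脑": ["computer", "able", "bring"],
}

# Toy corpus; in practice this would be tokenized Chinese text.
corpus = [["苹果", "电脑"], ["香蕉", "苹果"]]

# Replace each word by its sememes so Skip-gram learns sememe embeddings.
sememe_corpus = [
    [s for w in sent for s in word2sememes.get(w, [w])]
    for sent in corpus
]
model = Word2Vec(sememe_corpus, vector_size=50, window=5, sg=1, min_count=1)

def word_vector(word):
    """Compose a word vector as the mean of its sememe vectors (one simple choice)."""
    sememes = word2sememes.get(word, [])
    return np.mean([model.wv[s] for s in sememes], axis=0)

print(word_vector("苹果").shape)  # (50,)
```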

Dynamic Scene Deblurring Based on Continuous Cross-Layer Attention Transmission

no code implementations23 Jun 2022 Xia Hua, Junxiong Fei, Mingxin Li, ZeZheng Li, Yu Shi, JiangGuo Liu, Hanyu Hong

Deep convolutional neural networks (CNNs) using attention mechanisms have achieved great success in dynamic scene deblurring.

Deblurring

Multilevel Hierarchical Network with Multiscale Sampling for Video Question Answering

1 code implementation9 May 2022 Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

With multiscale sampling, RMI iterates the interaction between the appearance-motion information at each scale and the question embeddings to build multilevel question-guided visual representations.

Natural Language Processing Question Answering +2

Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets

1 code implementation9 Mar 2022 Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu

This technical note describes the recent updates of Graphormer, including architecture design modifications and the adaptation to 3D molecular dynamics simulation.

An Empirical Study of Graphormer on Large-Scale Molecular Modeling Datasets

no code implementations28 Feb 2022 Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu

This technical note describes the recent updates of Graphormer, including architecture design modifications and the adaptation to 3D molecular dynamics simulation.

Florence: A New Foundation Model for Computer Vision

1 code implementation22 Nov 2021 Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella, Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang, Boxin Li, Chunyuan Li, Ce Liu, Mengchen Liu, Zicheng Liu, Yumao Lu, Yu Shi, Lijuan Wang, JianFeng Wang, Bin Xiao, Zhen Xiao, Jianwei Yang, Michael Zeng, Luowei Zhou, Pengchuan Zhang

Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for this mission of solving real-world computer vision applications.

Action Classification Action Recognition +12

Temporal Pyramid Transformer with Multimodal Interaction for Video Question Answering

1 code implementation10 Sep 2021 Min Peng, Chongyang Wang, Yuan Gao, Yu Shi, Xiang-Dong Zhou

Targeting these issues, this paper proposes a novel Temporal Pyramid Transformer (TPT) model with multimodal interaction for VideoQA.

Natural Language Understanding Question Answering +1

A Joint and Domain-Adaptive Approach to Spoken Language Understanding

no code implementations25 Jul 2021 Linhao Zhang, Yu Shi, Linjun Shou, Ming Gong, Houfeng Wang, Michael Zeng

In this paper, we attempt to bridge these two lines of research and propose a joint and domain adaptive approach to SLU.

Domain Adaptation Intent Detection +2

Research on Portfolio Liquidation Strategy under Discrete Times

no code implementations29 Mar 2021 Qixuan Luo, Yu Shi, Handong Li

The permanent impact generated by an asset in the portfolio during the liquidation will affect all assets, and the temporary impact generated by one asset will only affect itself.
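The sentence above draws the distinction the paper builds on: permanent impact from trading one asset moves the prices of all assets, while temporary impact hits only the asset being traded. The sketch below illustrates that distinction with an invented linear-impact toy (a cross-impact matrix for the permanent part, a diagonal one for the temporary part); the parameter values and the uniform liquidation schedule are assumptions for illustration, not the paper's model or strategy.

```python
import numpy as np

# Hypothetical sizes: 2 assets liquidated over 5 discrete periods.
n_periods = 5
x0 = np.array([10_000.0, 5_000.0])          # initial holdings (shares)
p0 = np.array([50.0, 30.0])                 # initial prices

# Linear-impact toy parameters (assumptions, not from the paper):
# permanent impact is cross-sectional (one asset's trading moves all prices),
# temporary impact is diagonal (only hits the asset being traded).
Gamma = np.array([[2e-5, 5e-6],
                  [5e-6, 3e-5]])            # permanent impact matrix
eta = np.diag([1e-4, 2e-4])                 # temporary impact (diagonal)

# Naive uniform liquidation: sell x0 / n_periods shares of each asset per period.
v = np.tile(x0 / n_periods, (n_periods, 1))

price = p0.copy()
cost = 0.0
for t in range(n_periods):
    exec_price = price - eta @ v[t]           # temporary impact: own asset only
    cost += np.sum((p0 - exec_price) * v[t])  # shortfall vs. initial prices
    price = price - Gamma @ v[t]              # permanent impact: hits all assets

print(f"implementation shortfall of uniform liquidation: {cost:.2f}")
```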

Generating Human Readable Transcript for Automatic Speech Recognition with Pre-trained Language Model

no code implementations22 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Sefik Eskimez, Liyang Lu, Hong Qu, Michael Zeng

Many downstream tasks and human readers rely on the output of the ASR system; therefore, errors introduced by the speaker and ASR system alike will be propagated to the next task in the pipeline.

Automatic Speech Recognition Data Augmentation +1

Improving Zero-shot Neural Machine Translation on Language-specific Encoders-Decoders

no code implementations12 Feb 2021 Junwei Liao, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

However, the performance of using multiple encoders and decoders on zero-shot translation still lags behind universal NMT.

Denoising Machine Translation +1

Speech-language Pre-training for End-to-end Spoken Language Understanding

no code implementations11 Feb 2021 Yao Qian, Ximo Bian, Yu Shi, Naoyuki Kanda, Leo Shen, Zhen Xiao, Michael Zeng

End-to-end (E2E) spoken language understanding (SLU) can infer semantics directly from speech signal without cascading an automatic speech recognizer (ASR) with a natural language understanding (NLU) module.

 Ranked #1 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Language Modelling Natural Language Understanding +1

Deterministic generation of multidimensional photonic cluster states using time-delay feedback

no code implementations19 Jan 2021 Yu Shi, Edo Waks

Cluster states are useful in many quantum information processing applications.

Quantum Physics

A Novel Method for Inference of Acyclic Chemical Compounds with Bounded Branch-height Based on Artificial Neural Networks and Integer Programming

1 code implementation21 Sep 2020 Naveed Ahmed Azam, Jianshen Zhu, Yanming Sun, Yu Shi, Aleksandar Shurbevski, Liang Zhao, Hiroshi Nagamochi, Tatsuya Akutsu

In the second phase, given a target value $y^*$ of property $\pi$, a feature vector $x^*$ is inferred by solving an MILP formulated from the trained ANN so that $\psi(x^*)$ is close to $y^*$ and then a set of chemical structures $G^*$ such that $f(G^*)= x^*$ is enumerated by a graph search algorithm.

Data Structures and Algorithms; Computational Engineering, Finance, and Science; MSC: 05C92, 92E10, 05C30, 68T07, 90C11, 92-04
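The second phase described above inverts a trained predictor: given a target property value $y^*$, an MILP finds a feature vector $x^*$ whose predicted value is close to $y^*$, and a graph search then enumerates structures realizing $x^*$. The sketch below shows only the inversion step with PuLP and replaces the trained ANN with a fixed linear predictor so the MILP stays tiny; the weights, bounds, and feature dimension are all invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

# Stand-in for a trained predictor psi(x) = w.x + b (a real setup would encode the ANN).
w, b = [1.5, -0.8, 2.0], 0.3
y_star = 7.0                       # target property value

prob = LpProblem("invert_predictor", LpMinimize)
x = [LpVariable(f"x{i}", lowBound=0, upBound=10, cat="Integer") for i in range(3)]
t = LpVariable("abs_gap", lowBound=0)

pred = lpSum(w[i] * x[i] for i in range(3)) + b
prob += t                          # minimize |psi(x) - y*|
prob += pred - y_star <= t
prob += y_star - pred <= t

prob.solve(PULP_CBC_CMD(msg=0))
x_star = [int(v.value()) for v in x]
print("x* =", x_star, "psi(x*) =", sum(wi * xi for wi, xi in zip(w, x_star)) + b)
# A graph-search step would then enumerate structures G* with f(G*) = x*.
```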

Recognizing Micro-Expression in Video Clip with Adaptive Key-Frame Mining

1 code implementation19 Sep 2020 Min Peng, Chongyang Wang, Yuan Gao, Tao Bi, Tong Chen, Yu Shi, Xiang-Dong Zhou

As a spontaneous expression of emotion on the face, a micro-expression reveals underlying emotion that cannot be controlled by humans.

DeepPrognosis: Preoperative Prediction of Pancreatic Cancer Survival and Surgical Margin via Contrast-Enhanced CT Imaging

no code implementations26 Aug 2020 Jiawen Yao, Yu Shi, Le Lu, Jing Xiao, Ling Zhang

We present a multi-task CNN to accomplish both tasks of outcome and margin prediction where the network benefits from learning the tumor resection margin related features to improve survival prediction.

Survival Analysis Survival Prediction
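The excerpt above describes a shared network with two prediction targets, where the margin task acts as auxiliary supervision for survival prediction. Below is a generic two-head PyTorch sketch of that pattern; the layer sizes, losses, and toy inputs are assumptions, and this is not the DeepPrognosis architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadCNN(nn.Module):
    """Shared backbone with a survival head and a resection-margin head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.survival_head = nn.Linear(16, 1)   # e.g., a risk score
        self.margin_head = nn.Linear(16, 2)     # margin status (binary)

    def forward(self, x):
        feats = self.backbone(x)
        return self.survival_head(feats), self.margin_head(feats)

model = TwoHeadCNN()
imgs = torch.randn(4, 1, 64, 64)                # toy single-channel patches
risk, margin_logits = model(imgs)

surv_target = torch.rand(4, 1)
margin_target = torch.randint(0, 2, (4,))
# Joint objective: the margin task regularizes the features shared with survival prediction.
loss = F.mse_loss(risk, surv_target) + F.cross_entropy(margin_logits, margin_target)
loss.backward()
```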

Deep learning to estimate the physical proportion of infected region of lung for COVID-19 pneumonia with CT image set

no code implementations9 Jun 2020 Wei Wu, Yu Shi, Xukun Li, Yukun Zhou, Peng Du, Shuangzhi Lv, Tingbo Liang, Jifang Sheng

For the segmented masks of the intact lung and the infected regions, the best method achieved mean Dice similarity coefficients of 0.972 and 0.757, respectively, on our test benchmark.

Computed Tomography (CT)
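The Dice similarity coefficient quoted above (0.972 for the lung, 0.757 for infected regions) measures overlap between predicted and reference masks. A minimal NumPy version for binary masks is sketched below; the toy masks are invented just to show the computation.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for lung / infected-region segmentations.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice_coefficient(a, b))  # ≈ 0.5625 overlap for these toy masks
```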

Improving Readability for Automatic Speech Recognition Transcription

no code implementations9 Apr 2020 Junwei Liao, Sefik Emre Eskimez, Liyang Lu, Yu Shi, Ming Gong, Linjun Shou, Hong Qu, Michael Zeng

In this work, we propose a novel NLP task called ASR post-processing for readability (APR) that aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.

Automatic Speech Recognition Grammatical Error Correction +1

BERT-AL: BERT for Arbitrarily Long Document Understanding

no code implementations ICLR 2020 Ruixuan Zhang, Zhuoyu Wei, Yu Shi, Yining Chen

When we apply BERT to long text tasks, e.g., document-level text summarization: 1) Truncating inputs at the maximum sequence length will decrease performance, since the model cannot capture long-range dependencies and global information spanning the whole document.

Pretrained Language Models Text Summarization
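The truncation problem described above is commonly worked around by encoding a long document as overlapping fixed-length windows rather than cutting it off at 512 tokens. The sketch below shows that baseline with the Hugging Face tokenizer API; it is not the BERT-AL model itself, and `bert-base-uncased` plus the window and stride sizes are just stand-in choices.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in checkpoint
long_text = "word " * 2000  # toy document far beyond BERT's 512-token limit

# Overlapping 512-token windows instead of hard truncation; each window can be
# encoded by BERT separately and the segment representations aggregated later.
enc = tokenizer(
    long_text,
    max_length=512,
    stride=128,                      # overlap so dependencies are not cut cleanly at a boundary
    truncation=True,
    return_overflowing_tokens=True,
)
print(f"{len(enc['input_ids'])} windows of up to 512 tokens each")
```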

Meta-Graph Based HIN Spectral Embedding: Methods, Analyses, and Insights

no code implementations29 Sep 2019 Carl Yang, Yichen Feng, Pan Li, Yu Shi, Jiawei Han

In this work, we propose to study the utility of different meta-graphs, as well as how to simultaneously leverage multiple meta-graphs for HIN embedding in an unsupervised manner.

Discovering Hypernymy in Text-Rich Heterogeneous Information Network by Exploiting Context Granularity

1 code implementation4 Sep 2019 Yu Shi, Jiaming Shen, Yuchen Li, Naijing Zhang, Xinwei He, Zhengzhi Lou, Qi Zhu, Matthew Walker, Myunghwan Kim, Jiawei Han

Extensive experiments on two large real-world datasets demonstrate the effectiveness of HyperMine and the utility of modeling context granularity.

Knowledge Graphs

A Novel Apex-Time Network for Cross-Dataset Micro-Expression Recognition

1 code implementation7 Apr 2019 Min Peng, Chongyang Wang, Tao Bi, Tong Chen, Xiangdong Zhou, Yu Shi

As researchers working on such topics move to learn from the nature of micro-expression, the practice of using deep learning techniques has evolved from processing the entire micro-expression video clip to recognition based on the apex frame.

Micro-Expression Recognition

Easing Embedding Learning by Comprehensive Transcription of Heterogeneous Information Networks

1 code implementation10 Jul 2018 Yu Shi, Qi Zhu, Fang Guo, Chao Zhang, Jiawei Han

To cope with the challenges in the comprehensive transcription of HINs, we propose the HEER algorithm, which embeds HINs via edge representations that are further coupled with properly-learned heterogeneous metrics.

Feature Engineering Network Embedding

Training of photonic neural networks through in situ backpropagation

no code implementations25 May 2018 Tyler W. Hughes, Momchil Minkov, Yu Shi, Shanhui Fan

Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms.

AspEm: Embedding Learning by Aspects in Heterogeneous Information Networks

no code implementations5 Mar 2018 Yu Shi, Huan Gui, Qi Zhu, Lance Kaplan, Jiawei Han

Therefore, we are motivated to propose a novel embedding learning framework, AspEm, to preserve the semantic information in HINs based on multiple aspects.

Link Prediction Network Embedding

Gradient Boosting With Piece-Wise Linear Regression Trees

1 code implementation15 Feb 2018 Yu Shi, Jian Li, Zhize Li

We show that PL Trees can accelerate the convergence of GBDT and improve accuracy.

Ensemble Learning
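Piecewise-linear regression trees replace the constant prediction in each leaf with a linear model, which is what lets the boosted ensemble converge in fewer trees on locally linear targets. LightGBM exposes a similar idea through its `linear_tree` option; the sketch below uses it as an accessible stand-in, not as the paper's GBDT-PL implementation, and all hyperparameters are invented.

```python
import lightgbm as lgb
import numpy as np

# Toy piecewise-linear target: tree splits find the breakpoint, linear leaves fit the slopes.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.where(X[:, 0] < 0, -2.0 * X[:, 0], 0.5 * X[:, 0]) + rng.normal(0, 0.1, 2000)

train = lgb.Dataset(X, label=y)
params = {
    "objective": "regression",
    "linear_tree": True,   # fit a linear model in each leaf (piecewise-linear trees)
    "num_leaves": 4,
    "learning_rate": 0.2,
    "verbose": -1,
}
booster = lgb.train(params, train, num_boost_round=50)
print(booster.predict(np.array([[-1.0], [1.0]])))  # roughly [2.0, 0.5] for this toy target
```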

mvn2vec: Preservation and Collaboration in Multi-View Network Embedding

1 code implementation19 Jan 2018 Yu Shi, Fangqiu Han, Xinwei He, Xinran He, Carl Yang, Jie Luo, Jiawei Han

With experiments on a series of synthetic datasets, a large-scale internal Snapchat dataset, and two public datasets, we confirm the validity and importance of preservation and collaboration as two objectives for multi-view network embedding.

Network Embedding

PReP: Path-Based Relevance from a Probabilistic Perspective in Heterogeneous Information Networks

no code implementations5 Jun 2017 Yu Shi, Po-Wei Chan, Honglei Zhuang, Huan Gui, Jiawei Han

From real-world data, we also identify, and propose to model, cross-meta-path synergy, a characteristic that is important for defining path-based HIN relevance and has not been modeled by existing methods.
