no code implementations • 28 Nov 2016 • Shi Baoxu, Yang Lin, Weninger Tim
Similarity search is a fundamental problem in social and knowledge networks like GitHub, DBLP, Wikipedia, etc.
no code implementations • 12 Dec 2017 • Shuang Liu, Mete Ozay, Takayuki Okatani, Hongli Xu, Kai Sun, Yang Lin
In the experiments, we first evaluate performance of the proposed detection module on UDID and its deformed variations.
no code implementations • 20 Sep 2018 • Wang Shidan, Wang Tao, Yang Lin, Yi Faliu, Luo Xin, Yang Yikun, Gazdar Adi, Fujimoto Junya, Wistuba Ignacio I., Yao Bo, Lin ShinYi, Xie Yang, Mao Yousheng, Xiao Guanghua
By identifying cells and classifying cell types, this pipeline can convert a pathology image into a spatial map of tumor, stromal and lymphocyte cells.
no code implementations • 1 Nov 2018 • Xu Chu, Yang Lin, Jingyue Gao, Jiangtao Wang, Yasha Wang, Leye Wang
However, the shallow models leveraging bilinear forms suffer from limitations on capturing complicated nonlinear interactions between drug pairs.
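To make the limitation concrete, here is a minimal sketch (all dimensions and weights illustrative, not the paper's model) contrasting a bilinear drug-pair score, which is linear in each embedding, with a small MLP that can capture nonlinear pair interactions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative embedding dimension

# Bilinear interaction score: s(u, v) = u^T W v — linear in each drug embedding.
W = rng.normal(size=(d, d))
u, v = rng.normal(size=d), rng.normal(size=d)
bilinear_score = u @ W @ v

# A small MLP on the concatenated pair can model nonlinear interactions
# (e.g. thresholding effects) that a bilinear form cannot.
W1 = rng.normal(size=(2 * d, 16))
w2 = rng.normal(size=16)
h = np.maximum(0.0, np.concatenate([u, v]) @ W1)  # ReLU hidden layer
mlp_score = h @ w2
```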
no code implementations • 27 Nov 2018 • Shi Xiaoshuang, Xing Fuyong, Zhang Zizhao, Sapkota Manish, Guo Zhenhua, Yang Lin
Based on this significant discovery and the proposed strategy, we introduce a scalable symmetric discrete hashing algorithm that gradually and smoothly updates each batch of binary codes.
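As a rough illustration of batch-wise binary code updates (a generic sketch with made-up projections, not the paper's optimization), each batch of codes is re-signed in turn while the remaining codes stay fixed, avoiding a global relax-then-round step:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, bits = 64, 16, 8
X = rng.normal(size=(n, d))          # data features
P = rng.normal(size=(d, bits))       # hash projection (learned in practice)
B = np.sign(X @ P)                   # binary codes in {-1, +1}
B[B == 0] = 1

# Gradually and smoothly update the binary codes one batch at a time,
# holding the other batches fixed.
batch_size = 16
for start in range(0, n, batch_size):
    batch = slice(start, start + batch_size)
    B[batch] = np.sign(X[batch] @ P)
    B[B == 0] = 1
```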
1 code implementation • International Joint Conference on Neural Networks (IJCNN) 2021 • Yang Lin, Irena Koprinska, Mashud Rana
TCAN requires fewer convolutional layers than TCNN to achieve an extended receptive field, is faster to train, and can visualize the most important timesteps for the prediction.
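The receptive-field trade-off can be sketched with the standard formula for stacked dilated causal convolutions (a generic TCN property, not TCAN's exact architecture): with kernel size k and doubling dilations, the receptive field is 1 + (k - 1) * sum(dilations), so covering long histories still needs many layers, whereas a self-attention layer sees the entire input sequence in one step:

```python
def receptive_field(kernel_size, num_layers):
    """Receptive field of a stack of dilated causal convolutions with
    dilations 1, 2, 4, ... (doubling per layer), as in a standard TCN."""
    dilations = [2 ** i for i in range(num_layers)]
    return 1 + (kernel_size - 1) * sum(dilations)
```

For example, four layers with kernel size 3 cover only 31 timesteps; reaching a season-long history purely with convolutions requires a much deeper stack.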
Multivariate Time Series Forecasting • Probabilistic Time Series Forecasting • +1
1 code implementation • 27 Nov 2021 • Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou
Network quantization significantly reduces model inference complexity and has been widely used in real-world deployments.
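As background, a minimal sketch of symmetric uniform quantization (the generic technique, not this paper's specific scheme) shows how float weights are mapped to low-bit integers plus one scale, which is what cuts inference cost:

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization: map floats to signed num_bits ints."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

w = np.array([0.5, -1.0, 0.25, 0.0])
q, scale = quantize_uniform(w)
w_hat = q * scale  # dequantized approximation used at inference
```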
Ranked #1 on Quantization on ImageNet
no code implementations • 19 Dec 2021 • Yang Lin, Irena Koprinska, Mashud Rana
In this paper, we present SSDNet, a novel deep learning approach for time series forecasting.
1 code implementation • 7 Feb 2022 • Huajun Zhou, Yang Lin, Lingxiao Yang, JianHuang Lai, Xiaohua Xie
In recent years, deep network-based methods have continuously refreshed state-of-the-art performance on Salient Object Detection (SOD) task.
no code implementations • 20 Apr 2022 • Kelly Payette, Hongwei Li, Priscille de Dumast, Roxane Licandro, Hui Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao liu, Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming Dong, Hao Fu, Guotai Wang, ZunHyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L. Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida, Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum, Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, Kuanlun Liao, Yixuan Wu, Jintai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab
Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment both in the research and clinical context.
no code implementations • 8 Nov 2022 • Jincheng Hu, Yang Lin, Liang Chu, Zhuoran Hou, Jihan Li, Jingjing Jiang, Yuanjian Zhang
RL has received sustained research attention, but a systematic analysis of the design elements of RL-based EMS is still lacking.
no code implementations • 18 Dec 2022 • Jincheng Hu, Yang Lin, Jihao Li, Zhuoran Hou, Dezong Zhao, Quan Zhou, Jingjing Jiang, Yuanjian Zhang
The empirical analysis is developed in four aspects: algorithm, perception and decision granularity, hyperparameters, and reward function.
no code implementations • 27 Jun 2023 • Yang Lin, Paul Mos, Andrei Ardelean, Claudio Bruschini, Edoardo Charbon
To explore the ultimate limits of the approach, we derived the Cramér-Rao lower bound of the measurement, showing that the RNN yields lifetime estimates with near-optimal precision.
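For the idealized mono-exponential case with no background, the bound has a closed form: with N detected photons, the Fisher information for the lifetime tau is N / tau^2, so the Cramér-Rao lower bound is tau^2 / N, and the maximum-likelihood estimator (the sample mean of arrival times) attains it asymptotically. A Monte Carlo check of this textbook case (not a simulation of the paper's RNN) is:

```python
import numpy as np

# CRLB for the lifetime of mono-exponential decay: var(tau_hat) >= tau^2 / N.
rng = np.random.default_rng(42)
tau, N, trials = 2.0, 1000, 2000
estimates = rng.exponential(tau, size=(trials, N)).mean(axis=1)  # MLE per trial
crlb = tau ** 2 / N
empirical_var = estimates.var()
```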
1 code implementation • 23 Oct 2023 • Minghao Tang, Yongquan He, Yongxiu Xu, Hongbo Xu, Wenyuan Zhang, Yang Lin
By leveraging the guiding semantics of boundary offsets, BOPN establishes connections between non-entity and entity spans, enabling non-entity spans to function as additional positive samples for entity detection.
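The core idea can be sketched in a few lines (a toy labeling scheme for illustration only, not BOPN's actual architecture): a span near a gold entity is annotated with the offsets to the entity's boundaries, so near-miss spans carry informative supervision instead of being treated as plain negatives:

```python
def boundary_offsets(span, entity):
    """Offsets from a candidate span's boundaries to a gold entity's
    boundaries; (0, 0) means an exact match."""
    return (entity[0] - span[0], entity[1] - span[1])

gold = (3, 6)                                 # gold entity span [start, end]
near_miss = (2, 6)                            # off by one at the start
offsets = boundary_offsets(near_miss, gold)   # a non-entity span, yet informative
```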
1 code implementation • 23 Oct 2023 • Minghao Tang, Yongquan He, Yongxiu Xu, Hongbo Xu, Wenyuan Zhang, Yang Lin
Fine-grained entity typing (FET) is an essential task in natural language processing that aims to assign semantic types to entities in text.
no code implementations • 30 Oct 2023 • Yang Lin
In this paper, we introduce ProNet, a novel deep learning approach designed for multi-horizon time series forecasting that adaptively blends autoregressive (AR) and non-autoregressive (NAR) strategies.
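The two strategies being blended can be sketched with a toy linear extrapolation rule (purely illustrative, not ProNet's decoder): AR predicts one step at a time and feeds each prediction back, which is accurate but sequential; NAR emits all horizons in one shot, which is parallel but cannot condition later steps on earlier predictions:

```python
import numpy as np

history = np.array([1.0, 2.0, 3.0, 4.0])

def ar_forecast(h, steps):
    """Autoregressive: predict one step, append it, repeat."""
    h = list(h)
    for _ in range(steps):
        h.append(2 * h[-1] - h[-2])   # toy linear extrapolation rule
    return h[-steps:]

def nar_forecast(h, steps):
    """Non-autoregressive: emit all horizons at once from the history."""
    slope = h[-1] - h[-2]
    return [h[-1] + slope * (i + 1) for i in range(steps)]
```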
no code implementations • 30 Oct 2023 • Yang Lin
This knowledge transfer is facilitated through two key mechanisms: 1) outcome-driven KD, which dynamically weights the contribution of KD losses from the teacher models, enabling the shallow NAR decoder to incorporate the ensemble's diversity; and 2) hint-driven KD, which employs adversarial training to extract valuable insights from the model's hidden states for distillation.
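A minimal sketch of outcome-driven weighting (a generic scheme with made-up logits; the paper's exact weighting rule may differ) scales each teacher's KL-divergence loss by a softmax over negated losses, so teachers that match the data better contribute more to the distillation signal:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy student and teacher output distributions over three classes.
student = softmax(np.array([1.0, 0.5, 0.1]))
teachers = [softmax(np.array([2.0, 0.2, 0.1])),
            softmax(np.array([0.1, 2.0, 0.3]))]

# Per-teacher KL losses, then dynamic (outcome-driven) weights.
kl = [float(np.sum(t * np.log(t / student))) for t in teachers]
weights = softmax(-np.array(kl))
kd_loss = float(np.dot(weights, kl))
```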
1 code implementation • 5 Apr 2024 • Xinyu Ma, Xu Chu, Zhibang Yang, Yang Lin, Xin Gao, Junfeng Zhao
With the increasingly powerful performances and enormous scales of Pretrained Language Models (PLMs), promoting parameter efficiency in fine-tuning has become a crucial need for effective and efficient adaptation to various downstream tasks.
no code implementations • 15 Apr 2024 • Yang Lin, Xinyu Ma, Xu Chu, Yujie Jin, Zhibang Yang, Yasha Wang, Hong Mei
We then demonstrate the theoretical mechanism of LoRA Dropout from the perspective of sparsity regularization by providing a generalization error bound under this framework.
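A bare-bones sketch of the idea (illustrative shapes and the inverted-dropout convention; not the paper's exact formulation) applies random masks to the LoRA low-rank factors during training, which sparsifies the learned weight update delta_W = A @ B:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4
A = rng.normal(size=(d, r)) / np.sqrt(r)   # LoRA down-projection
B = rng.normal(size=(r, d)) * 0.01         # LoRA up-projection (zero-init in standard LoRA)

def lora_dropout_delta(A, B, p=0.5, rng=rng):
    """Drop random entries of both low-rank factors (inverted dropout),
    sparsifying the low-rank update delta_W = A @ B during training."""
    mask_a = (rng.random(A.shape) > p) / (1 - p)
    mask_b = (rng.random(B.shape) > p) / (1 - p)
    return (A * mask_a) @ (B * mask_b)

delta_W = lora_dropout_delta(A, B)
```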
1 code implementation • ICML 2020 • Xu Chu, Yang Lin, Xiting Wang, Xin Gao, Qi Tong, Hailong Yu, Yasha Wang
Distance metric learning (DML) learns a representation space equipped with a metric such that, with respect to that metric, examples from the same class are closer to each other than to examples from different classes.
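The standard way to encode this objective is a triplet loss (shown here as a generic illustration of DML, not this paper's specific method): pull an anchor toward a same-class positive and push it from a different-class negative by at least a margin:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss requiring same-class pairs to be closer than
    different-class pairs by at least `margin` (Euclidean metric here)."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class: nearby, loss already satisfied
n = np.array([2.0, 0.0])   # different class: far away
loss = triplet_loss(a, p, n)
```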