no code implementations • WMT (EMNLP) 2020 • Jiayi Wang, Ke Wang, Kai Fan, Yuqi Zhang, Jun Lu, Xin Ge, Yangbin Shi, Yu Zhao
We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, potentially preventing the model from overfitting on the limited real training data and boosting performance on held-out data.
no code implementations • WMT (EMNLP) 2021 • Jiayi Wang, Ke Wang, Boxing Chen, Yu Zhao, Weihua Luo, Yuqi Zhang
Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years.
no code implementations • WMT (EMNLP) 2021 • Ke Wang, Shuqin Gu, Boxing Chen, Yu Zhao, Weihua Luo, Yuqi Zhang
We use tags to mark the term translations and insert them into the matched sentences.
no code implementations • 23 May 2023 • Jiayi Wang, Ke Wang, Yuqi Zhang, Yu Zhao, Pontus Stenetorp
We explore whether such non-parametric models can improve machine translation models at the fine-tuning stage by incorporating statistics from the kNN predictions to inform the gradient updates for a baseline translation model.
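The statistics-informed fine-tuning described above can be sketched in a few lines. The function names, the fixed interpolation weight `lam`, and the use of the interpolated distribution as a cross-entropy target are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def interpolate_knn(model_probs, knn_probs, lam=0.5):
    """Interpolate the model's and the kNN-retrieved distributions
    over the target vocabulary (hypothetical form; lam weights kNN)."""
    mixed = lam * knn_probs + (1.0 - lam) * model_probs
    return mixed / mixed.sum()  # renormalize for numerical safety

def knn_informed_loss(model_probs, knn_probs, gold_id, lam=0.5):
    """Cross-entropy of the gold token under the interpolated
    distribution; its gradient w.r.t. the model's probabilities is
    one way kNN statistics could inform the fine-tuning updates."""
    mixed = interpolate_knn(model_probs, knn_probs, lam)
    return -np.log(mixed[gold_id])
```

With `lam=0` this reduces to the baseline model's loss, so the sketch degrades gracefully when retrieval is disabled.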
1 code implementation • 19 May 2023 • Yu Zhao, Hao Fei, Wei Ji, Jianguo Wei, Meishan Zhang, Min Zhang, Tat-Seng Chua
With an external 3D scene extractor, we obtain the 3D objects and scene features for input images, based on which we construct a target object-centered 3D spatial scene graph (Go3D-S2G), such that we model the spatial semantics of target objects within the holistic 3D scenes.
1 code implementation • 19 May 2023 • Yu Zhao, Yike Wu, Xiangrui Cai, Ying Zhang, Haiwei Zhang, Xiaojie Yuan
Our approach captures the unified correlation pattern of two kinds of information between entities, and explicitly models the fine-grained interaction between original entity information.
no code implementations • 14 Mar 2023 • Yuansong Zhu, Yu Zhao
Diffusion models have become a powerful family of deep generative models, with record-breaking performance in many applications.
no code implementations • 7 Feb 2023 • Lucas Maystre, Daniel Russo, Yu Zhao
We study the problem of optimizing a recommender system for outcomes that occur over several weeks or months.
1 code implementation • 27 Jan 2023 • Tianyi Zhang, Zhiling Yan, Chunhui Li, Nan Ying, Yanli Lei, Yunlu Feng, Yu Zhao, Guanglei Zhang
To further understand pathology characteristics and generate effective pseudo samples, we propose the CellMix framework with a novel distribution-based in-place shuffle strategy.
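A much-simplified stand-in for an in-place shuffle between a pair of images might look like the following. The patch size, the swap probability, and the uniform Bernoulli sampling are assumptions for illustration; the paper's strategy is distribution-based rather than uniform.

```python
import numpy as np

def inplace_patch_shuffle(img_a, img_b, patch=4, p=0.5, rng=None):
    """Swap aligned patches between two images with probability p.
    img_a, img_b: HxW (or HxWxC) arrays with H, W divisible by patch."""
    if rng is None:
        rng = np.random.default_rng(0)
    a, b = img_a.copy(), img_b.copy()
    H, W = a.shape[:2]
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            if rng.random() < p:  # swap this patch in place
                tmp = a[i:i + patch, j:j + patch].copy()
                a[i:i + patch, j:j + patch] = b[i:i + patch, j:j + patch]
                b[i:i + patch, j:j + patch] = tmp
    return a, b
```

Because patches are exchanged rather than blended, each output image remains a mosaic of real tissue patches, which is the intuition behind in-place mixing for pathology images.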
1 code implementation • 19 Jan 2023 • Pan Deng, Yu Zhao, Junting Liu, Xiaofeng Jia, Mulan Wang
In addition, due to the disturbance of incomplete observations in the data, random contextual conditions lead to spurious correlations between data and features, making the model's predictions ineffective in special scenarios.
1 code implementation • 19 Jan 2023 • Yu Zhao, Pan Deng, Junting Liu, Xiaofeng Jia, Mulan Wang
Recent works overemphasize spatio-temporal correlations of traffic flow, ignoring the physical concepts that lead to the generation of observations and their causal relationship.
no code implementations • 17 Dec 2022 • Shaopeng Wei, Yu Zhao, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
Different from previous surveys on graph learning, we provide a holistic review that analyzes current works from the perspective of graph structure, and discusses the latest applications, trends, and challenges in graph learning.
no code implementations • 28 Nov 2022 • Yu Zhao, Huaming Du, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
In contrast, this paper provides a systematic literature survey of enterprise risk analysis approaches from a big data perspective, reviewing more than 250 representative articles from the past five decades (1968 to 2023).
no code implementations • 27 Nov 2022 • Yu Guo, Zhilong Xie, Xingyan Chen, Huangen Chen, Leilei Wang, Huaming Du, Shaopeng Wei, Yu Zhao, Qing Li, Gang Wu
We address the problem by introducing a novel joint method on top of BERT which explicitly models the multiple sub-tokens features after wordpiece tokenization, thereby contributing to the two tasks.
no code implementations • 16 Nov 2022 • Xin Ge, Ke Wang, Jiayi Wang, Nini Xiao, Xiangyu Duan, Yu Zhao, Yuqi Zhang
The final leaderboard shows that our submissions rank first in three of the four language directions of the Naive TS track of the WMT22 Translation Suggestion task.
no code implementations • 16 Nov 2022 • Mingcai Chen, Yu Zhao, Bing He, Zongbo Han, Bingzhe Wu, Jianhua Yao
Then, we refurbish the noisy labels using the estimated clean probabilities and the pseudo-labels from the model's predictions.
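One common refurbishment form consistent with this description is a convex combination of the given label and the model's pseudo-label, weighted by the estimated clean probability. The exact weighting below is an assumption, not necessarily the paper's.

```python
import numpy as np

def refurbish_labels(noisy_onehot, pseudo_probs, clean_prob):
    """Combine the given (possibly noisy) one-hot labels with the
    model's pseudo-label distributions, trusting each given label in
    proportion to its estimated probability of being clean."""
    w = clean_prob[:, None]  # per-sample weight, broadcast over classes
    return w * noisy_onehot + (1.0 - w) * pseudo_probs
```

A sample judged certainly clean keeps its original label, while a sample judged certainly noisy is fully replaced by the model's prediction; intermediate estimates interpolate between the two.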
1 code implementation • 14 Nov 2022 • Ke Wang, Xin Ge, Jiayi Wang, Yu Zhao, Yuqi Zhang
Human translators perform post-editing on machine translations to correct errors in computer-aided translation scenarios.
1 code implementation • 30 Oct 2022 • Yuxiang Wu, Yu Zhao, Baotian Hu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel
Experiments on various knowledge-intensive tasks such as question answering and dialogue datasets show that simply augmenting parametric models (T5-base) with our method produces more accurate results (e.g., 25.8 -> 44.3 EM on NQ) while retaining a high throughput (e.g., 1000 queries/s on NQ).
Ranked #4 on Question Answering on KILT: ELI5
1 code implementation • 20 Oct 2022 • Yu Zhao, Jianguo Wei, Zhichao Lin, Yueheng Sun, Meishan Zhang, Min Zhang
Accordingly, we manually annotate a dataset to facilitate the investigation of the newly-introduced task and build several benchmark encoder-decoder models by using VL-BART and VL-T5 as backbones.
1 code implementation • 17 Oct 2022 • Yu Zhao, Xiangrui Cai, Yike Wu, Haiwei Zhang, Ying Zhang, Guoqing Zhao, Ning Jiang
Based on these embeddings, in the inference phase, we first make modality-split predictions and then exploit various ensemble methods to combine the predictions with different weights, which models the modality importance dynamically.
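The modality-split-then-ensemble step can be sketched as a weighted average of per-modality scores. The fixed weights and function names below are illustrative; the paper models the modality importance dynamically.

```python
import numpy as np

def ensemble_modalities(split_scores, weights):
    """Combine modality-split prediction scores with per-modality
    weights. split_scores: dict modality -> score array over the
    same candidate set; weights: dict modality -> nonnegative weight."""
    total = sum(weights[m] * s for m, s in split_scores.items())
    return total / sum(weights.values())
```

Setting one modality's weight to zero recovers the single-modality prediction, which makes it easy to probe how much each modality contributes.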
1 code implementation • COLING 2022 • Yike Wu, Yu Zhao, Shiwan Zhao, Ying Zhang, Xiaojie Yuan, Guoqing Zhao, Ning Jiang
In this work, we define the training instances with the same question type but different answers as \textit{superficially similar instances}, and attribute the language priors to the confusion of VQA model on such instances.
1 code implementation • 14 Aug 2022 • Tianyi Zhang, Youdan Feng, Yunlu Feng, Yu Zhao, Yanli Lei, Nan Ying, Zhiling Yan, Yufang He, Guanglei Zhang
The rapid on-site evaluation (ROSE) technique can significantly accelerate the diagnosis of pancreatic cancer by immediately analyzing the fast-stained cytopathological images.
no code implementations • 2 Aug 2022 • Qiong Wu, Yu Zhao, Qiang Fan
In this paper, we construct the time-dependent model to evaluate the platooning communication performance at the intersection based on the initial movement characteristics.
1 code implementation • 2 Aug 2022 • Qiong Wu, Yu Zhao, Qiang Fan, Pingyi Fan, Jiangzhou Wang, Cui Zhang
In addition, we consider the mobility of vehicles and propose a deep reinforcement learning algorithm to obtain the optimal cooperative caching location for the predicted popular contents in order to optimize the content transmission delay.
2 code implementations • 5 Jul 2022 • Jiawei Yang, Hanbo Chen, Yu Zhao, Fan Yang, Yao Zhang, Lei He, Jianhua Yao
We evaluate ReMix on two public datasets with two state-of-the-art MIL methods.
no code implementations • 17 Jun 2022 • Yu Zhao, Yunxin Li, Yuxiang Wu, Baotian Hu, Qingcai Chen, Xiaolong Wang, Yuxin Ding, Min Zhang
To mitigate this problem, we propose a medical response generation model with Pivotal Information Recalling (MedPIR), which is built on two components, i.e., a knowledge-aware dialogue graph encoder and a recall-enhanced generator.
no code implementations • 17 Jun 2022 • Yu Zhao, Xinshuo Hu, Yunxin Li, Baotian Hu, Dongfang Li, Sichao Chen, Xiaolong Wang
In this paper, we propose a general Multi-Skill Dialog Framework, namely MSDF, which can be applied to different dialog tasks (e.g., knowledge-grounded dialog and persona-based dialog).
1 code implementation • 1 Feb 2022 • Yu Zhao, Shaopeng Wei, Yu Guo, Qing Yang, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
This study is the first to consider both types of risk and their joint effects in bankruptcy prediction.
1 code implementation • 11 Jan 2022 • Yu Zhao, Huaming Du, Ying Liu, Shaopeng Wei, Xingyan Chen, Fuzhen Zhuang, Qing Li, Ji Liu, Gang Kou
Stock Movement Prediction (SMP) aims at predicting listed companies' stock future price trend, which is a challenging task due to the volatile nature of financial markets.
1 code implementation • 27 Dec 2021 • Tianyi Zhang, Yunlu Feng, Yu Zhao, Guangda Fan, Aiming Yang, Shangqin Lyu, Peng Zhang, Fan Song, Chenbin Ma, Yangyang Sun, Youdan Feng, Guanglei Zhang
Pancreatic cancer is one of the most malignant cancers in the world, which deteriorates rapidly with very high mortality.
1 code implementation • 24 Dec 2021 • Yu Zhao, Shaopeng Wei, Huaming Du, Xingyan Chen, Qing Li, Fuzhen Zhuang, Ji Liu, Gang Kou
To address this issue, we propose a novel Dual Hierarchical Attention Networks (DHAN) based on the bi-typed multi-relational heterogeneous graphs to learn comprehensive node representations with the intra-class and inter-class attention-based encoder under a hierarchical mechanism.
no code implementations • 14 Dec 2021 • Zhimin Li, Cheng Zou, Yu Zhao, Boxun Li, Sheng Zhong
Human-Object Interaction (HOI) detection is a fundamental task in high-level human-centric scene understanding.
no code implementations • 4 Dec 2021 • Yu Zhao, Shang Xiang, Long Li
The recurrence rebuild and retrieval method (R3M) is proposed in this paper to accelerate the electromagnetic (EM) validations of large-scale digital coding metasurfaces (DCMs).
no code implementations • 29 Oct 2021 • Yu Zhao, Jia Song, Huali Feng, Fuzhen Zhuang, Qing Li, Xiaojie Wang, Ji Liu
Keyphrases provide accurate information about document content; they are highly compact, concise, full of meaning, and widely used for discourse comprehension, organization, and text retrieval.
no code implementations • Findings (EMNLP) 2021 • Ke Wang, Yangbin Shi, Jiayi Wang, Yuqi Zhang, Yu Zhao, Xiaolin Zheng
Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT).
no code implementations • 1 Jul 2021 • Yunxin Li, Yu Zhao, Baotian Hu, Qingcai Chen, Yang Xiang, Xiaolong Wang, Yuxin Ding, Lin Ma
Previous works indicate that the glyph of Chinese characters contains rich semantic information and has the potential to enhance the representation of Chinese characters.
1 code implementation • CVPR 2021 • Cheng Zou, Bohan Wang, Yue Hu, Junqi Liu, Qian Wu, Yu Zhao, Boxun Li, Chenguang Zhang, Chi Zhang, Yichen Wei, Jian Sun
We propose HOI Transformer to tackle human object interaction (HOI) detection in an end-to-end manner.
Ranked #22 on Human-Object Interaction Detection on HICO-DET (using extra training data)
no code implementations • 28 Oct 2020 • Yu Zhao, Chung-Kuei Lee
Experimental results of semantic segmentation and image super resolution indicate that task-specific search achieves better performance than transferring slim models, demonstrating the wide applicability and high efficiency of DCSS.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiayi Wang, Ke Wang, Niyu Ge, Yangbing Shi, Yu Zhao, Kai Fan
With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming the machine translation results.
1 code implementation • ACL 2020 • Yu Zhao, Anxiang Zhang, Ruobing Xie, Kang Liu, Xiaojie Wang
In this paper, we propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs.
no code implementations • 24 Jun 2020 • Xiaobin Hu, Yanyang Yan, Wenqi Ren, Hongwei Li, Yu Zhao, Amirhossein Bayat, Bjoern Menze
To well exploit global structural information and texture details, we propose a novel biomedical image enhancement network, named Feedback Graph Attention Convolutional Network (FB-GACN).
no code implementations • CVPR 2020 • Yu Zhao, Fan Yang, Yuqi Fang, Hailing Liu, Niyun Zhou, Jun Zhang, Jiarui Sun, Sen Yang, Bjoern Menze, Xinjuan Fan, Jianhua Yao
In this paper, we propose a multiple instance learning method based on deep graph convolutional network and feature selection (FS-GCN-MIL) for histopathological image classification.
no code implementations • 8 Dec 2019 • Yi Zhong, Xueyu Chen, Yu Zhao, Xiaoming Chen, Tingfang Gao, Zuquan Weng
We propose an end-to-end model to predict drug-drug interactions (DDIs) by employing graph-augmented convolutional networks.
no code implementations • 18 Nov 2019 • Yunyi Li, Li Liu, Yu Zhao, Xiefeng Cheng, Guan Gui
The popular L_2-norm and an M-estimator are employed as the data-fitting terms for the standard image CS and robust CS problems, respectively.
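As a concrete contrast between the two data-fitting terms, the sketch below uses the Huber function as one common M-estimator choice; the paper's particular M-estimator may differ.

```python
import numpy as np

def l2_fidelity(residual):
    """Standard least-squares data-fitting term."""
    return 0.5 * np.sum(residual ** 2)

def huber_fidelity(residual, delta=1.0):
    """Huber M-estimator: quadratic near zero, linear in the tails,
    so large (outlier) residuals are penalized less than under L2."""
    r = np.abs(residual)
    quad = 0.5 * r ** 2
    lin = delta * (r - 0.5 * delta)
    return np.sum(np.where(r <= delta, quad, lin))
```

On clean residuals the two terms agree up to the threshold, while a single gross outlier inflates the L2 term quadratically but the Huber term only linearly, which is what makes the robust CS formulation tolerant of impulsive noise.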
1 code implementation • 15 May 2019 • Haoran Niu, Jiangnan Li, Yu Zhao
Although bullet-screen video websites provide filter functions based on regular expressions, bad bullet comments can still easily pass the filter with small modifications.
no code implementations • 23 Mar 2019 • Yunyi Li, Fei Dai, Yu Zhao, Xiefeng Cheng, Guan Gui
Group-prior-based regularization methods have led to great successes in various image processing tasks, which can usually be considered low-rank matrix minimization problems.
no code implementations • 18 Feb 2019 • Yu Zhao, Ji Liu
Knowledge graph (KG) refinement mainly aims at KG completion and correction (i.e., error detection).
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. 
Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. 
Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. 
Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
no code implementations • 22 Oct 2018 • Xiaobin Hu, Hongwei Li, Yu Zhao, Chao Dong, Bjoern H. Menze, Marie Piraud
Based on the same state-of-the-art network architecture, the accuracy on the nested class (enhancing tumor) is reasonably improved from 69% to 72% compared with the traditional Softmax-based method, which is blind to the topological prior.
no code implementations • 27 Sep 2018 • Yu Zhao, Zhenhui Shi, Jingyang Zhang, Dong Chen, Lixu Gu
The proposed method serves as a heuristic means to select high-value samples with high scalability and generality, implemented through a three-step process: (1) transforming sample selection into sample ranking and scoring, (2) computing self-adaptive weights for each criterion, and (3) performing weighted aggregation of the per-criterion rank lists.
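The ranking-and-aggregation steps can be sketched as follows, assuming higher criterion scores mean more valuable samples; the double-argsort ranking and the simple weighted sum of ranks are illustrative stand-ins for the paper's scoring and aggregation.

```python
import numpy as np

def weighted_rank_aggregate(criterion_scores, weights):
    """Rank samples under each criterion, then aggregate the rank
    lists with per-criterion weights.
    criterion_scores: (n_criteria, n_samples) array, higher = better;
    weights: (n_criteria,) array of criterion weights."""
    # rank 0 = best sample under each criterion (double argsort)
    ranks = np.argsort(np.argsort(-criterion_scores, axis=1), axis=1)
    agg = weights @ ranks          # weighted rank sum per sample
    return np.argsort(agg)         # sample indices, most valuable first
```

Because aggregation operates on ranks rather than raw scores, criteria with very different scales can be combined without explicit normalization.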
no code implementations • 31 May 2018 • Yu Zhao, Xiang Li, Wei zhang, Shijie Zhao, Milad Makkie, Mo Zhang, Quanzheng Li, Tianming Liu
Simultaneous modeling of the spatio-temporal variation patterns of brain functional network from 4D fMRI data has been an important yet challenging problem for the field of cognitive neuroscience and medical image analysis.
no code implementations • 18 Mar 2018 • Ke Ren, Haichuan Yang, Yu Zhao, Mingshan Xue, Hongyu Miao, Shuai Huang, Ji Liu
The positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples.
no code implementations • 24 Oct 2017 • Milad Makkie, Heng Huang, Yu Zhao, Athanasios V. Vasilakos, Tianming Liu
In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks.
4 code implementations • 22 Aug 2017 • Yu Zhao, Rennong Yang, Guillaume Chevalier, Maoguo Gong
Human activity recognition (HAR) has become a popular topic in research because of its wide application.
no code implementations • 19 Aug 2017 • Yu Zhao, Rennong Yang, Guillaume Chevalier, Rajiv Shah, Rob Romijnders
In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy.
no code implementations • 7 Apr 2016 • Ayana, Shiqi Shen, Yu Zhao, Zhiyuan Liu, Maosong Sun
Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural networks.
no code implementations • 14 Feb 2014 • Chong Zhang, Yu Zhao, Jochen Triesch, Bertram E. Shi
We extend the framework of efficient coding, which has been used to model the development of sensory processing in isolation, to model the development of the perception/action cycle.